Isn't the tool doing its job in that case? I wouldn't generally expect it to independently determine that an otherwise reliable source made a mistake. In fact I feel like that would be a really bad idea.
Imagine if a relatively clueless intern left something out of a report because the textbook "seemed wrong".
Saying the input data was wrong and the AI didn't hallucinate it is also kind of a "trust me bro" claim.
The Mandiant feed is not public, so I cannot check what was fed to it.
I don't really care why it's wrong. It is wrong. And using that as the example prompt in your announcement is an interesting choice.