They actively discussed taking more steps to prevent it, instead of relying on failed predictions to justify inaction. As for why they failed to predict it in the first place: worldview bubbles strongly affected perception. There's no need to really examine the data on something that you and everyone around you believe is an impossibility.
That's all part and parcel of what that All-hands meeting discussed; I'd highly recommend finding it and listening to it.
I'm not sure that's true. There's a leaked email between two developers joking about it, but that's far from proof the organization was considering it.
They've demonetized/dropped ads for satire news websites, including The Onion, but that was a response to the criticism that they were enabling fake news to become more prominent.
Veritas completely misunderstands and twists what 'fairness' and 'cognitive bias' in AI are. AI can only learn what you feed it. Choices in what data is given to an AI can bias its outputs, which in turn mislead people who don't realize the inputs were biased in the first place.
So for example, if I only ever feed an AI training data about doctors and nurses in which the male examples are only ever used for doctor, and the female examples are only ever used for nurse, then when you or anyone else asks what a doctor or nurse is, the output has a gender bias. This is Google Search becoming an unwitting amplifier of cognitive bias and stereotypes.
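Here's a minimal toy sketch of that failure mode (the data and counting "model" are entirely made up for illustration, not anything Google actually uses). Train on perfectly skewed examples and the model reproduces the skew with 100% confidence:

```python
# Toy sketch: a count-based model trained on skewed data reproduces the skew.
# The training data here is hypothetical and deliberately one-sided.
from collections import Counter

# Every "doctor" example is male, every "nurse" example is female.
training_data = [("doctor", "male")] * 50 + [("nurse", "female")] * 50

# "Learn" P(gender | occupation) by simple counting.
counts = {}
for occupation, gender in training_data:
    counts.setdefault(occupation, Counter())[gender] += 1

def predict_gender(occupation):
    """Return the gender the model most strongly associates with an occupation."""
    return counts[occupation].most_common(1)[0][0]

print(predict_gender("doctor"))  # -> "male", every single time
print(predict_gender("nurse"))   # -> "female", every single time
```

Nothing in the model is "politically" biased; it just has no way to know the association is an artifact of what it was fed.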
Now you might counter that "most" doctors are male and "most" nurses are female, and that's reality, but when I ask for the definition of what a doctor is and what one does, I'm not asking for demographics.
These guys are taking concerns that experts and ethicists openly discuss as a worry at AI conventions, and twisting them to fit their own scared political agenda.
We've had examples of this already with image classification networks classifying dark-skinned people as gorillas, which is clearly because the training dataset itself was unconsciously biased.
How do you detect when these biases occur? You need a diverse group of human evaluators to review the outcomes and make sure the AI hasn't learned the wrong biases. Or at the very least, you need that group of people to construct a good integration test set to look for misclassifications.
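The "integration test set" idea is roughly this (a sketch only; `model.predict` and the grouped test set are hypothetical stand-ins for whatever system you're auditing): run the model over a deliberately diverse, labeled test set and compare error rates across groups.

```python
# Sketch: compare per-group error rates on a curated, labeled test set.
# `model` is any object with a hypothetical .predict(x) method.
from collections import defaultdict

def per_group_error_rates(model, test_set):
    """test_set: iterable of (input, true_label, group) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for x, true_label, group in test_set:
        totals[group] += 1
        if model.predict(x) != true_label:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}
```

If one group's error rate is far above the others', the model has likely learned a bias that the training data baked in, which is exactly the kind of check the diverse review group is there to design.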
Seems what's going on is that someone is taking complex technical issues, twisting them, and feeding them to politicians like Gohmert, who clearly won't understand any nuance.
If Veritas wants the government to regulate AI to protect conservative viewpoints, they may end up regretting getting the government involved.