I'm not able to spot the odd one in this set. Can someone help me figure out which sentence doesn't fit?

A. Machine learning models are prone to learning human-like biases from the training data that feeds these algorithms.
B. Hate speech detection is part of the ongoing effort against oppressive and abusive language on social media.
C. The current automatic detection models miss out on something vital: context.
D. It uses complex algorithms to flag racist or violent speech faster and better than human beings alone.
E. For instance, algorithms struggle to determine if group identifiers like "gay" or "black" are used in offensive or prejudiced ways because they're trained on imbalanced datasets with unusually high rates of hate speech.

Asked by Arjun T 5 months ago
