Will AI "Fairness" Corrupt AI Itself?
HBR recently ran a thought-provoking article about "AI fairness". Ofqual, the UK's exams regulator, was unable to hold live exams due to Covid-19 and decided to use an algorithm that based scores on the historical performance of the high schools students attended. That meant students at historically poor-performing schools had no way to distinguish themselves through their own individual performance.

The article uses this episode to launch into a discussion of human biases seeping into AI algorithms and making them unfair. That's a fine point, but it has nothing to do with AI, and it bypasses the question of whether AI was even an appropriate tool for the problem at hand. The algorithm itself worked as designed - the problem was the invalid assumption behind it: pre-judging individual students based on historically under-performing schools.

AI has no concept of "fairness"; it's ultimately just a set of rules executed against historical data. Modifying algorithms in the name of "fairness" requires altering the data that drives those rules, which means your data eventually no longer reflects reality. Fairness and reality can be very different things, and altering historical data in pursuit of "fairness" invites unintended consequences. I'm sure we'll eventually learn our lesson someday.
Suddenly I want a Surface Duo
This is an incredible demo - better than any marketing demo I've ever seen.