According to a recent article published in Scientific American, intentional bias in AI systems could introduce "a worrisome new dimension to cyberattacks, disinformation campaigns or the proliferation of fake news". This is the conclusion reached by Douglas Yeung, a behavioral scientist at the nonprofit, nonpartisan RAND Corporation and a faculty member of the Pardee RAND Graduate School.
Yeung, in fact, shifts the focus of the debate from unconscious bias in AI, which usually arises when algorithms unintentionally perpetuate discriminatory patterns, to intentional bias, deliberately introduced in order to exploit its distorting effects.
This could be done for a number of different reasons. Corporations could release biased data in the hope that competitors would use it to train their artificial intelligence algorithms, thereby diminishing the quality of the competitors' products and consumer confidence in them. National security threats are another possibility: foreign actors could mount deliberate bias attacks to destabilize societies by undermining government legitimacy or sharpening public polarization.