e-SIDES at EBDVF - Dealing with weapons of math destruction


By Gabriella Cattaneo, Associate Vice President at IDC (Coordinator of e-SIDES)


The debate on big data's potential risks for privacy, social justice and human rights sometimes feels overwhelming, with an endless list of threats (I know, just read our e-SIDES project's latest deliverable). That's why Nuria Oliver's keynote speech at the European Big Data Value Forum[1] in Versailles was refreshing: a passionate but concise overview of these same risks, counterbalanced with concrete suggestions on what to do about them. Nuria Oliver[2] knows what she's talking about. She is a top-level computer scientist, director of data science research at Vodafone and a member of the Data-Pop Alliance[3], the global coalition created by Harvard and MIT to promote a people-centered big data revolution.

Oliver summarized the main ways in which data technologies may negatively affect personal rights in five broad categories: computational violations of privacy, reproduction of discriminatory bias, information asymmetry, opacity and unintended ethical consequences. I particularly appreciated her neat classification of opacity issues (more often called lack of transparency) into three types: intentional (to protect IPRs), illiterate (people do not understand) and intrinsic (too complex to be transparent, notably the algorithms created by neural networks in deep learning). This classification by itself hints at possible countermeasures.

Oliver also suggested a set of guidelines to manage these risks without losing the added value of data-driven innovation, inspired by the principles of responsible research and innovation. Some are relatively obvious and well known (but not always practiced): pursue user-centric design approaches, experiment with users to take their concerns into account, and work with multidisciplinary, diverse teams (to avoid the prevalence of the male, white, geek point of view). A definite challenge is the recommendation to develop algorithmic transparency, as at least one project (OPAL[4]) is doing. Transparent algorithms do not seem likely to become a dominant technology trend anytime soon, even though they could certainly play a positive role in the big data environment. Quite challenging, but certainly effective, is another suggestion: develop discrimination-aware decision-making processes, where the trade-off between the fairness and the utility or performance of specific data tools is explicitly assessed through a series of steps (the sketch below gives a flavor of what such an assessment can look like).

However, none of these ideas will work without a clear recognition of the importance of the ethical dimension and the definition of ethical principles for research and development, such as those authored by Matthew Zook[5] for big data or the Menlo Report[6] on ICT ethical principles. Perhaps we'll know this is happening when companies start to appoint a "Chief Ethics Officer" to oversee the development of these potential weapons of math destruction.
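For readers who want a concrete feel for that fairness/utility trade-off, here is a minimal sketch; it is my own illustration, not Oliver's or e-SIDES' methodology. It trains a toy classifier on synthetic data with an invented protected attribute, then sweeps the decision threshold and reports utility (accuracy) alongside one deliberately simple fairness metric (the demographic parity gap). All data, names and numbers are assumptions for illustration only.

```python
# Sketch of one step in a discrimination-aware assessment: measure fairness
# and utility side by side across decision thresholds, so the trade-off is
# explicit rather than implicit. Synthetic data and metrics are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                        # hypothetical protected attribute (0/1)
x = rng.normal(size=(n, 3)) + group[:, None] * 0.5   # features shifted by group membership
y = (x.sum(axis=1) + rng.normal(scale=1.5, size=n) > 0).astype(int)

features = np.column_stack([x, group])
model = LogisticRegression().fit(features, y)
scores = model.predict_proba(features)[:, 1]

for threshold in (0.3, 0.5, 0.7):
    pred = (scores >= threshold).astype(int)
    accuracy = (pred == y).mean()                    # utility metric
    # Demographic parity gap: difference in positive-decision rates by group.
    dp_gap = abs(pred[group == 1].mean() - pred[group == 0].mean())
    print(f"threshold={threshold:.1f}  accuracy={accuracy:.3f}  dp_gap={dp_gap:.3f}")
```

In a real assessment the fairness metric, the protected attribute and the acceptable gap would all be choices to justify explicitly; the point of the exercise is simply that fairness is measured next to performance at each step, instead of being traded away unnoticed.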


If you are interested, stay tuned: e-SIDES is also working to develop guidelines for responsible research and innovation in big data and will share them.




[1] http://www.european-big-data-value-forum.eu/progra...

[2] http://www.nuriaoliver.com/

[3] http://datapopalliance.org/

[4] http://www.opalproject.org/about-us/

[5] http://journals.plos.org/ploscompbiol/article?id=1...

[6] http://ethics.iit.edu/ecodes/node/6132
