Today the European Commission has presented the next steps for building trust in Artificial Intelligence (AI) by taking forward the work of the High-Level Expert Group.
AI can benefit a wide range of sectors, from healthcare, energy consumption and car safety, to farming, climate change and financial risk management. However, it also brings new challenges for the future of work, and raises legal and ethical questions.
Following its European strategy on AI published in April 2018, the Commission set up the High-Level Expert Group on AI in June 2018, consisting of 52 independent experts from academia, industry and civil society. The Group published a first draft of the ethics guidelines in December 2018, followed by a stakeholder consultation and meetings with representatives from Member States to gather feedback.
Today the Commission also launched a pilot phase to ensure that the ethics guidelines for AI development and use can be implemented in practice. To reach this objective, the Commission is inviting industry, research institutes and public authorities to test the detailed assessment list drafted by the High-Level Expert Group, which complements the guidelines.
The Commission is taking a three-step approach: setting out the key requirements for trustworthy AI, launching a large-scale pilot phase for feedback from stakeholders, and working on international consensus-building for human-centric AI.
Trustworthy AI should respect all applicable laws and regulations, as well as a series of key requirements; specific assessment lists aim to help verify the application of each requirement:
- Human agency and oversight
- Robustness and safety
- Privacy and data governance
- Transparency
- Diversity, non-discrimination and fairness
- Societal and environmental well-being