The European Commission (EC) has announced the publication of a pilot of new guidelines aimed at ensuring the ethical use of artificial intelligence (AI).
Organisations across Europe are adopting AI solutions in many areas of their operations, and as a result there is a growing need to build trust in the technology and to establish safeguards around its use.
EC vice-president for the digital single market Andrus Ansip explained: "The ethical dimension of AI is not a luxury feature or an add-on. Ethical AI … can become a competitive advantage for Europe: being a leader of human-centric AI that people can trust."
The new guidelines set out seven specific principles for the ethical use of AI:
- AI systems should support human agency and not decrease, limit or misguide human autonomy
- Trustworthy AI requires algorithms to be secure, reliable and robust
- Citizens should have full control over their own data
- Full traceability for all AI systems must be ensured
- AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility
- AI systems should enhance positive social change, sustainability and environmental responsibility
- Full accountability for all AI systems and their outcomes must be ensured
With the guidelines now drafted, the EC is set to embark on a pilot phase, working alongside industry and academia over the coming months to ensure the new guidelines can be implemented effectively in practice.
Responding to the announcement, Martin Jetter, senior vice president and chairman at IBM Europe, said: "The EU's new Ethics Guidelines for Trustworthy AI set a global standard for efforts to advance AI that is ethical and responsible."
He concluded that these new guidelines on the ethical use of AI will provide a strong example for other countries and regions to follow.