The Seven Guidelines of an Ethical AI System

AI ethics is a set of values, ideas, and strategies designed to guide the creation and application of artificial intelligence technologies for human safety, security, and progress. According to His Holiness Sri Amit Ray, “The basic guidelines of an ethical AI system refer to those values which can be implemented at the core of every AI algorithm to bring out the safety, security, and fundamental goodness of artificial intelligence for all beings and human society at large.”

The guidelines urge AI developers and users to make every line of code count. The ethics of artificial intelligence is a field in its own right, and one that requires collaboration among many stakeholders to eliminate bias that may arise at any stage.

According to the Australian Government’s Artificial Intelligence Ethics Framework, the AI Ethics Principles are designed to ensure AI is safe, secure and reliable for all Australians.

Bias in AI algorithms is a reality, and it can lead to damaging outcomes, including racial discrimination, misinterpretation of context, and gender bias, among other problems. It can negatively affect many sectors, including banking, medical research, military applications, education, consumer technology, and insurance.
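
One simple way to make such bias concrete is to compare how often a model returns a favourable decision for different groups. The sketch below is only an illustration: the decisions, the two groups, and the four-fifths threshold are hypothetical, and real bias audits rely on richer metrics and data.

```python
# A minimal sketch of one common bias check (demographic parity / disparate
# impact): compare the rate of positive model decisions across two groups.
# The data below is hypothetical and only illustrates the arithmetic.

def positive_rate(decisions):
    """Share of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

# Hypothetical model decisions (1 = approved, 0 = rejected) for two groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # e.g. applicants from group A
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # e.g. applicants from group B

rate_a = positive_rate(group_a)
rate_b = positive_rate(group_b)

# Disparate impact ratio: values far below 1.0 suggest group B is
# approved much less often than group A by this model.
ratio = rate_b / rate_a
print(f"approval rate A: {rate_a:.2f}, B: {rate_b:.2f}, ratio: {ratio:.2f}")

# The 0.8 threshold is the widely cited "four-fifths rule", used here as a
# rough screening heuristic rather than a legal or definitive standard.
if ratio < 0.8:
    print("Potential adverse impact: review the model and its training data.")
```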

Following the publication of draft ethics guidelines in December 2018, the European Union released seven criteria for trustworthy AI [1]. These recommendations draw on roughly 500 comments received after the draft was published, and the final list of seven requirements for responsible AI was compiled with input from a group of 52 experts.

The seven guidelines are as follows:

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  • Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  • Transparency: The traceability of AI systems should be ensured (see the sketch after this list).
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.
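
As a loose illustration of the transparency and accountability items, the sketch below records each automated decision together with its model version, input reference, and human reviewer, so that outcomes can later be traced and reviewed. The log format, field names, and usage are hypothetical assumptions for illustration, not part of the EU guidelines.

```python
# A minimal sketch of a decision audit record, assuming a hypothetical
# in-house "decision log": each automated decision is appended with the
# model version, input reference, and output so it can later be traced.
import json
import datetime

def log_decision(log_path, model_version, input_id, decision, reviewer=None):
    """Append one traceable decision record as a JSON line."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,  # which model produced the decision
        "input_id": input_id,            # reference to the input, not the raw data
        "decision": decision,            # the system's output
        "human_reviewer": reviewer,      # who (if anyone) oversaw the decision
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage: record a loan decision made by model version 1.2
# and note that it was reviewed by a named human overseer.
log_decision("decisions.jsonl", "credit-model-1.2", "application-0042",
             decision="approved", reviewer="j.smith")
```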

These guidelines are not legally binding, but they may inform any future EU legislation on the subject.
