The Ethical Artificial Intelligence Standards and Protocols (EAI) is an international group of experts working to improve AI safety across all spheres of life and system design, and to assist in developing responsible, smart, and livable AI communities. Our mission is to make the world a better place to live, for us and for future generations.
These guidelines are not legally binding, but they are intended to help the AI community develop better systems that protect humanity from the misuse of artificial intelligence.
We follow Isaac Asimov’s three basic principles of safe robotics. In designing our safe AI protocols and standards, we draw deeply on the 21 safe AI principles of compassionate artificial intelligence of His Holiness Sri Amit Ray.
“The basic guidelines of an ethical AI system refer to those values which can be implemented at the core of every AI algorithm to bring out the safety, security, and fundamental goodness of artificial intelligence for all beings and human society at large.” – His Holiness Sri Amit Ray
We provide help and support of all kinds in following ethical and safe AI principles, and we offer auditing and safe AI certification for systems that fulfil our standards and practices.
Our Ethical AI Institute promotes professional growth and career advancement for its members. We promote and support education; identify necessary research; create technical resources such as standards, protocols, and suggestions for responsible AI practices; create public awareness campaigns; and serve as a gateway for the exchange of professional information.
We provide safe AI protocols for all aspects of artificial intelligence applications. Presently, our main areas of expertise are machine learning, responsible medical research, the design of safe medical AI systems, ethical drug design, ethical banking AI, data privacy, health care robotics and AI systems, child care robotics, and safe AI systems for social robotics.
What exactly is AI ethics?
AI ethics is a set of moral ideas and strategies designed to guide the creation and appropriate application of artificial intelligence technologies. Organizations are beginning to adopt AI codes of ethics as AI becomes more integrated into products and services.
An AI code of ethics, also known as an AI value platform, is a policy statement that formally specifies the role of artificial intelligence in the advancement of humanity. An AI code of ethics’ objective is to guide stakeholders when presented with an ethical decision involving the use of artificial intelligence.
The science fiction writer Isaac Asimov recognized the possible perils of autonomous AI agents long before their emergence and devised The Three Laws of Robotics to mitigate such risks. The laws, in order of priority, are as follows (a minimal illustrative sketch of this precedence appears after the list):
- The first law prohibits a robot from deliberately harming a human, or from allowing a human to come to harm through inaction.
- The second law requires a robot to obey human orders, unless those orders would violate the first law.
- The third law requires a robot to protect its own existence, as long as doing so does not conflict with the first two laws.
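These three laws form a strict precedence ordering: each law applies only when it does not conflict with the laws above it. As a purely illustrative sketch (not part of any EAI protocol or standard), that ordering could be modeled in Python roughly as follows; the `Action` fields and the `evaluate_action` helper are hypothetical names chosen for the example.

```python
# Toy model of a priority-ordered rule set in the spirit of Asimov's Three Laws.
# Illustrative only: the fields and helper below are hypothetical, not an EAI standard.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool               # would the action injure a human?
    allows_harm_by_inaction: bool   # would refusing to act let a human come to harm?
    ordered_by_human: bool          # was the action ordered by a human?
    protects_robot: bool            # does the action preserve the robot itself?

def evaluate_action(action: Action) -> bool:
    """Return True if the action is permissible, checking the laws in strict priority order."""
    # First Law: never harm a human, and never permit harm through inaction.
    if action.harms_human or action.allows_harm_by_inaction:
        return False
    # Second Law: obey human orders (already guaranteed not to break the First Law).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation is acceptable only once the first two laws are satisfied.
    return action.protects_robot

# Example: a self-preserving action that would let a human come to harm is rejected.
print(evaluate_action(Action(False, True, False, True)))  # False
```

The only point of the sketch is the ordering of the checks: protection of humans is evaluated first, obedience second, and self-preservation last.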
The rapid growth of AI over the last ten years has prompted expert groups to develop safeguards against the risks AI poses to humans.