Artificial Intelligence and Moral Construction:
In 2016, Amazon, Microsoft, Google, IBM, and Facebook jointly established the non-profit Partnership on AI; Apple joined the organization in January 2017.
The consensus among these technology giants is that, rather than relying on the outside world to impose restrictions on AI development, it is better to set up an ethics committee that supervises independently. An AI ethics committee provides both supervision and protection: it serves as a risk-assessment standard and can help AI companies make ethical decisions that benefit society.
In “Moral Machines: Teaching Robots Right from Wrong,” the American cognitive-science philosopher Colin Allen and the technology ethicist Wendell Wallach emphasize two dimensions: autonomy and sensitivity to morally relevant facts. They offer a framework for understanding the increasingly complex trajectory of AMAs (artificial moral agents), which must “evaluate the ethical acceptability of the options they face.” They argue that a combination of “top-down” and “bottom-up” approaches is the best model for machine moral development: bottom-up learning from data and experience is combined with the top-down pre-programming of explicit rules.
On December 12, 2017, the Institute of Electrical and Electronics Engineers (IEEE) issued the second version of “Ethically Aligned Design,” which proposes that the ethical design, development, and application of artificial intelligence technology should follow these general principles: human rights, ensuring that AI systems do not infringe internationally recognized human rights; well-being, prioritizing indicators of human well-being in their design and use; accountability, ensuring that designers and operators are responsible and accountable; and transparency, ensuring that AI systems operate in a transparent way.
On April 9th, the European Union issued its AI ethics guidelines, setting out seven principles that companies and government agencies should follow when developing AI and calling for “trustworthy artificial intelligence.” Trustworthy AI represents a moral reconstruction for the artificial-intelligence era and points the development of AI in an ethical direction.
The draft EU AI code of ethics states that trustworthy AI has two components: first, it should respect fundamental rights, applicable regulations, and core principles and values, ensuring a “moral purpose”; second, it should be technically robust and reliable, because even with good intentions, a lack of technical mastery can cause unintentional harm.
The EU's draft code of ethics proposes a framework for trustworthy AI; its seven key principles are as follows:
- Human agency and oversight: Artificial intelligence should not trample on human autonomy. People should not be manipulated or coerced by an AI system, and humans should be able to intervene in or supervise every decision the software makes.
- Technical robustness and safety: Artificial intelligence should be secure and accurate. It should not be easily compromised by external attacks (such as adversarial examples), and it should be reasonably reliable.
- Privacy and data governance: Personal data collected by artificial intelligence systems should be kept secure and private. It should not be accessible to just anyone, and it should not be easy to steal.
- Transparency: The data and algorithms used to create an artificial intelligence system should be accessible, and the decisions made by the software should be “understood and traced by human beings.” In other words, operators should be able to explain the decisions their AI systems make.
- Diversity, non-discrimination, and fairness: Services provided by artificial intelligence should be available to all, regardless of age, gender, ethnicity, or other characteristics, and systems should not be biased along these lines.
- Environmental and societal well-being: Artificial intelligence systems should be sustainable (i.e., they should be ecologically responsible) and should “promote positive social change.”
- Accountability: Artificial intelligence systems should be auditable and covered by existing corporate whistleblower protections, and a system's negative impacts should be acknowledged and reported in advance.
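The “adversarial examples” mentioned under the robustness principle can be made concrete with a toy sketch. The classifier, weights, and inputs below are hypothetical illustrations (not from the EU guidelines): a tiny, targeted perturbation of the input flips a linear classifier's decision, which is exactly the kind of fragility the principle warns against.

```python
def score(w, x, b=0.0):
    """Linear decision score: positive -> class A, negative -> class B."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def sign(v):
    return 1.0 if v >= 0 else -1.0

def adversarial(w, x, eps):
    """FGSM-style perturbation: nudge each feature against the weight's sign,
    moving the score toward the opposite class with a small step eps."""
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w = [2.0, -3.0]                       # fixed (hypothetical) classifier weights
x = [1.0, 0.5]                        # original input
x_adv = adversarial(w, x, eps=0.3)    # perturbed by at most 0.3 per feature

print(score(w, x))      # positive -> class A
print(score(w, x_adv))  # negative -> decision flipped by a small perturbation
```

A robust system in the guidelines' sense would be evaluated against exactly this failure mode, for example by bounding how much the decision can change under small input perturbations.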