Why do companies develop their codes of ethics? During the session “Codes of ethics as a form of self-regulation” at the St. Petersburg International Legal Forum 9 ¾, experts discussed ethical problems related to the application of artificial intelligence.
Discussion moderator Anna Serebryanikova, President of the Big Data Association, noted that the introduction of codes of ethics in modern economic sectors helps popularize principles of working with data and sets a standard for public dialogue.
Igor Drozdov, Chairman of the Board of the Skolkovo Foundation, noted that such codes are needed where government intervention is not required. “Customs existed as early as ancient times: people agreed on accepted modes of behavior, and the state did not interfere in these relationships. Why does this happen? Technology changes rapidly, and the law fails to keep up, as agreeing on rules of conduct takes a long time. Then a new technology emerges, and the law has to be adapted again. In private legal relations, self-regulation is easier to organize; the main prerequisite is the enthusiasm of business participants. If there is no need to involve the state and everyone realizes the importance of applying the rules, they work fine. It is a different matter, however, when the state is concerned about something. Various robots and drones appear, raising questions of security; simple self-regulation is not enough,” the expert noted.
Andrei Neznamov, Director of the AI Regulatory Center at PAO SberBank, also supported this position. “Self-regulation is a vital element in the creation of new technologies. Let me remind you: just over 10 years ago, we did not use smartphones. This shows that a single generation witnesses multiple shifts of technological paradigms. As a result, social relations change, and the question of how to regulate them arises. Passing laws can have unpredictable consequences. A parallel might be drawn here with the Red Flag Act of 1865, when cars were just appearing in Great Britain and a person had to walk in front of each vehicle carrying a red flag to warn others of its approach,” the expert commented.
Jeff Bullwinkel, Director of Corporate, External & Legal Affairs at Microsoft Europe, spoke about the experience of implementing and enforcing a code of ethics. “We are actively engaged in self-regulation in various contexts, including artificial intelligence (AI). Four years ago, we developed our code of ethics, a set of principles that seems universal to us. These include reliability, as many important areas of our lives depend on AI; privacy, meaning the protection of personal data and fighting crime; fairness, i.e. reducing the risk of bias and stereotypes; building sustainable systems; transparency, i.e. informing everyone about what we do and how; and finally, the accountability of AI, which should be controlled by people — that is probably the fundamental element of the code. Employees must understand these principles and be guided by them, so Microsoft communicates the overall strategy and the principles to developers at all levels. A special committee, which includes lawyers, engineers, and scientists, supervises all ethical issues concerning the use of AI,” said Jeff Bullwinkel.
Alexander Kraynov, Head of Computer Vision and Artificial Intelligence Technologies at Yandex, noted that codes of ethics are compiled first of all for users, not for companies. “Artificial intelligence differs from other technologies in its dramaturgy: it is the perfect opponent for humans. People have a legitimate fear — why not? What if it’s dangerous? It turns out that companies that make AI products have to allay these fears — we’re not making killer robots. That’s where codes of ethics come into play. Yes, you can ask lawyers to say, ‘We’re not going to make systems that harm people,’ and we weren’t going to anyway. But these principles are important in reassuring people, as the information environment is so alarming — how can you not be worried? <…> I haven’t seen a single principle in any code of ethics that I couldn’t agree with. Their function is always the same: to show that we, as a company, care about what people think and about what we do. The main thing is that there shouldn’t be a discrepancy between what we declare on the outside and what happens on the inside,” the expert explained.
Igor Drozdov noted that codes help create a positive perception of technology among those who consume it. Companies do not want to take actions or create products that will harm their consumers. It is different when the government feels responsible for new developments — what if something happens with a driverless car? The line between company developments and the public interest can be very thin and is constantly shifting, the experts summarized.