Artificial Intelligence: Ethics Lessons for AI Systems

This is a translation of an article that was published in the Austrian newspaper Kurier on November 25, 2023.


Robots and similar systems are now making decisions of their own. But which decisions are these?

“As an AI model, I have no personal opinions, convictions or emotions,” writes a well-known AI-based text program in response to the question of how it deals with ethics. The program sees its main function as providing information and answering queries on the basis of its underlying training data. Given that AI-based systems are already making critical decisions, this view may be a bit too simplistic.

Take a specific example: a self-driving car could decide not to stop when it encounters an object on the road that does not appear to be a living being. How should we deal with this? And what does it mean when an AI-supported text program gives a user instructions on how to build a bomb? Should it be allowed to do that? If intelligent systems increasingly support humans, and in some cases even replace them, we need to address these questions.

Extra lessons

Agata Ciabattoni intends to close this gap with the WWTF-funded project TAIGER (Training and guiding AI aGents with Ethical Rules). “TAIGER aims to lay the foundations for AI agents to work in a legally sound, ethically sensitive and socially acceptable way,” explains Ciabattoni. For so-called autonomous agents, such as self-driving cars or care robots that are supposed to act independently and without human supervision, “this undertaking is particularly crucial, but also particularly difficult,” says Ciabattoni.

This is to become possible by integrating deontic logic, a special form of logic, with so-called reinforcement learning. “This is a type of machine learning in which computers are trained to make decisions based on the principle of trial and error. They receive feedback in the form of rewards or punishments, which imitates how humans learn from experience,” explains Ciabattoni. This allows them to master complex as well as novel situations. However, the method alone does not ensure that autonomous agents will always act ethically. Ciabattoni recalls a recent incident in which a chess robot broke the finger of its human opponent.
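Schematically, that trial-and-error loop fits in a few lines of code. The sketch below is a minimal tabular Q-learning example on an invented toy “corridor” world, meant only to illustrate the reward-and-punishment feedback Ciabattoni describes; the states, rewards and parameters are all made up and have nothing to do with TAIGER’s actual setup:

```python
import random

# Minimal tabular Q-learning on a toy corridor: the agent starts at
# position 0 and must learn, by trial and error, to reach position 4.
# All values here are invented for illustration.
N_STATES = 5          # corridor positions 0..4; the goal is state 4
ACTIONS = [-1, +1]    # step left or step right
EPISODES = 500
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q[s][a] estimates the long-term value of taking action a in state s.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for _ in range(EPISODES):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise take the best-known action.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = max(range(2), key=lambda i: Q[state][i])
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        # Feedback: a reward for reaching the goal, a small cost per step.
        reward = 1.0 if next_state == N_STATES - 1 else -0.01
        # Learn from the experience (the "trial and error" update).
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

# After training, the greedy policy should point right (index 1) everywhere.
print([max(range(2), key=lambda i: Q[s][i]) for s in range(N_STATES)])
```

The punchline of the toy example is exactly the problem Ciabattoni points to: the agent optimizes whatever reward it is given, and nothing in this loop says anything about ethics.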

In contrast to classical logic, deontic logic does not describe the current state of affairs, but a target state: what should or should not be done? These are questions you would normally find in ethics or law. To make statements about such decisions, mathematical and computer-aided tools are needed, as this is the only way machines can be “taught”.
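To make the contrast concrete, deontic logic adds operators for obligation, permission and prohibition on top of plain statements of fact. A textbook-style illustration in standard notation (generic definitions, not formulas from the project):

```latex
% Standard deontic operators (textbook notation, not TAIGER's formalism).
% O(p): it is obligatory that p;  P(p): p is permitted;  F(p): p is forbidden.
\begin{align*}
  F(\varphi) &\equiv O(\lnot\varphi)
    && \text{forbidden = obligatory to refrain} \\
  P(\varphi) &\equiv \lnot O(\lnot\varphi)
    && \text{permitted = not forbidden} \\
  \mathit{obstacle} &\rightarrow O(\mathit{stop})
    && \text{a norm about what \emph{should} hold, not what \emph{does} hold}
\end{align*}
```

The last line is the crucial difference: classical logic can only state that the car stops or does not stop, while the deontic formula expresses that it ought to stop.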

[Agata Ciabattoni is co-chair of the Vienna Center for Logic and Algorithms; picture (c) Luiza Puiu]

Logical ethics

AI has become synonymous with machine learning, which uses real-world data to help agents learn new behaviors. But there is another approach to generating intelligent behavior, one that focuses on processing and manipulating symbols rather than data. Both approaches have their strengths and weaknesses. Ciabattoni and her project partners Ezio Bartocci and Thomas Eiter are working on integrating the two. “By bringing together reinforcement learning and deontic logic, we can teach autonomous agents how to act correctly.

“This allows us to combine the best of both worlds,” says Ciabattoni. However, it remains an open question which norms and values should form the basis for programming such autonomous agents. “The behavior of the agent must indeed be compatible with a range of potentially contradictory and ambiguous norms from the fields of law, ethics, society, etc.,” says Ciabattoni. “The TAIGER project aims to develop frameworks for putting these requirements into practice. But we refrain from making statements about which norms AI agents should follow. This delicate question should be tackled by ethicists, lawyers, philosophers and practitioners.”
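One common way such an integration can look, sketched here purely for illustration: a symbolic “shield” derived from the norms vetoes forbidden actions outright, while obligations are enforced more softly through the reward signal. The states, actions and norms below are invented and do not come from TAIGER:

```python
# Norm-guided action selection: a sketch of the general idea of
# combining symbolic rules with a learning agent, not TAIGER's
# actual framework. All names and rules here are invented.

ACTIONS = ["accelerate", "brake", "swerve"]

# Norms in a deontic spirit: F = forbidden, O = obligatory.
FORBIDDEN = {("near_pedestrian", "accelerate")}   # F(accelerate) near a pedestrian
OBLIGATORY = {"near_pedestrian": "brake"}         # O(brake) near a pedestrian

def allowed_actions(state):
    """The symbolic layer vetoes any action the norms forbid outright."""
    return [a for a in ACTIONS if (state, a) not in FORBIDDEN]

def shaped_reward(state, action, base_reward):
    """Obligations are enforced softly: ignoring them costs reward,
    so a learning agent is steered toward compliant behavior."""
    obliged = OBLIGATORY.get(state)
    if obliged is not None and action != obliged:
        return base_reward - 1.0
    return base_reward

# Demo: near a pedestrian, accelerating is impossible by construction,
# and braking is the only choice that avoids the shaping penalty.
state = "near_pedestrian"
for action in allowed_actions(state):
    print(action, shaped_reward(state, action, base_reward=0.0))
```

The split reflects a natural design choice: hard prohibitions are enforced by construction, while softer obligations remain trade-offs the agent learns to respect.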

https://www.wwtf.at/funding/programmes/ict/ICT22-023/
