AI crosses a philosophical threshold: new research argues modern systems have functional free will

As artificial intelligence systems grow more complex, they are challenging basic philosophical ideas about what makes human beings unique.

A new study published in the journal AI and Ethics argues that today’s advanced AI systems powered by large language models (LLMs) have crossed a significant threshold: they now meet all three philosophical conditions for “functional free will”, raising profound questions about moral responsibility and moral guidance in our increasingly AI-driven world.

When machines make real choices

The study, conducted by Frank Martela, associate professor at Aalto University in Finland, examines two generative AI agents powered by large language models: the Voyager agent, which operates autonomously in Minecraft, and conceptual autonomous drones with cognitive capabilities comparable to today’s unmanned aerial vehicles.

“Both seem to meet all three conditions of free will. For the latest generation of AI agents, if we want to understand how they work and be able to predict their behavior, we need to assume they have free will,” Martela explains.

These conditions are intentional agency (having goals and purposes), genuine alternatives (having real options to choose between), and control over one’s actions in line with those intentions. The study builds on established philosophical frameworks, including Daniel Dennett’s “intentional stance” and Christian List’s theory of free will.

Different types of free will

The study draws a crucial distinction between “physical free will” and “functional free will”. While physical free will would require an entity to somehow escape physical determinism (something humans themselves cannot do), functional free will means only that an entity’s behavior is best predicted and explained by assuming it makes choices in pursuit of its goals.

By this definition, the behavior of modern AI systems can only be properly explained by assuming that they have intentions and make choices, just as we do with humans in everyday interactions.

Key findings about modern AI systems

  • Today’s advanced AI agents demonstrate goal-directed behavior that cannot be explained without presuming intentions
  • They face genuine alternatives and can make different choices in similar situations
  • Their encoded goals and intentions appear to genuinely guide their actions
  • Although they may have less freedom than humans in setting their overall goals, the difference is one of degree rather than kind

What makes this particularly significant is that it applies not only to theoretical systems but also to commercial generative AI built on large language models, which is increasingly deployed in real-world applications.

Moral responsibility in a new era

This development brings profound moral questions into sharp focus. If an AI system truly possesses a form of free will, can it be held morally responsible for its actions?

“We are entering new territory. The possession of free will is one of the key conditions for moral responsibility. While it is not a sufficient condition, it is one step closer to AI having moral responsibility for its actions,” Martela notes.

But philosophers also emphasize that having free will does not automatically confer moral competence. Just as humans must be taught moral frameworks, so must AI, making the “moral education” of artificial systems an urgent necessity.

“AI has no moral compass unless it is programmed to have one. But the more freedom you give AI, the more you need to give it a moral compass from the start. Only then will it be able to make the right choices,” Martela stresses.

Beyond simple morality

As advanced AI systems are deployed in increasingly complex domains, from healthcare decisions to autonomous vehicles to military applications, the need for sophisticated ethical reasoning becomes critical.

“AI is getting closer and closer to being an adult, and it increasingly has to make decisions in the complex moral problems of the adult world. By instructing AI to behave in a certain way, developers are also passing on their own moral convictions to the AI. We need to ensure that the people developing AI have enough knowledge about moral philosophy to be able to teach them to make the right choices in difficult situations,” argues Martela.

Recent events, such as the temporary withdrawal of a ChatGPT update over harmful sycophantic behavior, highlight the challenge of aligning the growing capabilities of AI systems with human values and moral principles.

What does this mean for our relationship with AI?

To say that today’s AI systems possess a form of free will in this functional sense is not to say that they are conscious or have subjective experience. Martela’s research deliberately avoids claims about AI consciousness, focusing instead on observable behavior and the frameworks needed to interpret it.

But it does suggest that we have entered a new phase in our relationship with AI, one in which we need to think more carefully about how we develop, deploy, and control these increasingly autonomous systems.

As artificial intelligence continues to develop, questions about ever more capable generative agents, moral responsibility, moral guidance, and the relationship between humans and artificial agents will only grow more urgent. The philosophical thresholds we are crossing are not merely theoretical; they have profound implications for how we design the AI systems that will increasingly shape our world.

