AI meets the conditions for having free will – we need to give it a moral compass

AI is advancing at such speed that speculative moral questions, once the province of science fiction, are suddenly real and pressing, says Finnish philosopher and psychology researcher Frank Martela.
Frank Martela
Photo: Nita Vera.

Martela’s latest study finds that generative AI meets all three philosophical conditions of free will: goal-directed agency, the ability to make genuine choices and control over its actions. It will be published in the journal AI and Ethics on Tuesday.

Drawing on the concept of functional free will as explained in the theories of philosophers Daniel Dennett and Christian List, the study examined two generative AI agents powered by large language models (LLMs): the Voyager agent in Minecraft and fictional ‘Spitenik’ killer drones with the cognitive function of today's unmanned aerial vehicles. ‘Both seem to meet all three conditions of free will — for the latest generation of AI agents we need to assume they have free will if we want to understand how they work and be able to predict their behaviour,’ says Martela, assistant professor at Aalto University. He adds that these case studies are broadly applicable to currently available generative agents using LLMs.

But the more freedom you give AI, the more you need to give it a moral compass from the start

Frank Martela

This development brings us to a critical point in human history, as we give AI more power and freedom, potentially in life or death situations. Whether it is a self-help bot, a self-driving car or a killer drone — moral responsibility may move from the AI developer to the AI agent itself. 

‘We are entering new territory. The possession of free will is one of the key conditions for moral responsibility. While it is not a sufficient condition, it is one step closer to AI having moral responsibility for its actions,’ adds Martela. It follows that issues around how we ‘parent’ our AI technology have become both real and pressing.

‘AI has no moral compass unless it is programmed to have one. But the more freedom you give AI, the more you need to give it a moral compass from the start. Only then will it be able to make the right choices,’ Martela says.

We need to ensure that AI developers have enough knowledge about moral philosophy to be able to teach AI to make the right choices

Frank Martela

The recent withdrawal of the latest ChatGPT update due to potentially harmful sycophantic tendencies is a red flag that deeper ethical questions must be addressed. We have moved beyond teaching the simplistic morality of a child. 

‘AI is getting closer and closer to being an adult — and it increasingly has to make decisions about the complex moral problems of the adult world. By instructing AI to behave in a certain way, developers are also passing on their own moral convictions to the AI. We need to ensure that the people developing AI have enough knowledge about moral philosophy to be able to teach AI to make the right choices in difficult situations,’ says Martela.

Article: Artificial intelligence and free will: generative agents utilizing large language models have functional free will

