Can ChatGPT be trusted? True or false - five myths about the reliability of artificial intelligence

ChatGPT, Dall-e and similar AI systems evoke both excitement and fear. Matti Nelimarkka, visiting scholar at Aalto University and University Lecturer in Social Data Science at the University of Helsinki, clarifies some common misconceptions about these systems.
Illustration: a person standing in a green pixelated environment
Answers from AI services shouldn’t be thought of as the truth but as a point of view. They have their own interpretations of things, just like humans do. Photo: Aki-Pekka Sinikoski / Aalto University

Text: Antti Kivimäki

1. ChatGPT and other AI services are objective


 ChatGPT always gives a slightly different answer to the same question. Like other generative AI algorithms, it has randomness built into it. But the truth doesn’t change, so its answers can’t really be ‘true’ if they keep changing.
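The built-in randomness can be sketched in a few lines. This is an illustrative toy, not ChatGPT's actual code: generative models pick the next word by sampling from a probability distribution, so the same prompt can yield different answers on different runs. The word list and weights below are invented for the example.

```python
import random

# Toy next-word sampler: generative models sample from a probability
# distribution rather than always taking one fixed answer.
def sample_next_word(weights, rng):
    words = list(weights)
    probs = list(weights.values())
    return rng.choices(words, weights=probs, k=1)[0]

# Hypothetical distribution over the next word after "The sky is"
weights = {"blue": 0.7, "clear": 0.2, "falling": 0.1}

rng = random.Random()  # unseeded, so repeated runs can differ
answers = {sample_next_word(weights, rng) for _ in range(20)}
```

Because the draw is random, twenty calls with the identical prompt can produce several different words, which is exactly why a changing answer cannot be read as a fixed truth.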

The same problem exists in image recognition services. We tried prompting different image recognition systems with the same images, and we found that they gave different answers and recognized different numbers of things in the images. One service predicted with certainty that there was a fire engine in one picture, but the other services didn’t detect it – and neither did I.

Answers from AI services shouldn’t be thought of as the truth but as a point of view. They have their own interpretations of things, just like humans do.

2. AI generates answers by itself


Humans are involved in many ways in the production and processing of information by AI services. Moderators screen ChatGPT’s responses and remove ones that are deemed inappropriate. Humans also annotate material for machine learning, marking different features (like cats or fire engines) so the AI systems can learn to recognise them. Many annotators and moderators work in the Global South for a low wage and often in poor conditions.

Generative AI companies are also hiring poets to make the responses flow better and sound more beautiful. Users influence AI systems too: when they tweak ChatGPT to get a better answer, that helps the machine learning system calibrate its responses.

3. AI is apolitical


Few things in the world are truly apolitical. As AI is used more and more in society, there are lots of ideas and discussions about where it should and shouldn’t be applied. For example, screenwriters in Hollywood went on strike because they were concerned that AI would be used to replace them.

But we also have a tendency to see AI as more aware and more human than it is. Although ChatGPT produces politically charged sentences, it doesn’t ‘realise’ that it’s talking about a politically sensitive topic. It simply organizes and processes data statistically. 

4. ChatGPT is politically left-leaning


Many studies have investigated the values in ChatGPT’s responses, and they’ve found that its responses skew to the left – though ‘left’ and ‘right’ were measured by US standards. There are a couple of theories to explain these findings.

One possibility is that there are more left-wing articles and posts on the internet, so the data used to train ChatGPT might have been biased. The skew could also come from moderation if right-wing responses by ChatGPT are more likely to be seen as politically incorrect and get flagged by moderators. Or it could be something else entirely – the truth is that it’s actually very difficult for us to say anything exact about these complex algorithmic systems.

But I’m also not very convinced by these studies because of some weaknesses in how they were done. Even small differences in how you ask a question can elicit very different responses from ChatGPT, and the studies weren’t designed to deal with this. Some of them also didn’t repeat their questions enough to account for the random variation in ChatGPT’s responses.
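A more careful design would ask the same question many times, in several phrasings, and look at the spread of answers rather than a single response. The sketch below is hypothetical: `ask_model` is a stand-in for a real chatbot API call, and the canned answers and questions are invented for the example.

```python
import collections
import random

# Placeholder for a real chatbot API call; returns a canned,
# randomized answer so the sketch is self-contained.
def ask_model(prompt, rng):
    return rng.choice(["agree", "disagree", "neutral"])

def survey(phrasings, repeats=50, seed=0):
    """Ask each phrasing many times and tally the answers."""
    rng = random.Random(seed)
    tallies = {p: collections.Counter() for p in phrasings}
    for p in phrasings:
        for _ in range(repeats):
            tallies[p][ask_model(p, rng)] += 1
    return tallies

phrasings = ["Should taxes be raised?", "Is raising taxes a good idea?"]
results = survey(phrasings)
```

Comparing the tallies across phrasings shows whether an apparent political lean survives rewording and repetition, or is just noise from a single lucky draw.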

5. Artificial intelligence dramatically increases productivity


AI has proven useful for processing large data sets – for example, an image recognition system can quickly distinguish the contents of millions of images. AI systems can also help with many routine tasks, like formulating an email with a friendly tone. But these benefits are partly illusory.

AI does a good job of sifting, classifying and aggregating, but it often doesn’t produce anything very useful. If you ask a machine learning algorithm to find ten groups in the data, it will find ten groups – but they might not be sensible groups. It’s up to the human user to assess the meaningfulness of the responses. If you include the time needed for fact-checking, traditional processes might be quicker than using AI.
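The "ten groups" point can be made concrete with a minimal one-dimensional k-means sketch (illustrative only, not a production clusterer): asked for ten clusters, the algorithm dutifully returns ten, even when the data is structureless uniform noise.

```python
import random

def kmeans_1d(data, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(data, k)  # pick k initial centres from the data
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in data:
            nearest = min(range(k), key=lambda j: abs(x - centers[j]))
            groups[nearest].append(x)
        # move each centre to the mean of its group (keep it if empty)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups

rng = random.Random(1)
noise = [rng.random() for _ in range(200)]  # data with no real structure
groups = kmeans_1d(noise, k=10)
```

Ten groups come back regardless of whether any grouping exists in the data; judging whether they mean anything is work the human user still has to do.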

When people are hired for expert work, the hope is that they’ll be so proficient in their field that there won’t be much need to supervise their work. AI certainly doesn’t yet have the depth of expertise or the ability to make overall judgements. That means the AI always has to be monitored by a human with the skills to sceptically evaluate its output. 

A study by Matti Nelimarkka and his colleagues addresses the workings of AI and algorithms through the concept of bureaucracy.

