News

Does ChatGPT make us lazy?

FCAI’s Ethics Advisory Board hosted a discussion on using ChatGPT for research at Tiedekulma in August. An in-person audience of 55 was joined by 250 viewers online.
Panel discussion at Tiedekulma. L-R: Karoliina Snell, Hannu Toivonen, Perttu Hämäläinen, Arash Hajikhani. Photo: Katri Karhunen

Professor Hannu Toivonen from the University of Helsinki opened the discussion by emphasizing that a language model is a model of language, not a model of the world. ChatGPT can help researchers carry out useful tasks in their everyday work. It can be a tool for checking the language of a research article or for extracting key points from texts written by other researchers. However, it cannot act as an author for the researcher. Its use should always be mentioned in any publications.

Can anything be “original” in the future?

It is the responsibility of the researcher to check the accuracy of text produced by ChatGPT. Even if articles written by AI are not complete nonsense, they are often full of errors and appear to be considerably plagiarized. The essential question is thus where and how to use opaque software, like ChatGPT, to support research while maintaining the transparency and integrity of the research.

ChatGPT feels human

It talks to you politely and convincingly, and it is unflagging. People get tired of answering endless survey questions, but an AI never stops responding. Associate Professor Perttu Hämäläinen from Aalto University, who specializes in computer game design, presented a study in which GPT-3 was used to conduct research interviews. Artificial interview responses generated by a language model can be used to test a research design quickly and cheaply. The researchers harnessed GPT-3 to produce open-ended answers to questions about players' experiences with video games, and the people recruited to evaluate those answers often found the AI-generated responses even more convincing than real, human-written ones.
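To make the idea concrete, the sketch below shows one way such artificial respondents could be generated. It is a minimal illustration, assuming the OpenAI Python client, a placeholder model name, and a hypothetical persona and question list; it is not the actual setup of the study Hämäläinen described, which relied on GPT-3.

```python
# Minimal sketch: generating synthetic open-ended survey answers with an LLM.
# Assumptions: the OpenAI Python client (openai>=1.0) and an API key in the
# environment; the persona, questions, and model name are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = (
    "You are a survey participant: a 28-year-old who plays video games "
    "a few evenings per week. Answer in the first person, in 2-4 sentences."
)

questions = [
    "Describe a recent gaming session that you found especially memorable.",
    "What, if anything, frustrates you about the games you play?",
]

for q in questions:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model; the study used GPT-3
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": q},
        ],
        temperature=0.9,  # higher temperature for more varied, human-like answers
    )
    print(f"Q: {q}\nA: {response.choices[0].message.content}\n")
```

Varying the persona prompt across runs is one way to imitate a pool of different respondents when piloting a questionnaire.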

Is the value of human knowledge diminishing?

Certainly not, but Arash Hajikhani, research team leader at VTT, acknowledges the challenges: are we beginning to devalue our own cognitive abilities as we grow more dependent on language models? Hajikhani presented the productivity gains these models offer. He has welcomed ChatGPT's help, for example in learning a new language and in recommending new literature, but he also sees ethical and social threats in its use. If language models enable collaboration with technology, how do we avoid merely replicating AI-generated content and maintain a diversity of human perspectives? Hajikhani calls for a debate on how the development of language models can be guided by social values.

What about trust?

Moderator Karoliina Snell from the University of Helsinki led the panelists in a discussion on the social impact of language models and the importance of trust. Does the use of language models erode trust in science, in other people, and in society? If we accept that language models produce misleading content, how can we prevent this distorted information from spreading? And when language models inherit biases from their training material, how can we keep those biases from being perpetuated?

While the discussion was stimulating and enriching, with many questions from the audience, we may have only scratched the surface of what large language models have in store for science and society. Let the debate continue!

Watch the event livestream here. This article was originally posted on the FCAI website.

About the author

Jaana Leikas is an associate professor and principal scientist at VTT, where she studies the ethics and responsibility of innovations.

More from the researchers:

Finnish Center for Artificial Intelligence (FCAI)

The Finnish Center for Artificial Intelligence FCAI is a research hub initiated by Aalto University, the University of Helsinki, and the Technical Research Centre of Finland VTT. The goal of FCAI is to develop new types of artificial intelligence that can work with humans in complex environments, and help modernize Finnish industry. FCAI is one of the national flagships of the Academy of Finland.


Chat AIs can role-play humans in surveys and pilot studies

Synthetic data from large language models can mimic human responses in interviews and questionnaires. Research data from popular crowdsourcing platforms may now contain fake responses that cannot be reliably detected, raising the risk of poisoned data.
