News

Participatory research to improve artificial intelligence based public sector services and empower citizens

Multidisciplinary research team invites citizens, civil servants and software developers to identify risks and potential of algorithmic services – and helps providers address upcoming EU regulations
CRAI-CRIS research group
The Civic Agency in AI project helps the public sector improve its current AI services. Pictured: Karolina Drobotowicz (left), Nitin Sawhney, Bruna de Castro e Silva, and Kaisla Kajava. Image: Matti Ahlgren / Aalto University

Public services are increasingly relying on algorithms – they recommend books to library users, support welfare and immigration guidance, and even offer advice to mothers-to-be through chatbots. While such services can make our lives easier, algorithmic projects also entail risks such as bias, discrimination, and the misuse of personal information.

‘We have to guard ourselves from algorithmic systems that make inferences about us that may not be fair or accurate,’ Professor of Practice Nitin Sawhney from Aalto University and the Finnish Center for Artificial Intelligence FCAI says.

Sawhney leads a new four-year research project called Civic Agency in AI (CAAI) that seeks to help the public sector, particularly in the cities of Helsinki and Espoo, ensure that its current and upcoming Artificial Intelligence (AI) tools and practices are transparent, accountable, and equitable.

‘Even with something as simple as book-borrowing history, you could imagine how an actor you didn’t trust could make unintended interpretations about you. If you are doing research on terrorism, they might think that you are a terrorist yourself,’ Sawhney says, illustrating the potential pitfalls.

The project will highlight best practices and develop generalizable recommendations for good governance. It involves multiple case studies that evaluate existing or planned AI systems in the Finnish public sector; these may include new migrant digital counseling services in the City of Espoo and chatbots serving the customers of the Kela Social Welfare Services.

‘We want to empower all stakeholders to make better decisions and have them understand the implications of these technologies,’ Sawhney says.

To make sure all relevant perspectives are heard, the researchers conduct interviews and organize workshops on algorithmic literacy and the participatory design of new digital services, bringing together stakeholders ranging from citizen activists to software developers and administrators. The project involves a cross-disciplinary team with expertise in computer science, human-computer interaction, law, sociology, and linguistics.

Proposed EU regulation will change the game for many providers

As governmental aspirations for regulating AI are gaining traction, finding ways to support transparency in algorithmic services is now more timely than ever. In the United States, several well-funded startups are already working towards similar goals.

‘Stakes are really high right now – companies are feeling the pressure from regulators and investing a lot of money. That’s why these new AI startups are emerging to help them deal with regulations and come up with tools and software to handle that,’ Sawhney says, noting that it makes sense for transparency and accountability work not to be left to commercial actors alone.

‘This is not just an academic exercise.’

In the European Union, plans for regulation are already well underway in the form of a newly proposed AI Act. The legislation will set requirements for the development and deployment of AI applications in private and public sector service production.

‘The proposed AI Act aims to minimize the risks and maximize the benefits of AI, ensuring both safety and fundamental rights protection. Public sector AI must benefit everyone equally, but research shows that algorithmic decision-making often harms the most disadvantaged disproportionately,’ Researcher Collaborator Bruna de Castro e Silva from Tampere University explains the motivation behind the proposed legislation.

A central goal of the Civic Agency in AI project is to understand the implications of the upcoming legislation on public sector AI-based services.

‘I see the regulation as an opportunity for innovation: if we in Finland can devise participatory approaches to create systems that are explainable and accountable, and show that our software tools and practices are trustworthy, we have a competitive edge against the rest of the world,’ Sawhney says.

Ultimately, the researchers hope to present their work to the Finnish government and the EU to influence policy outcomes.

Engaging algorithmic literacy and digital citizenship

The origins of the research project are in a course called Critical AI and Data Justice in Society, taught by Sawhney at Aalto, in which students critically examined the ethical implications of AI and conducted case studies of algorithmic services by public providers.

These case studies were extended in an AI policy research clinic hosted by the City of Helsinki and the Berkman Klein Center for Internet & Society at Harvard University in summer 2021, to devise participatory models for governance and oversight of AI for use in Helsinki’s vocational education and training programs.

Next, the researchers will develop a corpus of textual data surrounding discourses of ethical AI regulations and services through policy documents, media coverage, and interviews with experts and stakeholders. This will help the team better understand prevailing perspectives and contradictions around AI and the implications for new algorithmic services in the public sector. 

‘It is important to examine how different actors talk about these algorithms, since this influences our understanding of AI and its possibilities. As an ever-increasing number of services are realized with algorithms, the relevance of algorithmic literacy grows,’ says doctoral researcher Kaisla Kajava, who is leading linguistic analysis for the project.

The team will then conduct participatory workshops on digital citizenship and algorithmic literacy for the design and democratization of public services, led by doctoral researcher Karolina Drobotowicz.

The project has received funding from the Kone Foundation’s program on Language, Power and Democracy. The CAAI project started at the beginning of the year.


Finnish Center for Artificial Intelligence

The Finnish Center for Artificial Intelligence FCAI is a research hub initiated by Aalto University, the University of Helsinki, and the Technical Research Centre of Finland VTT. The goal of FCAI is to develop new types of artificial intelligence that can work with humans in complex environments, and help modernize Finnish industry. FCAI is one of the national flagships of the Academy of Finland.

