
Designing sustainable societies through trust

A broad understanding of digital technologies is of utmost importance in the information era. How do we make sure that humans stay in the loop in a socially sustainable way as societies become ever more digitalised?
‘There is a sense of determinism and solutionism in the smart city agenda,’ says Johanna Ylipulli (in the photo), an academy research fellow at the Department of Computer Science who studies digital inequality. Image: Matti Ahlgren/Aalto University

Our societies must strive for a future that is not only green and digital but also just. The United Nations offers a blueprint for this transition in its Sustainable Development Goals (SDGs), including SDG16, which aims to promote inclusive, transparent, and responsible governance. The emerging information era will continue to test how technology affects the societal relationships between individuals and organisations. These relationships are forged, and lost, through experiences of trust – the willingness to become vulnerable. Without trust, the promise of digital technologies will slip away. 

This challenge hasn’t gone unnoticed in the EU, which is encouraging a green and digital transition to support progress towards the SDGs. The EU has dubbed the 2020s Europe’s Digital Decade and set targets to accelerate digitalisation.

‘The digital decade is our last chance to follow through on the 2030 Sustainable Development Goals,’ said Margrethe Vestager, the European Union’s chief digital official, in March 2021. ‘We know that digital technology has the potential to facilitate inclusion and access to public services around the world.’

Yet the asymmetry between technological progress and the rights and skills of digitalised citizens threatens to undermine such goals. Autocratic governments are increasingly using artificial intelligence (AI) to build surveillance capabilities that feed into social scoring systems and enable the suppression of dissident voices. Democratic states are grappling with regulating big tech, controversial surveillance and the myriad problems of content moderation. The list of negative effects seems endless.

Fortunately, it’s not all doom and gloom. The discipline of computer science has come a long way since its inception and now takes a more human-centric approach to technological development. Recent work has critiqued how technologies are applied and has developed novel approaches for tackling the societal challenges left in the wake of technological progress.

‘Increasingly, computer science is no longer just about what happens inside the machine,’ explains Nitin Sawhney, professor of practice at Aalto’s Department of Computer Science. ‘It’s about our experiences of everyday devices we use and the digital services we interact with. The landscape has changed dramatically over the last 20 years in terms of how computer science affects society, especially with the increasing prevalence of big data and AI.’

Bringing humans into the loop has been a key driver in transforming the field.

‘Technology development has tended to operate in a top-down manner, with little involvement or agency for end users in the process,’ explains Sawhney. ‘We need to create better mechanisms of inclusion and participatory design to engage civil society.’

The Department of Computer Science at Aalto University has several research areas that engage with the societal impact of technologies, digital ethics, and human-centred design. ‘We have a rich history of socially conscious research, stretching back to the 80s, in topics such as software engineering, law and technology, and user-centred design,’ says Casper Lassenius, associate professor of computer science and the department’s vice head in charge of research. ‘The processes, actors, tools and ethics of digital transformations are widely studied.’

Lassenius’ own research has focused on digital transformations in organisations. Public sector organisations, in particular, have to integrate ethical and legal considerations – which often centre on public trust – into their digital transformation processes.

‘Automated decision-making by authorities is legally challenging on the one hand, and on the other hand, it is an issue of trust. If authorities make automated decisions with AI, they run the risk of becoming faceless,’ says Lassenius. ‘How would that affect public trust?’

A popular concept for our digital future is the smart city – an umbrella term for various ideas to improve urban spaces. In the photo: the Aalto University campus in Espoo.

AI: building block or stumbling block?

A popular concept for our digital future is the smart city – an umbrella term for various ideas to improve urban spaces. One idea is that smart cities should optimize urban flows through automated infrastructures and AI. For the optimization to work, a network of data-gathering sensors would need to be dispersed throughout urban environments – an approach that raises challenges of its own.

In Toronto, a development project called Quayside was to be a tech-savvy district with robots, heated sidewalks and automated recycling – a proof of concept for Sidewalk Labs, a subsidiary of Alphabet. After three years of planning, the company was forced to withdraw from the project by stiff opposition from local communities. The project failed to engage local citizens in its planning or to address their concerns about privacy, among other issues. Citizens criticized the project for its hubris and arrogance.

At its worst, the smart city concept exemplifies the problem of approaching socio-digital issues with a technology-first mindset. ‘There is a sense of determinism and solutionism in the smart city agenda,’ says Johanna Ylipulli, an academy research fellow at the Department of Computer Science who studies digital inequality. ‘It is often driven by top-down, business- and technology-led thinking, which may limit citizens – with inalienable rights – to the role of mere users or consumers.’

To help develop a social contract for the digital age, such services should heed the roles and voices of citizens in order to build and sustain their trust.

Several grim examples from Europe have already shown the dangers of using AI-powered decision-making in public services. The Dutch tax authority was recently engulfed in the toeslagenaffaire (allowance affair) scandal, which forced the government to resign in 2021. The authorities deployed a self-learning algorithm in 2013 that wrongly flagged over 20,000 families seeking assistance as fraud risks. The blow fell disproportionately on ethnic minorities, immigrants and low-paid workers because the algorithm used controversial indicators of risk, such as not being a Dutch citizen.

The aftermath has been tragic. The algorithm left tens of thousands of families in financial ruin, and thousands of children were taken into foster care.

‘What if we turned the tech-first thinking upside down and engaged people first?’ asks Ylipulli.

The drivers of socially sustainable software and AI

Beyond the smart city, recent AI demos have conjured sci-fi visions of sentient machines and stirred the public’s imagination around artificial creativity. As the debate on the ethics of AI rages on, some scholars see a silver lining.

‘I think the fact that there’s a debate on AI’s ethical implications is a very positive development,’ says Marjo Kauppinen, professor of practice at the Department of Computer Science. ‘But the ethical issues span much further than just AI.’

Although software is the backbone of everything digital, it hasn’t been a focal point of ethical discussions in the media or the research community. Kauppinen’s research group is looking to change the status quo. The group’s work on user and stakeholder needs in developing digital systems has expanded to include enablers of ethical software development.

‘Transparency, explainability and understandability are the key factors in building the trust of users and broader stakeholders,’ says Kauppinen. ‘The companies I’ve worked with understand that their services have an impact on the everyday lives of people. If they want to earn trust, they need to walk the talk.’

The research group is taking a cue from AI research, where ethics has come under intense scrutiny. ‘It’s not enough that we can tell how an algorithm works. There needs to be a broader consideration at play – how does a system affect individuals, particular demographics and society?’

The next step for Kauppinen’s team is to integrate societal impact assessments into the testing phase of software. ‘This is the stuff of computer scientists – we can’t delegate this responsibility to social scientists alone,’ she says.

These new research avenues are also inspiring new generations of software engineers and researchers through cutting-edge course content. ‘At our best, we’re able to translate the latest research results into our courses,’ says Kauppinen. ‘Some of our students use these results in their master’s theses and generate new knowledge.’

This is the stuff of computer scientists – we can’t delegate this responsibility to social scientists alone.

Marjo Kauppinen, professor of practice, Department of Computer Science

Doctoral researcher Karolina Drobotowicz, a former student of Kauppinen’s, now works in the Citizen Agency in AI (CAAI) project, led by Sawhney. Together with her research group, she studies how including citizens in the design of AI-enabled public sector services could make them more trustworthy.

Drobotowicz has interviewed citizens about their requirements for trustworthy public AI services. ‘The bulk of misconceptions about AI are due to low transparency of AI-enabled services, low algorithmic literacy and how AI is presented in popular media. They have a huge effect on trust in the technology,’ says Drobotowicz. Her findings also underline the need for digital literacy and better awareness of how and where AI is used.

‘People don’t always know that AI is used in public sector services, which is problematic for democracy, inclusivity and the agency of digitalised citizens,’ says Drobotowicz. ‘In addition to digital skills, citizens need to have knowledge and an awareness of automated decision-making in public services so they can hold authorities accountable.’

The dynamics around digitalised public services can also create or amplify inequalities. ‘If there’s only a certain demographic that has sufficient digital agency, then a digital tomorrow will also be an unequal one,’ says Drobotowicz.

The CAAI project team is keeping a sharp watch on the EU’s proposed AI Act and its implications for public sector digital services in Finland. Their findings could offer insights into the challenges and best practices of using AI in the public sector, as well as provide a critical understanding of the role of new regulations in the Finnish and EU context. If adopted, the AI Act could have a global effect on AI-based technologies and services.

‘I see the regulation as an opportunity for innovation in AI: if we in Finland can devise robust and participatory approaches to create systems that are explainable and accountable, demonstrating software tools and practices that are trustworthy, then we have a competitive edge against the rest of the world,’ says Sawhney, who leads the CAAI project.

‘An algorithm doesn’t have its own agenda. Its developers do. It is their interests and motives we should look at,’ says Risto Sarvas (in the photo), professor of practice at the Department of Computer Science. Image: Matti Ahlgren/Aalto University

Inclusive and trustworthy technologies: the way forward

The modern discipline of computer science is very different from its strictly-within-the-machine past. The new millennium has shown us that technology is neither neutral nor universally beneficial: it can have adverse or unequal impacts on society. Trust is a crucial component of a socially sustainable digital transformation of our institutions.

‘An algorithm doesn’t have its own agenda,’ says Risto Sarvas, professor of practice at the Department of Computer Science. ‘Its developers do. It is their interests and motives we should look at.’

There are many ways researchers can help the public and decision-makers better understand these technologies and promote a sustainable transition. Researchers can turn an abstract trend on its head and examine its societal implications, engage with new demographics or stakeholders in civil society with co-design principles, or make sense of emerging digital policies and regulations that will influence our common future.

Education is also a crucial source of long-term renewal. At the Department of Computer Science, the Information Networks program has educated engineers with technical, yet human-centric, skills for over 20 years. The new Engineering Psychology major will provide students with a strong understanding of human behaviour and technology. The mutually complementary programs share the idea that technology doesn’t emerge from a vacuum but rather mirrors the minds of its makers and the historical structures in which it was developed.

As the Director of the Information Networks master’s program, Sarvas is a daily witness to the makers of tomorrow’s technology. ‘This new generation has an ingrained awareness of societal issues and a desire to make the world a better place,’ says Sarvas.

‘When I think of societal impact, I think of the hundreds of graduating students that have multidisciplinary computer skills and a desire to do something meaningful with them.’
