Milos Mladenovic works in the Spatial Planning and Transportation Engineering research group in the Department of Built Environment. His research focuses on the development of technology and planning practices for sustainable urban transportation systems. Among other transportation-related topics, his key research areas include the ethics of emerging mobility technologies, such as self-driving vehicles.
What is the purpose of the Horizon 2020 Commission Expert Group and how would you describe your role in it?
The group aims to produce an EU-level document, which the Commission then takes further to develop more research, conduct public consultation, or open new political discussions about driverless mobility.
The multi-disciplinary group includes 14 members from different EU member states, working in different fields. My role is to represent the field of transportation engineering. However, given my multi-disciplinary interests, part of my role is to “translate” concepts within the group, as researchers from different scientific backgrounds tend to speak different “languages” with each other. While I am helping to increase understanding within the group, working in such a multi-disciplinary group is a huge learning experience in itself.
What are the biggest challenges in this work?
Although automated vehicles pose immediate ethical challenges, such as road safety and risk, data and algorithms are the major long-term challenges that are not easily addressed. These challenges go beyond questions of privacy, nowadays frequently associated with GDPR, and extend to aspects such as how fair or discriminatory the algorithms we develop are.
How would you compare the development of driverless mobility in the US to the EU?
The US has a less regulated approach towards automated vehicles, because rules and regulations are set at many different levels, such as states and cities, often leading to competition. The most infamous example is Tesla, which deployed its technology without first guaranteeing full safety, and the use of its Autopilot mode ended up causing deadly crashes. Tesla tried to place responsibility on the customers for using the Autopilot mode.
What was not taken into account in the first place is that if something works in 99.9% of cases, it is still not safe when it comes to traffic. This leads to an even bigger question, which is the question of a responsible innovation process. During the innovation process, problems should be foreseen rather than fixed after they have happened, while also relying on collective deliberation with the wider public and diverse stakeholders.
As the question of innovation is a universal problem, is there any country that has moved forward to tackle this?
The Netherlands has some interesting developments, as many people there work on the ethics of technology. The Dutch have created various methods to tackle uncertainty and co-develop technological alternatives.
In Finland, I regularly attend AI gatherings, and in one of those, which was about AI text translation, I asked about the undesired anticipated consequences of this technology. The answer I got was about the immediate risk of grammatical mistakes. However, as in the state-of-the-art Dutch concepts, anticipation centers on the question of meaning. An appropriate anticipated consequence would thus revolve around questions such as: how would the meaning of language change, or how much would people trust an institution knowing that their text is translated by an AI?
How does your research at Aalto connect to this topic?
When I started in this area, I was more interested in how to engineer ethical principles into algorithms. Now I am interested in the governance of smart mobility, which includes many emerging mobility technologies besides self-driving vehicles. In this domain, a central principle is value-sensitive design.
An example of a societal value is children’s independent mobility in Finland, which is important for their physical and mental development. How can we design this value into a technological system? This is why we need to train the next generation of engineers, capable of understanding and designing values into technologies.
So, values differ from one country to another, which creates a challenge for design?
Yes, and that is exactly something we are trying to highlight in the Commission group. Often technology giants want to minimize customization and quickly take over the market with their version of a product. However, something designed for circumstances in Silicon Valley does not work as well in Finland.
That is why we need different ways of innovating, which start by asking what kind of values we want to protect in the future. Then we can ask the following question: do we need self-driving vehicles to ensure future values related to our health, environment, or economic relations? In the case of Finland, how can we have even better opportunities for walking and cycling while having automated vehicles? Or should this innovation be on the list of our priorities at all, if there are other, lower-cost and lower-risk solutions for addressing pressing sustainability challenges?
Will transportation ever be automated, then?
Actually, I have been asked this question many times, and many people have tried to answer it. However, my attitude is that I should not be giving these answers. The problem is that we are relying on limited expert knowledge, and in the end, it is a question of democracy. Ultimately, a rather straightforward question arises: if we accept that city planning needs public involvement, why can we not imagine the same for technological innovation?
One of the things I want to underline is the approach often seen in practice now: let’s first make this device work and then think about the ethics later, as if this were a cherry on top of the cake. In my opinion, quite the contrary: ethics is the ground on which we stand. If you have not explicitly defined your ethical challenges and your responses to them in open innovation processes, you are most likely going to transfer your biases and limited set of values into the system design.
Assistant Professor Milos Mladenovic