Department of Computer Science
cs.aalto.fi
Computers have brought us further than their early inventors ever dreamed of. In the future, they could answer questions that contemporary generations can’t even fathom asking. But what are the limits of these magical machines? Are there any? How can we use them to compute things at maximum efficiency?
Associate Professor Jukka Suomela and his research group study these questions at Aalto University’s Department of Computer Science. Suomela and his colleagues work on the limitations and efficiency of computation, algorithm synthesis, and distributed and parallel computing. One of their projects aims to automate their own work.
We spoke to Suomela about computers, algorithms, hard problems and his plans for automating science.
The smartphones in our pockets can do things that we were not able to do with supercomputers just a couple of decades ago. This is thanks to advancements on two fronts – hardware and software. We keep getting better computers and better computer programs running on those computers. Hardware is getting faster all the time. Modern computers can already do thousands of billions of calculations every second. Computer scientists also keep discovering clever algorithms that solve all kinds of challenging problems. And thanks to the new algorithms, even our old computers can do more than ever before.
This combination of progress in hardware and software is what has turned DNA sequencing, internet search engines, real-time 3D graphics and speech recognition from science fiction to parts of our everyday lives.
My own research area is algorithms, logic and complexity. It really boils down to understanding what can be computed efficiently. Our societies heavily rely on fast computers and fast computer networks. A solid understanding of what we can, and can’t, do with computers efficiently is the foundation for all of it. There are some fundamental limitations both in hardware and software.
Hardware is limited by the laws of physics. For example, there is no way to exceed the speed of light – this is already very visible in the global internet. Sending a message from Finland to New Zealand and getting an answer back will always take more than one hundred milliseconds. Software also has its own limitations, but those limitations don’t come from physics. They come from the mathematical laws of computation.
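To see where that hundred-millisecond figure comes from, here is a quick back-of-the-envelope check in Python. The roughly 16,000 km great-circle distance between Finland and New Zealand is a rough estimate of ours, not a figure from the interview:

```python
# Back-of-the-envelope check of the speed-of-light bound.
# The ~16,000 km great-circle distance is a rough estimate (assumption).
SPEED_OF_LIGHT_KM_S = 299_792   # speed of light in vacuum
distance_km = 16_000            # roughly Finland -> New Zealand
round_trip_km = 2 * distance_km

vacuum_ms = round_trip_km / SPEED_OF_LIGHT_KM_S * 1000
fiber_ms = vacuum_ms * 1.5      # light in optical fiber travels at about 2/3 c

print(f"absolute physical lower bound: {vacuum_ms:.0f} ms")  # about 107 ms
print(f"optical-fiber lower bound:     {fiber_ms:.0f} ms")   # about 160 ms
```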
These are the laws that my research group and I are studying because we want to understand the possibilities and limitations of all kinds of computers. We mostly focus on the theoretical aspects of computer science as well as distributed and parallel computing.
Parallel computing refers to a situation where we have a computing system with many processors working on a problem. We study questions such as which computational problems can be solved efficiently with these parallel systems. A typical question is: if solving a problem with one processor takes a thousand years, can we solve it in a year with a thousand processors?
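As a minimal sketch of the best case, here is a toy Python example in which the work splits perfectly across workers; the hard research question is that most interesting problems do not decompose this cleanly (all names and parameters here are illustrative):

```python
# Best case: an "embarrassingly parallel" sum that splits evenly across
# processors. Most interesting problems do not decompose this cleanly.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    n, workers = 10_000_000, 4
    chunks = [(i * n // workers, (i + 1) * n // workers) for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    # Same answer as the sequential sum, with ~1/4 of the work per core.
    assert total == sum(range(n))
```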
Distributed computing is basically about a network of computers that want to solve a task together. Something that we have studied a lot is the problem of coloring a network in such a way that all neighboring nodes get different colors. This is analogous to being an air traffic controller and making sure each plane has enough space and time in a given airspace.
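To make the coloring task concrete, here is a minimal sequential Python sketch on a made-up four-node network; the research challenge is doing the same thing when every node can only see its own neighborhood:

```python
# Color a small (made-up) network so that neighboring nodes differ.
# This sequential greedy version is easy; the distributed version, where
# each node only sees its own neighborhood, is the hard research question.
network = {
    "A": ["B", "C"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["C"],
}

colors = {}
for node in network:
    taken = {colors[nb] for nb in network[node] if nb in colors}
    colors[node] = next(c for c in range(len(network)) if c not in taken)

print(colors)  # e.g. {'A': 0, 'B': 1, 'C': 2, 'D': 0}
```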
We know that many tasks are computationally easy: there are efficient algorithms for solving them, which even an old computer can run. For some tasks, the computational complexity is still wide open. We don’t know whether better algorithms exist for them or not – we just haven’t yet found a clever way to solve them.
We also know that many tasks are computationally very hard. We know that we have reached the limits. It is not just that we haven’t yet figured out how to solve them fast. It is a mathematical fact that it's not possible to solve them fast with computers, no matter what kind of programs you use.
When we design faster algorithms for solving problems, we also simultaneously discover more and more about the limitations of computers. There’s always something computers cannot do. There are problems we would like to solve but even the fastest supercomputers won’t be able to help.
Many of them are related to our own field, computing. Designing algorithms, writing computer programs and checking that computer programs are correctly written are all challenging tasks for humans. It turns out that they are also very challenging tasks for computers.
For example, my research group is exploring opportunities for using computers in the study of computer science itself – effectively trying to make ourselves as human researchers redundant. Just verifying that a given algorithm works correctly is a very hard problem – it is a mathematical fact that we can’t write a computer program that always solves this task; at best we can hope to solve it in some cases! And we want to go much further: not just verify existing algorithms but automatically come up with new ones.
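A classic way to see why such a universal checker cannot exist is Turing’s diagonal argument. Here is a hedged Python sketch of it – the halts function is hypothetical, and the whole point of the argument is that it can never actually be written:

```python
# Sketch of Turing's diagonal argument. Assume, for contradiction, that a
# perfect checker halts(program, argument) could be written.
def halts(program, argument):
    """Hypothetical perfect checker -- the argument shows it cannot exist."""
    raise NotImplementedError  # no correct implementation is possible

def paradox(p):
    # Do the opposite of whatever the checker predicts about p run on itself:
    if halts(p, p):
        while True:   # checker says "halts" -> loop forever
            pass
    # checker says "loops forever" -> halt immediately

# Running paradox(paradox) would contradict whatever halts(paradox, paradox)
# answers, so no program can implement halts correctly for all inputs.
```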
Making a computer generate algorithms has been tried for decades. Researchers have succeeded in generating algorithms for simple problems, and for problems for which humans had already developed an efficient algorithm. Only very recently has algorithm synthesis become possible for problems for which no one had been able to come up with an algorithm. We are one of the few research groups who have succeeded in doing this in the field of distributed and parallel computing.
While generative AI tools like ChatGPT can adapt and combine previously known algorithmic ideas that they have seen in their training data, we are doing something fundamentally different: asking computers to discover entirely new kinds of algorithms, without any training material.
For example, we have worked on algorithms that synchronize the clocks of computers so that all computers can agree on which step to take next. This is easy if there are no failures: just let one computer act as the leader, and all other computers can follow it. But what if we want to tolerate failures in our computer network? Simply following one leader can’t work if the leader itself is broken and, in the worst case, gives misleading or contradictory information. Reasoning about all possible failure cases gets very hard (and error-prone!) for human beings, but this kind of logical reasoning is something computers excel at.
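To give a flavor of the kind of idea involved, here is a hedged Python sketch of one classic fault-tolerance trick – not necessarily the group’s own algorithm: instead of trusting a single leader, every machine takes the median of all reported clock values, so one broken machine cannot drag the result arbitrarily far:

```python
# One classic fault-tolerance trick (illustrative, not necessarily the
# algorithm from the interview): take the median of all reported clocks,
# so a minority of broken machines cannot pull the agreed value far away.
from statistics import median

honest_clocks = [100, 101, 99, 100]   # made-up readings from working machines
broken_clock = [10_000]               # a faulty machine reporting nonsense

agreed = median(honest_clocks + broken_clock)
print(agreed)  # 100 -- the outlier is outvoted while honest machines are a majority
```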
Nowadays we are using classical computers to study the foundations of quantum computing. But it would be truly exciting to see the day when we can finally use quantum computers to study the foundations of classical computing!
Go for a relaxing walk around Laajalahti bay, enjoy nature.
Well, to be honest this is something I'll do even if I don't manage to automate all of computer science…
I really like the question – now we are getting to something that is wide open.
Can a computer get frustrated, ever, for any reason? I mean, sure, it isn't that hard nowadays to write a chatbot that pretends to be frustrated or claims that it's frustrated. But can we build something from silicon chips that genuinely can have feelings, like, being frustrated, in the same sense as human beings have feelings? Nobody knows. I really hope I live long enough to see someone in academia shedding at least a little bit of light on these questions!
If you want to delve deeper into theoretical computer science, see Suomela’s open online courses in distributed algorithms and parallel computing. You can also find more detailed information about Suomela’s research group at: https://research.cs.aalto.fi/da/