Try, fail, try again, fail better
Buddhists see ‘failing better’ as an acknowledgement of imperfection, accepting that failure is part of the learning process. Failing better means trying, and trying again; reflecting on each attempt is what makes the difference.
Four experts told us what failure has taught them.
Chance as a useful ally
Sanna Kannisto says that working with live animals can be unpredictable, but she has learned to trust her instincts and let chance intervene. She has realised there’s no point in worrying too much in challenging situations; it’s better simply to enjoy the experience.
‘Photography, for me, is a tool that can bring my thinking into visual form, so it’s about seeing, perceiving and finding a way to craft an image according to an idea. I derive my passion and inspiration from various things that I try to combine with photography, such as field science, biology or natural history.
I collaborate with ornithologists and volunteer bird ringers. When planning to photograph birds, I prepare my equipment, the location, and the studio setting as much as I can before the shoot. I have a collection of branches for birds to perch on, for example. I usually wake up one hour before sunrise, so I am ready to shoot when the bird researchers spread their mist nets to make a gentle catch.
When a bird is in the studio, I am patient and make a point of just observing at first. I do, however, prefer being in nature, and the moment when I come into contact with a bird is extraordinary. This is the best part of my work. A typical shoot can last about eight hours, during which I’ll make up to 200–250 frames of six to seven birds. If I make one image that I am satisfied with that day, it’s a good result.
In 2018, I made an expedition to a Costa Rican rainforest. The working conditions were very hard, and the location had to be changed just about every other day. My assistant and I had to take down my studio, carry the equipment to the next spot and then rebuild, all in very challenging conditions. Added to this, the researchers weren’t catching that many birds. Although the trip was a lot of fun and a wild adventure, I considered it a failure from a professional point of view. Upon my return home, I thought I’d made only three good photos during the three weeks that I was there, but I have since selected nine pictures from that journey for my upcoming book. I also realised that I needed to forgive myself for having set unrealistic expectations of success.
It’s essential to experiment and to try different, new things. Making any art is a continuing process; even a minor incident or idea can eventually become meaningful. I may not find a way to use some photos, but those moments are still precious.’
Photographic artist and Aalto University alumna Sanna Kannisto’s works examine the interfaces of art and science. A cross-section of her 20-year career is on display at the Finnish Museum of Photography, Helsinki, from 10 June to 30 August 2020. Her latest book, Observing Eye, will be released in conjunction with this exhibition. It is published by Hatje Cantz of Germany.
Fail faster, succeed sooner
Postdoctoral researcher Satu Rekonen highlights the importance of a supportive atmosphere when creating something novel. ‘It is easier to step outside one’s comfort zone when you do not have the pressure to succeed at once.’
‘A fail faster, succeed sooner philosophy is pretty central in my research, which explores how diverse teams approach ill-defined problems to create innovative and unique solutions. To find these solutions, a team needs to take action despite the discomfort of uncertainty and the high risk of failure. Dealing with and handling failure is one thing for individuals, but a different matter altogether in a group setting. Fear of failure or appearing incompetent to others may impede team member participation. This necessitates the creation of a team culture that is safe for trying things out and asking even stupid questions.
My research indicates that practitioners with little or no experience in creative problem-solving have a tendency to rush to conclusions; locking in a direction of pursuit seems to provide high levels of relief and satisfaction, which leaves little, if any, room for innovation.
My colleagues and I observed four teams of Finnish finance professionals engaging in experimentation for the first time. After two half-day workshops teaching ideation, experimentation and a human-centred approach, these teams participated in several coaching sessions, which allowed them to observe some of the potential pitfalls firsthand. We found that, in their eyes, experimentation equalled the quick implementation of ideas; as if the measure of success was how quickly an idea was implemented, not how much the team learned and improved their initial idea. A few participants wanted to stop because they couldn’t let go of the initial idea – some even felt that they’d failed if they couldn’t remove all the uncertainties related to their idea in a single experiment.
When I work with groups, I really drum into them that the first experiment is a crucial stepping-stone. We can assume that the first idea isn’t going to be the one to be implemented. Success comes from how the team is able to learn from experimenting with the idea, and failing early can be good because it uses less time and resources, leaving more room for manoeuvre when moving on.’
Satu Rekonen is a postdoctoral researcher at the Department of Industrial Engineering and Management.
Gaming became failing
‘All I can say is that there was a lot of wild experimentation going on!’ Salu Ylirisku threw caution to the wind when he first created the Networked Partnering and Product Innovation (NEPPI) course and decided to play around with its structure.
‘The purpose of the NEPPI course is for students to plan and implement projects where they can discover and articulate novel product opportunities in the context of the IoT, Internet of Things. They work in multidisciplinary teams and develop their product by, e.g., discussing the feasibility and viability of their idea and then getting feedback from fellow students.
I wanted to add something extra to the standard course implementation by studying how a gamified approach could foster the students’ innovation skills. The real-time planning and content-altering aspects were entirely intentional – NEPPI was built around the idea that we would adjust course as we went.
Before my Aalto University posting, I worked in Denmark and managed an online tank game clan in my free time. I decided to establish a motivating feedback system, so I implemented a regularly updated scoreboard that let clan members gauge how well they were faring. I was happy to see that clan members changed and improved their playing style because of it. This experience made me confident that a similar motivation system could work for the first NEPPI course.
I created an automated tracking program so that the students could submit things like work memos and findings for assessment by other team members. Users would also log their hours and get approval from fellow students.
I deliberately programmed the game logic while the course was underway so I could see how the students responded and make necessary changes.
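In essence, such a system tallies points from submissions and peer-approved hours and presents the members in ranked order. The sketch below is a hypothetical simplification for illustration only – the point weights, field names and data are invented, not the actual NEPPI code:

```python
# Hypothetical, simplified sketch of a course scoreboard (illustrative
# only; not the actual NEPPI implementation). Students earn points for
# submitted items and for peer-approved hours, then appear ranked.

def scoreboard(students):
    """Rank students by points: 2 per submission, 1 per approved hour."""
    scores = {
        name: 2 * len(record["submissions"]) + record["approved_hours"]
        for name, record in students.items()
    }
    # Highest score first -- this open ranking is what the course used
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

students = {
    "Alice": {"submissions": ["memo1", "findings1"], "approved_hours": 6},
    "Bob":   {"submissions": ["memo1"], "approved_hours": 9},
}

print(scoreboard(students))  # prints [('Bob', 11), ('Alice', 10)]
```

Because the hours feed directly into a publicly ranked score, the design also shows where the incentive to pad or subtract hours crept in.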
But the participants hated being openly listed in ranked order, which led to students mocking the approval system and cheating by adding and subtracting their hours. All in all, it was a miserable experience for them, and the course received that year’s lowest reviews at the School of Electrical Engineering.
The course ran to completion and its design concepts were finished, but a lot of ambiguity remained even in its final week.
The idea was and remains good, but the implementation was admittedly terrible in the first round and, upon reflection, I admit that my efforts on the gamifying aspect were harming the actual lectures. Nevertheless, I feel that university is a place for learning and research, and the best teaching experiments should combine both aspects.’
Salu Ylirisku is a senior lecturer and teaches design at the Department of Electronics and Nanoengineering.
Error enables learning
Like humans, machines learn from the errors they make, says Assistant Professor Alex Jung.
‘Machines don’t have to endure what humans experience as fear of failure, a potentially crippling hurdle to overcome. Emotion can lead down unexpected paths, however, and the humble human still forms the first part of the equation: people must feed the system what are referred to as labelled examples, against which predictions are compared, so that trial and error can drive learning.
Like humans, machines learn from the errors they make, so machine learning (ML) methods are computationally efficient implementations of the trial and error paradigm. The discovery of new physical laws is driven by deviations between the predictions of an old theory and observations in nature. Similarly, ML methods refine their predictions based on error feedback. The more error feedback, the better and faster methods can evolve.
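The error-feedback loop can be made concrete with a classic toy example – a perceptron (invented data, not Jung’s actual models) that nudges its weights only when its prediction disagrees with the label, so every error becomes feedback:

```python
# A minimal sketch of error-driven learning: a perceptron adjusts its
# weights whenever its prediction disagrees with the label. Toy data,
# purely for illustration.

# Labelled examples: (feature1, feature2) -> class 0 or 1
examples = [((0.0, 0.0), 0), ((0.0, 1.0), 0), ((1.0, 0.0), 1), ((1.0, 1.0), 1)]

w = [0.0, 0.0]   # weights, start with no knowledge
b = 0.0          # bias
lr = 0.1         # learning rate: how big each correction is

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                  # repeated passes over the data
    for x, label in examples:
        error = label - predict(x)   # error feedback: -1, 0 or +1
        w[0] += lr * error * x[0]    # weights move only when wrong
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in examples])  # prints [0, 1] labels back correctly
```

Each pass through the data is one round of trial and error; once the errors stop, the weights stop moving.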
At this stage, machine learning systems still need to be fed information, after which they can then proceed to performing their task. The systems are shown labelled examples, such as pictures of dogs, trees or cats.
So, training an ML system to detect pictures of certain things, such as cats, involves feeding it enough examples to enable it to generalise on its own, and from there to correctly classify new inputs not included in its original training set.
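As a toy illustration of classifying a new input from labelled examples – here a nearest-neighbour rule on made-up feature values, not any real image classifier:

```python
# Illustration only (invented features): classify a new, unseen input
# by comparing it against labelled training examples.

def nearest_neighbour(training, query):
    """Return the label of the training example closest to `query`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda ex: dist(ex[0], query))[1]

# Labelled examples: (ear_pointiness, body_size) -> species
examples = [
    ((0.9, 0.2), "cat"),
    ((0.8, 0.3), "cat"),
    ((0.3, 0.8), "dog"),
    ((0.2, 0.9), "dog"),
]

# A new input not in the training set is classified by generalisation
print(nearest_neighbour(examples, (0.85, 0.25)))  # prints cat
```

The same mechanism also shows how misclassification happens: an unusual cat whose features land closer to the dog examples gets the wrong label, which is why engineers keep tweaking models.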
The ML process still involves error-based learning, but this happens with unfathomable speed. Inaccurate end results remain a distinct possibility, requiring the engineer or scientist to further tweak the system.
When building an ML model, it’s essential to know that even real-world data is imperfect; different types of data require different approaches and tools, and there will always be trade-offs when determining the right model. Just because a system is fed a picture of a cat doesn’t necessarily mean that the end result will be a cat. It might determine that the image is of a dog, for example, simply because the ears don’t fit within the standard model. This requires constant tweaking and altering of the algorithms and models.’
Alex Jung is an assistant professor, machine learning and data analysis, at the Department of Computer Science.