Where do you see the development of creative AI models heading in terms of the creative industries?
Once policymakers provide the needed – and demanded – regulation on these matters, the latest generation of creative AI will likely become yet another tool in many professionals' creative work. Especially in applied art, these systems will likely yield considerable cost reductions and an increase in productivity. I expect them to be used beyond the early phases of the creative process, all the way to the final product. I also project that future generations of these systems will require even less involvement from artists, which is presently still very much needed.
However, the jury's still out on whether this development can be considered a benefit for everyone: our latest study found that professionals assumed various kinds of roles for themselves when working with AI, from "art director for the AI" to "slave to the AI." Moreover, while substituting potentially otherwise unavailable skills and workforce, these systems might also increase our dependency on technology and those who provide it – a development that we should be very conscious of.
The take-home message is that the impact of creative AI on professionals is not only positive; the situation is rapidly changing, and the diverse reactions prohibit a one-size-fits-all solution as of now. This puts industry leaders, researchers and policymakers in a tricky position. Also, as teachers at Aalto, we must watch these developments closely to equip our students with skills that will complement their traditional skills in a future-proof way.
How can we make the adoption of generative AI socially and ethically sustainable?
I consider sustainability one of the prime challenges for all of us in balancing the wellbeing of those affected by creative AI with business interests and scientific curiosity. More specifically, at this point we see two pressing questions that put many professionals into inner conflict. First, are artists going to be credited and compensated for the data used in training the models, and how? Second, a major issue for professionals is who owns the copyright to the outputs. I argue that these issues must be resolved first, through quick and transparent legislation, to support the ethical and sustainable use of these systems.
In addition to these questions, we are left with a whole range of issues that are still in flux. For instance, what do professionals find most meaningful about their work, and consequently, which aspects should AI rather not touch? To this end, professional creatives must be involved in the regulation and development of creative AI. Discussions on social media and in the news can be very noisy and too superficial for purposes such as policymaking. Through scientific studies, we can give professionals a clearer voice. Doing this in a longitudinal fashion should allow us to track how uses and perceptions change, and to adapt appropriately. Complementing such user studies, we must also become capable of experimenting with changes to the systems themselves, rather than simply taking what industry has to offer. We are now at a point where these types of models have become flexible enough to be trained and investigated at Aalto, an opportunity which my colleagues and I now actively pursue.
How should we define creativity and how does machine creativity differ from human creativity?
One way to differentiate human creativity from machine creativity is to think of it in terms of motivation. For instance, much of human creativity is driven by intrinsic motivation such as curiosity. Here, we act not for any value outside of the activity itself. This is fundamentally different from most creative AI, which is built to optimise a separate goal, such as producing outputs that people find most appealing by reproducing features of the data the system was trained on. However, I believe that this not only limits an AI's creative potential, but also the extent to which it could really complement and augment, rather than just substitute, human creativity. My research challenges this divide.
I believe that studying the functional and perceived disparities between human and AI creativity is crucial in that it enables us to ask: how should artificial creativity be different from human creativity? And what biases are at work when we interact with creative AI that keep us from using it in a more fulfilling way? We're now at a juncture where, instead of asking "can AI be creative?", we should be asking "what kind of creative AI is best for us?"
You can follow Christian's work on Aalto's webpage, Mastodon and Twitter.