Critical AI & Crisis Interrogatives (CRAI-CIS) Seminars
About the Seminars
The CRAI-CIS* Seminars engage emerging work across critical AI, Human Computer Interaction (HCI), participatory design, and crisis-related research. The seminars seek to invoke dialogues on how computational, human-centred, and social sciences perspectives can offer new insights and methods for inclusive approaches and critical inquiry with societal impact.
Each event features invited speakers who share distinct perspectives, ongoing research, methods, and challenges for future work in a 30-45 minute talk, followed by Q&A and space for mingling and networking.
In autumn 2021, the seminars will be hosted bi-weekly on Wednesdays from 13:30 to 15:00, October 27th to December 8th, in the Aalto Computer Science building (Konemiehentie 2). Tea/coffee and cookies will be offered.
All talks are hosted as hybrid events (in-person and online) and will be recorded for open access in the future. Please register for the seminars you wish to attend in-person or online. Use this Zoom link to attend any of the seminars remotely.
*CRAI-CIS stands for CRitical AI and Crisis Interrogatives. The CRAI-CIS Research Group is led by Prof. Nitin Sawhney in the Department of Computer Science at Aalto University. The CRAI-CIS seminars this autumn are hosted by Karolina Drobotowicz and Henna Paakki, doctoral researchers at Aalto University.
*Subscription site currently only works on Chrome & Firefox (not Safari)
Why AI and Human Rights?
Doctoral Researcher, Tampere University
Venue: Lecture Room T2 (C105), Aalto Computer Science building, Konemiehentie 2
Talk Abstract: AI has the potential to undermine and violate human rights. Effective governance of the technology will require the operationalization of ethical principles and human rights values and norms, which are indispensable to ensuring Trustworthy AI. Without a continuous exercise of interpretation of these values and significant Human Rights Impact Assessment (HRIA) adapted to concrete use cases across different sectors, high-level ethical guidelines and best practices will become either impractical due to their vagueness and lack of enforcement, or simply “ethics washing.”
To maximize the benefits and minimize the societal risks of AI, while aiming at competitiveness and excellence, human rights must be embedded in the design, development, and deployment of the technology in the real world, and not only in academic papers. Therefore, meaningful collaboration amongst all actors shaping AI governance is urgent (especially between computer scientists and human rights experts), and it will necessarily require participation of rights-holders and other stakeholders.
Speaker Bio: Bruna is a Doctoral Researcher in Law & Technology at Tampere University, Finland. She holds a Bachelor and a Master of Laws and an MA in Human Rights Policy and Practice. Her expertise, interests, and research subjects include AI regulation in the EU, human rights impact assessment (HRIA) of AI applications, and Corporate Social Responsibility (CSR) and emerging technologies.
Participatory AI with Children: In Pursuit of Fair and Inclusive Technology Futures
Postdoctoral Researcher, University of Oulu
Venue: Lecture Room T5 (A133), Aalto Computer Science building, Konemiehentie 2
Register for the 10.11.2021 seminar here
Talk Abstract: Artificial intelligence (AI) is evolving to mimic human-like cognition, emotions, conversations, and decision-making. Yet, how AI affects children is not well studied. Studies on AI and children mainly focus on cultivating, nurturing, and nudging children towards technology use and design. While various global and national policy frameworks on children and AI are being developed, the approaches are child-centred but not child-led, restricting children from shaping their own digital futures. Further still, there is little discussion with children on the limitations, inherent biases, and lack of diversity in the current design and development of AI, or critical examination of the ethical aspects of technology use and design, its inherent limitations, and the consequences of these for children and society at large. In this talk, I will discuss two case studies exploring ethical AI with children, with a focus on algorithmic fairness, human agency, and oversight, and present my work on developing stakeholder-inclusive models for critically examining the design of ethical AI. Building a case for employing future-oriented methods when working with children on the topics of ethical AI, I will also present my research on inclusive design futuring approaches.
Speaker Bio: Sumita Sharma is a postdoctoral researcher in Human Computer Interaction (HCI) with a focus on empowerment, inclusion, and accessibility using interactive and novel technologies for and with children and underserved user groups. She has worked on designing and evaluating technology for underserved communities, including children living in the remote regions of Northern Siberia, children with autism and other developmental disabilities, children living in urban slums in India, and low-literate women in Guwahati, India. Since 2020, she has been working on short-term and long-term participatory studies on critical design and making with children in schools in Oulu, Finland. She employs design fiction with a diverse set of participants and in diverse contexts, through projects with schools and workshops with adults, creating diverse approaches for designing for the future. Since September 2021, she has been working on her postdoctoral project on Participatory AI with Schoolchildren, funded by the Academy of Finland.
Project website: https://interact.oulu.fi/paiz
AI, Human Cognition, and Ethics
Fulbright-Nokia Distinguished Chair in Information & Communications Technologies 2021-2022, Department of Computer Science, Aalto University
Professor, Department of Information Sciences & Technology, George Mason University
Venue: Lecture Room T4 (A140), Aalto Computer Science building, Konemiehentie 2
Register for the 24.11.2021 seminar here
Talk Abstract: The ethical impact of Artificial Intelligence (AI) on society has become a critical concern for developers as well as users of AI-based systems. The impact of AI-based decision-making is being felt across domains including policing, financing, health, and education. What makes AI-based systems so contested, and how can we better understand their impact on society? What, if anything, can we do to make AI-driven decision-making more ethical? In this talk I draw on research in the cognitive sciences, learning sciences, and technology ethics literature to argue that the impact of AI on society is best understood as a continuing extension of the symbolic systems capabilities of humans. As humans have developed systems of language, writing, and now computation, their cognitive abilities have extended with these developments, but so has the complexity of how systems, including society, work. To create a more ethical and just AI-driven world, we need a comprehensive platform that engages not only designers and developers but also the users - the public - in deliberation, and this is our greatest barrier in doing good with AI.
Speaker bio: Aditya Johri is Professor of Information Sciences & Technology and Director of Engineering Education and Cyberlearning Lab (EECL) at George Mason University, USA. He is currently serving as a Fulbright-Nokia Distinguished Chair at Aalto University, Finland. He co-edited the Cambridge Handbook of Engineering Education Research (CHEER) which received the 2015 Best Book Publication Award from Division I of AERA. His research is supported primarily by the U.S. National Science Foundation (NSF) and he received an NSF Early CAREER Award in 2009.
More information at: http://mason.gmu.edu/~johri
Data magic: Performativity and Organizing Power of Data in Social Media Analytics
Docent, PhD, University Researcher, University of Helsinki
Venue: Lecture Room T5 (A133), Aalto Computer Science building, Konemiehentie 2
Register for the 8.12.2021 seminar here
Talk Abstract: The availability of massive datasets and the need to develop methods to analyze them is a prominent technological change affecting organizations and society, and a development that lives parallel to the awe, magic, and charisma related to technology. This talk conceptualizes data as part of the technological unconscious and argues that data do not only represent and abstract social action, but also play a performative role in the organizational field of data analytics. By bridging critical data studies and the sociomateriality literature in organization studies, this talk explores the ways in which material/technological factors interact with human/social factors in the context of social media data analytics. A research setting based on ethnographic observation and thematic interviews collected in four social media analytics companies allowed for observing and tracing social media data assemblages as the data travel through the organization. In this process, the data transform from a matter of concern to a matter of authority; digital environments generate expectations for preciseness, traceability, and never-ending knowledge; and the data themselves, metrics, and visualizations become means to achieve exactness and credibility. In addition, data work to institutionalize data analytics practices across organizations and reproduce data assemblage formations that support the infrastructural power of platform companies.
Speaker Bio: Salla-Maaria Laaksonen (D.Soc.Sc.) is a senior researcher in the Centre for Consumer Society Research, University of Helsinki. Her research areas are technology, organizations, and new media, including organizational reputation in the hybrid media system, the organization of online social movements, and the use of data and algorithms in organizations. She is also an expert in digital and computational research methods. Her work has been published in top-tier journals such as New Media & Society, Journal of Communication, and Information, Communication & Society.