Dr Tomasz Hollanek (Technology Ethics PhD 2018) balances theory and practice in his approach to researching the ethics of Artificial Intelligence.
Tomasz’s research encompasses two strands. The first, based at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence (CFI), focuses on the ethics of human-AI interaction design in applications such as companion chatbots and ‘deadbots’, which digitally ‘resurrect’ the dead by simulating their behaviours based on the digital footprint they leave behind.
A study he co-authored with a colleague at CFI, published in May 2024 in the journal Philosophy and Technology, explored the potential negative consequences of using AI in the ‘digital afterlife’ industry and recommended ethical ways of developing such systems.
Tomasz says: “When we think about the impact of a particular system, we often focus on the specific user who is interacting with that system. For instance, how does a deadbot system affect the user, and how does it impact their grieving process?
“In our study, we argue that you cannot think about the ethical impact of such systems only through the perspective of that individual user; you have to consider a few very important stakeholder groups. These include: data donors, people whose data can be used to create a deadbot; data recipients, people or institutions who are in possession of that data; and the direct users we refer to as service interactants.
“One of the key things we suggest is that the principle of mutual consent needs to be applied: ensuring you have the consent of the data donor as well as the consent of the service interactant is important, because one of the worst potential consequences of these emerging systems is that people might all of a sudden experience the presence, as postmortem avatars, of someone they would not want to see.
“The impact this piece has had is significant, ranging from extensive media coverage to me and my colleague advising governments, companies, foundations and NGOs on what to do with the data that someone who has died leaves behind, and how to deal with it ethically as more opportunities emerge with the further development of conversational Artificial Intelligence.”
The other half of Tomasz’s work looks at helping AI developers to design technologies in a more ethically responsible way.
He worked alongside Gonville & Caius College Bye-Fellow Dr Eleanor Drage on the High-Risk EU AI Toolkit (HEAT), a comprehensive, open access resource co-developed with professional services company Accenture to help companies comply with the EU AI Act.
“The AI Act puts forward a set of requirements for providers of high-risk AI systems,” says Tomasz. “These are systems deployed in what the EU considers particularly sensitive areas of innovation: education, public services or the administration of justice, for instance.
“While big companies have whole legal teams at their disposal to comply with the Act, we are trying to help smaller actors not only to comply but to go beyond mere compliance. This tool is our way of thinking about how principles of social justice and environmental sustainability can be translated into practice.”
Tomasz also leads a working group focused on developing new consent elicitation mechanisms for conversational AI systems.
“The model of consent implemented in industry right now is not working for people,” he says. “You agree to some terms when you sign up and that’s it. But models change their behaviours, companies introduce updates and users’ behaviours themselves change. There are lots of moments when the user needs to be presented with a little more information about a system to meaningfully consent to interacting with it.”
Tomasz’s working group brings academics together with colleagues from major companies, with the aim of building coalitions with industry to implement these changes wherever possible.
Tomasz adds: “There are some in the industry who might think that our efforts to reform current practices are unlikely to bring about meaningful change, but actually working with colleagues in the industry makes me feel optimistic about what is still possible. The future is not yet fully determined by business objectives alone.”