
The use of AI raises new questions about the notion of ownership in the fields of law, medicine, and art

A workshop on "Understanding Ownership: The Impact of AI in Law, Medicine, and Art" at Goethe University’s House of Finance, organized in cooperation with the Center for Critical Computational Studies, revealed the need to continue debates on ownership, responsibility, and the practical use of AI in various contexts.

11 July 2024

In the field of creations of the mind, or intellectual property, the question of ownership has always been a tricky one. The use of Artificial Intelligence now adds significant new challenges. Who owns AI-generated texts or images: the software developer? The providers of the data used to train the AI? The user who formulated a few prompt lines for a generative AI like ChatGPT, or who used any of the vast variety of emerging AI tools?

Closely related to the concept of ownership is the question of who is responsible for AI-supported or even AI-generated decisions. The supposedly obvious answer, "It’s the human, of course," has become questionable now that AI has become verifiably more reliable than humans, at least in strictly delimited fields such as the detection of illness patterns in medical imaging or the steering of autonomous cars. In these cases, acting responsibly could in fact imply that humans hand the decision-making over to a machine, which in turn would raise severe ethical questions. These questions were discussed by renowned experts during the workshop "Understanding Ownership: The Impact of AI in Law, Medicine, and Art" at Goethe University Frankfurt’s House of Finance on July 3, 2024.

Hosts Katja Langenbucher, law professor at the House of Finance, and Kevin Bauer, Professor of E-Business and E-Government at the University of Mannheim, in cooperation with Goethe University’s Center for Critical Computational Studies, invited keynote speakers Stefan Bechtold, Professor of Intellectual Property and Associate Vice President IP Policy at ETH Zurich; Florian Adolph, a Frankfurt-based artist and painter interested in the conditions of human perception in media contexts; Dr. Sebastian Biedenkopf, lawyer and executive partner of Biedenkopf & Associates; Stefan Feuerriegel, Professor at LMU Munich and Head of the Institute of Artificial Intelligence in Management; Professor Dr. Janne Vehreschild, founding director of Goethe University’s new Institute for Digital Medicine and Clinical Data Sciences; Dr. Ralph Koerfgen, Senior Advisor at the Boston Consulting Group; Dr. Sven Schneider, Partner at Hengeler Mueller; and Cathy Yang, Professor at the business school and grande école HEC Paris.

The workshop revolved around billion-dollar questions, one might say. Generative AI in the healthcare sector alone will reach a global market volume of $22 billion by 2027, as Ralph Koerfgen pointed out, while emerging risks like hallucinations, data biases, and data privacy issues have not yet been fully addressed by current regulatory frameworks. From the clinical perspective, Janne Vehreschild asked "Who owns the decision?" and provided examples of patients’ questions: "How will AI make a decision matching my personal preferences and needs? Who will explain the recommendation to me, who will I discuss it with? What is the role of my doctor when AI defines my treatment plan?"

More practical approaches to these challenges, aimed at promoting the use of AI, were presented by Kevin Bauer and Cathy Yang. Current AI approaches work astonishingly well but are, in most cases, merely based on calculating correlations. Human reasoning, however, relies on causality, on giving reasons for making specific decisions. Bauer therefore pointed to the relatively recent efforts of computer scientists to teach machines an understanding of causal relations: AI-generated recommendations that also reveal causal relations would make it easier for humans to accept and trust them. Cathy Yang, finally, pointed to a dilemma that could hinder the introduction of GPT in companies. According to one of her studies, employees tend to hide the fact that they used AI to do their work, fearing that it might be valued less. Managers, on the other hand, are more inclined to allow GPT adoption when its use is disclosed by their staff and when the added value becomes obvious. As a potential solution to this dilemma, Yang suggested involving more experienced managers in the introduction of GPT, who might recognize the value added by AI without diminishing the contribution of their employees.

After a lively discussion between Sven Schneider, Ralph Koerfgen, and Kevin Bauer, moderated by Professor Dr. Guido Friebel, holder of the Chair for Human Resources at Goethe University, Katja Langenbucher closed the workshop, stating that we stand at the threshold of a transformative era: From revolutionizing legal frameworks to advancing medical diagnostics and challenging traditional notions of creativity in art, AI is reshaping our world. It’s up to us now to ensure that the use of AI benefits society as a whole.

25 April 2023

This website draws on artwork created (?) by AI. This includes the architectural composition, which is not reflective of our campus to come.

Although there is no accounting for taste, we still need to reflect on our artwork. Hence, stay tuned for a workshop, taking place in Winter Term 2023/2024, on how large generative AI models (LGAIMs) impact our understanding of art, creativity, and innovation.

For the architectural composition that you see attached to this post, we used this prompt: "3d Architectual Drawing of: The Center for Critical Computational Studies (C3S) is a newly founded hub for inter- and transdisciplinary research at Goethe University Frankfurt am Main, Germany. To establish and further critical computational studies, it brings together diverse disciplines, ranging from computer, life, and social sciences to health and the humanities."

For the abstract composition that you see attached to this post, we used this prompt: "A Research Center of Goethe University Frankfurt am Main. The Center has three missions: Designing trustworthy, sustainable and justifiable computational methods; understanding critical systems' transformations by computational means; exploring how computational technologies shape social realities. Abstract Digital Art, bright colors including, but not limited to, dark blue, grey, pink, white background."


Christoph Burchard

21 April 2023 | Announcement

Franziska Matthäus (Founding Director for Teaching) & Juliane Engel (Founding Director for Transfer) discuss the topic of Critical Computational Literacy with GU faculty as part of the 'Ideenwerkstatt Digitale Transformation'.

Science and society are undergoing a profound transformation. AI developments are not the only force that will fundamentally change education, research, and teaching. So what does it mean to shape the digital future? Together with the Vice Presidents' Office, we cordially invite all departments of Goethe University to reflect, beyond the usual subject boundaries, on the possibilities for guiding computational progress. Together we will work out what it means to conceptualise critical computational literacy as a future skill.