Universität Zürich

IKMZ - Department of Communication and Media Research

Media Change & Innovation Division

Andreasstrasse 15
CH-8050 Zurich
Phone +41 (0)44 635 20 92
Fax +41 (0)44 634 49 34

Deepfakes and manipulated realities: assessment and policy recommendations for Switzerland

Thanks to editing software and artificial intelligence, sound, images, and video can now be manipulated almost invisibly. It has become nearly impossible to tell whether what we see and hear in digital media is a recording of real events or a construction. This interdisciplinary research project assesses the state and effects of digitally manipulated audiovisual content in general and deepfakes in particular.

About the Project (2022–2023)

The project is funded by the Foundation for Technology Assessment (TA-SWISS) and is led by the Fraunhofer Institute for Systems and Innovation Research ISI (Murat Karaboga, Michael Friedewald, Frank Ebbers) in collaboration with the University of Fribourg (Benedikt Pirker, Astrid Epiney, Manuel Puppis, Gwendolin Gurr) and the University of Zurich (Moritz Büchi).

The project is organized around the lifecycle of deepfakes and can be applied to both existing and potential new deepfake cases. We argue that a meaningful discussion of the threats posed by deepfakes must consider five phases of deepfake deployment: (1) preparation and production; (2) dissemination; (3) perception; (4) victims' rights; (5) prosecution and holding perpetrators accountable. For each phase, we investigate which forms of governance already exist and what protection they offer against the corresponding uses of deepfakes. This analysis reveals where action is needed, so that in a next step proposals for action can be formulated and directed at the relevant actors for each phase (including technology providers, users, regulators, and law enforcement agencies).

The proposed study proceeds in three steps. In the first step, deepfake technologies, the opportunities and risks of deepfakes, and the legal framework are examined. The second step entails detailed analyses of five areas of application: deepfakes among young people, deepfakes in journalism, deepfakes among companies, deepfakes in court, and deepfakes in politics. Finally, in the third step, the overarching research questions are analyzed, focusing on the regulation of the technical dimension of deepfakes, training in dealing with deepfakes, and the need for regulation to protect those affected by deepfakes.