Transparency for AI content
Albert Einstein, Alice Salomon and Robert Koch: Back in Berlin with AI
The scene looks realistic: the picture shows a street view of Berlin, with the television tower and high-rises in the background and cars and a streetcar on the street. Large graffiti is emblazoned on the wall of a house: Albert Einstein poses there in his familiar pose, tongue out. This Berlin view on Prenzlauer Allee actually exists – except for one detail: the original wall does not show Albert Einstein but fragments of a street-art work.
The work featuring the famous physicist is the result of combining a real photograph with an AI-generated image. Photographer Michael Zalewski created the Einstein motif with the AI program “Supermachine”; it is the first in a series of images presenting portraits of renowned scientists such as Rudolf Virchow, Max Planck, Lise Meitner, Alice Salomon and Robert Koch in Berlin’s urban space. All of them worked in Berlin around the turn of the century; some spent the most important years of their careers here and shaped Berlin as a center of research with their outstanding work. The series of images commemorates these important personalities and places them where they once worked: in the heart of Berlin.
Prompts create the perfect picture
“It all starts with the idea,” states Michael Zalewski, explaining how such images are created. The idea behind the image is entered into the image-generation AI as a “prompt” – a text description that is as precise as possible. What motifs should the picture show? Background, time of day, even exposure time, camera model or photography style can be specified in the prompt. From this information, the AI calculates an image that usually does not match expectations right away. With further prompts, ever more detailed information flows into the AI, which optimizes and adapts the image accordingly. It can take several hours of work to get everything right. “Getting to grips with the programs and prompting is very time-consuming, especially at the beginning,” explains Michael Zalewski, describing his experiences. “If you work with it frequently, the whole process goes faster.” Some image ideas can be implemented more easily, quickly and cost-effectively with AI.
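The workflow described above – assembling motif, background, time of day, camera and style details into one precise text prompt – can be sketched in a few lines of Python. This is an illustrative sketch only: “Supermachine”’s actual interface is not described in the article, so the `build_prompt` helper and its parameters are hypothetical, showing merely how such details might be combined into a single prompt string.

```python
def build_prompt(motif, background=None, time_of_day=None,
                 camera=None, style=None):
    """Join the given image details into one comma-separated prompt string.

    Only `motif` is required; the other details mirror the kinds of
    information mentioned in the article (background, time of day,
    camera, photography style) and are skipped when not provided.
    """
    parts = [motif]
    for detail in (background, time_of_day, camera, style):
        if detail:
            parts.append(detail)
    return ", ".join(parts)

# A prompt roughly in the spirit of the Einstein graffiti described above:
prompt = build_prompt(
    motif="large graffiti of Albert Einstein sticking out his tongue",
    background="brick wall of a Berlin apartment building",
    time_of_day="late afternoon light",
    camera="shot on a 35 mm camera",
    style="photorealistic street photography",
)
print(prompt)
```

In practice, as the article notes, the first result rarely matches expectations, and the prompt is refined over many iterations – adding, removing or rewording details – until the image is right.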
Michael Zalewski required several attempts and prompts to create the perfect graffiti of Albert Einstein. “Sometimes the nose was crooked, the facial expression funny, or I didn’t like the wall of the house.” In the end, the photographer was satisfied and used an image-editing program to insert the resulting graffiti into an original photograph. You can’t tell from the result that part of the image was created with the help of AI and added afterwards.
The two sides of the coin
AI tools and the content they generate are also the focus of researcher Dr. Dafna Burema. The sociologist is a postdoctoral researcher at the Cluster of Excellence Science of Intelligence (SCIoI) at the Technical University of Berlin. Her research questions concern human behavior and society in the context of technology: “I look at how (and why) people create artificial intelligence and, above all, keep an eye on the ethical issues involved. Are ethical considerations taken into account and correctly applied when creating AI tools? Are the new technologies helpful or not?”
AI has already made its presence felt in everyday life
Almost everyone already uses AI in everyday life without realizing it – in streaming services, when shopping online, via spam filters or when navigating. It is now almost impossible not to use AI, says Dafna Burema. Generative AI, however, takes us one step further: it creates entirely new content – texts, images, music or videos – based on training data.
Such AI tools can be very helpful for science communication: complex and extensive scientific publications can be quickly translated, simplified and shortened using text programs, for example, so that they can also be understood by non-experts. With the right prompts, image programs can create pictures of phenomena that are difficult to observe or of abstract concepts. They can illustrate data and patterns and appeal to a wide variety of target groups. AI can therefore help ensure that research (and its results) is communicated in a more comprehensible and accessible way.
But there are also risks, such as the manipulation of content or the fabrication of false facts. Data protection and intellectual property rights are further points of criticism that are often raised when it comes to the use of AI. The models are usually trained on texts, images, artworks and data without permission from the authors and owners. The issue of transparency is also highly relevant: does AI content have to be labeled as such, or can users be left in the dark about the fact that what they are reading or viewing was not created by a human? And finally, AI content is not infallible. The term “AI bias” describes how human prejudices influence AI algorithms and lead to distorted results, which can then disadvantage certain social groups, for example.
Transparency is important
“The research field of AI and ethics is about taking a closer look at all these uncertainties and gray areas, and developing guidelines and rules for the use of AI,” says Dafna Burema, describing one of her research tasks, which sits at the interface between sociology and technology. “And also the question of whether and why we want and need AI at all.”
Dafna Burema considers transparency to be one of the most important criteria when dealing with AI. “Many issues and risks result from people not knowing that they are interacting with artificial intelligence or consuming content that has been generated using AI.” Bots – i.e. automated software programs – as well as deepfakes and fake news are negative examples: with the help of AI they can be generated in enormous quantities at the touch of a button and spread rapidly, especially on social media. Conversely, Dafna Burema is convinced that AI can make content accessible to many people for the first time and support scientists in communicating their results and messages to a broad population. “AI can also have a democratizing effect,” emphasizes the researcher. “For example, when language barriers are overcome with the help of chat or translation programs.” Ultimately, AI – like so many technological achievements – is a tool that harbors risks but at the same time can do a lot of good. “There are two sides to the coin, and we have to handle it carefully and critically,” summarizes the researcher. “We shouldn’t be too pessimistic, because the benefits of AI are enormous in numerous areas.”
Michael Zalewski
Michael Zalewski studied photography at the Academy of Visual Arts (Hochschule für Grafik und Buchkunst) in Leipzig and has been working as a photographer and artist in Berlin since 1999. His focus is on architectural and object photography.
Dr. Dafna Burema
Dr. Dafna Burema is a sociologist and researcher in the Cluster of Excellence Science of Intelligence (SCIoI) at the Technical University of Berlin. She studies the social impact of AI and other new technologies.