Communicating with AI

Robert Koch, founder of modern bacteriology and clinical infectiology, researched and taught in Berlin for many years. In 1905, he was awarded the Nobel Prize for Medicine.
Image Credit: Michael Zalewski, created with Supermachine (AI)

Artificial intelligence (AI) creates images that are more than just realistic. This opens up numerous new opportunities and advantages for effective scientific communication. Important historical figures can be transported into the present, microscopically small motifs can be depicted larger than life, and phenomena that are difficult to observe can be captured in realistic detail. What will be possible with the new AI tools? What are the opportunities, limits and risks involved? And how are these images created in the first place?

Transparency for AI content

To date, there is no legal requirement to label content created with the help of AI as such. Furthermore, AI-generated texts and images are not protected by copyright and can be copied and republished without restriction. The European Artificial Intelligence Act (AI Act), which was adopted in March 2024 as the world’s first comprehensive regulation pertaining to AI, does, however, make exceptions: Certain content – so-called deep fakes – must be labeled as such. The term “deep fake” refers to AI-generated or manipulated image, audio or video content that looks deceptively real. This becomes a problem if such content is used to spread false information about real people. If AI is used to inform the public on matters of public interest, this must also be disclosed in order to prevent disinformation – unless the content is editorially monitored and reviewed and a person takes editorial responsibility for the publication.

Albert Einstein, Alice Salomon and Robert Koch: Back in Berlin with AI

The women’s rights activist, Alice Salomon, was born in Berlin in 1872 and is regarded as a pioneer of social work as a branch of science. In 1937, she left Germany under pressure from the Gestapo.
Image Credit: Michael Zalewski, created with Supermachine (AI)

The scene looks realistic: the picture shows a street view of Berlin, with the television tower and skyscrapers in the background and cars and a streetcar on the street. A large piece of graffiti is emblazoned on the wall of a house: Albert Einstein appears there in his familiar pose with his tongue out. This Berlin view on Prenzlauer Allee actually exists – except for one detail: the original wall does not show Albert Einstein, but fragments of a street art work.

The work featuring the famous physicist combines a real photograph with an AI-generated image. Einstein was the first figure that photographer Michael Zalewski placed into a picture using the AI program “Supermachine”. It forms part of a series of images presenting portraits of renowned scientists such as Rudolf Virchow, Max Planck, Lise Meitner, Alice Salomon and Robert Koch in Berlin’s urban space. All of them worked in Berlin around the turn of the 20th century; some spent the most important years of their careers here and shaped the Berlin research landscape with their outstanding work. The series of images commemorates these important personalities and places them where they once worked: in the heart of Berlin.

Prompts create the perfect picture 

“It all starts with the idea,” states Michael Zalewski, explaining how such images are created. The idea behind the image is entered into the image AI as a “prompt”, i.e. a description in text form that is as precise as possible. What motifs should the picture show? Background, time of day, even exposure time, camera model or photographic style can be specified in the prompt. From this information, the AI calculates an image that usually does not immediately match expectations. With further prompts, ever more detailed information flows into the AI, which optimizes and adapts the image accordingly. It can take several hours of work to get everything right. “Getting to grips with the programs and prompting is very time-consuming, especially at the beginning,” explains Michael Zalewski, describing his experiences. “If you work with it frequently, the whole process is faster.” Some image ideas can be implemented more easily, quickly and cost-effectively with AI.
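
How such a prompt reaches an image model can be sketched in code. Supermachine’s internals are not described here, so the following minimal Python example stands in with the open-source diffusers library and a Stable Diffusion checkpoint; the model name, prompt wording and generation parameters are illustrative assumptions rather than the photographer’s actual settings.

```python
# Minimal sketch of an iterative text-to-image workflow, assuming the
# open-source "diffusers" library and a publicly available Stable Diffusion
# checkpoint. Prompt wording and parameters are illustrative only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint for illustration
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # requires a CUDA-capable GPU

# The prompt bundles motif, background, time of day and photographic style,
# as described in the text.
prompt = (
    "large graffiti of Albert Einstein sticking out his tongue on a Berlin "
    "house wall, street scene with the TV tower in the background, daylight, "
    "35mm photograph, realistic"
)
negative_prompt = "distorted face, crooked nose, blurry"  # steer away from flaws

# Each run is one iteration of the trial-and-error loop: generate, inspect,
# refine the prompt, and generate again until the result matches the idea.
image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("einstein_graffiti.png")
```

In practice, the prompt and parameters are refined over several such runs – the same trial-and-error loop Zalewski describes.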

Michael Zalewski needed several attempts and prompts to create the perfect graffiti of Albert Einstein. “Sometimes the nose was crooked, the facial expression was odd, or I didn’t like the wall of the house.” In the end, the photographer was satisfied and used an image editing program to insert the resulting graffiti into an original photograph. You can’t tell from the result that part of the image was created with the help of AI and inserted afterwards.

The two sides of the coin

Albert Einstein
Image Credit: Michael Zalewski, created with Supermachine (AI)

AI tools and the content they generate are also the focus of researcher Dr. Dafna Burema. The sociologist is a postdoctoral researcher at the Cluster of Excellence Science of Intelligence (SCIoI) at Technische Universität Berlin. Her research questions pertain to human behavior and society in the context of technology: “I look at how (and why) people create artificial intelligence and, above all, keep an eye on the ethical issues involved. Are ethical considerations taken into account and correctly applied when creating AI tools? Are the new technologies helpful or not?”

AI has already made its presence felt in everyday life 

Almost everyone already uses AI in their everyday lives without being aware of it – in streaming services, when shopping online, via spam filters or when navigating. It is now almost impossible not to use AI, says Dafna Burema. Generative AI, however, goes one step further: it creates entirely new content based on training data – texts, images, music or videos.

Such AI tools can be very helpful for scientific communication: complex and extensive scientific publications can be quickly translated, simplified and shortened using text programs, for example, so that they can also be understood by non-experts. With the right prompts, image programs can create pictures of phenomena that are difficult to observe or of abstract concepts. They can illustrate data and patterns and appeal to a wide variety of target groups. AI can therefore help to ensure that research (and its results) are communicated in a more comprehensible and accessible way.

But there are also risks involved, such as the manipulation of content or the fabrication of facts. Data protection and intellectual property rights are further points of criticism that are often raised when it comes to the use of AI. The models are usually trained on texts, images, artworks and data without obtaining permission from the authors and owners. The issue of transparency is also highly relevant: does AI content have to be labeled as such, or can users be left in the dark about the fact that what they are reading or viewing was not created by a human? And finally, AI content is not infallible. The term “AI bias” describes how human prejudices influence AI algorithms and lead to distorted results, which can then disadvantage certain social groups, for example.

Transparency is important 

“The research field of AI and ethics is about taking a closer look at all these uncertainties and gray areas, and developing guidelines and rules for the use of AI,” as Dafna Burema describes one of her research tasks, which is rooted in the interface between sociology and technology. “And also the question of whether and why we want and need AI at all.” 

Dafna Burema considers transparency to be one of the most important criteria when dealing with AI. “Many issues and risks result from people not knowing that they are interacting with artificial intelligence or consuming content that has been generated using AI.” Bots – i.e. automated software programs – as well as deep fakes and fake news are negative examples: with the help of AI, they can be generated in enormous quantities at the touch of a button and spread rapidly, especially on social media. Conversely, Dafna Burema is convinced that AI can make content accessible to many people for the first time and support scientists in communicating their results and messages to a broad audience. “AI can also have a democratizing effect,” emphasizes the researcher. “For example, when language barriers are overcome with the help of chat or translation programs.” Ultimately, AI – like so many technological achievements – is a tool that harbors risks but can also do a lot of good. “There are two sides to the coin, and we have to handle it carefully and critically,” summarizes the researcher. “We shouldn’t be too pessimistic, because the benefits of AI are enormous in numerous areas.”

Michael Zalewski

Michael Zalewski, photographer and artist
Image Credit: Michael Zalewski

Michael Zalewski studied photography at the Academy of Visual Arts (Hochschule für Grafik und Buchkunst) in Leipzig and has been working as a photographer and artist in Berlin since 1999. His focus is on architectural and object photography.

Dr. Dafna Burema

Dr. Dafna Burema, Science of Intelligence / Technische Universität Berlin
Image Credit: Science of Intelligence (SCIoI)

Dr. Dafna Burema is a sociologist and researcher in the Cluster of Excellence Science of Intelligence (SCIoI) at Technische Universität Berlin. She deals with the social impact of AI and other new technologies.