

Wednesday, June 21, 2023

SoundTrack_Cologne 20

AI, Authors Rights & Moral And Personality Rights

Keynote by Matthias Hornschuh, Spokesperson, delivered on June 20th, 2023 at SoundTrack_Cologne 20

Hello everyone!

Who would have guessed that 20 years later we would still be getting together to talk about music in the media (happy birthday, STC!), and who would have guessed the topics ...

It started with LENSA, then came DALL-E, ChatGPT, Midjourney ... Everyone is talking about it, but hardly anyone knows what exactly it is and how it works: AI. That applies to the technology as well as to its social, cultural and countless other aspects. We lack an overall, holistic view, but only on the basis of such a view can we balance opportunities and risks, and figure out the set of rules we need for our dealings with the technology and the guardrails that should bind its providers.

AI, for all its promise, raises uncomfortable questions; not a few of them can be traced back to the search for the essential difference between man and machine, for identity and intelligence.

Personally, I am convinced that humans and society must be at the centre of any regulatory effort, and that generative AI should be the central subject of a European AI regulation.

Three levels have to be distinguished, namely INPUT, PROCESSING and OUTPUT.

At the INPUT level, immense corpora of data of various kinds, and often of unclear origin, are being used to train the systems. Quite a bit of it is personal data, thus harboring critical information: medical records, mobility data, information about communication and consumption behavior. Other data includes copyright-protected works and their recordings: our livelihood as creators.

What exactly happens at the PROCESSING level is, by the operators' own admission, not entirely clear even to them: they speak of BLACK BOXES.

The products of generative AI at the OUTPUT level are by definition not WORKS. Works in the sense of German copyright law are only personal intellectual creations. An AI, however, is not a person, and what is not a Work does not enjoy copyright protection. We will have to deliberate further on the extent to which the prompts of the users can establish such protection.

Although the training corpora do not consist entirely of copyrighted works and recordings, they include them almost in their entirety, at least as far as they are available on the internet, and essentially without permission and without remuneration. I'll spare us the details about TDM and the declaration of reservation, but not the reference to a current Stanford study, which shows that of 10 relevant foundation models, including the German entity Aleph Alpha, only 2 consider copyright clauses in terms of use when scraping training data. Our works are presently being used to train the very machines that will rob us of our livelihood sooner or later and will confront society with fundamental problems.

The political demands of the authors' associations for a regulation of generative artificial intelligence can be summarized in five central issues.

We need:

  1. TRANSPARENCY about the origin / nature & scope / legitimacy of the training data.
  2. LABELING of any AI generated OUTPUT.
  3. REMUNERATION for works & recordings used for training of AI.
  4. NO TRAINING of AI WITHOUT CONSENT – this should apply for works AND for personal data!
  5. NO HUMAN RIGHTS FOR AI! Sounds silly? It's silly that we even have to talk about it.

I am deeply concerned with one aspect that is of central importance but widely misunderstood: the close, intrinsic connection between moral and personality rights and copyrighted works.

"Clone the voice of anyone in seconds": that's the claim of the voice-cloning AI vocloner.

What is this? The "democratization" of third-party voices? When Emma Watson's voice reads Hitler's MEIN KAMPF, and the performer simply cannot do anything about it, we witness the separation of the VOICE from the PERSON. And that is a monstrous process.

The voice is the identification and expression of an actress; with her voice she earns her money, and precisely that is regulated in German copyright law, or more precisely in the neighbouring rights. According to its §§ 1 and 11, copyright law is above all a moral and personality right; the term remuneration first appears only in § 11, sentence 2.

But beyond the stage or beyond studio microphones, actresses are simply human beings.

And here, too, the law takes hold and protects, for example through the right to informational self-determination, the relationship of a person to his or her personal data.

The two legal constructions are virtually identical; both protect a person's relationship to an intangible object that cannot be separated from the person, specifically: a piece of personal information, or a copyrighted work and its interpretation and recording. Both are an expression of human dignity.

Generative AI has, per se, a personality-rights dimension, and authors' rights are not only property law but also personality law, with consequences: unlike an AI or its operator, authors are liable for their works. Whoever speaks out has control over what is said. Anyone who photographs a historical situation vouches for the authenticity of the image created.

Whoever publishes an accusation as an investigative journalist vouches for its truthfulness through professional ethics and personal reputation.

But what truthfulness, what truth are we supposed to grant to content when anyone can generate seemingly authentic documents with AI tools at low cost? How, in the first place, shall we agree on what is true in the future? And what if the operators of generative AI themselves own the distribution channels ... as Google does?!

We need a strict structural separation of the generation and the distribution of AI products: foundation model providers must not simultaneously operate central platform services for the distribution of digital content. This applies especially to search engines and social media. The creator principle of copyright law must remain untouched. A labeling obligation for AI-generated content will not solve all problems, but it will at least enable those who want to know to distinguish man-made from machine-made content.

In short, AI is here, and it's not going to go away. If we establish guardrails that identify the viable paths and show the limits of what is permissible, if we assign responsibility and make the law enforceable, we can make a difference.

We are not calling for a ban on driving; we need a traffic code.

We demand: AI? Act now!

Thank you!

Press contact: info@urheber.info