Generative AI and its risks

In the global conversation about emerging technologies there is a new main character: generative artificial intelligence (AI). As defined by Amazon Web Services, this is a type of AI that can create and edit content and ideas, such as conversations, stories, images, videos and music. While the underlying techniques have been around for a few years now, the launch of ChatGPT and of similar systems such as Bard and LLaMA has brought generative AI to the forefront.

Like other technologies, generative AI offers a range of opportunities in different areas (which has led many to speak of a “revolution” or a “new era”), but also risks that we need to be aware of and watch out for. In fact, many of these risks have been with us for decades; what generative AI does is amplify them, making them more evident and potentially more pernicious. A first example is privacy. Generative AI is based on machine learning models, i.e. algorithms that are trained on huge amounts of data. Both the collection and the processing of such data should always take place transparently and in accordance with local legislation, and the data should be stored with adequate security safeguards.
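To make the privacy point concrete, here is a deliberately tiny sketch in Python of what “trained on data” means: a bigram text generator, a vastly simplified relative of the systems named above, not how any of them actually works. The corpus, the name and the phone number in it are all invented for illustration.

```python
import random
from collections import defaultdict

# Toy corpus; the "personal" line stands in for data scraped without
# consent. The name and number are invented for illustration.
corpus = (
    "alice's diary: my phone number is 555-0199. "
    "alice's diary: my phone number is 555-0199. "
    "the weather was lovely today and the garden bloomed."
)

# "Training": record which word follows which (a bigram model).
model = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    model[current].append(nxt)

# "Generation": sample a continuation, word by word.
random.seed(1)
word = "alice's"
output = [word]
for _ in range(8):
    followers = model.get(word)
    if not followers:
        break
    word = random.choice(followers)
    output.append(word)

print(" ".join(output))
```

Because the training text contained the phone number verbatim, the generator can reproduce it word for word: a miniature version of how personal data absorbed during training can resurface in a model’s output.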

Another important issue, also related to the data used to train the algorithms, is bias: a system trained on biased data will yield inaccurate or biased results. As we come to rely more and more on these systems, trusting their answers to be objective or impartial (the so-called “automation bias”), the risk of discrimination and of amplifying mis- and disinformation grows.
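A deliberately minimal sketch of that mechanism, with invented data: a “model” that simply memorises the most common past decision for each group. Real systems are far more complex, but the dynamic is the same: skewed history in, skewed decisions out.

```python
from collections import Counter

# Invented historical decisions: group A was approved far more often
# than group B, as an artifact of past practice, not of merit.
history = [("A", "approve")] * 90 + [("A", "reject")] * 10 \
        + [("B", "approve")] * 30 + [("B", "reject")] * 70

# "Training": for each group, memorise the most common past outcome.
counts = {}
for group, outcome in history:
    counts.setdefault(group, Counter())[outcome] += 1
model = {group: c.most_common(1)[0][0] for group, c in counts.items()}

# The model faithfully reproduces, and automates, the historical bias:
# every A applicant is approved, every B applicant rejected.
for group in ("A", "B"):
    print(group, "->", model[group])
```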

A third challenge is that of intellectual property. Many voices are already warning about the lack of originality of content created with generative AI-based tools, and about the possibility that they are infringing copyright. A few weeks ago, George R.R. Martin, author of the books on which the famous Game of Thrones series is based, joined other authors in suing OpenAI, arguing that their books were used to train ChatGPT without prior consent.

The risks also extend to cybersecurity, not only because models can be manipulated and poisoned, but also because, by emulating human behavior in an increasingly convincing way, generative AI offers new tools to criminal groups, for example by imitating a person’s voice almost perfectly.
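Model poisoning is easiest to see in miniature. Below is a toy Python example: a keyword-counting spam filter whose training data an attacker can contaminate. The messages, trigger words and attack are all made up for illustration; real attacks target far larger models, but the principle of corrupting training data to change behavior is the same.

```python
from collections import Counter

# A toy spam filter "trained" by counting how often each word appears
# in spam versus legitimate mail. All messages are invented.
spam = ["win money now", "cheap money fast"]
ham = ["meeting at noon", "see you at lunch"]

# An attacker who can inject training data "poisons" the legitimate
# set with messages containing the spam trigger word.
poisoned_ham = ham + ["money for the charity lunch"] * 10

def train(spam_msgs, ham_msgs):
    spam_words = Counter(w for m in spam_msgs for w in m.split())
    ham_words = Counter(w for m in ham_msgs for w in m.split())
    return spam_words, ham_words

def is_spam(message, spam_words, ham_words):
    # Classify by whichever corpus the message's words dominate.
    score = sum(spam_words[w] - ham_words[w] for w in message.split())
    return score > 0

for ham_set, label in ((ham, "clean"), (poisoned_ham, "poisoned")):
    spam_words, ham_words = train(spam, ham_set)
    verdict = is_spam("send money now", spam_words, ham_words)
    print(label, "->", "spam" if verdict else "ham")
```

With clean training data the message is flagged as spam; after poisoning, the same message slips through as legitimate, without the filter’s code ever being touched.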

In a turbulent and conflict-ridden world, where the boundaries between the analog and the digital are increasingly blurred, it is essential to be aware of the opportunities, but also of the risks, involved in new technological tools. These tools are already part of our lives and will only grow in importance. In this context, digital and information literacy, which means not only knowing how to use these tools but also understanding how they work, and always applying our critical thinking, is becoming an increasingly precious skill, one that enables us to decide and act freely and in an informed manner.

*María Laura García is the author of “Our Digital Challenge: Informing. Thinking. And freely deciding in the cyber era.”

María Laura García
Chairwoman, Business Committee