ChatGPT – Promise or Peril?

ChatGPT is a large language model (LLM) developed by OpenAI. It can generate human-like responses to a wide variety of prompts, ranging from simple questions to complex conversations. People all over the world have used it since its launch in November 2022. Its ability to create copy, drawing on a knowledge base of Internet text up to its training cutoff, has left many users amazed and enthusiastic about its potential.

Whereas Google takes a very structured and iterative approach, focusing on incremental improvements, OpenAI is rolling out AI in a more exploratory and experimental way, with a focus on discovering new and innovative use cases – and, of course, on being first to market.

OpenAI states that it released ChatGPT merely to “get users’ feedback and learn about its strengths and weaknesses” – which, in other words, means it is far from perfect. Now that the early enthusiasm seems to be fading, it is time to look at the limitations of ChatGPT in several areas:

1. ChatGPT and data protection

Italy has become the first Western country to block the chatbot, at least temporarily banning ChatGPT over privacy concerns about the software’s ability to collect personal information without permission. Several companies have taken similar measures: JP Morgan has banned staff from using ChatGPT; Amazon reportedly told team members not to feed the AI with confidential customer data; and both Verizon and Accenture have acted accordingly.

2. Copyright issues

What ChatGPT and similar AIs do is find unlimited ways of aggregating existing text: they provide a means of putting endless content across an immense variety of domains into structures in which further questions can be asked and, on occasion, answered. The AI does not understand or even compose text. This is why science journals ban listing ChatGPT as a co-author on articles. This immediately raises the question of copyright: currently, no sources are quoted and no credit is given. When questioned about this topic, a campaigner for ChatGPT quipped: “This is the moment where we run for the hills.”

3. Presenting imagination as fact and fueling bias

AI learns from the content and feedback available to it. Since much of the content published on the Internet contains or reflects biases or discriminatory attitudes, the outputs are bound to resemble that content. A chatbot has no concept of truth; its purpose is not to evaluate information, only to regroup and restructure it. It simply does not know what the truth is.

Future AI systems will have to be based on new technology blueprints: today’s autoregressive large language models produce hallucinations by design, and it is hard to control them and make them factual or non-toxic.
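To make the “by design” point concrete, here is a minimal, purely illustrative Python sketch of autoregressive generation. The tiny vocabulary and probabilities are invented for this example and have nothing to do with OpenAI’s actual models; the point is that the generator only ever asks “which word is likely to come next?” and at no step checks whether the resulting sentence is true.

import random

# Toy "language model": for each word, a list of plausible next words with weights.
# All entries are invented for illustration only.
TOY_MODEL = {
    "the": [("capital", 1.0)],
    "capital": [("of", 1.0)],
    "of": [("France", 0.6), ("Atlantis", 0.4)],   # fluent, but possibly false
    "France": [("is", 1.0)],
    "Atlantis": [("is", 1.0)],
    "is": [("Paris", 0.7), ("beautiful", 0.3)],
}

def generate(prompt, max_tokens=6):
    tokens = prompt.split()
    for _ in range(max_tokens):
        candidates = TOY_MODEL.get(tokens[-1])
        if not candidates:
            break                                  # no known continuation, stop
        words, weights = zip(*candidates)
        # The next word is sampled purely by probability - no fact checking.
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("the capital"))  # may print "the capital of Atlantis is Paris"

A real LLM works with a vastly larger vocabulary and takes the whole preceding context into account, but the generation loop follows the same pattern: predict the next token, append it, repeat. Fluency is built in; factuality is not.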

4. Controlling AI

At the end of March, key figures in tech signed an open letter warning of the potential risks of powerful AI systems. It states that AIs could flood information channels with misinformation and replace jobs with automation. This followed a report issued by investment bank Goldman Sachs, predicting that up to 300 million jobs could be affected by AI-based increases in productivity and automation. In a recent blog post, OpenAI itself warned of the risks if an AGI were developed recklessly.

Against this background, the Deputy Director General of The European Consumer Organization BEUC warned that society was “currently not protected enough from the harm” that AI can cause. “There are serious concerns growing about how ChatGPT and similar chatbots might deceive and manipulate people. These AI systems need greater public scrutiny, and public authorities must reassert control over them,” she said.

Understanding the benefits and limitations of ChatGPT

Artificial intelligence has potential benefits in many areas, for businesses as well as for citizens. Take communication: AI makes it much easier for people who don’t share a common language to communicate, and it helps people communicate with computers. ChatGPT produces convincing copy. The AI system has not only extensive knowledge, but also the means to express it. And some people make the case that bias is much easier to erase in AI than it is in humans.

However, we must not forget that content created by an AI, while in some cases based on facts, is always fiction. When challenged – and the interface makes it easy to do so – the bot nearly always admits that it was just making things up. AI doesn’t understand or even compose text. All it does is find unlimited ways of aggregating existing text. It is only reactive, and it cannot reason.

AI – handle with care

We should neither demonize nor glorify ChatGPT. It isn’t the first step in creating an artificial general intelligence that understands the world on the basis of all the texts available online representing human knowledge. Nor is it a demon. It is a tool that makes all that knowledge, all those texts, accessible to us – an interface into our digitized world. It is a great instrument to play on, helping people find answers to questions or escape writer’s block. As long as we stay aware that it is no more than just that, i.e. a starting point, and don’t believe a word we read without verification, it can be truly valuable to us.
