Everyone uses ChatGPT, but it is a black box: Here's how to make the most of it

ChatGPT is the most powerful information tool we have ever had, but we don't understand how it works.

10/10/2023

ChatGPT, AI, Machine Learning, Deep Learning, Misinformation, Critical thinking

Just one year ago, many people used the internet in a different way than they do today.

If someone asked you at a dinner party where Napoleon Bonaparte grew up, you would probably google it on your phone, end up on Wikipedia and say “Corsica.”

But now, you might be just as likely to ask ChatGPT.

ChatGPT gives you the exact piece of information you need: Napoleon was born in Corsica and moved to mainland France when he was nine years old.

You do not have to skim through a lot of text, and if you want to know exactly where he moved, you can just ask a follow-up question.

But there is a catch, which we will get back to later in this article.

A lot has happened in a year

On New Year’s Eve 2022, a lot of people might have already sensed that we were moving into a new era of artificial intelligence. However, as they stood and watched the fireworks, they may not have foreseen just how fast things would go.

Once the fuse was lit for ChatGPT, it blew up.

Since its launch in November 2022, ChatGPT has changed the way millions of people use the internet.

ChatGPT is the fastest-growing consumer app of all time. In less than two months, it passed 100 million monthly users.

In comparison, it took TikTok nine months to achieve the same user base, and roughly two and a half years for Instagram.

“It is very impressive. And it is easy to see why. ChatGPT has a nice interface and can do amazing things. It can give answers to pretty much every single question you can think of. It is a massive step up from the information processing of the search engines we are used to. And perhaps most interestingly, it seems humanlike,” says Daniel Hardt, Associate Professor of Computational Linguistics at the Department of Management, Society and Communication at Copenhagen Business School.

The catch

And now back to the catch. And back to the dinner party.

You have confidently told everyone at the party that Napoleon Bonaparte lived in Corsica until he was nine years old.

But a skeptical know-it-all with a silk tie and a top hat, maybe even a monocle, suddenly looks at you.

“How do you know he moved when he was nine years old?” he asks.

“I read it on ChatGPT,” you reply.

“And where did ChatGPT get the answer from?” he asks again.

The other guests are looking at you now. And you realise that you do not know. The answer seems right, but you have no way to verify it.

When you are on Google, you know that Wikipedia pages are written and moderated by volunteers. This already prepares you to be slightly skeptical of the information.

But the Wikipedia page also tells you that the information about Napoleon’s birth and childhood is cited from two different books, and it gives you their titles and links so you can check them for yourself.

ChatGPT tells you nothing about the source of information.

“I think this is a big problem. And it is actually not only the user who will be unable to verify ChatGPT’s source of information. Scientists like myself, or even the programmers behind ChatGPT, cannot do this either,” says Daniel Hardt.

And the reason is not just that ChatGPT has been trained on vast amounts of information. The size of the dataset itself is not the problem.

“But the fact that ChatGPT has been trained over and over to predict this information and to optimise its answers means that it changes its own algorithm and processes so many times that, in the end, programmers will not be able to follow its deductions.”

ChatGPT is a black box

ChatGPT is a chatbot built on a so-called Large Language Model (LLM).

The simplest explanation is that it is a word-prediction machine. At least, it was in the early days of its training.

You give it a text and cover up one of the words. Then you make it guess the missing word. Then you do it again. And again. And again. Over time, as it is fed new texts and more data, it not only becomes increasingly accurate at guessing words, but also absorbs a vast amount of knowledge and learns to produce humanlike texts and responses.

This process is a type of machine learning called deep learning, where the model trains itself with little to no human intervention. It simply corrects itself through success and failure.
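What does “guess the covered word” look like in practice? Here is a deliberately tiny sketch in Python: a counting model that learns which word tends to follow which. Real LLMs adjust billions of neural-network weights rather than keeping simple counts, so this is only an illustration of the training idea, not of ChatGPT itself.

```python
from collections import Counter, defaultdict

# Toy "word prediction" training: tally which word follows which.
# Real LLMs learn billions of neural-network weights instead of counts;
# the corpus here is obviously a made-up miniature.
corpus = "napoleon was born in corsica . napoleon moved to france .".split()

counts = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    counts[prev_word][next_word] += 1  # "training": observe each word pair

def predict(prev_word):
    # Guess the word most often seen after prev_word during training.
    if prev_word not in counts:
        return None
    return counts[prev_word].most_common(1)[0][0]

print(predict("born"))   # -> "in"
print(predict("moved"))  # -> "to"
```

Scale this idea up from counting word pairs to a neural network trained on hundreds of gigabytes of text, and you get something like the prediction machinery behind ChatGPT.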

“Models like these grow to see everything in every major language in the world. They have so much knowledge. Deep learning is often a very effective training method, but it leaves us with no insights into how it predicts its answers,” says Daniel Hardt.

On top of its deep learning training, ChatGPT as we know it also went through so-called reinforcement learning from human feedback, where people test the answers it produces, give feedback, and steer it away from producing offensive material.

“But this level of training is infinitely small compared to the deep learning training, so the human level of ChatGPT really only scratches the surface, because performing reinforcement learning on a model as big as ChatGPT requires enormous resources,” says Daniel Hardt.
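To make the idea of reinforcement learning from human feedback concrete, here is a minimal, hypothetical sketch in Python. Real RLHF trains a separate reward model on human rankings and then updates the LLM’s billions of parameters; this toy, where a hard-coded reward stands in for a human rater, only captures the feedback loop itself.

```python
import random

# Toy feedback loop: the "policy" is a preference weight over canned
# responses, and human_reward() stands in for a human rater.
# All names and numbers here are illustrative, not from any real system.
responses = ["helpful answer", "rude answer", "evasive answer"]
weights = [1.0, 1.0, 1.0]  # the model starts with no preference

def human_reward(response):
    # Stand-in for human feedback: reward helpful output, punish rude output.
    return {"helpful answer": 1.0, "rude answer": -1.0, "evasive answer": 0.0}[response]

def sample(weights):
    # Pick a response with probability proportional to its weight.
    r, acc = random.uniform(0, sum(weights)), 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

for _ in range(1000):
    i = sample(weights)
    weights[i] = max(0.01, weights[i] + 0.1 * human_reward(responses[i]))

print(max(zip(weights, responses)))  # the helpful answer ends up preferred
```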

Once the training of GPT-3.5, the model behind the original ChatGPT, was completed in early 2022, it had chewed through a massive dataset of 570 gigabytes of text drawn from websites, books, news articles, journals and social media.

And newer versions, such as the paid GPT-4 and the expected GPT-5, expand on the computational scale of GPT-3.5, making their training much more extensive.

Through this self-correcting process, where the AI gets better and better at predicting, its reasoning becomes more and more obscure.

“And by the end no one really has a clear image of the process, because it has become so advanced. It really is a black box for everyone,” says Daniel Hardt.

The problems

Okay, so ChatGPT does not reveal its sources. As long as the information is correct, no big deal, right?

Well, first of all, you can never be 100 per cent sure that the data is correct. ChatGPT still makes mistakes: sometimes there is a flaw in the data it has, and sometimes it does not understand your question correctly. It is not the best at solving math equations either.

But all of this goes for Google, Bing and other traditional search engines as well. You can never be sure that the data is correct.

But the big difference is that, without a source, there is no way to spot why the information might be wrong: potential bias, malicious intent, or plain mistakes by the original writer.

“It is trained on human language, and so it learns the same kind of faulty connections that humans make. So, of course it makes mistakes. And no source is perfect, not even CBS professors. But what is really worrying is that we can never inspect the source of information. Not even if we have the training data. It has been trained so thoroughly that it is too difficult to tell,” Daniel Hardt says and adds:

“If we end up in a situation where everyone shares information without sources, then we are going to have a big misinformation problem. Adding to that, ChatGPT can be, and already is being, used to write code for hackers, and spam messages and phishing mails for bots and cybercriminals. Like all tools, it can be used for good as well as bad.”

“The main problem is not that the general public does not understand how ChatGPT works. The main problem is that no one does. Researchers and even the programmers themselves do not understand what happens once you put in that prompt.”

Daniel Hardt, associate professor at CBS

“ChatGPT is a miracle”

Reading through this article, you might start to worry that ChatGPT and similar chatbots are going to have a negative impact on our knowledge. But that is definitely not the case according to Daniel Hardt. In fact, he calls ChatGPT a miracle.

“It is an absolute miracle. This has been the holy grail of AI researchers ever since the term was invented in the 1950s. In fact, the idea of a mechanical version of human language was a dream of Enlightenment philosophers such as Descartes and Leibniz. I would say most great thinkers throughout history were convinced this could never happen. So, from an intellectual point of view, it has massive significance. From a practical point of view, it is also, of course, of great importance. Obviously, language is the perfect interface for humans. It is the ultimate killer app. This is why ChatGPT has been adopted more quickly than any application in history,” he says.

Similar to when the internet first came around, we now have a much more powerful tool for information processing that all of us can access. But what about the misinformation problem?

“The spread of misinformation has been a problem ever since we have had communications media. ChatGPT and other LLMs accelerate everything related to information; both the spread of important knowledge and the spread of misinformation are being accelerated. What we can hope is that ChatGPT is spreading valuable knowledge more than it is spreading misinformation. If ChatGPT helps people to become better informed more generally, it will help them be more skeptical about misinformation.”

Education will help us use ChatGPT for good

But what can we actually do to ensure that ChatGPT does more good than harm when it comes to misinformation? According to Daniel Hardt, it is about education and restrictions.

The first part, education, entails reflecting on how you actually use ChatGPT yourself: being aware that its sources of information are unclear, and giving answers an extra check instead of just sharing what ChatGPT told you.

“I think the first thing you must be aware of is what you are actually using ChatGPT for. Do you want it to be factual, or do you want it to be argumentative and convincing? Maybe you actually want it to give you a right-wing or left-wing perspective on something, and then that is fine. We just need to be clear about what we need from it, and how to prompt it correctly to do so. And learning how to prompt is currently an active research area, and something I believe we should start teaching our students,” Daniel Hardt says. He adds:

“But this is also happening naturally. I think the technology is so popular and appealing that many people have rapidly become experts in the use of ChatGPT, and this has been happening while we academics are sitting around wondering how best to educate people.”
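What Hardt describes, telling the model whether you want it to be factual or argumentative, happens in practice through the prompt. As a purely illustrative sketch, here is how the two kinds of request might look using the openai Python package in its 2023-era interface (the API key is a placeholder, and newer versions of the library have since changed this interface):

```python
import openai

openai.api_key = "sk-..."  # placeholder; use your own key

def ask(instruction, question):
    # The system message tells the model what role to play.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

question = "Should large language models be regulated?"

# Factual use: ask for balance and explicit uncertainty.
print(ask("Answer factually, present both sides, and note uncertainty.", question))

# Argumentative use: deliberately ask for one perspective.
print(ask("Argue the case for strict regulation as persuasively as you can.", question))
```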

Regulation is part of the solution

Daniel Hardt also emphasises the importance of regulating models like ChatGPT to avoid the spread of misinformation.

“I would like to see all LLMs required to watermark their output, so you can always tell when a text has been produced by an LLM, and which one it was. This is technically difficult, but I am convinced it is possible. If companies were required to do it, they would figure it out. I would also like to see a pause in the development of larger LLMs. The models we have now are huge and incredibly powerful. We need to learn how to use them and how to control them. We do not need bigger models.”
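How could text carry an invisible watermark at all? One proposal from the research literature (Kirchenbauer et al., 2023) has the model prefer words from a secret, pseudo-random “green list” while generating, so that a detector can later count how many green words a text contains. The sketch below, in Python, shows only the detection idea; the scheme, hash and interpretation are illustrative, not anything a deployed LLM actually uses.

```python
import hashlib

def is_green(prev_word, word):
    # Pseudo-randomly assign half of all words to a "green list" that
    # depends on the preceding word (a stand-in for the secret key).
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text):
    # Ordinary text lands near 0.5 by chance; a model that was biased
    # towards green words while generating would score well above that.
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    return sum(is_green(a, b) for a, b in pairs) / max(1, len(pairs))

print(green_fraction("napoleon was born in corsica and moved to france"))
```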

And remember:

“Even though ChatGPT of course can be wrong, it is so much better at giving us the information we need than anything we have ever seen before. If you think of any kind of task involving information, it can help you do it better. Therefore, it is important we learn how to use it in the best way possible.”
