Article   March 22 2023

Don’t be fooled. ChatGPT wants to help you – too much

As a content producer, using AI to help you generate ideas, reformulate tricky sentences or ask for input has become a natural part of work. But beware when asking it for sources to back up a claim or to generate a scientifically backed paper – here’s why.


Hanna Rönnqvist, Content Developer, Junglemap Photo: Isabella Kubitsky Torninger

Good for summarizing and simplifying

Just a couple of months into life with OpenAI’s generally available ChatGPT, it has already become second nature to many writers and researchers. In just a couple of seconds it can offer a simpler version of your complicated sentence, abbreviate or summarize a text, or even give you handy recommendations on SEO words to work into your text. But when we asked it for peer-reviewed science articles to back up a claim, we realized that at least the present version of ChatGPT has its clear limitations.

AI is really helpful – sometimes a bit too helpful

The top reference on the list sounded perfect – exactly what I was looking for! When asked for a quick summary of the abstract, ChatGPT provided one, and it really seemed to fit the research question. Very helpful! There was just one hiccup: the text did not, in fact, exist anywhere. I searched Google Scholar. Google. Heck, I even ventured into Bing, but no luck. The authors existed, but they had never co-written anything that I could find, and the title didn’t seem to generate any hits either. I asked ChatGPT whether any of the references might have been generated, and it sternly claimed that “All of the references I provided are real and valid scientific papers that have been published in reputable journals.”

Not all references are generated equal

I went down the list and found about a third of the references in other sources, but these were only vaguely related to the subject I was researching. It rather seems as if ChatGPT fills any gaps in the research field with plausible, very real-looking references. With its access to big data sets it can quickly invent plausible authors and co-authors who work in the right field, a title that fits the research question, and a journal (along with edition and page numbers) where the paper is “published”. It has done so when asked for the most cited economics paper. When it feels the need to “fill in the gaps” also seems to vary. And as the model doesn’t have access to any real-time sources, it cannot itself verify that the references are, in fact, real. But it will claim that they are.

Letting AI explain

Others have dug deeper into this subject if you’re interested in reading more, but I decided to also ask ChatGPT to explain its behavior: “As an AI language model, I have access to a vast database of information, including academic papers, journals, and other sources. However, sometimes, the information that I have is insufficient or outdated, or there may not be a scientific reference available to support a particular statement or claim. In such cases, I use my natural language processing capabilities to create a plausible reference that fits the context.”

Very helpful. But a tad too helpful if you want to back up your claim with science. Let this be a good lesson for us: even when AI does the work for us so well, we can never stop using our critical thinking, checking facts and verifying the information we get out of it. The need for human scrutiny is still strong, both when using AI to search for references and when reading a well-written text backed up by a long, serious reference list. Check the references: do they exist? Do they say what the author claims?
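One practical way to run that existence check is to look each suspicious title up in a bibliographic database. As a minimal sketch (assuming Python and the public Crossref REST API; the function names are my own, and a title can of course exist in other databases even if Crossref misses it), a script could search each title and flag the ones with no matching record:

```python
import json
import re
import urllib.parse
import urllib.request

def normalize(title: str) -> str:
    """Lowercase and strip punctuation so titles can be compared loosely."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def reference_exists(title: str, rows: int = 5) -> bool:
    """Search Crossref for the title and report whether any returned
    record matches it after normalization."""
    url = ("https://api.crossref.org/works?rows=%d&query.bibliographic=%s"
           % (rows, urllib.parse.quote(title)))
    with urllib.request.urlopen(url, timeout=10) as resp:
        items = json.load(resp)["message"].get("items", [])
    wanted = normalize(title)
    # Crossref stores titles as a list of strings per record.
    return any(normalize(t) == wanted
               for item in items
               for t in item.get("title", []))
```

A "False" result is a strong hint the reference was invented, though the final check – does the paper actually say what the author claims? – still needs a human reader.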

So, where did my source-criticism conversation with AI bring me? After I pointed out that 70% of the reference list was false, ChatGPT apologized, explained that mistakes can happen, and immediately generated three new sources that might be of better help.

All three were fake. Drop curtain.


Hanna Rönnqvist
Content Developer Junglemap

This article has also been published on LinkedIn.
