We are poor at spotting fake news – but one simple measure makes us better
A new CBS study shows that neutral and explanatory information helps us assess news even when it goes against our political views
We are drowning in fake news.
Immigrants eating cats and dogs. 5G networks that give us COVID-19. Forest fires started by laser weapons.
Some false stories are easier to recognise than others. But when the media suddenly speculate about whether Kate Middleton is dead, or when presidents post accusations of election fraud on social media, it can be hard to decipher what is real.
According to large surveys conducted in 2025 by Eurobarometer and the Reuters Institute, 66 percent of us encounter fake news at least once a week. Perhaps even worse, more than half of us worry about whether we can tell the difference between true and false stories.
New research from CBS now confirms the concern: we humans are simply not very good at judging whether a story is true or false. In a new study with around 1,300 participants, people identified the correct answer roughly 60 percent of the time, only slightly better than the 50 percent expected from random guessing.
Fortunately, the new study also shows that we can improve with just a little help.
“Our research shows that we can in fact do something to improve, even when we must challenge our own political positions,” says Associate Professor Daniel Hardt, one of the authors of the new large-scale study.
The study has just been published in MIS Quarterly and was written in collaboration with CBS colleagues Associate Professor Abayomi Baiyere, Associate Professor Jan M. Bauer and Professor Ioanna Constantiou.
Warnings about fake news do not work – but information does
We will get back to those political standpoints in a moment.
First, we need to look at what works and what does not.
You may have come across posts on Facebook with a small warning label underneath. Social media platforms previously tried to tag false news, but this turned out not to work as intended. Last year, Meta decided to discontinue the labels altogether.
“Earlier research shows that labels do not work. We do not like being told what to think. By contrast, our research shows that when a news story is accompanied by a small piece of contextual information that does not pass judgment on its truthfulness, we become better at evaluating whether it is true or false,” says Daniel Hardt.
The new study shows that with this type of support, we identify the correct answer in 66 percent of cases. That may not sound like much compared with 60 percent, but it is a relative improvement of 10 percent, and since errors fall from 40 to 34 percent of cases, it also amounts to a 15 percent reduction in errors.
“And we must remember that there should always be room for some disagreement when it comes to political news. The goal is not that we agree with every story but that we have the best possible conditions for evaluating it, so 100 percent is not the aim,” he says.
Neutral information helps us spot fake news
So what exactly did the researchers examine? It is somewhat technical, but here is the short version:
In one experiment, the researchers showed 1,300 participants a series of stories. Some were true, others fake. On average, participants judged correctly in 60 percent of cases.
In the second part of the experiment, participants were divided into two groups. One group was shown news stories together with ‘discursive evidence’, meaning an explanation of the story written by a professional fact checker.
However, the fact checker’s verdict (true or false) was removed, so participants had to make their own judgement.
The other group received either no additional information or low-quality information – defined in the study as the top headlines from a basic Google search.
The group receiving the contextualised, fact-checked information assessed the stories correctly in 66 percent of cases. The other group did not improve their accuracy at all.
According to Daniel Hardt, this suggests that additional information which does not tell the reader what to think is more effective than warnings stating that a story is false.
We can change our minds – even when the story challenges our views
The researchers also examined whether participants were willing to revise their judgement when a story conflicted with their political beliefs. They were – to the same extent as when the story aligned with their views.
“I was actually quite surprised by that,” says Daniel Hardt.
According to him, part of the explanation may be that participants were not given a conclusion.
“We do not give them a verdict. We give them relevant information and let them judge the story themselves. Perhaps we are more inclined to accept a conclusion when we arrive at it on our own,” he says.
Just thinking about true and false helps
In another part of the experiment, participants completed a short language task. For example, they had to find antonyms for words such as ‘true’, ‘genuine’ and ‘correct’.
This practice of subtly preparing participants without their conscious awareness is a common method called priming.
Afterwards, they were asked to evaluate the truthfulness of different stories. The word task made them more aware of thinking in terms of true and false while reading the news, and they became better at using the information they received: accuracy increased by around four percentage points.
“It may not sound like much, but for this type of task it is a clear improvement,” says Daniel Hardt.
Not a miracle cure – but an important signal
For Daniel Hardt, the point is not that we should all end up agreeing on everything.
“It is not about everyone reaching the same opinion. It is about having slightly better conditions for evaluating the information we encounter,” he says.
And this is, in his view, the core message of the study.
“There is a widespread belief that humans simply cannot become better at assessing news. Our results show that this is not true.”
Next step: Can AI help us evaluate news more accurately?
The next step in his research is to explore how the type of explanatory information shown to work in the study can be brought to far more people in practice.
When the project began several years ago, it did not seem realistic to provide qualified expert information for every story we encounter online or on social media. But technological developments have changed that picture.
According to Daniel Hardt, advances in artificial intelligence in particular open the possibility of testing solutions where users automatically receive short, explanatory context for stories without being given a conclusion.
“Today, this no longer seems unrealistic. It would be interesting to see how such a solution could be implemented broadly and used to help us evaluate news more accurately,” he says.
Facts
About the researchers
- The study is a collaboration between researchers from the Department of Digitalization and the Department of Management, Society and Communication.
- The study was written by Associate Professor Abayomi Baiyere (DIGI), Associate Professor Jan M. Bauer (MSC), Professor Ioanna Constantiou (DIGI) and Associate Professor Daniel Hardt (DIGI).
About the study
- The study asked 1,300 participants to identify short news stories as either true or false.
- On average, the participants got the correct answer 60 percent of the time.
- With a bit of help in the form of neutral, factual information alongside each news story, participants increased their accuracy to 66 percent. This information came from a high-quality source: professional fact checkers.
- The study also tested whether 'low-quality' information, such as the top headlines from a basic Google search, would help. This did change some verdicts, but in both directions, ultimately not improving participants' accuracy.
- Interestingly, the study found that good-quality factual information improved participants' accuracy even when the correct verdict went against their own political position.
- The study also found a smaller effect from 'priming' participants to think about the words 'true' and 'false' ahead of the exercise.
- The study is published in the academic journal MIS Quarterly.