I was in attendance at The Royal Society on Monday 17 November 2025, to see Professor Iryna Gurevych receive the Milner Award 2025 for her major contributions to natural language processing (NLP) and artificial intelligence, contributions that combine a deep understanding of human language and cognition with the latest paradigms in machine learning.
This is a topic at the forefront of many minds, and for many reasons. We as a society have been immersed in, and somewhat subjected to, technology for years, but what has substantially increased is the suspicion that our devices are listening to us and that tech companies are trying to control what we view. We now have to contend with algorithms, bots on social media, the hallucinations of generative AI and much more, but these are artificial. What about clickbait orchestrated by mainstream media? There is a difference between someone knowingly publishing a headline or article to attract and influence an audience (and, as a byproduct, create societal divide), and someone who publishes the output of generative AI, or of more manual research, without reviewing the content beforehand. It is intent versus complacency, but given the amount of content online, what can we trust?
I believe it is not up to the computer scientists of this world to tell us what to believe or not; we have the information to hand, so I think it is the responsibility of all of us to scrutinise content before we play a part in sharing it or forming an opinion of it. Scientists, legal professionals and the like will question authenticity, scrutinise details and look deeper into any matter, but this does not have to be limited to those who work in those fields. Unless there is a capability issue, all of us can question and scrutinise the validity of anything we see online, and perhaps we should be doing this as general practice before we help raise awareness of it.
Misleading content is hard to spot — and equally dangerous to humans who consume it and to generative AI that might amplify it. Examples of such misleading content include false claims on social media supported by the misuse of credible scientific publications, images or videos taken out of their original context and paired with false narratives, and misleading charts designed to persuade audiences to accept inaccurate statements.
How can we identify and debunk misleading claims? The talk explains the tactics used to create deceptive content and demonstrates how the latest advances in machine learning and artificial intelligence can be applied to protect both people and machines from misinformation.
Please keep in mind that there is a difference between disinformation (deliberately deceptive) and misinformation (shared in error), but in recent times the lines have blurred. It is easy for those who publish content for deliberate gain, whether that content is correct or not, to then claim it was an accident.
Please click on the content below to see this fascinating lecture, and to watch Professor Iryna Gurevych receive the award.
Some of the text above was taken from The Royal Society’s YouTube channel of this talk.