ChatGPT Passed Turing Test
- AI Institute
- May 24
- 4 min read
Updated: May 25
Neil Lawrence, DeepMind Professor of Machine Learning at the University of Cambridge, in an interview on BBC Radio 4's The Life Scientific, May 2025.
Today it's become obvious that AI is going to play a huge part in our future, but not everyone is entirely clear, or indeed in agreement, on how that will pan out. A lot of conversations around AI and machine learning still reference the idea of a dystopian future where Terminator-style robots are our overlords. But don't panic. That's all rubbish, says Neil Lawrence.
Neil Lawrence is the DeepMind Professor of Machine Learning at the University of Cambridge, and a mechanical engineer by training. Neil's story is one of contrasts. He has worked in both academia and at Amazon, and has helped deploy AI and machine learning in fields as varied as movie animation, Formula One strategy, and local planning applications. And he says, ultimately, all his efforts are about making a difference to our everyday lives.
When we talk about AI as an existential threat, the worry is that maybe one day it will be as intelligent as humans.
Neil Lawrence “I think it's a danger of what I would say is a socio-technical existential threat, but I think we're already in the middle of that. Digital tools are already isolating professionals from their decision making. And my worry is, by overly focusing on these futures that are a long way away, we're ignoring the challenges society is facing today that could be made worse or better from this technology.”
Neil Lawrence “The nature of their intelligence is very different. In some sense, they've already exceeded us across many, many parameters. Just look at the stock exchange, where computers are used to make trading decisions at speeds that are inconceivable for humans. And what did that drive? It drove flash crashes, and systems have to be in place to prevent the entire financial system crashing when these computers get a bit carried away. So that's not because they've become intelligent in the way that we're intelligent. But technology, through that type of effect, through fast decision making, can already have a dramatic effect on society, and it's often these effects that are the ones we should be paying attention to.”
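The “systems in place” Lawrence mentions are exchange circuit breakers, which halt trading when prices move too far, too fast. A minimal sketch of the idea follows; the function name and the 7% threshold are illustrative assumptions, not any real exchange's rules:

```python
# Illustrative sketch of an exchange-style circuit breaker: trading is
# halted when the price falls past a threshold relative to a reference
# price. The threshold here is made up for illustration only.

def check_circuit_breaker(reference_price: float,
                          current_price: float,
                          drop_threshold: float = 0.07) -> bool:
    """Return True if trading should halt (price fell past the threshold)."""
    drop = (reference_price - current_price) / reference_price
    return drop >= drop_threshold

# In a flash crash, automated selling feeds on itself; a breaker like
# this interrupts the feedback loop so humans can step back in.
print(check_circuit_breaker(100.0, 95.0))  # 5% dip, below threshold -> False
print(check_circuit_breaker(100.0, 90.0))  # 10% drop, halt -> True
```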
We're confusing different kinds of intelligence. The speed of communication in computing doesn't necessarily mean intelligence.
Neil Lawrence “This intelligence is very contextual. The key difference between human intelligence and machine intelligence is the rate of access to information we have when we're communicating with each other. We're sharing information at around 2,000 bits per minute. That sounds like quite a lot, but two machines communicate at 600 billion bits per minute. In terms of speed, that's the difference between walking pace and light speed. But it doesn't make the machine more intelligent than us, because it's more about what you do with this information access, and so much of what is interesting about human intelligence is how we manage this narrow bandwidth, how, despite this limitation, we achieve extraordinary things.”
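Lawrence's walking-pace-versus-light-speed analogy can be checked with a quick back-of-the-envelope calculation. The bit rates are the figures he quotes above; the 1 m/s walking speed is an assumed round number:

```python
# Back-of-the-envelope check of Lawrence's bandwidth analogy.
# Figures from the interview: humans share ~2,000 bits/minute in
# conversation; two machines exchange ~600 billion bits/minute.
human_bits_per_min = 2_000
machine_bits_per_min = 600_000_000_000

ratio = machine_bits_per_min / human_bits_per_min
print(f"machines communicate ~{ratio:,.0f}x faster")  # ~300,000,000x

# Scaling an assumed walking pace of ~1 m/s by the same ratio gives
# ~3e8 m/s, essentially the speed of light (299,792,458 m/s), which is
# the comparison Lawrence draws.
walking_speed_m_per_s = 1.0  # assumed round figure
scaled = walking_speed_m_per_s * ratio
print(f"walking pace scaled by the same ratio: {scaled:.1e} m/s")
```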
What's your take, then, on the recent news that the latest version of ChatGPT has passed the Turing test, that is to say, it fooled the majority of testers in a scientific experiment into thinking that it was human?
Neil Lawrence “It's interesting, because you have to go back to an argument between Turing and his colleagues that led to the Turing test, this notion about whether it made sense to think about a computer writing a sonnet. What Turing was trying to show was that the notion of thinking didn't really make sense as a scientific concept. He was saying: look, I could build a machine that could fool you. Is that thinking or not? And of course, machines have been able to write sonnets for a few years now, very good ones. But I think the underlying point is that the origin of the sonnet is important. When the machine is writing a sonnet, it's doing it based on all human written work, and the sort of, in inverted commas, ‘understanding’ of humans that comes from reading everything we've written. It can't feel it in the same way that a human does. So even if you can't tell whether something is human-written or not, there's still a significant difference between the origin of that feeling, the origin of that thought. Not that it's not interesting and exciting.”
Clearly there are lots of exciting prospects here, but presumably there are risks as well.
Neil Lawrence “We're already seeing those risks panning out with previous deployments of machines. We're still trying to resolve the Horizon Post Office scandal, and that's a deployment that dates back to the late 1990s: a very clumsy deployment that failed to take account of how these systems can fail and the effect they have on users at the front end, and it destroyed the lives of very vulnerable people. When you look at that story, you see people becoming disassociated from the tools of their trade, and that's an emerging dystopia that we're living in already. But I think the extraordinarily exciting thing about this new wave of machine learning technology is that it could also make things better. It could bring people closer to the machine and give them more control over what the machine is doing, so you can imagine a world where we're asking the computer to do the thing we need, rather than having to adapt to the thing that it can do.”
It’s not about AI controlling our lives; instead, we are empowered to use it the way we want to.