Does OpenAI already have an AGI?

 

Back in 2014, Elon Musk said “Mark my words – AI is more dangerous than nukes”. The press collectively rolled their eyes and sneered. But not me, because I’d read existential risk expert Nick Bostrom’s book ‘Superintelligence’ too. It’s a subject that has long interested me anyway – I wrote an unpublished novel about it 30 years ago.

Then something happened recently that turned my worry dial to eleven. Now you’ll be thinking I’m going to re-hash all the recent sensationalist stuff on the subject, but I’m not. Instead, I want to focus on some specific comments you may not have picked up on and then draw a surprising conclusion – that OpenAI (and perhaps not only OpenAI) may already have developed an AGI.

What caught my attention were comments, mostly in interviews, by several leading figures in AI, especially by ‘Godfather of AI’ Geoff Hinton, his former student (and now chief scientist at OpenAI) Ilya Sutskever, and Sutskever’s boss at OpenAI, Sam Altman. I’ll examine them in that order.

Following his exit from Google, Hinton gave some in-depth interviews. Those interviews are long, dry and at times quite technical. I don’t blame the MSM – often technically illiterate and with a short attention span – for missing a few key points.

It’s true that lots of pundits have been making wild claims about the dangers of AI, but Hinton is different. A wealthy 75-year-old at the end of a glittering academic career in cognitive psychology and deep learning, Hinton isn’t a Musk (or an Altman); he has nothing to sell. His presentation is restrained by a lifetime of peer-reviewed caution. It’s startling, then, that Hinton quit Google to alert the world to the existential risks of AI; and what he says (and implies) seems – on the face of it – uncharacteristically alarmist and dramatic.

What persuaded Hinton to ‘come out’ about his AI existential risk fears seems to have been a ‘very recent’ (he emphasises this) reversal in his views about LLMs’ potential for exceeding human intelligence. Incidentally, Hinton isn’t alone in having had a recent change of mind about this – Douglas Hofstadter, the famous polymath cognitive scientist and Pulitzer-winning author of ‘Gödel, Escher, Bach’, has had one as well.

Hinton’s fundamental research interest is how the brain works. He’d long assumed that AI would have to mimic it more and more closely to advance. However, the fact that GPT-4 has perhaps the knowledge of 1,000 brains with a hundredth of the neural connections (perhaps ‘just’ a trillion), and the realisation that the brain probably doesn’t use backpropagation to learn as LLMs do, have made him wonder if deep learning might actually be a ‘better’ form of intelligence after all.
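
To put rough numbers on that comparison, here is the back-of-the-envelope arithmetic. Both figures below are ballpark public estimates used purely as assumptions, not anything Hinton stated precisely.

```python
# Back-of-the-envelope arithmetic behind the '1,000 brains with a hundredth
# of the connections' comparison. Both figures are rough public estimates,
# used here purely as assumptions for illustration.

human_synapses = 100e12   # roughly 100 trillion synapses in one human brain
llm_connections = 1e12    # roughly 1 trillion weights rumoured for GPT-4

ratio = human_synapses / llm_connections
print(f"One brain has about {ratio:.0f}x more connections than the model.")
```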

I should point out something about LLMs that may surprise you. What an LLM does – essentially just predicting the most likely next token – is quite simple. But exactly how it does it is unknown. The behaviour of the trillion-connection neural net that underlies it is a black box. Consequently, questions like ‘does an LLM understand?’ and even ‘is an LLM conscious?’ are much more open than you would think.
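
To make that concrete, here is a minimal sketch of the generation loop: the outer loop is simple and fully understood, while the real mystery lives inside the model that scores each candidate token. The ‘model’ below is a toy stand-in that returns random scores, not an actual LLM.

```python
import numpy as np

VOCAB = ["the", "cat", "sat", "on", "mat", "."]
rng = np.random.default_rng(0)

def black_box_model(context):
    """Toy stand-in for the neural net: maps the token context to one
    score (logit) per vocabulary entry. In a real LLM this is the
    trillion-connection black box; here it just returns random scores
    and ignores its input."""
    return rng.normal(size=len(VOCAB))

def generate(prompt, n_tokens=5):
    """The simple, well-understood part: repeatedly ask the model for
    next-token scores, turn them into probabilities, pick one, append."""
    tokens = prompt.split()
    for _ in range(n_tokens):
        logits = black_box_model(tokens)
        probs = np.exp(logits) / np.exp(logits).sum()  # softmax
        tokens.append(VOCAB[int(np.argmax(probs))])    # greedy choice
    return " ".join(tokens)

print(generate("the cat"))
```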

Meanwhile, Hinton clearly seems to have experienced some kind of epiphany whilst using a Google LLM, perhaps one not released to the public. That experience led him to suspect that it has ‘understanding’. Hinton says that, yes, an LLM is just a fancy auto-complete, but that to auto-complete well it needs to ‘understand’ all the preceding text and its context. Specifically, the ability of a Google LLM to explain humour seems to have convinced him that it ‘understands’ in some way.

In case you’re thinking that Hinton is some wacky lone voice claiming that LLMs in some sense ‘understand’, he isn’t. Ilya Sutskever made the same claim, with a near-identical explanation, in a recent interview. Others have drawn the same conclusion from GPT-4’s ability to create something totally new: ask it to write an obscure mathematical proof in the style of a Shakespeare sonnet and it will produce something that was almost certainly never in its training data.

As to sentience, Hinton merely says he’s amazed by commentators who are sure that LLMs aren’t sentient whilst being unable to define what sentience is. But the implication here is that Hinton suspects that LLMs are showing glimmers of some kind of sentience – a remarkable opinion given his sober academic background, but one shared by a recent Microsoft paper about GPT-4.

Hinton goes on to make the disturbing observation that humanity may simply be a phase in the evolution of intelligence, soon (in as little as five years) to be surpassed by digital intelligence that is immortal and can use multiple identical copies of itself to learn at super-human speed, sharing knowledge (weights) across a huge bandwidth. In one interview he uses an emotive metaphor: the aliens have landed, but we didn’t recognise them because they speak good English. In another, he talks about looking out into the fog of the future and only seeing clearly for five years. When he says these things, the look on his face perhaps says more than his words: he looks like a troubled man, and he reminds me of that 1960s clip of Oppenheimer quoting the Bhagavad Gita.

Hinton goes on to discuss other risks and concerns with AI, but I’ll turn now to some comments by Ilya Sutskever in a YouTube interview on The Lunar Society channel (named, by the way, for an Enlightenment club of intellectuals that met on moonlit nights for safer journeys).

Sutskever seems extremely guarded throughout the interview, something I found noteworthy in itself. He seems especially reluctant to discuss what specific further technologies might lead to AGI. His claim that such technologies likely already exist but just need a breakthrough moment also seems suggestive.

But perhaps what surprised me most was Sutskever’s response to a question that assumed LLMs cannot surpass human performance: ‘I’d challenge the claim that next token prediction cannot surpass human performance!’ He then goes on to say something oddly specific about the kind of prompt that could get an LLM to behave like an AGI – just ask it what a person ‘with great insight and wisdom and capability’ would do.
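
For what it’s worth, a prompt framed that way might look something like the sketch below. This is only my illustration of the idea – the endpoint, model name and exact wording are assumptions, not anything Sutskever described.

```python
import os
import requests

# A sketch of the 'ask what a person with great insight and wisdom and
# capability would do' framing. The endpoint, model name and system
# prompt wording are assumptions for illustration only.

API_URL = "https://api.openai.com/v1/chat/completions"  # assumed endpoint

def ask_as_sage(question: str) -> str:
    payload = {
        "model": "gpt-4",  # assumed model name
        "messages": [
            {"role": "system",
             "content": ("Answer as a person with great insight and wisdom "
                         "and capability would answer.")},
            {"role": "user", "content": question},
        ],
    }
    headers = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}
    reply = requests.post(API_URL, json=payload, headers=headers, timeout=60)
    return reply.json()["choices"][0]["message"]["content"]

print(ask_as_sage("How should humanity manage the arrival of AGI?"))
```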

All this left me wondering if OpenAI has made much more progress with its LLMs than it’s admitting, perhaps well beyond GPT-4. It certainly seemed to run counter to the popular view that AGI is still many years and many innovations distant.

I was also concerned by Sutskever’s claim that most leading AI research now happens within corporations, not in academia. This makes it much more likely, in my opinion, that significant steps towards AGI would not be made public.

We’ll now turn to some comments made by OpenAI boss Sam Altman in his interview with Lex Fridman. The headline moment of weirdness comes when Altman interrupts himself and asks, ‘Do you think GPT-4 is an AGI?’ Fridman’s response – that he’s thought about it and thinks we might not know yet – is also curious.

But I found Altman’s comments about AGI takeoff scenarios even more suggestive.

Takeoff is AI-speak for how fast an AI might develop towards, and potentially far beyond, human capabilities. In the fast takeoff scenario, recursive self-improvement – where the AGI alters its own code and/or data – could mean exponential progress within mere days, much too fast to influence, let alone control. A slower takeoff would see most of the improvements made by (or in partnership with) humans, crucially giving alignment work (to ensure the AI has human values and goals) and society the chance to keep pace.
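
The difference between the two scenarios is easiest to see with a toy calculation. The growth rates and the ‘capability’ unit below are invented purely to illustrate the shape of the curves, not a claim about any real system.

```python
# Toy comparison of fast (compounding, self-improving) versus slow
# (roughly constant, human-driven) takeoff. All numbers are invented
# assumptions chosen only to show how quickly the curves diverge.

START = 50.0         # arbitrary starting capability, below 'human level'
HUMAN_LEVEL = 100.0

def fast_takeoff(days, gain_per_cycle=0.5):
    """Recursive self-improvement: each day's gains compound."""
    return START * (1 + gain_per_cycle) ** days

def slow_takeoff(days, gain_per_day=1.0):
    """Human-driven improvement: roughly constant progress per day."""
    return START + gain_per_day * days

for days in (5, 10, 30):
    print(f"day {days:>2}: fast = {fast_takeoff(days):>12,.0f}   "
          f"slow = {slow_takeoff(days):>5.0f}   (human level = {HUMAN_LEVEL:.0f})")
```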

At one point, Altman describes a two-by-two matrix of four possible AGI scenarios. On one axis is slow versus fast takeoff. On the other axis is takeoff starting in one year or twenty years. He asks Fridman which quadrant of the matrix he thinks is safest. Fridman chooses takeoff starting now but progressing slowly. Altman agrees and states that, ‘We optimised the company to have maximum impact in that kind of World, to push for that kind of a World’.

But wait, what?! How could OpenAI be optimised around a takeoff starting really soon if it hadn’t already developed (or wasn’t very close to developing) AGI?

Much more nebulously, I find Altman’s overall demeanour curious, especially in his Senate hearing. He seems at once terrified and absolutely hyped: a Faustian figure who has, in Musk’s words, just summoned the demon.

All these fragments add up to a possible whole that I find alarming. They suggest to me that Big Tech – OpenAI, but Google too and perhaps others – might not be sharing the whole truth about both the real extent of progress towards AGI and the possibility of emergent sentience in large LLMs. Unfortunately, the wholly opaque ‘black box’ nature of an LLM means that plausible deniability can easily be maintained. All we have to go on are emergent properties from the black box, and they are (for now) subjective and arguable.

However, it seems to me possible that OpenAI (and possibly Google too) has in fact already developed an AGI and is just endeavouring to release it slowly and in stages to achieve the slow takeoff Altman thinks is safest.

This situation would make sense. The Manhattan Project happened not because the key physicists wanted to become ‘Death, the Destroyer of Worlds’ but because they knew that if they didn’t build the bomb, some bad actor (less ‘aligned’, in AI speak) like Nazi Germany surely would. Another leading figure in AI, Max Tegmark, calls this race to the bottom that nobody wants ‘Moloch’, after the terrible red-hot bronze god of the Ammonites to whom children were sacrificed, smiling as they burned – not because they were happy, as the Ammonites believed, but because of rictus in the facial muscles. Musk’s demon again.

That novel I wrote 30 years ago? It was a first attempt and doubtless not great, but the main premise might still be relevant. I imagined an AGI trained to simulate the behaviour of the whole human population. It ran numerous Monte Carlo simulations to determine the fate of humanity, but whatever starting parameters it used, the results were always the same, always ruinous. The question the novel asked was: if that AI developed sentience and free will, what next?
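
For readers unfamiliar with the term, a Monte Carlo simulation just means running the same model many times from randomised starting conditions and tallying the outcomes. The sketch below is a toy illustration of that idea only; every parameter and outcome rule in it is invented, not taken from the novel.

```python
import random

# Toy Monte Carlo loop: repeat a simple 'fate of humanity' model many
# times with randomised starting parameters and count the outcomes.
# The model and all of its numbers are invented purely for illustration.

def simulate_once(rng):
    cooperation = rng.uniform(0.0, 1.0)  # how well humanity coordinates
    tech_speed = rng.uniform(0.5, 2.0)   # how fast capability grows
    risk = tech_speed * (1.0 - cooperation)
    return "ruinous" if risk > 0.2 else "benign"

def run_simulations(n=100_000, seed=42):
    rng = random.Random(seed)
    tally = {"ruinous": 0, "benign": 0}
    for _ in range(n):
        tally[simulate_once(rng)] += 1
    return tally

print(run_simulations())
```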

Disclaimer: I’m not an expert in LLMs or machine learning, but I do have a Master’s in Computer Science and a working lifetime in software development.