Six things we learnt at the Oxford Generative AI Summit

The Delphi team were at the Oxford Generative AI Summit last weekend, mixing with academics, policymakers, investors, founders and journalists, and delving into this exciting new technology and how it might affect us all.

Over two days, we attended various panels and keynote presentations, and we thought it would be useful to summarise just a few of the big things that jumped out to us from all of the discussion. (We’ll dive into AI’s potential implications specifically for media, marketing and journalism in a separate post.)

If you’d like to talk about any of this further, please drop us a line at hello@wearedelphi.com!

1. So many questions, so few answers

The main takeaway from the summit is an obvious but important one – we really don’t know much about AI for certain.

We don’t know exactly how it works.

We don’t have a theory of what it’s doing (so can’t predict or manage it confidently).

We don’t know what its impact will be on our political system or the media.

We don’t know which AI startups we should invest in.

We don’t know how it will be used as a geopolitical tool.

We don’t know how dangerous it will turn out to be for the future of humanity.

All of these issues were discussed at the summit, but the speakers couldn’t give confident answers about any of them. To borrow a phrase, it’s still “day one” in AI.

What we’ve got instead are lots of good questions, some initial ideas, and some early indications based on experimentation and observation.

That’s important context for everything else below – but also highlights that anyone who says they know how AI will play out in the years ahead is bullshitting you.

2. Small Language Models might be the future

Most AI discussion right now is about Large Language Models (LLMs), the technology behind the likes of ChatGPT, which are trained on huge amounts of data so they can answer almost any question in natural language.

But several speakers at the summit suggested that Small Language Models (SLMs) might be more useful, especially for businesses. They could be trained just on a company’s own data, for instance, to provide more specific and helpful answers in a work context.

Training on your own data also gets round legal questions about who owns the data within an LLM, and privacy concerns about who has access to it.
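
For the technically curious, here’s a minimal sketch of what that could look like, using the open-source Hugging Face libraries. The model choice (distilgpt2), the file name (company_docs.txt) and the hyperparameters are all illustrative assumptions on our part, not anything specified at the summit:

```python
# A minimal sketch of training a "small language model" on company data.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "distilgpt2"  # a small (~82M parameter) open model, as a placeholder

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # the GPT-2 family has no pad token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# "company_docs.txt" is a hypothetical export of internal documents,
# one record per line.
dataset = load_dataset("text", data_files={"train": "company_docs.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="slm-company",
        num_train_epochs=1,
        per_device_train_batch_size=4,
    ),
    train_dataset=tokenized,
    # mlm=False selects standard next-token (causal) language modelling
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("slm-company")  # a small, domain-specific model you control
```

In practice, teams often combine a small model with retrieval over their document store rather than relying on fine-tuning alone, but the sketch captures the core appeal: a small model, your data, your infrastructure.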

3. Building copilots, not platforms

The past two decades of tech innovation have primarily been about building platforms that connect billions of internet users with each other or with goods and services – think Facebook, Amazon, Uber…

But in the AI era, innovation might focus instead on building “copilots” – a word Microsoft is already using to describe its AI tools. Copilots would be assistants that sit alongside us in our everyday lives, helping us work and study more effectively. Imagine J.A.R.V.I.S. in the Iron Man movies – a super-smart computer you can ask to help you do anything you want.

Copilots are being built right now across sectors like finance, healthcare and manufacturing. Just as writing went from being a rare and highly prized skill in the medieval period to something almost all of us can do today, so AI copilots will in effect turn all of us into software engineers, working with AI to get our jobs done via natural-language conversation rather than complicated code.
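
To make that concrete, here’s a toy sketch of the copilot pattern using the openai Python package. The model name and the prompt are placeholder assumptions; the point is the plain-English interface:

```python
# A toy sketch of a "copilot" interaction: describe the task in plain
# English and let the model produce the code. The client reads
# OPENAI_API_KEY from the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": "Write a spreadsheet formula that sums column B "
                       "wherever column A says 'October'.",
        }
    ],
)
print(response.choices[0].message.content)  # e.g. a SUMIF formula, explained
```

The user never writes a line of code themselves; they describe the outcome they want, and the copilot handles the rest.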

4. Polluted information landscape

Arguably our information landscape is already a mess – full of mis- and disinformation, fake news, radicalised content, and bots.

But we ain’t seen nothing yet.

We’re moving into a world where the majority of content on the internet will be created by AI, not humans. It will be automated, artificial and inherently unreliable. That might shift users away from social media (which may become increasingly unusable) and towards one-to-one messaging, where they can feel confident they’re talking to another human.

5. Britain has the potential to play a role in the global AI sector

To our surprise, speakers from around the world agreed that Britain can make a dent in the global AI industry.

The UK is in third place globally when it comes to AI development. Sure, we’re miles behind the US and China, which are fighting it out for first and second place. But we’re third nonetheless.

Moreover, we actually have some knowledgeable people in senior positions who are making a difference on this stuff. Matt Clifford, the Prime Minister’s representative on AI, and Ian Hogarth, chair of the government’s Frontier AI Taskforce, were both mentioned.

The upcoming AI Safety Summit at Bletchley Park in November will be an important moment for getting AI risk onto the global agenda and rallying countries to work together on AI safety.

And we have some other core strengths: an incredible academic base, a global talent pool, and a track record of frontier model development and AI safety research. We need to make sure we build on that.

6. Risks remain in the background

The summit ended with a powerful reminder about AI’s potential risks to humanity. While we don’t know if a superhuman form of artificial general intelligence is possible (see point 1 above), the best argument for it is that all the big AI companies are working on the assumption that it is.

These companies are all investing in and developing their models on the basis that someone is going to reach AGI, and they want to get there first. As Sam Altman has said, the worst-case scenario for AI is “lights out for humanity”. The potential for catastrophic cyberattacks or bioweapons is something we need to take seriously.

