At the Oxford Generative AI Summit last weekend, one big topic was what impact generative AI might have on the interconnected worlds of media and marketing. (You can read our overview from the Summit here.)
There was enough debate to fuel a whole book, but for brevity we’ve summarised our reflections in a few key points below! If you’d like to talk about any of this further, please drop us a line at hello@wearedelphi.com.
1. The new marketing reality
While AI has a technical history stretching back decades, in the marketing industry – as Wavemaker’s Stuart Bowden put it – it’s effectively “only 47 weeks old”. That’s the time since ChatGPT went live and astonished people around the world with its capabilities.
One way of looking at this is to put it in context. “The history of marketing is the history of technological disruption,” WPP’s global head of AI Di Mayze said. Each wave of innovation creates new routes to our audience and requires new forms of creativity as a result. AI is another of these waves of innovation – but it might also be “the most significant one in our careers”.
Generative AI is now “the new marketing reality”, according to Mondelez marketer Qaiser Bachani – unignorable, here to stay, and only going to grow. But we also have to acknowledge it’s very early days and we don’t really know how it’ll play out. It’s like looking at the internet in 1993: you know it will change business, but you can’t say exactly how.
2. Human connection – important or irrelevant?
AI can already generate convincing words, images, audio and video, and these capabilities will only grow (the image for this blogpost was created by DeepMind, for instance). But can an AI-generated song, novel, movie or painting – or ad, for that matter – be as original and as emotionally engaging as something created by a human being?
On this topic, our opinion changed during the summit’s discussions. We went in with the view that one of the reasons people enjoy art or media of any kind is that they’re seeking out that ineffable spark of connection with another human being. But as some of the panellists pointed out, evidence already suggests that people are happy to spend time consuming AI-generated content on social media and elsewhere. We’re less picky than we might want to think. Newer versions of AI are also capable of surprising imaginative leaps that look just like “creativity” to the average person.
More to the point, author Michael Bhaskar highlighted that “the true meaning of an art work is in what we take out of it, not what goes in.” Human reception matters, arguably more than human creation – just think of how we’ve spent centuries interpreting and reinterpreting Homer, Shakespeare, Michelangelo, or Van Gogh, far beyond their original imaginings. Maybe art is just a sequence of words and pixels that we take meaning from. In that case, it’s perfectly possible for AI to create media that we consume, enjoy and find value in.
3. Reputational threats in a polluted information landscape
As BCW’s Chad Latz said, “we’re living in a post-factual era rife with mis- and disinformation”. This is only going to get worse as generative AI enables more people (and thus more bad actors) to create ever-growing amounts of content.
NewsGuard’s Veena McCoole referenced the rise of “unreliable AI-generated news sites”, automated content farms full of inaccurate information that can be uploaded in seconds in response to any news event. These are currently fairly easy to identify but could become more sophisticated – and in the meantime they siphon off digital advertising spend from legitimate publications.
All of this means that brands are at greater reputational risk than ever before. It’s easier to create damaging misinformation about them, and it can spread further and faster too. Brands need to get better at preparing for these risks and planning for different scenarios.
4. Comms and marketing need to drive an enterprise-wide conversation
Because current AI systems deal in words and pictures, the comms and marketing departments within businesses have been the locus of most discussion so far. But now marketers need to “bring in other stakeholders around the organisation who aren’t usually part of their scope”, said Qaiser, like the chief legal officer and chief technology officer. This needs to be “an enterprise-wide conversation”, covering how the business is going to use AI (and what is off-limits ethically and legally), what up- or re-skilling is needed, and what the implications are for the entire firm.
Likewise, agencies need to educate their clients, embed transparency in their ecosystem, verify their tools carefully, and label the provenance of their content (ie, whether it’s AI- or human-generated). We need to establish “trust infrastructure” so that all parties are working from a solid foundation.
5. Journalists are wary – with good reason
Most newsrooms are using AI in much the same way as other knowledge-based businesses: as an aid to brainstorming (eg coming up with headline options) or as an admin tool (eg transcribing interviews or condensing long documents). Most AI at news outlets is deployed in back-office functions, so it’s less glamorous than people might hope.
That said, the ability of AI to create huge amounts of content rapidly and at near-zero cost is a major concern for human journalists. This creates a sea of dross that competes for audiences’ attention, threatens journalism’s commercial viability even further, and undermines its vital role in democratic societies. MIT Technology Review’s Charlotte Jee pointed out that AI can’t do the most valuable journalism – original reporting, interviewing people on the ground, and conducting investigations – so this human work might become more socially valuable but also less financially viable.
Collaborating with AI isn’t necessarily the answer either. As Oxford academic Felix Simon said: “There’s a long history of publications partnering with tech companies” only to become “organ donors”, giving away their most valuable assets and being bled dry. It’s another significant challenge to journalism’s future.