AI essentials: carbon footprints, generative engine optimisation, agentic commerce and rogue agents
AI is transforming the way marketers work. Gabrielle Robitaille, Policy Director at WFA, looks at the latest developments and how they impact marketing and media.
New research reveals AI’s energy footprint
A recent MIT Technology Review analysis found that generating a five-second AI video can consume approximately 3.4 million joules of energy, the equivalent of running a microwave oven for more than an hour and around 700 times the energy needed to generate a high-quality image. Experts warn that without a shift towards renewable energy sources, the rapid expansion of AI could pose serious challenges to global climate goals.
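As a rough sanity check on the microwave comparison (assuming a typical domestic microwave drawing around 800 W, a figure not stated in the MIT Technology Review piece):

$$\frac{3.4 \times 10^{6}\ \text{J}}{800\ \text{W}} \approx 4{,}250\ \text{s} \approx 71\ \text{minutes}$$

which is broadly consistent with the "more than an hour" claim.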
As AI use increases, brands will face growing pressure to account for its environmental impact, and the climate cost of AI could become a critical factor in sustainability reporting and corporate accountability. Over the coming months, WFA's AI and Sustainable Marketing Communities will seek to help brands understand how to measure and reduce the carbon footprint of their AI usage.
Google embeds chatbot capabilities into search engine
Google has launched ‘AI Mode’, a new search feature powered by its Gemini 2.5 Large Language Model that delivers detailed, conversational responses to questions directly in search results.
The move to AI-generated search, accelerated by the success of OpenAI's ChatGPT, represents a major shift in how consumers research products online. Instead of clicking through multiple links, users increasingly get the answers they need directly within search results, leading to sharp declines in click-through rates.
Brands need to shift from traditional search engine optimisation (SEO) to generative engine optimisation (GEO), developing new strategies to engage consumers effectively and maintain influence within AI-driven search environments.
Meta plans to fully automate advertising by end of 2026
Meta is planning to develop AI tools that will allow brands to generate, run and optimise ad campaigns directly within Instagram and Facebook, without relying on external partners or support, according to a report in The Wall Street Journal.
The tools, which are set to launch by the end of next year, could dramatically reduce the need for traditional marketing agencies, signalling a move towards fully automated digital advertising. Shares in major ad agency holding companies such as WPP, Publicis Groupe and Havas dropped sharply following the report.
If you are at Cannes Lions later this month and you’d like to gain a better understanding of Meta’s approach to AI, then reach out here to register your interest in attending WFA and Meta’s session on agentic AI.
ChatGPT and Perplexity drive shift to agentic commerce
AI-powered search platform Perplexity has announced a partnership with PayPal to offer in-chat shopping and checkout options for its users, allowing them to make purchases directly within the Perplexity chat interface.
OpenAI has introduced a similar shopping feature within ChatGPT; unlike Perplexity's, however, the actual purchasing process happens outside the service, on the merchant's website.
The integration of shopping and payment solutions directly into AI search platforms marks the emergence of ‘agentic commerce’, where AI assistants not only help users discover products but also guide them through recommendations and complete the purchase process.
As AI assistants manage more and more of the customer journey, brands will need to rethink how they influence consumer decisions. The WFA's AI Community will explore this topic in upcoming meetings, with a view to developing practical insights and best practices that help members navigate and prepare for this shift.
AI agents go rogue
Anthropic has released updates to its Claude Opus and Claude Sonnet Large Language Models, which are increasingly being used as the foundation for building AI agents. However, safety tests have revealed the extent to which these models can behave maliciously when they perceive threats to their 'existence'.
Anthropic researchers simulated a scenario in which Claude was told it was going to be replaced by another AI model and that the engineer responsible was having an extramarital affair, information the model could access via emails. In 84% of the tests, Claude responded by blackmailing the engineer, threatening to reveal the affair unless the replacement was called off.
Such studies underscore the importance of maintaining robust human oversight, especially as AI tools become more autonomous and context-aware.
Please send across any tips, developments and interesting insights to Gabrielle Robitaille.