Agentic AI is moving from novelty to infrastructure, changing how people ideate, make, and experience culture. The shift is less about replacing talent and more about giving imagination a broader canvas, with implications that stretch from studios and film sets to vehicles, education and finance.
Manon Dave approaches artificial intelligence as a working partner rather than a spectacle. The award-winning creative director, musician and Head of Propositions at BBC R&D frames the present moment as a design problem first and a technology problem second. The brief is simple to describe and difficult to execute: build systems where machines help humans create at higher fidelity, faster, and with fewer barriers, without hollowing out authorship or trust.
The creative industries often absorb new tools while arguing about their legitimacy. That tension is not new. Sampling made entire genres possible, pitch correction normalised post-production polish, and digital workstations collapsed the distance between bedroom and broadcast. What feels different now is the breadth of AI’s reach. The same underlying methods that turn a script into a nuanced read-through, or split a song into workable stems, can also turn a connected car into a real-time instrument, or build financial rails for creators excluded from mainstream payments.
“We are moving to a point where my personal AI will not just be one persona in a single app,” Dave explains. “It will be a suite of agents that all communicate with each other to execute different aspects of the things that I am interested in. One may handle correspondence, another may manage finances, and a third may help my family with education. These agents will effectively replace the operating system we know today.”
Executives should read that statement less as prediction and more as a product requirement. Multi-agent systems imply a fabric of permissions, context hand-offs and accountability that enterprise platforms will be asked to support. The promise is personalised, emotionally intelligent interaction that extends beyond chat, into tools that reason, delegate, and act. The risk is fragmentation and opacity if data governance, observability and human override are treated as afterthoughts.
Narratives about AI often start from capability and end at anxiety. Dave tends to invert that order. He starts at the user, then works backwards to the stack. The result is a consistent test for deployment: does this tool remove a barrier that a person experiences, and can it do so in a way that can be explained, maintained and scaled?
A new operating system for life
The move from applications to agents becomes visible in mundane tasks first. Email, scheduling, expense reconciliation and content drafting are the obvious early domains because they are repeatable and bounded. Dave’s point is that a cluster of specialised agents, orchestrated as a team, outperforms a monolithic assistant precisely because real workflows cross boundaries. The assistant that writes a note needs access to context in the calendar, the document repository and the last project review. It also needs to know when not to act.
“The fuel for this has been poured over the last decade or two,” Dave adds. “We have gathered so much data about ourselves. It has fuelled the experiences we are having today, including things like large language models. As we move into the agentic age, apps as we know them are going to be replaced by AIs that speak to AIs.”
That vision is as much architecture as interface. To work in practice, agent frameworks must pass four tests that leaders will recognise from other enterprise transformations. First, they need transparent identity and policy controls so that one agent cannot drag another into a decision it is not authorised to make. Second, they need a memory model that persists relevant state without hoarding sensitive material. Third, they require monitoring that turns black-box behaviour into something operational teams can reason about. Fourth, there must be graceful failure paths that keep a human in the loop when confidence drops or ethical flags trigger.
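The four tests can be made concrete in code. The sketch below is illustrative only: names such as `PolicyRegistry`, `Agent` and the `confidence_floor` threshold are invented for this example, not drawn from any real framework, and a production system would back each piece with proper identity, storage and telemetry services.

```python
from dataclasses import dataclass, field

@dataclass
class PolicyRegistry:
    # Test 1: explicit identity and permission controls per agent.
    grants: dict = field(default_factory=dict)  # agent_id -> set of allowed actions

    def allowed(self, agent_id: str, action: str) -> bool:
        return action in self.grants.get(agent_id, set())

@dataclass
class Agent:
    agent_id: str
    confidence_floor: float = 0.8
    memory: dict = field(default_factory=dict)     # Test 2: scoped, minimal state
    audit_log: list = field(default_factory=list)  # Test 3: observable decisions

    def act(self, action: str, confidence: float, policy: PolicyRegistry) -> str:
        if not policy.allowed(self.agent_id, action):
            self.audit_log.append((action, "denied"))
            return "denied"
        if confidence < self.confidence_floor:
            # Test 4: graceful failure path keeps a human in the loop.
            self.audit_log.append((action, "escalated"))
            return "escalated_to_human"
        self.audit_log.append((action, "executed"))
        return "executed"

policy = PolicyRegistry(grants={"finance-agent": {"reconcile_expenses"}})
agent = Agent("finance-agent")
print(agent.act("reconcile_expenses", confidence=0.95, policy=policy))  # executed
print(agent.act("send_payment", confidence=0.99, policy=policy))        # denied
print(agent.act("reconcile_expenses", confidence=0.40, policy=policy))  # escalated_to_human
```

The design point is that the audit log and the escalation path are not add-ons; they are the conditions under which one agent can safely hand context to another.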
It is tempting to treat this as a distant horizon, yet consumer expectations tend to outpace back-office readiness. The companies that win will not be the first to showcase a multi-agent demo; they will be the first to make those agents safe, observable and genuinely useful across shared data estates.
Access, inclusion and real utility
Accessibility is where hype either proves its value or collapses into slogans. Talking Scripts, a voice-AI project developed with Idris Elba and director Stefan Schwartz, is a practical example. The immediate problem is unglamorous and costly: film and television scripts can run to hundreds of pages. Dyslexic readers, multilingual crews and time-pressed actors need a way to consume that volume that preserves nuance rather than flattening it.
“These scripts can be hundreds of pages,” Dave says. “For somebody with dyslexia, consuming them at speed is challenging. By training the model on the character through written and recorded examples, we can shape voices that feel authentic and nuanced. It allows filmmakers, actors and crew, many with English as a second language, to engage with the story in a way that feels natural.”
Generative speech has improved sharply, but character is not the same as clarity. The hard work sits in data curation and parameter control: tuning timbre, rhythm and emphasis so a character sounds lived-in rather than generic. That requires style exemplars, rights-aware training artefacts, and a tooling loop that lets creators iterate without writing code. It also requires careful handling of translation so that idiom and tone survive language switches.
The inclusion dividend is not just ethical positioning. It is operational. Faster comprehension reduces reshoots, cuts meeting time, and removes friction in multinational productions. The pattern is generalisable. Every industry contains dense, high-stakes documents that experts currently mediate for others. Medical consent, engineering change notices, compliance updates and safety protocols all benefit from speech interfaces that do more than read aloud. They benefit from context, adaptation and the ability to answer questions in-line.
Music made with machines
Dave’s route into AI-assisted creation runs through a familiar doorway. He was not trained as a virtuoso. He learned to make records through computers, then discovered he could use early stem-splitting algorithms to prise apart finished songs and develop new material around isolated parts. That progression, from workstation literacy to algorithmic leverage, is how many musicians now work whether they label it AI or not.
“I like to think of AI as a collaborator,” he says. “It reflects ideas back in ways that might not have been considered, like being in a room with three or four songwriters. It becomes a sounding board for imagination and the story you want to tell.”
There is a practical reason to frame AI as collaborator rather than replacement. Collaboration directs attention to provenance and rights. A workflow that generates original stems and then resamples them inside a new composition is different from a workflow that tries to mimic a known catalogue or voice. The former can be documented, licensed and defended. The latter drifts into murkier territory that executives should treat with caution.
The legal position continues to evolve and differs by jurisdiction. That uncertainty is not a reason to freeze; it is a reason to design contracts and clearing processes that assume hybrid authorship. Credits, splits and disclosures will change shape as tools change, just as they did when sampling moved from a legal grey area to a negotiated norm. The operational advice remains steady: segment data, keep audit trails, and build review gates into production so that work is checked for unlicensed elements before it ships.
“Where you draw the line is the ongoing debate,” Dave continues. “Music has always evolved through new technology, from sampling to digital mastering. The delta between imagination and actual output is just reducing because of these tools.”
Creativity is not a straight line from input to output. It is a conversation with constraints. Generative audio tightens that loop and changes the distribution of skills across a team. Sound design, prompting and editing become as central as performance. Leaders should expect new job descriptions to emerge inside their content organisations, and new training needs to arise for existing staff who want to widen their craft.
From cars to campaigns
The collaboration with Will.i.am and Mercedes-AMG on SoundDrive is a study in reframing a problem. Electric vehicles do not need to mimic combustion engines to feel responsive. They need to communicate motion in a way that connects the driver to the machine. The team treated vehicle telemetry as a control surface. Acceleration modulated bass lines, steering manipulated horns, and road speed altered the vocals, turning sensor data into a living arrangement that remixes in real time.
“The acceleration pedal is like one key on the keyboard, but under the hood there are countless signals,” Dave explains. “Mapping those to different elements of a track turned the car into an orchestra that the driver can play.”
The technology here is not exotic. It is the careful blending of signal processing, edge inference, latency management and rights-aware music assets. The result is a product lesson that travels well beyond automotive: connected objects can do more than display status and push notifications; they can co-create experiences with the user.
The Olympic Refugee Team campaign shows another facet of the same design instinct. The team trained a model on original illustrations by a refugee artist, then animated those images to music to tell a story about breaking out from an abstract, limiting box into the light of a stadium. It is a straightforward technique used with care, and it avoids a common trap. The AI is not there to dazzle. It is there to carry meaning at a cost and speed that make the project viable.
Not every project sits inside entertainment. Akuna Wallet, a blockchain-based financial platform, targets a different barrier: creators in parts of Africa who generate millions of views on global platforms but cannot receive payments because their countries are excluded from payout networks or banking infrastructure. The wallet is not itself an AI application. It is a prerequisite for the next wave of AI-enabled collaboration, licensing and revenue splits in regions that have talent but lack rails. The principle is transportable to any sector where the benefits of AI are gated by access to identity, payments or compliant contracting.
What leaders can implement now
Executives reading about agentic AI, creative tools and connected experiences need a pragmatic checklist rather than a manifesto. The first item is architecture. If the future looks like a suite of specialised agents with shared context, start by drawing the boundaries of permission and data flow. Decide which systems can expose which signals, and under what supervision. Build an event log that product teams and compliance teams can both understand.
The second item is capability mapping. Creativity roles already mix disciplines. A producer can now be a prompt writer, a data curator and a performer on the same project. Training should move accordingly. A short course that teaches a director how to control prosody in a speech model can save days downstream. Tooling must reflect the people who use it. Interfaces should expose the right level of control without pretending every user wants to see a model card.
The third item is disclosure. Labelling every tool in the chain is overkill; labelling outcomes that matter to audiences is not. A principled approach to disclosure earns trust faster than a reluctant one. It also forces clarity inside the organisation about what is human-led, what is machine-assisted, and where external IP might be implicated.
The fourth item is accessibility as strategy, not afterthought. The gains are operational. Tools that reduce friction for users with dyslexia, for multilingual teams or for people with limited bandwidth will tend to reduce friction for everyone. Accessibility requirements have a habit of becoming mainstream expectations within a product cycle.
The fifth item is measurement. Creativity resists crude metrics, yet production does not. Turn time saved, reshoot reduction, asset reuse and error rates into dashboards that a CFO can read. The argument for AI in creative workflows strengthens when it shows up in schedules and budgets, not only in showreels.
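Those metrics are simple to compute once the inputs are captured. The function below is a hypothetical example using the measures named above; the field names and report format are invented.

```python
def production_report(baseline_hours: float, actual_hours: float,
                      reshoots_before: int, reshoots_after: int,
                      assets_total: int, assets_reused: int) -> dict:
    """Summarise an AI-assisted production against its pre-AI baseline."""
    return {
        "time_saved_pct": round(100 * (baseline_hours - actual_hours) / baseline_hours, 1),
        "reshoot_reduction": reshoots_before - reshoots_after,
        "asset_reuse_rate": round(assets_reused / assets_total, 2),
    }

print(production_report(200, 150, 8, 5, 40, 14))
# {'time_saved_pct': 25.0, 'reshoot_reduction': 3, 'asset_reuse_rate': 0.35}
```

The point is not the arithmetic; it is that every figure here maps to a line a CFO already tracks.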
On resistance, adoption and the line between tool and author
Resistance from purists is not a problem to be solved; it is a signal. Most industries benefit from a group that defends tradition and asks hard questions about craft. The job for leaders is to ensure those voices are part of the design loop, not casualties of it. Dave takes a similar view, even while urging creators to experiment.
“Creatives are better off if they are the first and the most advanced at harnessing the tools,” Dave says. “Every decade brings a reimagining of how art is made. AI is simply the next evolution.”
The philosophical question about where authorship begins and ends will not vanish. It will be managed in practice by disclosure norms, licensing regimes, and the texture of the work itself. The more a system becomes a collaborator, the more it should be treated as such in contracts and credits. That does not diminish human contribution; it formalises it against a changing background.
Education as the compounding force
The most ambitious thread in Dave’s thinking is education. The current system was not built for a world where a student can summon a patient, adaptive tutor that knows their history, mood and goals. It was built for batches. The gap between what is possible and what is deployed is therefore a policy and procurement problem as much as a technological one. “If money was no object, I would reimagine education as something woven into how we interact with AI,” Dave concludes. “It is an industry at risk of being a dinosaur waiting for the asteroid to hit it.”
Agentic systems are well suited to education because they excel at decomposing tasks, giving feedback and rehearsing skills. They also demand rigorous safeguards because they will shape how young people think, cite and collaborate. Leaders in education, publishing and policy will need to set a higher bar for transparency, content provenance and safety than consumer applications typically carry. The reward for getting it right is a workforce that treats AI fluency as basic literacy rather than specialist expertise.
Two questions will decide how fast this future arrives. The first is whether enterprises can make agent orchestration reliable enough for non-demonstration work. The second is whether creative teams can build workflows where AI reduces toil without diluting voice. Early signs are encouraging. Tools are moving from novelty to utility, and the best projects are the ones where technology disappears into the experience.
There is a sober message beneath the story. AI will not rescue bad ideas, heal poor organisational habits or compensate for absent governance. It will amplify both strengths and weaknesses. Dave’s projects, taken together, point to a practical doctrine. Start from the human barrier. Design the simplest system that removes it. Measure the result. Share the credit. Repeat.
The creative frontier is no longer bounded by individual skill alone. It is bounded by the quality of the partnership between human intent and machine capability. Agentic AI, handled with care, turns that partnership into a working practice rather than a talking point. The companies that treat it as a collaborator will set the tempo for the rest.