Your next engineer might be an agent


Ford’s AI strategy is not about gimmicks or digital window-dressing. It is a methodical deployment of intelligent agents across design, engineering, operations and customer experience, built to accelerate value, not just innovation.

The jump from chatbot to AI agent may seem merely semantic, but for Ford, it marks a fundamental shift in how intelligence is embedded across the organisation. A year ago, Bryan Goodman, Executive Director of Artificial Intelligence at Ford, predicted a future rich with conversational data interfaces, chatbots, and proactive digital assistants. One year later, that prediction has mostly materialised, but not without new challenges.

Over 200 chatbots are now live in production at Ford, compared with only a handful the year before. While the proliferation is impressive, it introduced fragmentation. “Users are unsure where to go to get the right answers,” Goodman says. “To address that, we are now coordinating them using AI agents to deliver a simpler and more intuitive user experience.” These agents are not static interfaces but dynamic systems with the capacity to plan, reason, access tools, and execute tasks. Crucially, they can work as networks, routing requests, collaborating with other agents, and acting with context.
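The coordination pattern Goodman describes can be sketched in a few lines: a single front door that forwards each request to the most relevant specialist, and asks for clarification when no route fits. The agent names, keywords, and matching rule below are purely illustrative, not Ford's implementation.

```python
# Minimal agent-router sketch: one entry point, many specialist agents.
# All names and the keyword rule are hypothetical placeholders.

def warranty_agent(query: str) -> str:
    return f"[warranty] answering: {query}"

def hr_agent(query: str) -> str:
    return f"[hr] answering: {query}"

# Map trigger keywords to the specialist that should handle them.
ROUTES = {
    "warranty": warranty_agent,
    "coverage": warranty_agent,
    "holiday": hr_agent,
    "payroll": hr_agent,
}

def route(query: str) -> str:
    """Forward the query to the first matching specialist, or ask the
    user to clarify when no route applies."""
    q = query.lower()
    for keyword, agent in ROUTES.items():
        if keyword in q:
            return agent(query)
    return "Could you clarify which topic this concerns?"
```

A production router would use an intent classifier or an LLM rather than keywords, but the shape, one coordinator in front of many narrow agents, is the same.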

It is not the same as delegating decision-making to algorithms. These are tools with a defined purpose, integrated carefully within workflows. “Our North Star is building a better world where every person is free to move and pursue their dreams,” Goodman says. “Every AI initiative aligns with that mission.” That commitment, he explains, requires both a technological and ethical foundation. Data privacy, for example, is non-negotiable: Ford will not use customer data in any AI system unless explicit consent is obtained.

Accelerating ideas into vehicles

Automotive design is still grounded in physicality: sketches, clay models, and visual intuition. But those tangible steps are being compressed and enhanced. “We have found that if you give designers faster cycle times, their creativity multiplies,” Goodman adds. “AI, in this context, is not replacing artistry. It is removing friction between ideation and execution.”

Ford uses diffusion models and flow matching to transform 2D sketches into fully rendered visuals in seconds. From there, AI generates 3D models for integration into engineering workflows. Designers can provide text prompts to alter ride height, body shape or colour while creating endless design variants. The same process applies to interiors, down to the granular design of wheel rims and spokes.

Where the shift becomes more significant is how these models integrate with engineering simulations. Traditionally, validating designs required extensive physical or computational testing: finite element analysis, virtual wind tunnels, and stress modelling. These are still used, but Ford has built AI models that simulate aerodynamic drag and other physical behaviours at speed and scale. A process that took 15 hours can now be completed in ten seconds, with less than two per cent error.
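The idea behind such speed-ups is a surrogate model: run the expensive solver on a set of sample designs, fit a cheap approximation, then validate its relative error against held-out runs. The sketch below uses a textbook drag formula and a one-parameter least-squares fit as stand-ins; Ford's actual surrogates are deep models trained on CFD output, not this toy.

```python
# Toy surrogate-model sketch: fit a cheap model to expensive solver
# output and check relative error. Formulas are illustrative placeholders.

def expensive_sim(v: float) -> float:
    """Stand-in for a long-running CFD run: drag force (N) at speed v
    (m/s), using drag = 0.5 * rho * Cd * A * v^2."""
    rho, cd, area = 1.225, 0.30, 2.2
    return 0.5 * rho * cd * area * v * v

def fit_surrogate(samples):
    """Least-squares fit of drag ≈ c * v^2 from (speed, drag) pairs."""
    num = sum(v * v * d for v, d in samples)
    den = sum(v ** 4 for v, _ in samples)
    c = num / den
    return lambda v: c * v * v

# Train on a handful of solver runs, then validate on unseen speeds.
train = [(v, expensive_sim(v)) for v in range(10, 60, 5)]
surrogate = fit_surrogate(train)

errors = [abs(surrogate(v) - expensive_sim(v)) / expensive_sim(v)
          for v in (12, 33, 47)]
assert max(errors) < 0.02  # mirrors the sub-two-per-cent bar in the text
```

The held-out error check is the important part: it is what turns a fast approximation into something an engineer can trust within stated bounds.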

“Precision matters enormously,” Goodman explains. “It is not just about how far off the model is; it is about whether the AI leads you in the wrong direction.” Using multi-GPU clusters powered by NVIDIA A100s, Ford can simulate systems with over 14 million mesh cells, orders of magnitude beyond what typical commercial solutions support.

Agents as collaborators, not just interfaces

If the early promise of AI in design and simulation lies in speed and creativity, its broader utility at Ford is emerging in less glamorous but equally critical domains such as documentation, compliance and customer service. Ford maintains approximately 120,000 engineering requirements documents across its vehicle programmes. Ambiguities in these documents can create downstream issues in manufacturing, supplier validation or service procedures.

Ford developed an engineering assistant that functions like “a grammar checker on steroids” to reduce that risk. The agent reviews requirements for completeness, clarity and atomicity. It flags vague language, buried conditions and ill-defined terms, suggesting edits that are then reviewed by engineers. This improves quality and embeds a new kind of collaboration between humans and machines focused on precision and accountability.
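A rule-based miniature of that “grammar checker on steroids” helps make the idea concrete: scan a requirement for vague wording, non-atomic phrasing, and missing imperatives, and return findings for an engineer to review. The wordlist and rules are illustrative inventions; Ford's assistant presumably combines such checks with language models.

```python
# Miniature requirements reviewer. The term list and heuristics are
# hypothetical examples, not Ford's actual rule set.

VAGUE_TERMS = ["as appropriate", "user-friendly", "adequate", "fast"]

def review_requirement(text: str) -> list[str]:
    """Return human-readable findings; an empty list means no flags."""
    findings = []
    lower = text.lower()
    for term in VAGUE_TERMS:
        if term in lower:
            findings.append(f"vague term: '{term}'")
    if " and " in lower:
        findings.append("possibly non-atomic: contains 'and' "
                        "(consider splitting into two requirements)")
    if "shall" not in lower:
        findings.append("missing imperative: requirements should use 'shall'")
    return findings
```

Crucially, the output is a list of suggestions, not an automatic rewrite, which matches the human-review loop described above.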

That same principle is being applied to customer support. Most queries require referencing multiple sources, such as manuals, warranty details, and service records. Ford’s AI agents now interpret customer questions, create a plan to search across these knowledge bases, and return synthesised answers. If a query is ambiguous, the agent seeks clarification before proceeding.
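That plan-search-synthesise loop can be sketched as a small pipeline: decide which sources to consult, gather snippets, and either combine them or ask for clarification. The document stores, topics, and keyword matcher below stand in for Ford's retrieval stack and are purely illustrative.

```python
# Sketch of the plan -> search -> synthesise loop. The sources, topics,
# and matching logic are hypothetical placeholders.

SOURCES = {
    "manual": {"tyre pressure": "Recommended tyre pressure is on the door placard."},
    "warranty": {"battery": "High-voltage batteries are covered for 8 years."},
}

def plan(question: str) -> list[tuple[str, str]]:
    """Decide which (source, topic) pairs to consult for this question."""
    q = question.lower()
    return [(source, topic)
            for source, topics in SOURCES.items()
            for topic in topics
            if topic in q]

def answer(question: str) -> str:
    steps = plan(question)
    if not steps:
        # Ambiguous query: seek clarification before proceeding,
        # as described in the text.
        return "Could you clarify your question?"
    snippets = [SOURCES[source][topic] for source, topic in steps]
    return " ".join(snippets)  # 'synthesis' here is simple concatenation
```

In a real system the planner would be an LLM and the search would hit vector indexes or a knowledge graph, but the control flow, plan first, clarify when ambiguous, then synthesise, is the same.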

What makes this work at scale is not the interface itself but the infrastructure behind it. Ford invested heavily in data integration, cleaning, structuring, and linking documents into knowledge graphs. “Synthesising answers across multiple documents required a lot of work,” Goodman explains. “But that foundational effort made a huge difference in usability.”

This also created a feedback loop. Ford continuously improves the system by monitoring how long each agent step takes, which tools are used, and where users struggle. Frequently asked questions expose knowledge gaps. Queries that lead to dead ends highlight broken links. And none of it is outsourced to chance.
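The raw signals behind that feedback loop, per-step latency and tool-usage counts, take only a few lines to capture. This is a generic instrumentation sketch with invented names, not Ford's monitoring stack.

```python
import time
from collections import Counter

# Minimal agent telemetry: time each step and tally tool usage.
# Step and tool names here are hypothetical.

step_timings: dict[str, float] = {}
tool_counts: Counter = Counter()

def timed_step(name: str, fn, *args):
    """Run one agent step, recording its wall-clock duration and count."""
    start = time.perf_counter()
    result = fn(*args)
    step_timings[name] = time.perf_counter() - start
    tool_counts[name] += 1
    return result

def lookup_manual(query: str) -> str:  # stand-in tool
    return f"manual result for {query}"

timed_step("lookup_manual", lookup_manual, "tyre pressure")
```

Aggregated over many sessions, exactly these counters reveal which steps are slow, which tools dominate, and which queries dead-end.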

Industrial edge, organisational complexity

Manufacturing and supply chain applications present a different kind of challenge. With nearly 70 facilities worldwide, no two plants are exactly alike. Standardisation only goes so far. As Goodman explains, “Rolling out AI systems across geographically dispersed and operationally distinct sites requires significant effort.” A centralised approach fails in the face of local variability. What works in a truck plant in Dearborn may not translate to a powertrain facility in Cologne.

Even so, AI is being used to reduce manual effort and accelerate decision-making in supplier validation. The integration of computer-aided engineering (CAE) simulations into Ford’s advanced product quality planning (APQP) process is still in progress, but the direction is clear: earlier validation, reduced cycle time, and higher consistency. The gains extend to in-vehicle systems as well.

Voice-activated assistants in cars are evolving from simple command-and-response systems to contextually aware agents. Ford is developing in-vehicle assistants capable of understanding not just speech but also environment, behaviour and intent. These systems are designed to be powertrain-agnostic. Whether the vehicle is electric, hybrid or internal combustion, the AI benefits remain consistent, improving safety, personalisation and user experience.

It is not, Goodman notes, an effort to mimic consumer-grade assistants. “Personally, I have always wanted my car to be like KITT from Knight Rider, something I can talk to, something smart that helps me navigate daily life,” he says. “That vision is getting closer to reality.”

What makes it real, and what keeps it grounded

One of the more revealing insights in Ford’s AI deployment is how success depends less on model performance and more on alignment with operational complexity. Running large models requires vast compute, often delivered through hybrid cloud and on-premise GPU infrastructure. But building useful systems means managing far more than inference speed or token limits.

Ford’s teams have had to build tools to test, monitor and evaluate agent workflows to understand where they fail, what they learn from interactions, and how they evolve over time. While the industry continues fantasising about proactive agents anticipating needs without prompting, Goodman is more measured. “I have not seen that in practice yet, at least not consistently,” he explains.

Instead, Ford’s approach is iterative and grounded. Where others see AI as a disruption to existing structures, Ford uses it to reinforce and improve those structures, embedding intelligence into everyday workflows without displacing the expertise that built them. Agents act as accelerators, not replacements. They navigate complexity, but they do so within defined boundaries.

It is a philosophy that executives across industries would do well to examine. This is not because Ford has all the answers but because it understands the right questions: How do you scale intelligence across a global enterprise without losing control? Where does autonomy help, and where does it hinder? And how do you make AI practical, measurable and trustworthy, especially when its capabilities are still evolving?

For Ford, those questions are not hypothetical. They are part of the design brief.
