AI startups: Where AI reshapes how work gets done


Enterprise workflows have long been shaped by software that organises information rather than understands it. In this fourth article drawn from companies presenting at NVIDIA GTC, the focus shifts to startups embedding intelligence directly into decision-making, automation, and knowledge systems, where AI is no longer merely assisting work but beginning to redefine how it is executed.

Enterprise AI is often framed as a productivity layer, something that sits on top of existing systems to make them more efficient. That framing underestimates what is beginning to change. The more consequential shift is not the automation of tasks but the restructuring of workflows themselves: how decisions are made, how data is accessed, and how systems coordinate across complexity that has historically been handled by people.

What makes this group of companies distinct is that they are not simply adding intelligence to existing processes. They are rebuilding the underlying logic of those processes, whether that is how robots generalise across tasks, how data is accessed and trusted, how legal reasoning is executed, or how large-scale interactions are managed across populations. The question is no longer how AI supports work. It is how work adapts to AI.

Building intelligence beyond data

TorqueAGI is challenging one of the central assumptions in AI: that more data is the primary path to better performance. Its focus is physical AI, where that assumption begins to break down.

“Physical AI is exciting, but the idea that we simply need more data to make it work is not correct,” Ashutosh Saxena, Founder and Chief Executive Officer of TorqueAGI, says. “In the digital world, we had access to enormous datasets, trillions of tokens, petabytes of information, and that is what allowed large models to scale. In the physical world, we do not have that luxury, and collecting that kind of data is slow, expensive, and in many cases impractical. The real problem is not how to get more data, but how to use the data we have in a more intelligent way.”

That shift in thinking leads to a different architectural approach. “Historically, robotics did not scale because every robot had its own AI, and often every component inside the robot had its own model,” Saxena says. “If you move from a humanoid robot to a tractor or a warehouse system, you are effectively rebuilding intelligence from scratch. That does not scale. What we need instead is a foundation model approach for robotics, where a single model can generalise across tasks and environments.”

TorqueAGI’s answer is what Saxena describes as physics-informed, data-driven reasoning. “Data and physics are not competing ideas, they are complementary,” he says. “The physical world has constraints that cannot be violated, so we embed those constraints directly into the model. We have built transformer architectures that incorporate physics-aware operators and loss functions, so the model understands how robots should behave at a fundamental level. That allows it to reason across different scenarios and perform a wide range of tasks with much less data.”
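The general idea of a physics-informed loss can be sketched in a few lines. The example below is a toy illustration of the principle Saxena describes, not TorqueAGI's actual architecture: a data-fit term is combined with a penalty for violating a known physical constraint (here, that predicted positions must be consistent with predicted velocities over a timestep), and the constraint term does work that would otherwise require far more labelled data.

```python
# Illustrative physics-informed loss. The function names, the constraint
# (simple kinematics: x[t+1] = x[t] + v[t] * dt), and the weighting are
# hypothetical stand-ins for whatever operators a real system would embed.

def data_loss(pred_pos, true_pos):
    # Mean squared error against the (scarce) labelled data.
    return sum((p - t) ** 2 for p, t in zip(pred_pos, true_pos)) / len(true_pos)

def physics_residual(pred_pos, pred_vel, dt):
    # Penalise trajectories that violate the kinematic constraint.
    residuals = [
        (pred_pos[i + 1] - (pred_pos[i] + pred_vel[i] * dt)) ** 2
        for i in range(len(pred_pos) - 1)
    ]
    return sum(residuals) / len(residuals)

def physics_informed_loss(pred_pos, pred_vel, true_pos, dt, lam=10.0):
    # The physics term acts as a regulariser: physically impossible
    # behaviour is ruled out even where no labelled data exists.
    return data_loss(pred_pos, true_pos) + lam * physics_residual(pred_pos, pred_vel, dt)
```

A trajectory that satisfies the constraint scores strictly lower than one that violates it, which is what lets the model generalise from much less data.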

The practical implication is speed. “Customers do not want to wait years to see results,” Saxena says. “They want to deploy systems quickly, and that means models have to be data efficient. By combining physics and data, we can build systems that operate at the edge, across logistics, manufacturing, and agriculture, without requiring massive training pipelines.” The ambition is not simply to improve robotics, but to make it scalable in a way it has not been before.

Unlocking data beyond the internet

Redpine AI is working on a different constraint: the quality and accessibility of the data AI systems rely on. As models have grown, so has their reliance on publicly available information, much of which is noisy, incomplete, or unreliable.

“AI today is trained on a huge amount of internet data, and a lot of that data is not high quality,” Leonora Vesterbacka, Founder and Data Scientist at Redpine AI, says. “This creates problems such as hallucinations and lack of accuracy, and the issue becomes even more serious when we move into domains like healthcare, finance, and law, where the cost of error is high.”

The company’s response is to build a platform that connects AI systems to proprietary, high-quality data sources that sit outside the public internet. “We enable AI builders to access data that is behind paywalls or not available online at all, but to do so in a compliant way,” Vesterbacka says. “We are not scraping or extracting data without permission. We work directly with data owners and allow access through structured agreements, where usage is tracked and monetised.”

That approach changes how data is used inside AI systems. “Access alone is not enough,” she explains. “You also need to optimise how that data is retrieved, ranked, and integrated into the model’s reasoning. We have built our own retrieval and re-ranking systems, along with pricing and distribution layers, so that agents can access the right data through APIs and pay only for what they use.”
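The retrieve-then-re-rank-then-meter pipeline Vesterbacka describes can be sketched minimally. Everything below is a toy illustration, assuming keyword-overlap scoring and a flat per-document fee; a production system would use learned retrievers and a cross-encoder re-ranker, and this is not Redpine's API.

```python
# Minimal sketch of retrieval, re-ranking, and metered access.

def retrieve(query, corpus, k=3):
    # First pass: rank every document by term overlap with the query.
    terms = set(query.lower().split())
    scored = [(len(terms & set(doc.lower().split())), doc) for doc in corpus]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

def rerank(query, candidates):
    # Second pass: a finer-grained score over the short candidate list
    # (here, overlap as a fraction of document length).
    terms = set(query.lower().split())
    return sorted(
        candidates,
        key=lambda d: len(terms & set(d.lower().split())) / len(d.split()),
        reverse=True,
    )

class MeteredAccess:
    # Record which documents an agent actually consumed, so that data
    # owners are paid only for what was used.
    def __init__(self, price_per_doc=0.01):
        self.price_per_doc = price_per_doc
        self.usage = []

    def fetch(self, query, corpus):
        results = rerank(query, retrieve(query, corpus))
        self.usage.extend(results)
        return results

    def bill(self):
        return len(self.usage) * self.price_per_doc
```

The design point is that billing hangs off the retrieval layer itself: usage tracking is a side effect of every fetch, not a separate accounting system.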

The result is a shift away from general-purpose knowledge towards domain-specific intelligence. “When you move beyond internet data, you are entering territories that models have not been trained on before,” Vesterbacka says. “That allows for more accurate, more reliable systems, particularly in areas where trust matters.” In that sense, Redpine is not building a model. It is building the conditions under which models can be trusted.

Rebuilding legal reasoning from the ground up

Forlex is operating in a domain where the stakes are particularly high. Legal systems rely on precision, interpretation, and context, yet much of the current wave of AI in this space has been built on general-purpose models that struggle with hallucination and reliability.

“Five billion people in the world do not have access to meaningful justice,” says Daniel Bichuetti, Co-Founder of Forlex. “That is not a marginal issue. It is a structural failure, and there is no way to address it at scale without technology. At the same time, many of the AI solutions in the legal space are built as wrappers around large models, and those models are black boxes. The data is not transparent, and hallucination rates in legal contexts can reach levels that are simply unacceptable.”

The company has taken a different path. “We build our own models using our own data,” Bichuetti says. “We have collected large volumes of legal data from contracts, courts, and other sources, and used that to train systems that can operate with much higher reliability. Our models can run on premise, inside private environments, and achieve significantly lower hallucination rates than general-purpose systems.”

That focus on control extends to architecture. “We use a tiered model structure,” Bichuetti explains. “At the top, there is a reasoning model that can act at the level of a legal expert. Below that, there are models designed for specific roles, such as analysis or integration into software systems. This allows organisations to deploy AI in a way that matches their operational needs, rather than relying on a single model for everything.”

Data sovereignty is central to the argument. “In legal systems, you cannot assume that data will be shared freely,” he says. “Governments and enterprises need to control where their data is stored and how it is used. That is why our models can operate in isolated environments, without reliance on external infrastructure.” The broader implication is that AI in high-stakes domains cannot simply be imported. It must be built with the constraints of that domain in mind.

Scaling interaction across populations

CoRover.AI is addressing a different challenge: how large organisations manage interactions at scale. Its platform has been deployed across sectors such as railways, banking, insurance, and government services, handling billions of user interactions across multiple languages and channels.

“We started by solving a problem for large enterprises and government organisations that needed to interact with millions of users,” Ankush Sabharwal, Co-Founder and Chief Executive Officer of CoRover.AI, says. “There was no platform that could operate at that scale, across multiple languages, and handle complex interactions in a consistent way.”

The system has evolved from conversational AI into a broader platform for building and deploying AI agents. “Today, we support more than a billion users through thousands of agents operating in over 100 languages,” Sabharwal says. “These agents are not just answering questions. They are handling transactions, providing recommendations, and supporting decision-making across a wide range of use cases.”

What distinguishes the platform is its focus on speed and accessibility. “We have built a system where you can create an agent simply by speaking,” he says. “Not just use it but create it. That allows organisations to deploy solutions in seconds rather than weeks or months.” The underlying architecture includes models, orchestration layers, retrieval systems, and security controls, all designed to operate in enterprise environments.

The scale of deployment is part of the argument. “We have handled trillions of transactions and tens of millions of active users every month,” Sabharwal says. “The platform has delivered significant improvements in efficiency and cost, particularly in large-scale environments such as public services and financial systems.” The next step is expansion beyond its initial markets, applying the same model to other regions where similar challenges exist.

Where workflows are rewritten

Across these companies, the common thread is not simply the application of AI to enterprise problems. It is the restructuring of how those problems are approached. TorqueAGI is rethinking how intelligence is built for physical systems, Redpine is redefining how data is accessed and trusted, Forlex is rebuilding legal reasoning with domain-specific models, and CoRover is scaling interaction to population level.

That points to a broader shift. Enterprise AI is moving beyond tools that support existing workflows towards systems that reshape them. The question is no longer how to make current processes more efficient, but how those processes change when intelligence is embedded at their core.

This marks another step in the progression of the series. The first article explored AI in the physical world, the second in industrial systems, the third in healthcare and life sciences. Here, the focus turns to enterprise environments, where the challenge is not only technical but organisational. The next group of startups emerging from NVIDIA GTC moves into risk, trust, and security, where the consequences of failure become even more visible and the tolerance for error continues to narrow.

All companies featured in this article are part of the NVIDIA Inception programme, which supports startups developing cutting-edge technologies with access to NVIDIA’s expertise, tools and go-to-market resources. The initiative is designed to help early-stage companies scale faster and bring advanced AI-driven innovations into real-world deployment.
