Artificial intelligence is being forced into environments where systems must operate continuously, not just perform occasionally. In this second article drawn from companies presenting at NVIDIA GTC, the focus shifts to industrial settings, where labour shortages, infrastructure limits, and physical constraints are shaping how AI is deployed and scaled.
The language around industrial AI still tends to drift towards futurism, as if the hard part were simply making the model clever enough. In practice, the difficulty lies elsewhere. Systems are being deployed into environments where conditions are inconsistent, infrastructure is often outdated, and the consequences of failure are immediate rather than abstract. A factory, a farm, or a utility network does not respond to theoretical capability. It responds to whether a system can function, repeatedly, under pressure.
That distinction begins to reshape what intelligence means. Performance in isolation matters less than continuity across time, across changing conditions, and across systems that were never designed with AI in mind. The companies emerging in this space are not trying to build better models in isolation. They are trying to make those models usable in environments where visibility is partial, labour is constrained, and operational margins are tight.
Monitoring livestock at scale
Intflow is applying AI to livestock farming, where scale has increased but visibility has not kept pace. Its edgeFarm platform uses conventional cameras to monitor animal behaviour continuously, analysing patterns such as movement, feeding, and inactivity to identify early signs of disease and stress.
“Every year, 250 million pigs die from disease, and outbreaks waste billions in feed, creating both financial and environmental losses,” Kwang Myung Jeon, Chief Executive Officer of Intflow, says. “The root cause is a global labour shortage. There are simply not enough people to monitor these farms properly, so in many cases we are raising animals without really seeing what is happening.”
That lack of visibility is not a marginal issue. It is structural. Large-scale farms rely on human observation that cannot scale with herd size or operate continuously. “That is why we developed edgeFarm as a proactive AI vet system,” Jeon says. “Using standard IP cameras, we analyse behavioural biomarkers such as lethargy or reduced feed intake, which allows us to detect symptoms that would otherwise remain invisible and identify individual animals that are becoming sick.”
The system operates in real time, processing video locally using edge infrastructure. “We can process video from hundreds of cameras without latency and generate continuous intelligence about what is happening on the farm,” Jeon says. “By connecting multiple cameras, we are not just looking at one animal but understanding the entire environment and how behaviour changes over time.”
That shift allows intervention to move earlier in the cycle. “Instead of losing an entire barn tomorrow, you can treat one sick pig today,” Jeon explains. “The system sends alerts directly to the farmer, identifies the location of the animal, and provides its history so action can be taken immediately.”
The commercial impact is clear enough to matter. “We have seen a two per cent reduction in mortality and a ten per cent reduction in feed costs,” Jeon says. “On a 20,000-pig farm, that can translate to savings of around $400,000 annually.” What Intflow is restoring is not simply efficiency, but visibility in an environment that has been operating with too little of it.
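The arithmetic behind that figure can be reproduced as a back-of-envelope check. The per-pig unit costs below are illustrative assumptions chosen to make the numbers line up, not figures provided by Intflow:

```python
# Back-of-envelope check of the reported savings on a 20,000-pig farm.
# The unit costs are illustrative assumptions, not figures from Intflow.
HERD_SIZE = 20_000
PIG_MARKET_VALUE = 200.0           # assumed value of one market pig, USD
ANNUAL_FEED_COST_PER_PIG = 160.0   # assumed yearly feed spend per pig, USD

mortality_savings = HERD_SIZE * 0.02 * PIG_MARKET_VALUE     # 2% fewer deaths
feed_savings = HERD_SIZE * ANNUAL_FEED_COST_PER_PIG * 0.10  # 10% less feed

total = mortality_savings + feed_savings
print(f"Estimated annual savings: ${total:,.0f}")
```

Under these assumed unit costs, the two effects together come to roughly $400,000 a year, with the feed reduction contributing the larger share.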
Teaching machines dexterity
RLWRLD is focused on a different constraint: the persistence of human labour in environments that are otherwise automated. The problem is not that machines cannot move or repeat tasks. It is that they struggle with dexterity, irregularity, and tasks that require coordination between vision, force, and memory.
“Many countries are trying to build fully automated factories, but still around half of the work is done by humans,” Junghee Ryu, Chief Executive Officer and Founder of RLWRLD, says. “The reason is that robots cannot yet replicate human dexterity, especially when tasks require fine manipulation and multiple degrees of freedom.”
The company’s approach begins with capturing human motion and translating it into robotic action. “We can take a single video of a skilled worker, extract the skeleton movement, and retarget that movement to a robot,” Ryu says. “On top of that, we build synthetic data pipelines and train our own vision language action models.”
That model is extended through additional components designed to handle the physical nature of tasks. “Dexterity is not only about motion,” Ryu explains. “You need memory to complete multi-step tasks, you need force sensing to understand contact, and you need a world model to anticipate what will happen next.” The system combines these elements so that actions are not only executed but adapted in context.
The examples are deliberately unremarkable: opening a bottle, pouring liquid without spilling, handling deformable objects. These are tasks that sit between structured automation and human instinct, and they are precisely where most systems fail. “This is where the gap still exists,” Ryu says. “Closing that gap is what allows automation to move further into real industrial work.”
Efficiency is part of the argument. “We have achieved strong performance using a fraction of the compute required by larger models,” Ryu says. “This allows us to move faster and scale more efficiently.” The broader point is that dexterity, long treated as a limiting factor, is becoming a tractable problem.
Scaling robotics in production
Sensmore is operating in environments where automation has historically struggled to move beyond pilots. Its systems are deployed in sectors such as mining, construction, and material processing, where heavy equipment, complex workflows, and safety constraints create barriers to scale.
“You should think of it as a real-world production system rather than a single robot,” Bjarne Johannsen, Chief Technology Officer and Co-Founder of Sensmore, says. “We are dealing with large machines moving raw material, feeding conveyors, and supporting downstream processing. This is not a controlled environment. It is a continuous operation.”
The company’s argument is that previous generations of robotics failed because they relied too heavily on engineered rules rather than data-driven systems. “Robotics 1.0 did not scale because it depended on manual engineering,” Johannsen says. “What changes now is that we can build generalisable systems that improve through deployment and through data.”
That requires intelligence at different speeds. “In some cases, decisions need to be made in milliseconds, so you need a fast reactive system,” he explains. “In more complex situations, you need reasoning that can handle uncertainty and plan ahead.” Sensmore combines these layers, using AI models for perception and decision-making alongside robotics systems for execution.
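One way to picture that layering is a control loop in which a fast reactive check runs on every tick while a slower planning layer updates the target only periodically. This is a minimal sketch of the general pattern, not Sensmore’s implementation:

```python
# Minimal sketch of a two-speed control loop: a fast reactive layer runs
# every tick, while a slower planning layer replans only periodically.
# Illustrative of the layering described above, not Sensmore's system.

def reactive_layer(obstacle_near: bool, planned_speed: float) -> float:
    """Millisecond-scale safety reflex: override the plan if needed."""
    return 0.0 if obstacle_near else planned_speed

def planning_layer(material_remaining: float) -> float:
    """Slower deliberative step: choose a target speed for the next window."""
    return 1.5 if material_remaining > 0 else 0.0

planned_speed = 0.0
commands = []
for tick in range(10):
    if tick % 5 == 0:                  # slow loop: replan every 5 ticks
        planned_speed = planning_layer(material_remaining=100.0)
    obstacle = (tick == 3)             # simulated obstacle at tick 3
    commands.append(reactive_layer(obstacle, planned_speed))

print(commands)  # speed drops to zero only at the tick with an obstacle
```

The point of the structure is that safety-critical overrides never wait on the planner, while the planner is free to reason over a longer horizon.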
The company also emphasises the importance of integration. “You cannot just build a robot and expect it to work,” Johannsen says. “You need the full system, including fleet management, site visibility, and integration into the customer’s operations. That includes software to monitor machines, coordinate workflows, and provide a unified view of the site.”
The key distinction is that these systems are already operating in production. “Our machines are moving tonnes of material every night in real environments,” Johannsen says. “We are working with large quarry sites and delivering systems that are fully operational and safety certified.” The gap between demonstration and deployment remains one of the defining challenges in industrial AI, and Sensmore is positioning itself on the side where that gap has been crossed.
Unlocking power for AI
GridCARE addresses a constraint that increasingly affects every other layer of the AI stack: access to power. As data centre demand accelerates, the limiting factor is not only compute or land, but the ability to connect to the grid within a viable timeframe.
“Everyone is talking about building AI infrastructure, but access to energy is the real bottleneck,” Shaneez Mohinani, Vice President and Head of Strategy and Operations at GridCARE, says. “The challenge is not just future generation. It is how to access power today.”
The company’s approach is based on the observation that the grid is underutilised, but unevenly so. “The grid is constrained, but only at certain times and under specific conditions,” Mohinani explains. “We use AI models to identify those constraints and determine how capacity can be unlocked.”
That allows developers to rethink where and how they build. “We can help expand capacity at existing sites or identify new locations based on grid conditions,” Mohinani says. “In many cases, the capacity already exists, but it is not visible without this level of analysis.”
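The underlying observation, that a connection point is constrained only in certain hours, can be illustrated with a toy headroom calculation. The line limit and hourly load profile below are invented numbers for the sketch, not GridCARE data:

```python
# Toy illustration of time-varying grid headroom: a connection point is
# "constrained" only in the hours where existing load approaches the limit.
# The line limit and the 24-hour load profile are invented numbers.
LINE_LIMIT_MW = 100.0

hourly_load_mw = [62, 58, 55, 54, 56, 60, 70, 82, 90, 95, 98, 99,
                  97, 93, 88, 84, 86, 92, 96, 91, 83, 75, 68, 64]

headroom = [LINE_LIMIT_MW - load for load in hourly_load_mw]
constrained_hours = sum(1 for h in headroom if h < 10)  # under 10 MW spare

print(f"Hours with under 10 MW of headroom: {constrained_hours} of 24")
print(f"Minimum headroom: {min(headroom):.0f} MW")
```

In this toy profile the connection is tight for only a third of the day; the rest of the time there is spare capacity that a purely peak-based assessment would not reveal.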
The impact can be significant. “In one of the most congested data centre markets in the US, we helped unlock 400 MW of capacity and accelerate around $10 billion in investment,” Mohinani says. “That was in a market where development was effectively stalled.” The implication is that AI is not only driving demand for infrastructure but also enabling the optimisation of the systems that support it.
Rethinking physical mobility
Beltways approaches the problem from a different angle, focusing on the movement of people rather than materials or data. Its system is a modular, high-speed pedestrian transport platform designed for environments such as airports and large venues, where short-distance travel remains inefficient.
“Most trips taken every day are relatively short, but we do not have an effective mass transit solution for that distance,” John Yuksel, Co-Founder of Beltways, says. “We are building a system that can move people significantly faster than existing walkways and integrate into dense environments.”
The system is built from modular components that can be installed quickly and scaled as needed. “Each module connects to the next, allowing the system to extend over longer distances without complex construction,” Yuksel says. “The infrastructure sits above ground, reducing installation time and disruption.”
What brings it into the AI conversation is the integration of sensing and control. “We use computer vision for safety, so if someone falls, the system can stop immediately,” Yuksel explains. “We can also detect different types of users and adapt the system to support accessibility. The platform includes predictive maintenance and monitoring across modules, allowing operators to manage the system as a connected network rather than static infrastructure.”
The concept is different from most AI deployments, but it reflects the same shift towards embedding intelligence into physical systems. The goal is not to replace existing infrastructure entirely, but to improve how it operates and responds in real time.
Where systems must operate
Across these companies, the common factor is not the specific technology being used, but the conditions under which it must function. These are environments where systems need to operate continuously, where labour is limited, and where the cost of failure is immediate.
That changes the conversation around AI. Performance is no longer measured only by capability, but by reliability in context. A system that performs well in isolation has limited value if it cannot sustain that performance in the environments where it is deployed.
There is also a clear progression from the previous article. The first group of companies focused on perception and spatial understanding. This group moves further into operation, where systems are not only observing but acting, planning, and coordinating within industrial settings.
As AI continues to move into these environments, the distinction between model and system becomes more pronounced. The companies that succeed will be those that can bridge that gap, turning intelligence into something that operates consistently under real-world conditions. The next wave of start-ups emerging from NVIDIA GTC moves into healthcare and life sciences, where those conditions become even more demanding and the margin for error becomes narrower still.
All companies featured in this article are part of the NVIDIA Inception programme, which supports start-ups developing cutting-edge technologies with access to NVIDIA’s expertise, tools and go-to-market resources. The initiative is designed to help early-stage companies scale faster and bring advanced AI-driven innovations into real-world deployment.