AI startups: AI leaving the lab and entering the real world

Artificial intelligence is now being tested in environments that do not tolerate abstraction. In the first of seven articles on AI start-ups presenting at NVIDIA GTC, the focus shifts to companies deploying AI into the physical world, where unreliable data, ageing infrastructure, and operational risk expose what these systems can actually do.

For years, the centre of gravity in artificial intelligence has sat inside the model, where progress has been defined by benchmarks and increasingly abstract measures of capability. That framing is beginning to give way as systems move into environments that are not designed to accommodate them. These are not controlled settings, but operational ones, where visibility is incomplete, conditions shift without warning, and the margin for error is often measured in physical terms rather than computational ones.

The implication is not simply that deployment is harder than development. It is that the definition of intelligence changes when systems are forced to operate under constraint. Performance in isolation matters less than the ability to function consistently over time, across environments that introduce noise, ambiguity, and failure points at every stage. This is where a different class of company is beginning to emerge, focused less on improving models in isolation and more on making them usable where it counts.

Seeing risk before it happens

Ailytics operates directly inside this gap, where the problem is not the absence of data but the inability to interpret it fast enough to prevent something going wrong. Industrial sites are already filled with cameras, yet those systems are largely passive, dependent on human monitoring that cannot realistically scale across dozens of feeds or long operating hours.

“Most of the time, all these issues are quite bad, in the sense that there are a lot of risks that are not being seen, and there are a lot of activities, whether intentional or not, or due to fatigue, that are not caught,” Wei Zhuang Tan, Founder and CEO of Ailytics, says. “They result in unsafe acts, accidents, incidents, insurance claims, and liabilities.” What the company has built is a layer that sits across this existing infrastructure, connecting even low-resolution or ageing cameras to a system capable of generating real-time insight, whether deployed at the edge, on premise, or in the cloud.

The difficulty lies in the environment itself. These are sites where lighting conditions deteriorate quickly, weather interferes with visibility, and objects that matter may occupy only a small portion of the frame. “We are dealing with completely remote spaces for defence, mines, and quarries that have the worst conditions in terms of lighting, weather, and scenario complexity,” Tan explains. “So, we have built core technologies such as converting a single camera feed from 2D to 3D, removing the need for LiDAR or radar as long as millimetre accuracy is not required.

“If we want to detect someone under a suspended load with a single camera, we must understand the entire 3D space on the X, Y, and Z axes to project where the load would fall. Otherwise, the use case does not make sense,” Tan says. The system extends beyond individual alerts, combining signals across cameras and recognising patterns that emerge over time. “We can also do very complex use case manipulation and chaining to allow for scenarios that do not happen on a per camera basis,” he adds.
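Tan's description of projecting a fall zone from a single camera amounts to intersecting a pixel's viewing ray with known horizontal planes. The sketch below illustrates that geometry with a hypothetical calibrated overhead camera; Ailytics' actual 2D-to-3D method is not public, and every value here (intrinsics, pose, heights, exclusion radius) is an assumption for illustration only.

```python
import numpy as np

# Hypothetical calibration for a single fixed camera mounted 10 m above the
# ground, looking straight down. Real sites would calibrate each camera.
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])                 # intrinsics (pixels)
R = np.diag([1.0, -1.0, -1.0])                  # camera optical axis points down (-Z)
t = np.array([0.0, 0.0, 10.0])                  # places the camera centre at (0, 0, 10)

def backproject_to_height(pixel, height, K, R, t):
    """Intersect the viewing ray through `pixel` with the horizontal plane z = height."""
    ray_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    ray_world = R.T @ ray_cam                   # ray direction in world coordinates
    origin = -R.T @ t                           # camera centre in world coordinates
    s = (height - origin[2]) / ray_world[2]     # scale factor to reach the target plane
    return origin + s * ray_world

def in_fall_zone(person_px, load_px, load_height, radius, K, R, t):
    """True if the person stands within `radius` metres of the point below the load."""
    load_ground = backproject_to_height(load_px, load_height, K, R, t)[:2]
    person_ground = backproject_to_height(person_px, 0.0, K, R, t)[:2]
    return np.linalg.norm(person_ground - load_ground) <= radius

# A load at 6 m seen at the image centre hangs directly over the origin; a person
# standing 1.4 m away is inside a 2 m exclusion radius, one 5.4 m away is not.
print(in_fall_zone((1100, 540), (960, 540), 6.0, 2.0, K, R, t))   # True
print(in_fall_zone((1500, 540), (960, 540), 6.0, 2.0, K, R, t))   # False
```

The sketch also shows why, as Tan notes, the use case collapses without full 3D understanding: the load and the person sit on different horizontal planes, so misjudging either height shifts the projected fall zone on the ground.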

Performance is defined by efficiency as much as accuracy. “We deploy across the entire NVIDIA stack, from small edge devices in remote sites to enterprise GPUs, and we are very compute efficient,” Tan notes. “On Jetson, we can run up to around 16 cameras in real time, and on an RTX Pro 6000, close to 90 cameras with multiple use cases on each.”

More recent developments introduce a second layer of interpretation. “With VLM approaches, we enhance accuracy and provide predictive insights about what is happening on the ground,” Tan says, “but the most exciting part is contextual understanding, where we mimic an experienced professional looking at an entire scene.”

The outcome is measured in operational terms rather than technical ones. “I can safely say that we have saved 13 lives as of today,” Tan says. “In one mid-sized construction site with over 50 cameras, we reduced non-compliance by almost 70 percent in 16 months.”

Reconstructing the real world

XGRIDS is working further upstream, addressing the problem of how the physical world is captured and reconstructed for machines in the first place. The limitations of existing approaches are well understood but often accepted as trade-offs rather than solved directly.

“Everybody knows there are many ways to create 3D spatial data, but we see significant limitations,” Sunny Liao, Global Director of Sales at XGRIDS, says. “Manual modelling is time consuming and expensive, photogrammetry struggles with complex or reflective environments, and AI-generated content may look convincing but does not replicate the real world accurately. What we need is not just high fidelity, but true geometry accuracy, the exact dimensions of the real world.”

The company’s approach combines spatial capture and processing into a single workflow, using a device equipped with LiDAR, multiple cameras, and inertial measurement to scan environments, which are then converted into Gaussian splat models that retain both visual detail and structural precision.

“You start by scanning the environment, then process that data into high-quality models that are exact replications of the real world, and export them into simulation platforms to begin training,” Liao says. The emphasis is not only on the output, but on reducing the effort required to produce it. “Our goal is to take care of data collection and processing so that users can focus on selecting the right environment and training their systems.”

The same system can be applied across environments that differ significantly in scale and complexity, from confined indoor spaces to large industrial facilities. Once captured, those environments can be modified and used to generate additional data streams that accelerate training and improve the transition from simulation to real-world deployment. “You can simulate scenarios and generate multi-source data directly from the model to accelerate training,” Liao concludes.

Automation beyond human limits

Raise Robotics sits at the point where perception becomes action, translating intelligence into physical work in environments that have historically resisted automation. “There is a lot of work in this world that is truly not suited for the human body, especially in heavy industry,” Gary Chen, CEO and Co-Founder of Raise Robotics, says. “People must work with structures and objects much larger than they are, and that size mismatch creates hazards. These risks are embedded in everyday operations rather than exceptional scenarios.

“People are working at heights where they may fall, which is one of the leading causes of fatalities, and they are manipulating large objects that can easily crush or impact them,” Chen explains. The company’s response is the autonomous mobile fabricator, designed to operate across these environments with a level of adaptability that traditional automation has struggled to achieve.

“We are building a general robotic platform for heavy industrial automation that can perform tasks in these environments safely and consistently,” Chen says. The intelligence stack reflects deployment constraints, relying heavily on synthetic data and simulation. “We have developed a method to train our models using only synthetic data, without needing real-world onboard data collection.

“We have units working on construction sites and in fabrication facilities today, generating revenue across multiple sectors. Physical access to these sites is highly controlled, and very few companies can deploy and collect data there. That allows us to build proprietary datasets and a defensible intelligence layer.”

Rewriting physical product design

HILOS approaches the physical world through the constraints that still define how products are designed and manufactured. “Hardware is hard, and that is because the barriers to creating physical products are still incredibly high,” Elias Stahl, Founder and CEO of HILOS, says. “When you digitise something like music or film, the upfront costs shrink, cycle times collapse, and distribution becomes almost free. That has not happened yet in physical products.

“A designer today must move through multiple specialists, each with their own tools, knowledge, and timelines. The barriers to entry remain high at a time when anyone can build and ship software.” The company’s platform is designed to collapse those steps into a unified system.

“We have built machine learning and 3D geometry pipelines that understand manufacturing constraints, so that anyone can design a product that can actually be made,” Stahl says. “You can design a product, generate it in 3D, and send it to print almost immediately. The promise of 3D printing was that anyone with a file could make something real, but you still needed to know how to design for manufacturing. We close that gap.”

Where constraints define outcomes

Across these companies, the pattern is not defined by a shared technology stack or even a common market. What connects them is the environment in which they are choosing to operate. These are not systems being built for ideal conditions, but for settings where visibility is incomplete, infrastructure is imperfect, and outcomes carry operational consequence.

That distinction matters. It shifts the conversation away from what models can theoretically achieve and towards what systems can sustain in practice. In each case, intelligence is being shaped not by the pursuit of capability in isolation, but by the constraints of deployment, whether that is a construction site, a factory floor, a simulated environment, or a fragmented design process that has resisted digitisation.

The direction of travel is clear, even if it is uneven. As AI continues to move beyond controlled environments, it will encounter more of these constraints, and more of these trade-offs. The companies that succeed will not necessarily be those with the most advanced models, but those that can make those models function where conditions are least forgiving.

This is only one part of that shift. The next wave of start-ups emerging from NVIDIA GTC is addressing a different layer of the problem, where intelligence moves deeper into industrial systems and operational workflows, and where the challenge is no longer just perception, but integration.
