Las Vegas, CES 2026 — NVIDIA is accelerating the future of robotics and autonomous systems with a new suite of open physical AI models and frameworks unveiled at CES 2026. The initiative aims to bring robots and autonomous machines out of research labs and into real-world environments faster, more safely and at greater scale.
At the core of this strategy is NVIDIA Omniverse, powered by OpenUSD, which provides a standardized framework for building high-fidelity digital twins. These digital environments allow developers to simulate, train and validate robots before deployment, reducing risk and improving real-world performance.
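For readers unfamiliar with OpenUSD, the workflow centers on composing layered scene descriptions that many tools can read and write. The minimal sketch below uses the open-source pxr Python bindings (installable as the usd-core package); the stage file, prim paths and referenced asset name are placeholders for illustration, not part of Omniverse itself.

```python
# Illustrative digital-twin stage built with the open-source OpenUSD Python bindings.
# Assumes `pip install usd-core`; all file and prim names here are placeholders.
from pxr import Usd, UsdGeom, Gf

# Create a new stage (the root layer of the digital twin) and set basic metadata.
stage = Usd.Stage.CreateNew("factory_twin.usda")
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.z)
UsdGeom.SetStageMetersPerUnit(stage, 1.0)

# A scope for the facility and a transformable prim for one robot cell.
UsdGeom.Scope.Define(stage, "/Factory")
robot = UsdGeom.Xform.Define(stage, "/Factory/RobotCell")
robot.AddTranslateOp().Set(Gf.Vec3d(2.0, 0.0, 0.0))

# In a real pipeline the detailed robot asset would be referenced in from its own
# layer, so simulation, rendering and planning tools share one source of truth:
# robot.GetPrim().GetReferences().AddReference("robot_arm.usd")

stage.GetRootLayer().Save()
```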
From Simulation to Real-World Deployment
NVIDIA’s physical AI stack spans the entire development lifecycle — from world simulation and synthetic data generation to cloud orchestration and edge deployment. The platform integrates technologies including NVIDIA Cosmos world models, Isaac Sim, Isaac Lab-Arena, Alpamayo AI models, and the OSMO orchestration framework, enabling autonomous systems to reason, learn and act in dynamic environments.
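As one concrete illustration of the synthetic data generation step, the hedged sketch below uses the Omniverse Replicator Python API bundled with Isaac Sim. It assumes a running Isaac Sim or Omniverse Kit Python environment; module paths, writer names and randomization options can vary by release, and the scene contents and output directory are purely illustrative.

```python
# Hedged synthetic-data sketch using Omniverse Replicator inside an Isaac Sim /
# Omniverse Kit Python environment. API details may differ between releases.
import omni.replicator.core as rep

with rep.new_layer():
    camera = rep.create.camera(position=(0.0, 0.0, 5.0))
    render_product = rep.create.render_product(camera, (1024, 1024))

    # A handful of labeled props whose poses are randomized on every frame.
    props = rep.create.cube(count=5, semantics=[("class", "prop")])

    with rep.trigger.on_frame(num_frames=100):
        with props:
            rep.modify.pose(
                position=rep.distribution.uniform((-2.0, -2.0, 0.0), (2.0, 2.0, 0.0)),
                rotation=rep.distribution.uniform((0, 0, 0), (0, 0, 360)),
            )

    # Write RGB frames and 2D bounding boxes for downstream training.
    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(output_dir="_synthetic_out", rgb=True, bounding_box_2d_tight=True)
    writer.attach([render_product])

rep.orchestrator.run()
```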
At CES, developers demonstrated real-world applications ranging from heavy construction equipment and factory assistants to social and surgical robots, highlighting how open AI infrastructure is driving rapid innovation across industries.
Industry Adoption Across Sectors
Caterpillar showcased its Cat AI Assistant, powered by NVIDIA Nemotron agentic AI models and running on NVIDIA Jetson Thor. The system enables natural-language interaction inside vehicle cabs while leveraging Omniverse-based digital twins to simulate job-site layouts, traffic flows and equipment coordination before real-world deployment.
In healthcare, LEM Surgical presented its FDA-cleared Dynamis Robotic Surgical System, which uses NVIDIA Jetson AGX Thor, Holoscan and Isaac for Healthcare to train autonomous surgical arms. The system employs digital twin simulation and synthetic data generation to enhance precision in complex spinal procedures.
Meanwhile, NEURA Robotics, AgiBot, Intbot, and ROBOTIS demonstrated how NVIDIA’s open physical AI frameworks are being used to train humanoid, service and social robots through sim-to-real workflows that improve reliability and safety.
Open Source at the Core
NVIDIA emphasized that open source is essential to scaling robotics innovation. By providing shared simulation environments, interoperable data standards and open AI models, the company is enabling developers to collaborate across ecosystems and accelerate deployment timelines.
Partnerships with platforms like Hugging Face’s LeRobot, along with the integration of Isaac GR00T models, further lower barriers for robotics developers, allowing both startups and enterprises to adopt advanced AI workflows.
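As a rough indication of what that lower barrier looks like in practice, the sketch below loads a public demonstration dataset with Hugging Face's LeRobot library and wraps it in a standard PyTorch DataLoader. The import path and the lerobot/pusht dataset id follow recent LeRobot releases and may differ in other versions; this is not an NVIDIA- or GR00T-specific API.

```python
# Hedged example of an open robot-learning data workflow with Hugging Face's LeRobot.
# The import path varies across LeRobot releases (newer versions drop "common");
# the dataset id "lerobot/pusht" is a public demonstration dataset on the Hub.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
from torch.utils.data import DataLoader

# Pull episodic robot demonstrations from the Hugging Face Hub.
dataset = LeRobotDataset("lerobot/pusht")

# LeRobot datasets behave like plain PyTorch datasets, so standard loops apply.
loader = DataLoader(dataset, batch_size=32, shuffle=True)
batch = next(iter(loader))
print(batch["observation.state"].shape, batch["action"].shape)
```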
Shaping the Future of Physical AI
As robotics and autonomy expand into construction, healthcare, manufacturing and everyday life, NVIDIA’s open physical AI approach positions Omniverse as a foundational platform for the next generation of intelligent machines.
By unifying simulation, AI reasoning and real-world execution, NVIDIA is helping transform autonomous systems from experimental concepts into practical, scalable solutions for the physical world.
