Imagine the bustling floor of tomorrow’s manufacturing facility: robots, schooled in multiple disciplines through adaptive AI education, work seamlessly and safely alongside human counterparts. These robots can switch fluidly between tasks, from assembling complex electronic components to handling machinery assembly. Each robot’s unique education allows it to predict maintenance requirements, optimize energy consumption, and innovate processes on the fly, based on real-time data analysis and experiences learned within its digital world.
Robots will be trained in a “virtual” school, a carefully simulated environment inside the industrial metaverse, where they can learn complex skills in a fraction of the time it would take humans to do so.
Beyond traditional programming
Training industrial robots used to be like traditional schooling: rigid, predictable, and limited to repeating the same tasks. We are now at the beginning of a new era. Robots can learn in “virtual classrooms,” immersive environments in the industrial metaverse that use simulation, digital twins, and AI to mimic real-world conditions in detail. This digital world provides an almost limitless training ground that replicates real factories, production lines, and warehouses, where robots can practice tasks, face challenges, and develop problem-solving skills.
What used to take days or weeks of real-world programming, with engineers carefully adjusting commands to get a robot to perform a simple task, can now be learned within hours in virtual spaces. This method, called Sim2Real, combines virtual training with real-world application to bridge the gap between simulated and actual performance.
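The core Sim2Real recipe can be sketched in miniature. The toy Python below is a minimal sketch under stated assumptions: a controller gain is trained on a one-dimensional reaching task against many domain-randomized copies of a simulator, then evaluated on a fixed “real” configuration the training never saw. Every name and equation here is illustrative, not any vendor’s actual tooling.

```python
# Sim2Real in miniature: learn a controller gain in a randomized simulator,
# then evaluate it on a "real" system whose parameters were never seen in
# training. All dynamics and names are illustrative assumptions.
import random

def simulate_reach(gain, mass, friction, steps=50):
    """Toy 1-D reaching task: drive position x toward target 1.0 with a
    proportional controller. Returns final distance to target (lower is better)."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        force = gain * (1.0 - x)              # proportional control toward target
        v += (force - friction * v) / mass    # toy physics update
        x += 0.1 * v
    return abs(1.0 - x)

def train_in_sim(trials=2000):
    """Random-search "training": score each candidate gain across many
    domain-randomized simulator instances and keep the most robust one."""
    best_gain, best_err = None, float("inf")
    for _ in range(trials):
        gain = random.uniform(0.1, 5.0)
        # Domain randomization: vary mass and friction every rollout so the
        # controller cannot overfit to a single simulator configuration.
        err = sum(
            simulate_reach(gain,
                           mass=random.uniform(0.5, 2.0),
                           friction=random.uniform(0.1, 1.0))
            for _ in range(20)
        ) / 20
        if err < best_err:
            best_gain, best_err = gain, err
    return best_gain

if __name__ == "__main__":
    gain = train_in_sim()
    # The "real" robot is one fixed configuration the simulator never used exactly.
    real_err = simulate_reach(gain, mass=1.3, friction=0.6)
    print(f"learned gain={gain:.2f}, error on the real system={real_err:.4f}")
```

The pattern is the same at industrial scale: the more widely the simulator’s parameters are randomized, the more likely the learned behavior is to survive the jump to physical hardware.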
The industrial metaverse, although still in its early stages of development, has the potential to reshape robot training. These new ways of upskilling robots allow for unprecedented flexibility.
Italian automation company EPF found that AI changed the way it approaches robot development. Franco Filippi, EPF’s CEO and chairman, says the company shifted its development strategy toward modular, flexible components that could be combined into complete solutions, allowing for greater coherence, adaptability, and consistency across different sectors.
Learn by doing
AI models grow more capable when trained on vast amounts of data: for example, large sets of labeled samples from which categories and classes are learned by trial and error. In robotics, however, this method would require hundreds of hours of robot time under human oversight to train even a simple task. Even the simplest instruction, such as “grab the bottle,” can have many different outcomes depending on the bottle’s shape, color, and environment. The training loop becomes monotonous and yields little progress for the amount of time invested.
The key to advancing robotics is building AI models that generalize, and can therefore successfully complete a given task regardless of the surroundings. Researchers from New York University, Meta, and Hello Robot introduced robot utility models, which achieve a 90% success rate performing basic tasks in unfamiliar environments without any additional training. By combining large language models with computer vision, the robot is continuously informed whether it has completed the task. This feedback loop accelerates learning by combining AI techniques and avoiding repetitive training cycles.
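That verify-and-retry loop can be sketched in a few lines of Python. This is a hedged illustration of the general technique, not the published system’s interfaces: the `execute`, `capture_image`, and `vlm_judge` stubs are assumptions standing in for the robot controller, the camera, and a vision-language model prompted to judge success.

```python
# Self-supervised retry loop in the spirit of the NYU/Meta/Hello Robot work:
# a vision-language model (VLM) acts as an automatic success judge, so each
# attempt labels itself with no human in the loop. All stubs are hypothetical.
from dataclasses import dataclass

@dataclass
class Attempt:
    task: str
    image_after: object
    succeeded: bool

def execute(policy, task):
    policy(task)                 # stub: would drive the physical robot

def capture_image():
    return "camera-frame"        # stub: would return an RGB frame

def vlm_judge(task, image) -> bool:
    # Stub for a vision-language model prompted with, for example:
    # "Did the robot complete the task: '{task}'? Answer yes or no."
    return True

def run_with_retries(policy, task, max_tries=3):
    """Attempt a task, let the VLM verify completion, retry on failure, and
    keep every attempt as automatically labeled training data."""
    attempts = []
    for _ in range(max_tries):
        execute(policy, task)
        image = capture_image()
        ok = vlm_judge(task, image)
        attempts.append(Attempt(task, image, ok))
        if ok:
            break
    return attempts

history = run_with_retries(lambda task: None, "grab the bottle")
print([a.succeeded for a in history])
```

Because the judge supplies the label, failed attempts become useful data instead of wasted robot time, which is what breaks the monotonous training loop described above.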
Robotics firms are now implementing advanced perceptual systems capable of generalizing and training across tasks and domains. For example, EPF worked with Siemens to integrate visual AI into its robotics, creating solutions that can adapt to changing product geometries or environmental conditions without requiring mechanical reconfiguration.
Learning through imagination
The scarcity of training data is one of the biggest constraints for AI, particularly in robotics. Digital twins and synthetic data have been used to train robots in a way that is significantly more cost-effective than previous approaches.
Siemens’ SIMATIC Robot Pick AI expands this vision of adaptability by transforming industrial robots, once limited to rigid and repetitive tasks, into versatile machines. Trained on synthetic data (virtual simulations of shapes and materials), the AI prepares robots for unpredictable tasks, such as picking unknown items out of chaotic bins with more than 98% accuracy. When mistakes are made, the system improves through real-world feedback. And this is not a fix for a single robot: software updates can be applied to entire fleets of robots, allowing them to work more flexibly and meet the growing demand for adaptive production.
Another robotics firm, ANYbotics, generates 3D models of industrial environments that act as digital twins of real environments. Operational data such as temperature, flow rates, and pressure is integrated to create virtual replicas that closely match the physical facilities. A power plant, for instance, can use its site plans to create simulations of the inspection tasks robots will perform in its facilities. This allows robots to be trained and deployed faster, with minimal setup on site.
Simulation also lets robots multiply for training at almost no cost. “In simulation, you can create thousands of virtual robots to practice tasks and optimize behavior,” says Peter Fankhauser, CEO and co-founder of ANYbotics.
Because the robots must be able to recognize their environment in any lighting or orientation, ANYbotics and its partner Digica developed a method for generating thousands of synthetic images. By eliminating the tedious work of collecting thousands of real images on the shop floor, it dramatically reduces the time needed to teach robots what they need to learn.
Siemens uses synthetic data to create simulated environments for training and validating AI models digitally before they are deployed into physical products. “We use synthetic data to create variations in object rotation, lighting, and other factors, so that the AI can adapt well to different conditions,” says Vincenzo De Paola. “We simulate everything, from lighting conditions and shadows to how the pieces are oriented. This allows the model to train under different scenarios, improving its ability to adapt and react accurately in the real world.”
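The variation De Paola describes can be pictured as a randomized scene-parameter generator. The sketch below is an assumption-laden illustration: the parameter names are invented, and the commented-out `render_scene` call stands in for a real renderer (synthetic-data pipelines are often built on tools such as Blender or NVIDIA Omniverse), which would also emit ground-truth labels for free.

```python
# Sketch of synthetic-image generation via randomized scene parameters.
# Parameter names are illustrative; `render_scene` is a hypothetical stand-in
# for an actual renderer.
import json
import random

def sample_scene_params():
    """One randomized configuration: object pose, lighting, camera, background."""
    return {
        "object_rotation_deg": [random.uniform(0, 360) for _ in range(3)],
        "light_intensity": random.uniform(0.2, 1.5),   # dim to harsh
        "light_azimuth_deg": random.uniform(0, 360),   # moves shadows around
        "camera_jitter_m": random.uniform(0.0, 0.03),  # imperfect mounting
        "background_id": random.randrange(50),         # varied clutter
    }

def generate_dataset(n_images):
    """Emit one parameter set per image; a real pipeline would call the
    renderer here and save the image plus exact poses and masks."""
    dataset = []
    for i in range(n_images):
        params = sample_scene_params()
        # image = render_scene(params)   # renderer call omitted in this sketch
        dataset.append({"id": i, "params": params})
    return dataset

print(json.dumps(generate_dataset(2), indent=2))
```

Each knob in `sample_scene_params` corresponds to a real-world condition the deployed model must tolerate, which is exactly why ground-truth labels from simulation are so valuable.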
Digital twins and synthetic datasets have proven to be powerful antidotes to data scarcity and expensive robot training. Robots that practice in artificial environments are prepared quickly and cheaply for the wide variety of visual possibilities and situations they may encounter in real life. De Paola says the team validates its models in a simulated environment before deploying them. “This approach allows us to identify any potential issues and refine the model at minimal cost and time,” he says.
The impact of this technology extends beyond initial training. If the robot’s real-world data is used to update the digital twin and analyze possible optimizations, it creates a dynamic improvement cycle that systematically sharpens the robot’s learning, capabilities, and performance over time.
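One way to picture that cycle is the hedged sketch below, with entirely hypothetical interfaces: fleet telemetry recalibrates the digital twin, the model is re-validated against the updated twin, and only a model that passes is pushed back out as a fleet-wide update.

```python
# Hypothetical closed-loop twin update: calibrate the twin from field data,
# re-validate the model in the updated twin, then gate deployment on the
# result. Every class and threshold here is an illustrative assumption.

class DigitalTwin:
    def __init__(self):
        self.friction = 0.5  # example physical parameter the twin tracks

    def calibrate(self, telemetry):
        # Nudge twin parameters toward what the real robots actually measured.
        observed = telemetry["measured_friction"]
        self.friction += 0.2 * (observed - self.friction)

    def validate(self, model) -> bool:
        # Re-run the model against the updated twin before any rollout.
        return model.score(self.friction) > 0.9

class Model:
    def score(self, friction):
        return 0.95  # stub: simulated task success rate in the twin

def improvement_cycle(twin, model, fleet_telemetry):
    for telemetry in fleet_telemetry:
        twin.calibrate(telemetry)
    if twin.validate(model):
        print("deploy updated model to the fleet")
    else:
        print("hold deployment; retrain in the twin")

improvement_cycle(DigitalTwin(), Model(), [{"measured_friction": 0.62}])
```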
The well-educated robot at work
AI and simulation will be the driving force behind a new era of robot training, and organizations will reap the rewards. Digital twins enable companies to deploy advanced robotics in a fraction of the time, while AI-powered vision systems make it easier to adapt product lines to changing market needs.
New ways of training robots transform investment in the field while reducing risk. “It is a game changer,” says De Paola. “Our clients can offer AI-powered robots as services, backed by data and validated models. This gives them confidence in presenting their solutions to clients, knowing that the AI was extensively tested in simulated environments before going live.”
Filippi imagines this flexibility enabling the robots of today to make the products of tomorrow. “In one or two years, the need will be to process new products that we do not know about today,” he says. “With digital twins and the new data environment, it is possible to design a machine today for products that we do not know yet.” Fankhauser takes the idea one step further. “I expect that our robots will become so intelligent that they will be able to independently generate their missions based on knowledge accumulated through digital twins,” he says. “Today, a human guides the robot at first, but in the near future, robots will be able to identify tasks on their own.”
This content was produced by Insights, the custom content division of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.