Digital twins are simulations that imitate an agent, object, situation or process in the world. They are a copy or representation of those things, built with software, and they are frequently used to monitor real systems, provide decision support to human experts, and explore possible action spaces to adjust those systems.
Some promoters of digital twins claim that they are a perfect digital replica of a physical thing. This is not the case, nor is it possible. A perfect replica of an atom is an atom; that is, perfect replicas of physical things are themselves physical. Digital twins, by their nature, are defined by software, and software engineers have to make choices about how to simulate those physical things, choices which are expressed in code.
(That kind of nonsensical hype suggests that “digital twin” has become a buzzword like cognitive computing, which is used by consultants targeting managers who do not understand how technology works.)
In fact, people have been simulating things with computers ever since computers were created. (For example, ENIAC was used to simulate the behavior of projectiles in flight; i.e. ballistics.) Anything you can perceive or imagine can probably be simulated, but simulations are especially useful as a tool for thought, to model complex systems that need to be adjusted, because they allow us to express and update many and varied relationships among variables. That is particularly useful when facing an expensive or risky decision, like building a new factory, or updating the software that helps operate equipment.
Digital twins can be made to represent buildings, cars, cities, supply chains, factories, heavy equipment, call centers, epidemics, rockets, you name it. In other words, you can simulate phenomena at different levels of resolution, or “abstraction” as computer scientists like to say.
A simulation of a city might not allow you to model and adjust individual buildings (e.g. Cities: Skylines), making it more useful for urban planners than real estate developers. And a simulation of a building might not allow you to model the flooring of each room, making it more useful for real estate developers than interior designers.
Simulation modelers choose the level of abstraction at which they want to operate. Unlike in reality, it may be impossible to zoom in or out to different scales of resolution in an artificial environment, since that environment is fundamentally a bunch of concepts that humans have used to encode their understanding of a slice of the world. So what you’re typically doing with a simulation is modeling a system at the resolution that facilitates your action or understanding. This is a crucial point: digital twins are our attempts to model the world.
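To make the abstraction point concrete, here is a minimal sketch of the same system modeled at two resolutions. Everything in it is hypothetical (a toy warehouse, invented quantities): the coarse twin tracks one aggregate number, while the finer twin tracks per-item state, and each answers different questions.

```python
# Toy sketch: one warehouse, two levels of abstraction.
# All names and numbers here are hypothetical illustrations.

# Coarse model: total inventory is a single number.
# It can answer "do we have enough stock overall?" but nothing per item.
coarse_inventory = 1000

def coarse_step(inventory, demand):
    """Advance one day at aggregate resolution."""
    return max(inventory - demand, 0)

# Finer model: inventory per SKU.
# It can answer "which items will stock out?" at the cost of more state.
fine_inventory = {"bolts": 400, "nuts": 350, "washers": 250}

def fine_step(inventory, demand):
    """Advance one day at per-SKU resolution."""
    return {sku: max(qty - demand.get(sku, 0), 0)
            for sku, qty in inventory.items()}

coarse_after = coarse_step(coarse_inventory, demand=300)
fine_after = fine_step(fine_inventory, {"bolts": 300})
# The coarse twin reports plenty of stock remaining; the fine twin
# reveals that "bolts" is nearly depleted while other SKUs are untouched.
```

Neither model is "the" warehouse; each encodes the resolution its user needs, which is the choice simulation modelers are making.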
Digital twins, like simulations, face a tension: they should be simple enough for people to understand, and complex enough to approximate reality in a useful way. On the one hand, they are constrained by the limits of the human brain, and on the other, they face the challenge of gaining our confidence as accurate predictive models for the things they imitate. So the management of complexity is one crucial aspect of digital twins, particularly because, as they become more complex, explaining their predictions may become more difficult.
Making complex simulations, as you might imagine, is difficult. It requires effort, deep domain knowledge (rare talent), good feedback mechanisms with the real situation in question (which are typically limited to companies with existing operations), and great software engineering.
In addition to model complexity, another pitfall is real-world data. It is often non-stationary and messy, so signal may be low, or the ways you find signal might change over time, which means your model will need to be updated, and the quality of its predictions may vary.
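One common way to notice that non-stationarity, sketched below under simple assumptions: compare a recent window of sensor readings against a reference window and flag a shift when the means diverge by more than a few standard deviations. The threshold, window sizes, and data are all hypothetical; real drift detection is usually more sophisticated.

```python
import statistics

def drift_score(reference, recent):
    """Crude drift signal: the gap between window means, measured in
    units of the reference window's standard deviation."""
    ref_mean = statistics.mean(reference)
    ref_sd = statistics.stdev(reference)
    return abs(statistics.mean(recent) - ref_mean) / ref_sd

# Hypothetical sensor readings: the process mean shifts upward over time.
reference = [10.1, 9.8, 10.0, 10.2, 9.9, 10.0]
recent = [11.5, 11.8, 11.6, 11.9, 11.4, 11.7]

if drift_score(reference, recent) > 3.0:
    print("distribution shift detected; retrain or recalibrate the model")
```

A check like this does not fix the model; it just tells you when the world the twin was fit to has moved, so predictions should be treated with suspicion until the model is updated.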
To make the “digital twin” useful you are probably integrating with large software systems not entirely in your control, which may be hard to reason about (think ERP systems like SAP).
Digital twins do exist in deployment. What differentiates them from, say, any old machine-learning model you might use for predictions is that a “digital twin” is probably used for a more complex task than classification alone: it is probably used to direct the actions of a system, which implies a larger solution. In operations, that is often called a model-driven decision support system, and in fact, that is one application of simulations and digital twins.
So one thing you see is simulations that embed machine-learning models and predict what actions to take in a given state: Think AlphaGo applied to business scenarios.
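Here is a minimal sketch of that pattern, under stated assumptions: a toy simulation of a machine that wears out, and tabular Q-learning (a simpler relative of the deep reinforcement learning mentioned below) learning when to run it and when to schedule maintenance. The machine, rewards, and parameters are all invented for illustration.

```python
import random

# Toy "twin": a machine that accumulates wear each step it runs.
# The agent learns when to pay for maintenance versus keep producing.
ACTIONS = ["run", "maintain"]
MAX_WEAR = 5  # wear levels 0..5; the machine fails at MAX_WEAR

def step(wear, action):
    """One tick of the toy simulation: returns (next_wear, reward)."""
    if action == "maintain":
        return 0, -2.0            # maintenance cost, wear reset
    if wear >= MAX_WEAR - 1:
        return MAX_WEAR, -10.0    # breakdown penalty
    return wear + 1, 1.0          # production reward, more wear

# Tabular Q-learning with an epsilon-greedy policy.
q = {(s, a): 0.0 for s in range(MAX_WEAR + 1) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1
random.seed(0)

for episode in range(2000):
    wear = 0
    for _ in range(20):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(wear, a)])
        nxt, reward = step(wear, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(wear, action)] += alpha * (reward + gamma * best_next - q[(wear, action)])
        wear = 0 if nxt == MAX_WEAR else nxt  # replace the machine on failure

# The greedy policy per wear level: run while wear is low,
# maintain before the machine breaks down.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(MAX_WEAR)}
```

Swap the toy `step` function for a full simulation and the lookup table for a neural network, and you have the shape of the AlphaGo-style systems described above: the simulation supplies cheap experience, and the learned model maps states to actions.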
The digital twin idea, insofar as it includes large parametric models that depend on algorithms like deep reinforcement learning, matters now, because those machine-learning models are able to find structure in complexity, and make ever more accurate predictions about what to do.
That is, we’re able to identify optimal actions in more complex situations, with techniques more sophisticated than expert systems. That is particularly true when applying deep reinforcement learning to simulations, to construct digital twins, which is what Pathmind does.
In addition, improvements in network bandwidth, cloud compute and the ubiquity of sensors mean that greater amounts of more granular data can be used to feed the simulation, to assemble the digital twin. As we gain in resolution, we move away from crude and cartoonish representations toward models that are ever closer to the real. That point of overlap between simulations and reality, however far in the future it may be, is called the “simularity.”