Based in part on a conversation with Swim.ai. I have yet to look at that service, but plan to. It seems such an architecture would have to be very adaptable: linkable to real-time data and to samples of stored operational data. Useful thoughts here.
What’s the right computing architecture for digital twins? by 7wdata, October 17, 2020
Last week, I found myself having a conversation that covered edge computing, digital twins, and the concept of absolute truth. It started out as a discussion with Simon Crosby, the CTO of Swim.ai, about that company’s latest product, which is designed to bring Swim’s edge analytics software to the enterprise and industrial world. But it quickly broadened into a conversation about the way we think about data storage and compute when we want to act on real-time information and insights.
Basically, with IoT we’re trying to get a continuous and current view of machines, traffic, environmental conditions, or whatever else so we can use that information to take some sort of action. That action might be predicting when a machine will fail, or routing traffic more efficiently, but for many use cases, the time between gathering the data, offering an insight, and then taking action will be short.
And by short, I mean the data might need to be analyzed before a traffic light changes or a person walks more than a few feet away from a shelf in a grocery store. Figuring out how to analyze incoming data and then build a model from it, such as a model of an intersection or of shoppers, that a computer can act on is what led to our discussion of truth. Crosby’s point was that truth changes every second, so if we’re trying to build a digital twin that represents the truth of a machine or a model, it needs to constantly change. And that has a lot of implications for how we think about computing architectures for digital twins.
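To make that idea concrete, here is a minimal sketch of a continuously updated digital twin. The names and structure are my own illustration, not Swim.ai’s actual API: the point is simply that the twin mutates on every incoming event, so its "truth" is only as old as the latest message.

```python
from dataclasses import dataclass, field
import time

# Hypothetical sketch of a continuously updated digital twin of an
# intersection. Illustrative only; not Swim.ai's API or data model.

@dataclass
class IntersectionTwin:
    """Stateful model of one intersection, updated on every event."""
    intersection_id: str
    vehicle_count: int = 0
    last_updated: float = field(default_factory=time.time)

    def on_event(self, event: dict) -> None:
        # Each sensor event mutates the twin immediately, so the
        # model never waits on a batch job to reflect reality.
        if event["type"] == "vehicle_enter":
            self.vehicle_count += 1
        elif event["type"] == "vehicle_exit":
            self.vehicle_count = max(0, self.vehicle_count - 1)
        self.last_updated = time.time()

    def should_extend_green(self) -> bool:
        # Decisions read the current state, not a stale report.
        return self.vehicle_count > 10
```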
For example, Swim.ai is working with a U.S. telecommunications company to create a digital twin of the carrier’s network in real time and then optimize that network based on the ongoing movements of people and any applications they’re running. The carrier is tracking 150 million cellular devices, which together generate 4 petabytes of data each day. With 5G on the horizon and an increasing number of elements to track between devices and base stations, the carrier expects that the amount of data it will need to analyze will reach 20 petabytes.
Prior to adopting Swim, the carrier moved that data to a 400-node Hadoop cluster and analyzed it in batches, a process that took roughly six hours and required a lot of servers. After switching to Swim’s software, the carrier can track those 150 million devices and base stations and start taking actions on its network in just 100 milliseconds. ...
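A toy contrast (my own sketch, not the carrier’s pipeline) helps show why the two approaches differ by orders of magnitude in latency: a batch job cannot answer until the whole window of data has been collected and processed, while a streaming approach updates a per-device twin as each event arrives.

```python
from collections import defaultdict

# Illustrative contrast between batch and streaming analysis.
# Toy model only; assumes events are dicts with a "device_id" key.

def batch_style(events: list[dict]) -> dict[str, int]:
    """Batch: collect everything first, then analyze. Answers arrive
    only after the whole window (potentially hours of data) is in."""
    counts: dict[str, int] = defaultdict(int)
    for event in events:
        counts[event["device_id"]] += 1
    return counts

def stream_style(event: dict, twins: dict[str, int]) -> int:
    """Streaming: update the per-device twin as each event arrives,
    so an actionable value exists within milliseconds of the event."""
    twins[event["device_id"]] += 1
    return twins[event["device_id"]]
```

The design difference is where the state lives: the batch job recomputes state from scratch on every run, while the streaming version keeps it resident and amortizes the work across events.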