Recently, experts from VoltDB presented a webinar, “The Hidden Inflection Point in 5G: When the Changing Definition of Real-Time Breaks Your Existing Tech Stack”. In this webinar, Dheeraj Remella, VoltDB’s Chief Product Officer, covered why you need to rethink your definition of real-time to match a changing reality, the two critical factors that must combine to change that definition, and how this relates to 5G-driven enterprises being able to monetize their data. Check out our recap of the most popular attendee questions and answers from the webinar below.
Q: Isn’t edge computing on the devices? Could you elaborate on what you mean by the new definition of real-time in this context?
Dheeraj Remella: One of the things I see a lot is that, historically, edge computing has meant computing that happens all the way out at the edge, on the device itself. That is changing: intelligence at the near edge is becoming mainstream. So what do I mean by intelligence at the near edge? The idea is that if the intelligence lives inside a device, its context is extremely small. If you want intelligence that spans contexts, you need to sit outside the devices and collaborate across them. Many times that means collaborating across various subsystems within a single plant. People who have been in the telecom and service provider industries have probably seen this pattern before.
There used to be what was called ATCA (Advanced Telecommunications Computing Architecture), where you had appliances built for specific purposes. The problem these technologies highlighted was that if a failure happens, or an appliance goes down, you need a very long lead time to bring in replacement hardware. On top of that, scaling at the agility levels expected today is simply not possible with that model. What happened is that the intelligence and the application capabilities moved out of these appliances and became software-enabled. You see that with software-defined networking and network function virtualization. All of these are avatars of what is now happening in edge computing. The edge devices are really smart, but they are very intent-driven, purpose-built devices. Because this intelligence needs to sit outside the devices, the devices are becoming plain vanilla telemetry sources. The telemetry data flows to an intelligence hub, which is where something like multi-access edge computing plays a role: you collect information from a variety of sources within a single locality, a micro zone so to speak, like a plant, a power station, or a windmill, and you are able to make a more comprehensive decision.
That is where intelligence at the edge happens, in 10 milliseconds or less. Edge computing has moved off the devices to the near edge, so you can have better intelligence, better decisions, better actions, and better outcomes.
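The cross-device idea above can be sketched in a few lines of Python. This is a hypothetical illustration, not VoltDB code: it shows a near-edge hub combining telemetry from several devices in one micro zone (say, turbines in a wind farm) and deciding within a roughly 10 ms budget. All names, fields, and thresholds are assumptions for the sketch.

```python
import time

# Illustrative latency budget for a near-edge decision, per the webinar's
# "10 milliseconds or less" framing.
LATENCY_BUDGET_MS = 10.0

def decide(zone_telemetry):
    """Cross-device decision: shed load if the zone as a whole runs hot,
    even when no single device looks anomalous on its own."""
    start = time.perf_counter()
    avg_temp = sum(t["temp_c"] for t in zone_telemetry) / len(zone_telemetry)
    action = "shed_load" if avg_temp > 80.0 else "ok"
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return action, elapsed_ms <= LATENCY_BUDGET_MS

# Three devices in one micro zone; each is individually near its limit,
# but only the zone-wide view reveals the problem.
zone = [{"device": "t1", "temp_c": 79.0},
        {"device": "t2", "temp_c": 84.0},
        {"device": "t3", "temp_c": 81.0}]
action, within_budget = decide(zone)  # average is ~81.3 C -> "shed_load"
```

The point of the sketch is the shape of the computation, not its scale: the decision is made outside any single device, over data from all of them, inside a hard latency budget.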
Q: How is what you’re describing different from data thinning that is already in play today?
Dheeraj Remella: If you see the edge as something used only for filtering or aggregation, so you can send less data, in a more digested form, to the cloud, then you are right that there would be no difference. But those circumstances are evolving. For instance, we are working with some tier-one CSPs in the US where multi-access edge computing is moving beyond simple data thinning, or answering localized questions, toward incorporating intelligent decisions. This spans anything from eCommerce and brick-and-mortar retail to a manufacturing plant or automated testing of manufacturing equipment. In all of these cases, multi-access edge computing is evolving to the next level: it is no longer just an intelligent data-reduction tool, but almost a combination of what I would think of as a SCADA system, plus data thinning, plus decisions for real-time control loops. All of these things are coming together in next-gen MEC. So that is the way I look at it. Data thinning is an important part of it, but it is a part, not the whole. More is expected near the edge than a simple reduction of data set size.
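The contrast above can be made concrete with a small sketch: plain data thinning forwards fewer, aggregated data points to the cloud, while next-gen near-edge computing does that thinning and closes a real-time control loop in the same pass. Function names, the window size, and the threshold are illustrative assumptions, not VoltDB specifics.

```python
def thin(readings, window=5):
    """Plain data thinning: forward one average per window of raw points,
    so the cloud receives a digested version of the stream."""
    return [sum(readings[i:i + window]) / window
            for i in range(0, len(readings), window)]

def thin_and_decide(readings, limit=100.0, window=5):
    """Next-gen near-edge processing: thin the data AND act on it locally,
    emitting a control action per window instead of waiting on the cloud."""
    summary = thin(readings, window)
    actions = ["throttle" if v > limit else "pass" for v in summary]
    return summary, actions

# Ten raw sensor readings become two summary points plus two local actions.
raw = [90, 95, 102, 98, 101, 110, 112, 108, 111, 109]
summary, actions = thin_and_decide(raw)
```

The design choice this illustrates: the aggregation the edge already does for thinning is reused, in place, to drive a decision, rather than being shipped off for someone else to decide later.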
Q: How do you monetize better than cloud?
Dheeraj Remella: It goes back to one of the slides where I mentioned what we see with our customers. Going to the cloud is already considered near real-time, not precise real-time. What we have understood is that there is massive value hidden in moving from near real-time to true, precise real-time with accurate decisions. To reiterate some of the examples from the presentation: in fraud management, by moving to proactive, real-time prevention, we are able to monetize by stopping 83% of fraudulent transactions from completing. Or think of an IoT network with a bot trying to intrude on it: you need to run these decisions in less than 10 milliseconds to reach detection before the threat has manifested and infiltrated your systems. One of our customers in Japan has been able to prevent 100% of DDoS attacks by making these decisions in 10 milliseconds. That is the monetization difference between processing near the edge and processing in the cloud, away from the event source.
Q: Does it seem like edge computing can monetize better than cloud from what you’re seeing?
Dheeraj Remella: Yes. The need for low-latency decisions is driving more and more industries to ask: what does my edge look like? And what can I do there to differentiate my organization from my competitors and be the leader in my industry?
What are your thoughts on moving to the edge? How about on monetizing the cloud? Share your thoughts with us in the comment area below.
Or, if you missed the webinar and want to catch it on your own time, view it on-demand here.