
Edge Executive Interview – Rob High, IBM

In the lead-up to Edge Computing World, we're taking some time to speak to some of our Keynote Speakers. Today we're talking to Rob High, Vice President and Chief Technology Officer, Edge Computing at IBM.


Edge Computing World: IBM has put the cloud at the center of its strategy; so where does Edge Computing fit?

Rob High: Two motions are going on in the marketplace today. One is driven by the hyperscalers – the hyper-scale cloud providers – such as Amazon, Microsoft, Google, and IBM. I classify it as a cloud-out movement. It's largely driven by the idea that if you're a client and you already have an account on one of these public cloud environments, and you've been building applications, hosting them there, and building up a large data repository in each of these clouds, then the cloud-out strategy is beneficial to you – because it allows you to retain that investment, even as you bring the power of that cloud out further into the edge.

Of course, those cloud and edge environments will be managed from the cloud. But there will also be some natural physical limits to the number of instances that can be created before you begin to run into restrictions or limitations in scaling. So, this is a really powerful approach when you started out with a commitment to a particular cloud provider and now want to get further return on that investment. And that's relevant to some percentage of our customers from an IBM standpoint. But the majority of IBM customers – and enterprise customers generally – have a multi-cloud or hybrid cloud footprint. They don't really commit themselves to a single cloud provider; they have two or three different cloud provider accounts, and for different applications they've got data that may be scattered across those accounts. The majority of their work is done in the IT data center – call that a private cloud if you want.

So, what they want for their edge strategy is something that is much more neutral. And that's fairly consistent with IBM's stance in the hybrid cloud space. We build hybrid cloud as a multi-cloud capability that's agnostic to all the hyperscalers, including IBM Cloud. It also embraces the presence that most of our enterprise customers have in their own data centers, which further allows enterprises to focus their attention on the business problems they're trying to solve, as opposed to any IT commitments they've made previously. It changes the game – and the nature of the business cases – and allows them to concentrate on what essentially matters. That's how we see it playing out.

There will be some intersection between what we call the "cloud out" and the "edge in" approaches. I think most of that intersection will surface primarily in the network edge space, because that's about as far out as cloud out can reasonably reach before running into scaling issues. It's also about as far out in the spectrum of edge locations as most enterprises want to exploit while still feeling that they're getting the lower-latency and higher-bandwidth benefits that come from moving workloads away from the central location of their IT data center.

Edge Computing World: Being agnostic seems central to this offer; and if you’re agnostic, it makes it even more important to work with the ecosystem. How do you see the ecosystem in edge computing? And, what further developments are needed to take full advantage of the opportunity?

Rob High: Our observation is that the edge marketplace is somewhat segmented; it's not a fully defined marketplace yet. Across all the places where edge computing has utility, and all the vendors participating in enabling and supporting it, there's been a high degree of chaos – a lot of disparate points of view and independent value propositions being introduced through start-up activity.

However, that’s beginning to change. I think the work that’s being done at the Linux Foundation around the edge, the sub-projects there, including secure device, onboarding the work that we’ve been doing around open horizon pledge, are all beginning to form the basis of an industry standard to manage and deliver value in the edge.

Secondly, I think that also becomes a rallying point for communities to form, through which we can build out the ecosystem and commercialize the value of that community. Like all vibrant ecosystems, commercialization is not about one base vendor having complete control. It's a collection of capabilities that multiple suppliers are able to provide. When multiple suppliers get the chance to participate, they can add their value, connect, and build off one another, and the ecosystem becomes healthier from a commercialization standpoint.

That kind of crystallization is necessary to launch this edge marketplace into its next phase of growth, and it will accelerate that growth. I think this marketplace represents something unique in our industry when I compare it back to the last 60, 70, 80 years of the modern IT industry. We have never had an occasion where we're talking about literally hundreds of billions of devices!

Even the mobile computing marketplace was a few billion devices – less than 10 billion, perhaps. And the earlier examples of distributed computing and client-server that we've seen in the past were measured in millions, or tens of millions. But when you're dealing with 100 billion potential pieces of equipment in the marketplace globally, it implies that an individual enterprise will have to manage hundreds of thousands, or perhaps millions, of pieces of equipment under its purview.

And when we’re talking about software-defined equipment, getting it in the right software at the right place and at the right time makes the difference between it being useful or a nightmare. And if we don’t a standardization process, that nightmare can become a horrendous thing. Let’s say you’re a factory manager and you’ve bought equipment from 10-20 different suppliers. If each of those equipment is powered by software, it’s important to know where that software has to be managed as each of those suppliers has introduced its own management technique. As a factory manager, you have to manage those 10-20 different management systems. In such cases, the cost of management can quickly exceed the benefit that you’re getting from having that computer curve closer to where the data is generated and where the action is being taken. But the ecosystem is absolutely essential for the success of this marketplace, which we feel strongly about fostering in our approach to the edge.

Edge Computing World: So there’s plenty of opportunities for the companies to add value and define space in the market?

Rob High: Yes! An example of that is our collaboration with Intel around their Open Retail Initiative. Part of our mission there is to bring forward an open software framework for retail. But even prior to that crystallizing, we've already seen that other equipment manufacturers that are part of the Intel ecosystem are quite interested in getting onto some of the existing open standards, including Open Horizon as an example.

Edge Computing World: Retail has of course been under some stress over the last six to eight months with the COVID situation, and there have been some opportunities as well as some big threats. So, what kind of use cases do you see in the retail market, and then elsewhere?

Rob High: Well, COVID really highlighted the need for a lot of these retail companies to evolve their digital enterprise initiatives. They don't necessarily need to go fully online – on both the supplier and distributor sides – but, at the very minimum, the whole idea of buying online and picking up at the store, or picking up at the curb, has driven that movement. You almost can't survive in the business with the threat of the other digital retailers coming over the top of you. You can't operate as a restaurant unless you can support curbside pickup.

This highlights one of the major gaps that most retailers have – they don't even have an online ordering system. So in some ways, survivability is dictating demand for a digital transformation, at least getting to buy online and pick up at the curb.

That situation has certainly improved, and we've seen a lot of interest from retailers as they try to do that. And again, if you're in something other than a restaurant or another form of hospitality – retail where you're really reselling goods – inventory management can become a big issue for you. Because if somebody is buying online from you and coming down in 20 minutes to pick up their order, you'd better have that stock in hand, so you have to get closer to real-time inventory tracking.
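To make that point concrete, here is a minimal sketch (not from the interview) of what near-real-time stock reservation could look like behind a buy-online, pick-up-at-the-curb flow. The SKU names, quantities, and in-memory store are hypothetical stand-ins for a real inventory service:

```python
import threading

class InventoryError(Exception):
    """Raised when an order cannot be covered by on-hand stock."""

class StoreInventory:
    """Tracks on-hand stock per SKU and reserves it atomically for pickup orders."""

    def __init__(self, on_hand):
        self._on_hand = dict(on_hand)   # SKU -> units physically in the store
        self._lock = threading.Lock()   # one store, many concurrent online orders

    def reserve(self, order):
        """Reserve every line item or none, so curbside pickup never over-promises."""
        with self._lock:
            for sku, qty in order.items():
                if self._on_hand.get(sku, 0) < qty:
                    raise InventoryError(f"insufficient stock for {sku}")
            for sku, qty in order.items():
                self._on_hand[sku] -= qty
            return dict(self._on_hand)

# Hypothetical usage: an online order placed 20 minutes before pickup.
inventory = StoreInventory({"SKU-123": 4, "SKU-456": 1})
remaining = inventory.reserve({"SKU-123": 2})
print(remaining)  # {'SKU-123': 2, 'SKU-456': 1}
```

The design choice illustrated is simply that the stock check and the decrement happen in one atomic step, which is the property a store needs once online orders and in-store sales draw from the same shelf.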

So, I think that there’s lots of opportunity and transformation occurring around that. But what’s interesting is that even when you do that, the retailers by putting that sort of minimal digital infrastructure in place are finding that down payment and minimum bar are something that they can leverage for many other things as well. So, even when you start getting people coming back into the store, give them a digital foundation to improve the customer experience.

For instance, if you have your digital ordering system and I, as a customer, am in the store walking around, I can use that app to navigate the store or even to find cross-selling opportunities and offers. That pairing just accentuates the potential for increasing the shopping cart and the overall purchase experience. At the same time, if I want – say I'm in a grocery store – I can look at a label for nutritional information.

But if you extend that with something more digital and dynamic – give me recipe suggestions, sustainability and sourcing information, or reviews and recommendations – you can create a concierge-style interface that becomes incredibly powerful. These are the initial steps most retailers have had to go through just to survive the COVID experience, and I think they're a really great launching point for better experiences and improvements in the retail environment going forward.

Edge Computing World: What about other major use case areas? What about the industrial sector?

Rob: It’s a big thing right now. We’ve known about the OT (Operating Technology) versus IT (Information Technology) debates, that makes it seem much more contentious than it really is. But those debates have been going on for almost two decades now. I think what’s happening is with the advent of higher utility of AI and machine learning algorithms – whether that is machine learning, for doing production optimization, or for improvements and worker safety – some of the same technologies that we might use to protect the safety of customers in a store or place in a warehouse, are now being used on the manufacturing floor has highlighted the fact that you need it. The reason is simple. 98% of the data science community does not know the proprietary technologies of OT, that they need to be able to build algorithms. They’re born and bred on Python and on AR, and TensorFlow and IT technologies.

If you are a manufacturer and you need analytics for your manufacturing processes, you're going to have IT supporting that wave, and even the OT vendors will want to make it available within their OT solutions. They essentially have to introduce it under the covers to gain access to the data science skills that are necessary to create these kinds of algorithms.

For me, it’s no longer a war. It’s a process wherein we focus on how we make use of this technology in the context of that technology, neither integrated on the outside or integrated on the inside. More active collaboration between IT and OT, realisation that the data is at the manufacturing level or at the OT vendor level. In any of these production scenarios, whether that’s retail, manufacturing or eating, you have the fundamental problem of how you make sure you get the right software in the right place at the right time. So again, the management problem surfaces and becomes a critical enabler to have production scale.

Edge Computing World: What’s changed for the developer in the edge computing environment? What are the opportunities for developers and how should they evolve their skill set to take advantage of these opportunities?

Rob: The good news is that the vast majority of the industry has embraced cloud native development practices. The dominant packaging technology for workloads in the edge industry is containers – Docker containers, or OCI-compliant containers. Chances are that if someone has been creating containers for the cloud, those cloud native development practices can now be applied immediately to the creation of edge workloads as well.

You also have to be a little bit more conscious of the fact that if you're doing large microservice interactions in the cloud world – say, a mobile application using microservices that are hosted in the cloud – then in the edge world you're dealing with multiple tiers onto which your components could be deployed. There can be multiple hops of interaction between different tiers. You need to be conscious of the latency implications that begin to get introduced when you're dealing with microservice interactions over a network that may or may not be reliable. And you need to be conscious of the fact that you're operating in a world that becomes increasingly resource constrained the further out you go.
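As a rough illustration of that design mindset (my sketch, not IBM's code), an edge component might call a cloud-hosted microservice with an explicit timeout and fall back to a local result when the network is slow or unavailable. The URL, timeout budget, and fallback heuristic below are hypothetical:

```python
import requests  # assumes the 'requests' package is installed on the edge device

CLOUD_SCORING_URL = "https://example.com/api/score"  # hypothetical cloud microservice
TIMEOUT_SECONDS = 0.5  # budget for one network hop; tune for your latency target

def local_heuristic(reading: dict) -> float:
    """Deliberately simple stand-in for an on-device model."""
    return 1.0 if reading.get("temperature", 0) > 80 else 0.0

def score_reading(reading: dict) -> dict:
    """Prefer the richer cloud model, but never block the edge loop on a flaky network."""
    try:
        resp = requests.post(CLOUD_SCORING_URL, json=reading, timeout=TIMEOUT_SECONDS)
        resp.raise_for_status()
        return resp.json()
    except requests.RequestException:
        # Network down, slow, or service error: degrade to the local heuristic
        # so the device keeps producing decisions at the edge.
        return {"score": local_heuristic(reading), "source": "edge-fallback"}

print(score_reading({"temperature": 85}))
```

The point is not the particular call, but that timeouts and fallbacks become first-class design decisions once a tier of the application sits on the other side of an unreliable link.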

If you’re on an edge device, you might have one core and 256 megabytes of memory and that’s the resource that you have to deal with. The good news is that there’s a lot of algorithms that will fit into that kind of resource constraint. For instance, I can run Linux, I can run Docker runtime, I can run my management agent, and still have 128 megabytes of memory left to run my algorithms and that’s enough in many cases.

There are other cases where you have to figure out how to compress your model so it fits within that kind of resource constraint. And of course, the diversity out at the edge is substantial, whereas in the cloud world everything is an x86 compute cluster of a certain size. Cloud computing is about trying to create the illusion of elastic scalability over virtually infinite resources, and to do that you need a high degree of homogeneity: you get x86, 32-bit or 64-bit, one core up to 16 or 32 cores, 256 MB up to 64 GB of memory, a GB of storage up to several TB of storage. At the edge, every device, every piece of equipment is different, and you're not trying to do elastic scalability across all those different resources. You're not pooling them into one great big amorphous cloud with a virtualized infrastructure view; each device has a purpose. If you're going to place software at the edge, you're placing it there to reduce latency, increase bandwidth, and things like that. Therefore it matters where that workload runs, and the affinities between the edge device and the software are important. Those aren't necessarily programming differences, but they're certainly things you have to design for in your application, and then exercise things like Open Horizon or IBM Edge Application Manager to make sure your component gets placed correctly on the right device, on the right equipment, at the right time.
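The mechanics differ by tool, but the underlying idea of policy-based placement can be sketched roughly as follows. This is a generic Python illustration, not the actual Open Horizon or IBM Edge Application Manager schema: each node advertises properties, each workload declares constraints, and the manager only places the workload on nodes that satisfy them.

```python
# Hypothetical node properties, as an agent on each device might advertise them.
NODES = {
    "camera-gateway-01": {"arch": "arm64", "memory_mb": 256,  "gpu": False, "site": "store-12"},
    "pos-server-03":     {"arch": "amd64", "memory_mb": 8192, "gpu": True,  "site": "store-12"},
}

# Hypothetical workload constraints: the device/software affinities described above.
WORKLOAD = {
    "name": "shelf-vision",
    "constraints": {"arch": "arm64", "gpu": False},
    "min_memory_mb": 200,
}

def eligible_nodes(workload, nodes):
    """Return the nodes whose advertised properties satisfy the workload's constraints."""
    matches = []
    for name, props in nodes.items():
        if props["memory_mb"] < workload["min_memory_mb"]:
            continue
        if all(props.get(key) == value for key, value in workload["constraints"].items()):
            matches.append(name)
    return matches

print(eligible_nodes(WORKLOAD, NODES))  # ['camera-gateway-01']
```

In a real deployment the matching is done by the management system rather than by the application, which is exactly why the application team only needs to express the affinities, not implement the placement.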

Edge Computing World: Is this something that every developer needs to understand for their applications immediately, or in the next couple of years? How can we make people conscious of it on an immediate basis?

Rob: Understanding cloud native development practices is a prerequisite, but you already have that requirement in the cloud world. And I think, fortunately for our industry, you'd have to be hiding under a rock if you haven't learned about cloud native development practices – that's fairly ubiquitous. But you also have to add to that a little bit of awareness of the structure of your code, to understand that there are some limitations you have to build for, and then rely on the management systems to make the right choices.

Thanks Rob – looking forward to your keynote to learn more!
