There is no edge without the hybrid cloud

This TelecomTV Perspectives on Red Hat is, like the rest of the series, a way to understand a major or up-and-coming company: where it’s come from, how it sees the technology and telecoms business landscape, what specific areas it’s currently tackling, and why it thinks its approach is the one to watch.

It’s a fairly substantial story, so we’ve disaggregated it into digestible chunks.

  • Red Hat’s Darrell Jordan-Smith talks telco cloud with TelecomTV’s Guy Daniels. Find out why Red Hat is so passionate about open source and its potential for edge computing; why a focus on network AI is a smart thing to plan for; and why lots of useful innovation is going to end up on the network edge, with 5G adding the missing ingredient – all on the way to the next-generation, cloud-native telco network.

  • We bring into focus Red Hat’s view of the technology landscape and explain why a perfect storm of powerful ideas is being marshalled to transform (in the real sense of that word) the entire field.

  • We take a particularly close look at the ‘edge’ – one of those powerful ideas – and why, in league with 5G wireless and cloud-native networks, it will play a crucial role in the transformation.

  • We profile what differentiates Red Hat from others in the field and why it believes it has a compelling technology approach based on Linux, open source software and open hybrid cloud to build the modern 5G network for IoT, autonomous vehicles, tele-health and more, plus the modern OSS and BSS it needs.

  • Last but not least, we look at why open source and continuous integration/continuous delivery (CI/CD) can sometimes represent a difficult step for telcos, but one that can be overcome by focusing on the two Ps – people and processes.

Red Hat claims it’s the world’s leading provider of enterprise open source solutions. Everything it does is based on Linux, cloud, container, and Kubernetes technologies.

Red Hat’s open source distributions can bind these capabilities together into a software platform which in turn can orchestrate and run cloud-native or containerised applications across a cloud, or increasingly, across several clouds. This is the ‘hybrid’ cloud environment, and it’s one that Red Hat sees as the underlying connecting fabric that will pull together all the big technology innovation ‘things’ that both aspiring digital service providers (DSPs) and enterprises are counting on. The ‘Big Five’ are:

  • The ‘Edge’ – distributing workloads closer to users and data sources

  • The hybrid cloud

  • 5G and next generation networks

  • The Internet of Things (IoT)

  • Network Functions Virtualisation (NFV) and Software Defined Networking (SDN)

Up to now, all these technological areas of endeavour have been viewed as separate entities with their own ecosystems, standards bodies and trajectories. But for Red Hat, it’s become increasingly apparent that they should also be viewed as an interconnected whole and treated accordingly, with hybrid cloud technology acting as the glue.

The reason is simple. They all need each other, and will all work best for both users and service providers when seamlessly connected.

The Edge is arguably the most transformative element of the big five ‘things’ and the one currently winning the most attention in the industry.

For Red Hat, the combination of the network edge and its core hybrid cloud technology is an exciting development that plays to the strengths of its open source approach.

Simply understood, the Edge is about moving compute power out of remote data centres and closer to the user.

These data centres might not be at the exact Edge of the network – although there is talk of installing small servers directly onto cell towers – but they will be distributed close enough to users to make a huge difference in ‘latency’: the time taken for data to travel to and fro between source and application. Latency is determined by both physics (the speed of light) and the number of hops a message has to take on the journey.
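As a rough illustration of that physics, the back-of-envelope sketch below (all figures are illustrative assumptions, not measurements) estimates round-trip time from distance and hop count, and shows why serving a workload from an Edge site tens of kilometres away, rather than a hyperscale region over a thousand kilometres away, can shave tens of milliseconds off the response.

```python
# Back-of-envelope round-trip latency estimate.
# All numbers are illustrative assumptions, not measured values.

SPEED_IN_FIBRE_KM_PER_MS = 200  # light covers roughly 200 km per millisecond in optical fibre

def round_trip_ms(distance_km: float, hops: int, per_hop_ms: float = 0.5) -> float:
    """Propagation delay there and back, plus a nominal forwarding delay per hop."""
    propagation = 2 * distance_km / SPEED_IN_FIBRE_KM_PER_MS
    return propagation + hops * per_hop_ms

# A distant hyperscale region vs. a nearby Edge site (hypothetical figures)
print(round_trip_ms(distance_km=1500, hops=12))  # ~21 ms
print(round_trip_ms(distance_km=30, hops=3))     # ~1.8 ms
```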

This reduced latency is very good for applications: it has the potential to improve quality of experience and network security, increase scalability, and lower costs.

Reduced response time makes a big difference for IoT applications, which rely on very low latency to properly control an industrial process, for instance, but it will also make everyday business or home applications more responsive. Online gaming will be an obvious beneficiary, something that many operators are keen to exploit.

View the full video: Joe Fernandes, VP Cloud Core Platforms Business Unit, Red Hat

It is hardly surprising, then, that a Thought Leadership Council (TLC) survey found that around 70% of mobile network operators plan to move, or have already moved, some compute to the Edge of their networks to capture the new opportunities. But the Edge won’t just be a facility for mobile networks. Many large operators – including mobile operators – are deploying fibre to the premises and insist that their fixed services must benefit from the Edge too. Then there are cable networks, which will certainly want to deploy Edge services.

But a properly functioning set of Edge services will also, inevitably, bring complexity. Enabling the Edge isn’t simply a case of shunting infrastructure out from the core of the network. For application or content providers, utilising the Edge will come at a cost, so careful consideration is needed as to which parts of an application must be executed as Edge workloads and which can more economically be left in the hyperscale core.

The ideal cloud software solution to finesse that complexity is an open hybrid cloud.

Red Hat sees the network edge as a unifying technology that will benefit both telco and enterprise applications. The proliferation of hybrid cloud at the service provider network edge will encourage enterprises to position their own applications there, hosted by the service provider at its network edge.

For Red Hat, this represents a double opportunity, since it can support hybrid cloud in both service provider and enterprise environments.

The more powerful the environment, the more important security becomes

In complex environments, security is not something that can be “patched up” as if the system were a giant PC. Red Hat’s approach is to share everything it knows about security flaws and vulnerabilities and how these might affect its products. It then provides several ways for its customers to access that information, including via its own security data API.
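As a hedged sketch of what consuming that information programmatically might look like, the snippet below asks the security data API for recent critical CVEs. The endpoint path, query parameters and response fields are assumptions drawn from Red Hat’s public documentation, so check them against the current API reference before relying on them.

```python
import requests

# Illustrative query against Red Hat's security data API.
# Endpoint, parameters and response fields are assumptions; verify against the API docs.
BASE_URL = "https://access.redhat.com/hsts/rest/securitydata"

def recent_cves(after: str, severity: str = None) -> list:
    """Fetch CVE summaries published after a given date (YYYY-MM-DD)."""
    params = {"after": after}
    if severity:
        params["severity"] = severity
    resp = requests.get(f"{BASE_URL}/cve.json", params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()

for cve in recent_cves(after="2020-01-01", severity="critical")[:5]:
    print(cve.get("CVE"), cve.get("severity"), cve.get("bugzilla_description"))
```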

The idea, as in all things Red Hat, is that maximum sharing and openness is the best medicine. So the job of its global ‘Product Security Team’ isn’t just to fix things when they go wrong, but to explain to customers how particular vulnerabilities might impact them so that they can build security in where it’s needed.

At the beginning of ‘cloud’ as we now understand it, the hybrid cloud concept was seen as a way to get the best of both worlds: full control on the premises with a private cloud, combined with a public cloud to provide redundancy, cost-effective storage and flexibility.

The problem came in connecting one environment seamlessly with the other – that required APIs and connectivity, such as a VPN, to move workloads between them. There were still problems, though, since differences in the underlying cloud infrastructures meant that an app designed for one cloud might not move to another without modification. This complexity could chip away at the desired agility gains because of the integration and modification needed to connect two or more cloud environments.

That problem is overcome by using the same operating system across all environments and deploying small apps as microservices. In Red Hat’s case, that means Red Hat Enterprise Linux and cloud-native apps, with the whole managed by an orchestration engine such as Kubernetes in Red Hat OpenShift.
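As a minimal sketch of what that consistency buys, the example below uses the official Kubernetes Python client to scale a containerised workload. The deployment name and namespace are hypothetical, and the same calls work whether the kubeconfig points at an on-premises OpenShift cluster or one running in a public cloud, because the API surface is the same everywhere.

```python
from kubernetes import client, config

# Minimal sketch: the deployment name and namespace below are hypothetical.
config.load_kube_config()      # read the current kubeconfig context
apps = client.AppsV1Api()

# Scale a containerised workload out to handle extra demand.
apps.patch_namespaced_deployment_scale(
    name="video-optimiser",
    namespace="edge-site-01",
    body={"spec": {"replicas": 5}},
)

# The same listing call works against any conformant cluster, wherever it runs.
for d in apps.list_namespaced_deployment(namespace="edge-site-01").items:
    print(d.metadata.name, d.spec.replicas)
```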

The overwhelming advantage for the DSP is that it ends up with a seamlessly interconnected and consistent computing environment, and an ability to easily and automatically move workloads around to where they’re needed across different cloud types.

5G needs the cloud-native hybrid environment

Red Hat’s open hybrid cloud, based on Linux and open source code, is an ideal platform for 5G. The level of agility required for true standalone 5G makes the use of containerisation technology mandatory, and all the well-known advantages of NFV and cloud-native, in terms of flexibility, reduced operational costs and reliability, come into play.

Red Hat also meets two major requirements for large 5G networks: manageability and automation. The horizontal nature of the cloud-native Red Hat hybrid cloud means it can be managed and evolved despite the high degree of complexity inherent in a 5G network. That, in turn, sets the scene for the application of artificial intelligence (AI) and a high level of network management automation, enabling a highly complex network to operate without an army of technicians in operational support.

Containerisation and cloud-native are therefore essential if 5G is to meet DSP ambitions for the all-important ‘network slicing’ capability. This allows multiple slices to be set up to act as virtual networks, each equipped with the functionality required to meet the needs of its specific “vertical” segment. These will include slices designed for particular business segments, such as the legal profession, where the “slice” might be given particular security and assurance measures.

Slicing is also likely to be very important when it comes to supporting a range of IoT applications, all operating at different speeds and meeting different security and performance standards.

Prototype cloud-native, open source, community-built 5G network

Last year at KubeCon + CloudNativeCon North America, a grand plan to demonstrate the capability of cloud-native and the power of collaborative open source effort came to fruition when 80 volunteers and six open source communities and standards groups unveiled a live demonstration of a cloud-native 5G network.

It had taken four months to pull together and it worked. A complex international video call was accomplished using open source and cloud-native components, connecting a packet core in Montreal, an IMS core in the public cloud and an LTE lab in France.

OSS/BSS

Red Hat Consulting delivers open source technology expertise to help digital service providers create solutions that increase automation and agility in network management and business operations – the so-called OSS/BSS domain.

Again, by standardising on open source technology, operations support systems and business support systems can better address the continuous integration challenges of evolving software architectures – they can board the continuous integration/continuous delivery (CI/CD) conveyor belt with the other open source applications.


It’s true that open source can represent a difficult step for telcos. It’s like learning a new language, but also having to master its cultural nuances at the same time.

To telecom professionals, steeped in procedures and approaches that have served them well through multiple network generations, open source (and the wider shift from hardware to software) and its strictures can seem alien and even counter-intuitive. That telco culture, grounded in a different time and set of technical circumstances, can find itself at odds with today’s IT culture is no surprise: yesterday’s IT culture can also find itself at odds with today’s IT culture!

I asked one of Red Hat’s key explainers and evangelists, Susan James, its Senior Director of Telecommunications Strategy, to elaborate.

“In telecoms, when you built the physical nodes, you only had a limited amount of hardware that was going to be there, and in many cases that hardware would be activated and managed remotely,” says Susan. “So it was all about ensuring the hardware didn’t fail, and if it did fail, ensuring you could move the connection onto the ‘+1’ hardware deployed to cope with the rare outage.”

Rare, because the systems were designed to perform at 99.999 percent uptime.
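For context, ‘five nines’ leaves only a few minutes of unplanned downtime per year, as this quick calculation shows:

```python
# Allowed downtime per year at "five nines" availability
availability = 0.99999
minutes_per_year = 365 * 24 * 60
print((1 - availability) * minutes_per_year)  # ≈ 5.26 minutes per year
```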

“So the philosophy around building resilience, redundancy and scale was completely different. Today you’re building for a cloud where your hardware resource is (in theory) unlimited. You actually expect a hardware element somewhere in the architecture to fail at some point. In that circumstance you’re going to leave the decision on how to recover and scale out to different hardware at the application layer. As a result, the system is probably able to cope with a failure without anybody noticing.

“Back in the day, tolerance for outages was actually quite high and many may not even have been noticed by customers – today, people do notice, and are on their phones 24/7. When people notice an outage, it’s all over social media in a few minutes.”

The importance of understanding continuous integration/continuous delivery (CI/CD)

This aspect of open source and cloud-native operation seems to be one of the starkest contrasts with traditional telco practice. How do we explain it, and why is it a problem for telcos?

“I think that’s largely due to the experiences that they’ve had over the years,” says Susan. “The software they had wasn’t necessarily of a quality that they could put straight into their network, so they had to test it for nine months or so before they felt happy.

“Also, back in the day, if something were to go wrong, all hell would break loose and they could be called up in front of the government to explain an outage. So people can be risk averse, but of course they have good reason to be. They’ve been brought up with that – and they have the mindset that their network is mission critical and people’s lives are at stake.”

But things are already changing fast.

“There are already service providers that are upgrading their RAN every two weeks and certain parts of some networks can be fully CI/CD.

“But you have to establish trust first. Typically you can start out by very carefully testing software updates and getting sign-off from the relevant parts of the organisation before pushing the button. But after doing that five or six times, you can get the go-ahead to automate the updates.”

Necessary culture change

Here’s why process change is vital if CSPs are to gain full advantage from virtualisation.

“A full 84% of virtual network functions are still deployed as a vertical stack,” explains Susan, “and that means they’re still planning what they want it to look like from a configuration perspective, which in turn means they’re actually going to get very little benefit from running it on the cloud.”

“They haven’t changed their philosophy or their organisation. They’re not running that cloud like a cloud, and they’re not consuming that cloud as a cloud.”

“If each application is being sized up in isolation, it means they don’t get scale across the organisation. Worse, the poor guys responsible for the networking must be finding it an absolute nightmare.

“If they size an application’s requirement at its peak time, it could be that the peak only occurs once every three months – so at peak, they could be using resources that someone else is not using. Because of all that complexity, their ability to automate (and this is probably the big one) is extremely limited, as they’re doing the enabling configuration as 75 things instead of three things.”

The way to do it is to use AI and analytics to work out where resources are needed and at what times. This can be done in advance.
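A toy illustration of that point, using entirely made-up numbers: sizing each application for its own peak in isolation reserves far more capacity than pooling the same workloads on a shared cloud, where their peaks fall at different times and analytics can predict when each one arrives.

```python
# Toy illustration with made-up numbers: per-application peak sizing vs pooled sizing.
# Each entry is an application's CPU demand (in cores) across four daily time slots.
demand = {
    "billing":    [10, 80, 20, 10],
    "video-cdn":  [20, 30, 90, 40],
    "iot-ingest": [60, 20, 20, 70],
}

# Silo approach: reserve every application's own peak, all of the time.
siloed_capacity = sum(max(slots) for slots in demand.values())

# Pooled approach: reserve only the worst combined demand across the slots.
pooled_capacity = max(sum(slots) for slots in zip(*demand.values()))

print(siloed_capacity)  # 240 cores reserved
print(pooled_capacity)  # 130 cores reserved
```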

Without this approach, it becomes much more difficult to expand the data centre intelligently and make efficient use of compute resources, which is another reason why the transition to containers, with the long-term objective of going fully cloud-native, is absolutely critical.