NFV: 5 Years in...

Industry Perspectives

Let’s be honest. It’s been five years since the publication of the famous Network Functions Virtualisation (NFV) white paper and the formation of the ETSI specification group to define a framework for NFV’s development. By now (2018) we had all anticipated that the transformation enabled in part by software-driven networks would be well under way, with at least some big CSPs crediting the new technology with improved operational efficiencies and savings, increased agility and faster time to revenue for new services... with enabling their technical and business transformation, in fact.


That’s not quite how things have turned out so far, so we asked four prominent and diverse SDN and NFV transformation vendors to explain themselves. Has the software-defined network and virtualization journey become bogged down and stalled, as some allege? If so, what are the stumbling blocks and what needs to happen to remove them?

Importantly, what have the four vendors learned in the past five years, and how have their approaches changed and matured? This series of features explores the thinking behind four of those vendor journeys, featuring Amdocs, Lumina Networks, Metaswitch and VMware.


VMware: enthusiastic DevOps proponent providing bullet-proof, pre-integrated NFVI solutions with over 100 NFV deployments already in production worldwide

VMware was founded in 1998, has revenues of around $7 billion and is now part of the Dell Technologies family of businesses. As its name suggests, it is a specialist in virtualization software for the cloud, data center, desktop and, more recently, for NFV infrastructure. Its heritage in the data center stands it in good stead to provide a pre-integrated NFVI underpinning for CSPs looking to virtualize. We spoke to Honoré Labourdette, VP Global Market Development, Telco Business Group, VMware. Honoré says she is delighted with the progress seen at VMware so far, especially with the recent announcement by Vodafone that it will standardize on the VMware platform.

Honoré says that while almost all telecoms executives recognize that the future lies with virtualisation and cloud-based NFV software, there is still a big question mark over what route they should take.

Going back five years, visionary people in the industry realized they could take their network functions and, instead of running them on dedicated appliances, virtualize them over commodity hardware. In doing so they could break vendor lock-in and reduce cost. “What took a little longer for some to realize,” notes Honoré, “is that they could also increase agility. It is the combination of taking out cost AND increasing agility that presents the real value of NFV, and that means solid operational capabilities as well as simply virtualising services.”

Some saw open source projects as the right course over the longer haul, and some chose to go with VMware because of its legacy in virtualisation and the data centre, and its ability to move quickly.

“The thinking was that if VMware could virtualize on the IT side of the house, then it could do the same thing for network side of the house.”

So those were the choices five years ago. But what’s happened since? Why has progress apparently slowed or stalled? And which choices might have caused that?

Honoré claims that, in her experience, a given CSP’s approach to NFV originates in one of two ways: the first from a focus on technology and innovation; the second from a focus on ROI and TCO (return on investment and total cost of ownership). “Those who are motivated more by business objectives see accelerated time to market and taking costs out of the business as prime objectives and are moving much more quickly, while operators taking the technology innovation approach can often be stalled by some of the realities of open source,” she maintains.

“Those focusing heavily on technology may find that it is tempting to keep investigating the next big thing, or getting involved with community projects exploring new approaches or new technologies, rather than following through and delivering short-term business outcomes.” She notes that smaller telcos and niche network operators are, in general, moving much more quickly: in part because they are small and focused; in part because they don’t carry the R&D overheads and processes of larger operators. (Gabriele di Piazza, a colleague at VMware, talks of the changing role of R&D and new models of service innovation in his article CSP: a new approach to innovation.)

“I think there’s also been an industry-wide ‘aha moment’ about what has taken some major NFV projects such a long time to move forward,” says Honoré.

What needs to happen to accelerate transformation?

It’s very difficult to deliver multi-vendor VNFs on a variant of open source-derived NFV infrastructure because of the tweaks and modifications necessarily undertaken to get the solution working. VMware sees the underlying issue here as operational complexity. So the tendency has been to simplify the deployment by building a ‘vertical stack’ or ‘software silo’. This leads to dependencies up and down the stack, which leads in turn to the same siloed business model, complete with lock-in, that we saw in the past. It might therefore be of little or no benefit, and possibly even more expensive.

At this point, Honoré points out that what looked like a divergent choice five years ago, between open source and VMware, is now converging on a best-of-both-worlds solution for customers, with VMware’s proven virtualisation capabilities now accessible via its recently released OpenStack Carrier Edition.

So given those snakes and ladders, and the accompanying complexity around choosing vendor approaches, what does VMware offer and what sort of NFV ambitions does it best serve?

“We think we have a way forward which steers a path designed to win the advantages of rapid innovation through a DevOps operating model, and the rapid iteration and agility it confers on the CSP, with a solid NFVI (NFV Infrastructure) base,” claims Honoré. “With such a structure everything still gets the valuable iteration, but the CSP starts with a well-defined, tried and tested NFV infrastructure framework. Most importantly, CSPs can build a single infrastructure with a single operating team that can deliver NFVI as a service to multiple internal telecoms services teams.”

This approach strikes a useful balance across the imperative to provide the appropriate service level agreement for the customer (ranging from best efforts to five 9s), easy platform servicing and operations, low operating costs, and service flexibility.
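As a rough illustration of the spread that range covers (our arithmetic, not VMware’s): a ‘five 9s’ commitment of 99.999% availability allows barely five minutes of downtime a year, while a 99% best-effort tier allows over three and a half days. A minimal Python sketch of the sums:

    # Downtime allowed per year for a given availability commitment.
    MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

    def allowed_downtime_minutes(availability_pct: float) -> float:
        """Minutes of downtime per year permitted by an SLA percentage."""
        return MINUTES_PER_YEAR * (1 - availability_pct / 100)

    for sla in (99.0, 99.9, 99.99, 99.999):
        print(f"{sla}% -> {allowed_downtime_minutes(sla):.1f} min/year")

    # 99.0%   -> 5256.0 min/year (about 3.7 days)
    # 99.999% ->    5.3 min/year ('five 9s')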

In addition, VMware produces, tests and releases a pre-integrated NFVI platform. This simplifies installation, enables easy upgrades and provides what VMware refers to as intelligent operations with 360° visibility. With this approach, VMware claims its CSP customers can generate ‘repeatable’ deployments that can be put into production, operated and supported more easily. The result is a prescriptive (and hence deployable and scalable) but not restrictive NFVI platform. VMware’s VNF accreditation program already lists over 50 certified VNFs for the platform, providing multi-vendor choice when it comes to higher-level service deployments.

“But the big ‘thing’ – the killer gain from a successful NFV transformation – is not that the CSP can build the ultimate set of VNFs, or bear down on capital and operational costs,” says Honoré, “but that it has established an iterative development process and a clear path to the software-defined, data-driven, automated environment that will eventually signal its complete digital business transformation.”


Silicon Valley company Lumina Networks is not yet a year old, having been spun off from Brocade when that vendor was acquired by Broadcom last year. The new company came with 50 specialist staff, Brocade’s SDN controller assets (based on OpenDaylight), 200 CSP customers and Andrew Coward, Brocade’s VP of strategy. It also came with a carefully articulated new business model informed by the team’s market experience.

Lumina doesn’t believe in selling turnkey SDNFV ‘solutions’, but neither does it believe in just lobbing new software over the wall and leaving the CSP to do all the integration. Instead it wants to act as a catalyst for CSPs’ adoption of SDN, virtualization and automation, helping out on the very hard stuff, but implanting the ongoing process of testing and upgrading within the CSP. Like other SDNFV players, Lumina believes that the CSPs have to get involved and engage with the open source development process if they’re to win the full benefit of transformation.
“In the early days (just three years ago) when we were in Brocade, we thought we were in the business of providing an [open source] distribution,” says Andrew.
“But we soon came to realize that our market could be divided into early adopters and laggards.
“Then, when we looked at the differences between those two groups we realized it actually came down to how prepared the individual CSP was to put their own people onto these projects instead of simply thinking the vendors could wave a magic wand.”
CSPs’ likely willingness (or not) to engage properly in this way could be gauged by how diligently they approached things like RFPs (requests for proposal), he says. “We found this created a self-selection process for us because the ones that asked the right questions were more receptive to us and more willing to ‘play catch’ with some of the open source projects.”
However, some went the other way, saying, “We don’t need any help, we’re going to do everything ourselves and manage everything.” But inevitably some of those customers found it was a Herculean task to do all the integration, manage the new open source code, compile it, keep it reliable and keep up with the changes.

Dealing with the abstraction layer
So some of those companies that had originally struck out on their own subsequently had a change of strategy and came back saying, “You know what, it doesn’t make sense for us to manage the relationship with open source or adding new features when you guys can do that.”
That turned out to be a viable business model for Lumina. “On one level we help with the integration, but what we really do is provide abstraction,” claims Andrew. “With SDN we’re trying to separate the business logic of the carrier – which defines the services – from the underlying hardware and from the vendors. That means the CSP can change or upgrade the hardware without the business logic changing. We’re dealing with a middle layer that has to be able to speak fluently downstream to all the different devices as well as northbound to billing systems.
“The great thing is that everything that gets built gets put back into the community and makes the job much easier the next time around.”
The abstraction layer also hopefully avoids the CSP customer accruing what’s known as ‘technical debt’. That occurs when devices are integrated directly or tactically (without an abstraction layer) creating a debt that will have to be paid back with interest in integration difficulties later.
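As an illustration of the abstraction idea (a hypothetical sketch, not Lumina’s actual code or APIs), the Python below keeps the carrier’s service logic against a single interface, so a hardware change means writing one new driver rather than reworking the business logic: exactly the technical debt being avoided.

    from abc import ABC, abstractmethod

    class NetworkDevice(ABC):
        """The abstraction layer: business logic talks to this, never to a vendor."""
        @abstractmethod
        def provision_vlan(self, vlan_id: int) -> None: ...

    class VendorARouter(NetworkDevice):
        def provision_vlan(self, vlan_id: int) -> None:
            print(f"vendor-A CLI: set vlan {vlan_id}")  # vendor-specific detail

    class VendorBSwitch(NetworkDevice):
        def provision_vlan(self, vlan_id: int) -> None:
            print(f"vendor-B NETCONF: <vlan><id>{vlan_id}</id></vlan>")

    def activate_customer_service(devices: list[NetworkDevice], vlan_id: int) -> None:
        """Business logic: defines the service, knows nothing about vendors."""
        for device in devices:
            device.provision_vlan(vlan_id)

    # Swapping or adding hardware only means adding another driver class.
    activate_customer_service([VendorARouter(), VendorBSwitch()], vlan_id=120)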


So what’s changed since the publication of the NFV white paper?
“Five years ago we didn’t comprehend the need for CSP culture change to enable transformation,” says Andrew. “But things have changed greatly with SDNFV over the past four years especially. The industry has had to move from a science project through to ‘available in the lab’ and then to something that could be deployable. In the great scheme of things I think we’ve moved remarkably quickly on the open source side of things to make that happen.”
Most importantly, it’s turned out that the industry wasn’t – as it perhaps at first thought – introducing a new technical framework and, ‘Oh by the way, you might have to change how you do things a little’. It now looks as though we’re introducing new ways of engaging with customers, software, services and suppliers, with some necessary and useful technology coming along for the ride. Culture change, in other words, has become the prize, not the price.

There’s no doubt the process has been slower than expected. Why?
Andrew thinks “a lot of stuff got stuck in the labs and there was a feeling that everything had to be new.” In too many cases that appeared to mean white boxes needed to replace legacy hardware and there was a feeling that “before we can adopt this technology we need to put data centres in,” Andrew maintains.
“Actually, on the SDN side it’s predominantly all about the existing equipment. So not about replacing, but making the ‘physical’ equipment work with the new virtual environment,” he says.
The ‘stay in the lab’ problem may also be compounded by the tendency for CSPs to use the white box threat to negotiate costs down with their current vendor.
Another reason software might stay in the lab might be a pervasive fear of ‘failure’ on the part of many CSPs, somewhat at odds with the IT “fail fast” credo. Allied to this can be a reluctance to upgrade the network – in sharp contrast to the constant upgrading undertaken by the hyperscale players many carriers would like to emulate.
“What happens is that a lot of telcos try to dodge the upgrades: they will ask, ‘find me some patches; give me a configuration change, anything that means I don’t have to upgrade the software’,” says Andrew.
Overcoming the upgrade phobia would help the new software ‘escape the lab’ on a more timely basis, says Andrew.

What is Lumina’s perfect customer? “We’re looking for customers who have captured this technology and understand what it is they want to do. Typically they have stuff in the labs and they now want to get it out, and they need a partner to help them do that. They don’t want to hand the task off to an outsourcing company because they’ll lose the learnings that they have and they won’t be in control of the outcomes. So they want to keep doing it, but they know they need some expertise to help them with that process.”


We asked Amdocs’ Angela Logothetis, Vice President, Amdocs Open Network CTO, about Amdocs’ own journey. What did she think her company has brought to the transformation party? What is it offering that might be a good fit for some of those myriad CSP customer journeys? And, of course, is NFV taking too long?

“Anyone who’s been in telecoms for a long time knows that these things take significant time,” says Angela Logothetis of the NFV ‘journey’. “There’s always a hype cycle before mass market adoption as we’ve seen with mobility and IP and lots of other things. Although everyone hoped that NFV would be an exception rather than following the rule, the fact that it appears to be taking longer to arrive than that initial hype suggested is not really a surprise.”

On the positive side, Angela points out that some of the large operators – the likes of AT&T, Vodafone, Orange and Telstra – have been enthusiastic early adopters. At this stage, though, much of the industry has been adopting virtualization to power selected network functions but has yet to move to a more strategic, virtualized mode of operation where – most NFV advocates believe – the real payoff lies.

“For example, they will put in place a virtual IMS because it’s centralized and server-based. Then at the very edge of the network CSPs often adopt virtualized firewalls, virtualized CPE and virtualized set-top boxes.

“All the early adopters are taking a network-wide approach, using software to drive software. The tactical players are just using individual functions but are not taking the ‘abstracted’ approach at all.”

So how might this gradualist, tactical approach possibly disadvantage CSPs in the longer run?

“It means they’re putting virtualized technology into place but continuing their old approach, and if they keep on that path they’re not going to get the changes that they need. Instead they’ll continue to have a network that’s siloed by technology and – perhaps even more important – people that are siloed by technology and therefore have a network focus rather than a service focus.”

I ask: what does Amdocs bring to the transformation party?

“We’re experts at integration, having co-developed ECOMP with AT&T who then made it available as open source software. But most of all, Amdocs has a truly independent view and that really helps. We have the software DNA to bring CSPs into the network world without creating vendor lock-in.”

“Part of our story to customers is: ‘You need to start getting ready for NFV now by moving into autonomous assurance and autonomous operations’,” says Angela, “because if you don’t and NFV hits, there’s no way that you’re going to cope.”

The hardest part of NFV, and the one that remains untackled, is operations, claims Angela. “When you’re starting down the path of NFV you can’t do a traditional model where you build the network and then throw it over to operations and say, ‘there you go, you operate it’. The operations part has to be built in up front, and that means getting the operations people involved in the transformation process early on.”

So perhaps the most important focus for Amdocs right now is service assurance – and not just assurance for services running on the operator’s own network. As end customers become more demanding, ways to assure services across multiple operators, aligned with those customers’ global networking needs, will become increasingly important.


Service assurance is key – why now?

There has traditionally been an almost one-to-one mapping between a service and its underlying technology. So, for instance, a landline data service used to go across a PSTN network and was multiplexed at 56 or 64kbit/s: to hijack Marshall McLuhan, the technology WAS the service.

But increasingly services are much more complicated, mostly IP-based, and able to be carried over almost any network. As a result they are ‘best effort’ rather than deterministically guaranteed and that means the provider must put its own best effort into engineering them to deliver what has been promised.

On top of that, the measurements must relate to what the customer experiences, not what the provider thinks the network is delivering. Given the above it’s hardly surprising that most of the ‘issues’ customers present to technical support/helpdesk are about quality (for instance, my video keeps buffering; every now and then my voice quality drops way below par) rather than straight-forward ‘my link has disappeared’ events.

Amdocs believes this environment changes things greatly. It means the new network technology’s main task today, its make-or-break measure of success in the never-ending quest to improve the end-user experience, is one of service assurance: of pitting the growing intelligence invested in the network against its propensity to get things wrong as conditions change – when links congest, when unexpected conditions arise and when low-level network faults occur.

The journey to the service-driven autonomous network

In preparation for the journey to a full service-driven network, however, the existing OSS assurance infrastructure needs to start its own journey to ‘NFV ready’: able to collect alarm and performance events from the existing data center infrastructure as well as the new infrastructure. Issues here include the problem of where to site probes given that data collection for assurance purposes needs to move away from being physical and network centric to being more service centric.

“With today’s ‘top down’ services, the service quality as perceived by the customer is largely being derived in very complex ways from the network experience, so it’s possible to have a situation where the network is saying everything is on ‘green’ (fully operational) but there’s still a problem with the service.

“That means the service experience needs to start to be monitored natively at a service layer via active probing and active testing and true KPIs around the service experience,” Angela says.
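A hedged sketch of what monitoring ‘natively at a service layer’ can look like in practice: an active probe that exercises the service itself end to end (here, a simple HTTP transaction) and derives a customer-facing KPI, rather than trusting the network’s green lights. The endpoint and threshold below are illustrative assumptions, not Amdocs specifics:

    import statistics
    import time
    import urllib.request

    PROBE_URL = "https://service-under-test.example.com/health"  # hypothetical endpoint
    LATENCY_SLO_MS = 200.0  # illustrative service-level objective

    def probe_once(url: str) -> float:
        """One active test: time a real service transaction, end to end."""
        start = time.monotonic()
        with urllib.request.urlopen(url, timeout=5) as response:
            response.read()
        return (time.monotonic() - start) * 1000.0

    def service_kpi(url: str, samples: int = 10) -> str:
        latencies = [probe_once(url) for _ in range(samples)]
        p95 = statistics.quantiles(latencies, n=20)[18]  # ~95th percentile
        # The network can report 'green' while this KPI still breaches the SLO.
        return "OK" if p95 <= LATENCY_SLO_MS else "SLO BREACH"

    # print(service_kpi(PROBE_URL))  # run against a real service endpoint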

So what are the key points?

“The operational processes can’t simply be automated on day one,” Angela concludes. “I think what happens as you move towards the DevOps model is that the operations teams still need to have visibility over the process. And more than this, they need to have the ability to configure and interact with the process, giving what we call human-guided operations – incrementally moving towards autonomous operations.

“One important thing to recognize is that there’s not one starting point and there’s not one journey toward a service-driven network and autonomous operations. It’s about us having the right components – which we have – and the partner CSP being clear about its objectives – and together executing the journey to get there.”



Metaswitch is a UK-based company. Founded as Data Connection in 1981, it became involved with network protocol software in the 1990s, launched the Metaswitch brand in 2000 and concentrated on VoIP for telcos, eventually changing the company name to Metaswitch in 2011.
By that time Metaswitch had naturally migrated itself to telecoms software running on COTS (commercial off the shelf) platforms.
“We had started developing software to run in clouds and virtualized environments. And we had also always developed products for standard x86 platforms even before we started getting into cloud,” says Metaswitch CTO Martin Taylor. “So when the NFV white paper was published (in late 2012) there was a lot of celebration around the company. It was a case of, ‘Well, we always knew it was going to come to this – now the industry has recognized it formally’.”
So with the telcos telling the world that they wanted to do things the way that Metaswitch had already been doing them for 10 or 15 years, the company felt absolutely validated. But that didn’t mean that getting the virtualization message across was going to be all plain sailing. “We’d been doing things such as demonstrating session border controllers in a cloud environment which people were mystified by,” he says.
“But our virtual IMS (IP Multimedia Subsystem) work, which we had started probably a year before, educated a lot of people. Operators were able to download it and play with it, and it provided a benchmark for what a virtual network function looked like.
“The biggest challenge when selling into an established category – such as IMS – is that you have to have feature parity with products that had been out there for a number of years,” he recalls. “It took us time to achieve that feature parity.”
Another major difficulty is that you don’t have any references from other telcos already using your product. Metaswitch, says Martin, achieved its big breakthrough when Sprint adopted its Perimeta session border controllers.
“The entire Sprint VoLTE database was relying on our SBCs – that was a big boost, especially as the SBC market had been going for ten years or more, so we were very late into it,” says Martin.

That break was pivotal
“When you (as a telco) are making a decision to move from an appliance-based product to a virtualized one, it’s a sufficiently big change that you re-evaluate your entire vendor landscape. You don’t automatically assume that you’re going to buy that product from your incumbent vendor,” he says. “So when telcos started selecting virtual SBCs, ours rose to the top of the pile and we’ve been number one globally in that category ever since analysts started publishing numbers.”
So the vendor ‘journey’ in this case wasn’t so much about Metaswitch responding to SDNFV as about the new virtualization framework for telecoms aligning with the direction the company had already charted for itself.

How was the journey?
So the big question: how does Martin think things have progressed from the white paper to now? What sort of journey has Metaswitch taken?
“It depends on your perspective. We’ve found that there’s a huge market for our products – and for virtualization products generally – in the tier 2 and tier 3 space. There’s no shortage of operators in those tiers of the market who are very go-ahead – they can move quickly and are not bogged down by a huge organisation.”
Most important, he says, they don’t have the dreaded silos, with IT people who understand cloud in one and network people who understand networks in the other, and “never the twain shall meet”.
“By comparison the tier ones seem to be making a bit of a meal of it in many cases.”


Faster, tactical, adoption may suit some
So will the faster adoption exhibited by the tier 2s and tier 3s give them an advantage, so that they grow their share of the total telecoms and networking market against the tier 1s?
“Where we see people winning it tends to be in more niche areas like unified communications. If you’re trying to offer a large enterprise a complex system then often the tier 1s are just not very good at marketing and selling.
“On the voice side we’re seeing innovative ways of building networks and supplying services, particularly by MVNOs [mobile virtual network operators].”
Martin says the network operator provides the mobile access but the virtual operator provides all the componentry behind it – the evolved packet core, IMS and voice services – making them perfect customers for Metaswitch.
Rather than just white-labeling an MNO’s services, there’s lots of scope for innovation in that model, he says. And there is even the possibility that some of those operators might behave a little more like web-scale players, leveraging the public cloud to get going with a very small capital base and working on a pay-as-you-grow basis. “It’s early days but we’re seeing some encouraging stirrings in that space,” he claims.

So how does Martin think the market has developed – has there been a ‘stall’?
“Well, it has been a bit frustrating as we’ve watched the telcos wrestling with the complexities of OpenStack and orchestration – and then feeling they can’t move ahead with NFV. In some ways they’ve been making a bit of a meal of it.
“The normal telco view is to think everything through and plan everything in detail. That’s simply not the software way of doing things. Instead, you make progress by taking baby steps and learning along the way.”
So it’s not surprising, he says, that the telcos with the most ambitious and ‘strategic’ visions seem to be struggling the most, while the ones that have approached the task in a tactical way are doing better and moving faster.
“The tacticians tend to say things like, ‘OK we need to build out this bit of network where we need a new function and additional capacity – either we can go and buy a box like usual or we can start doing this in software and tap some network virtualization.’ Those ones are making the most progress.
“The strategy advocates will say, ‘Yeah, yeah, yeah, but they’re missing the point.’ That may be true, but they’re learning along the way about what’s important and what’s not, and they’re not investing in a ton of stuff that may turn out not to be all that important in retrospect.”

The software telco
“You can argue that not trying to plan every little thing in advance is the way software operators do things,” he says, “while telco people have the idea that everything will plug together and will work if you have standards.
“So when they try to deploy virtual network functions and they don’t plug together very well and don’t work, their answer is always that we don’t have strong enough standards. But that’s NOT the answer.
“You can build VNFs properly, with properly engineered APIs, so that they are completely portable across different cloud environments with no dependencies on the underlying hardware. And all without standards.”
So the idea that you can get completely clean, plug and play compatibility with the right amount of standardisation is a chimera – there are always new dependencies likely to emerge at any interface?
“Exactly!” says Martin. “SIP (The Session Initiation Protocol which sets up voice and video calls across an IP network) has been around for 20 years and we still meet SIP interoperability issues every day.”
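Martin’s point is easy to demonstrate: a standard alone doesn’t pin down every rendering choice. The sketch below (illustrative, not Metaswitch code) shows two equally valid spellings of the same SIP header – RFC 3261 permits compact header names – so a parser keyed on the long form alone would miss the second:

    # Two syntactically valid renderings of the same SIP header.
    MSG_A = "INVITE sip:bob@example.com SIP/2.0\r\nFrom: <sip:alice@example.com>\r\n"
    MSG_B = "INVITE sip:bob@example.com SIP/2.0\r\nf: <sip:alice@example.com>\r\n"

    COMPACT_FORMS = {"f": "from", "t": "to", "v": "via", "i": "call-id"}

    def parse_headers(message: str) -> dict:
        """Normalize names so long and compact header forms match."""
        headers = {}
        for line in message.split("\r\n")[1:]:
            if ":" in line:
                name, value = line.split(":", 1)
                name = name.strip().lower()
                headers[COMPACT_FORMS.get(name, name)] = value.strip()
        return headers

    # A naive parser looking only for "From:" would reject MSG_B.
    assert parse_headers(MSG_A)["from"] == parse_headers(MSG_B)["from"]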


Conclusion

It soon became clear, while trying to chase down the dynamics of SDNFV’s evolution for these articles, that while there might be a roadmap there’s no single optimum route. Like an online navigation app, the industry presents several choices with accompanying trade-offs: on your Google map one route is the quickest, another longer but cheaper, another more scenic but slower, yet another involves a train journey... and so on.
Each of our subject vendors offered subtly different routes for different sets of requirements and objectives.
For some telco customers there may be no long-term strategic NFV ‘goal’. Adoption is more tactical, with the objective of developing particular services or more innovative offerings for their market opportunity, with fast payback in terms of both cost and speed to market. For giant telcos the path is more difficult: they may want full end-to-end NFV transformation to better manage and eventually automate their huge networks. That’s complex, and they expect it will be a long process.

How long?

Just maybe there’s no end and no ultimate transformation, just progress toward that prized goal of automation. Journey, not destination.