Interview with Gerry Donohoe – on NFV, VNS and Virtualisation
May 9, 2016 - Openet
Amongst all the hype and general hoohah surrounding NFV, VNS and every other acronym to do with virtualisation, we decided to catch up with Openet’s Technical Director, Gerry Donohoe (GD) to find out what is actually going on.
DisruptiveViews (DV): When we spoke with Niall Norton not so long ago, he said that you haven’t answered an RFP for a while that hasn’t had some form of virtualisation as a requirement. So you have been in this area for a while?
GD: Absolutely. We were deploying virtualised solutions long before NFV proper started; in fact, we have been involved with NFV since its inception. We have been working with virtualisation in a production environment, at scale, for some years now.
DV: What is the focus right now?
GD: So, the industry has been very focused on the data plane side, but we really need to change that and focus on the management side, because a real bottleneck is forming. The key to NFV is the replacement of Network Functions deployed as physical hardware with software entities called Virtual Network Functions, or VNFs. The fact is, you cannot have a simplistic system to manage these Virtual Network Functions. Many still view a VNF as just an appliance deployed as a virtual machine, which needs to be shut down for maintenance and upgrades. That clearly does not work at scale, because it disrupts service. Think of upgrading many VNFs across many different data centres and different geographical boundaries, and it simply does not make sense.
Imagine an operator handling 100 million customers and having to implement a security patch. They would have to shut down the existing virtual machines, start up new ones, and move the traffic between them.
Our approach is to treat a VNF as a software entity, which is what it is. We do not treat it as a black box, and we can do incremental upgrades in situ. That gives us more granular control, so we can add packages or upgrade easily.
This is the thinking behind Weaver, our VNF lifecycle manager, and why we ‘gave’ it to the community. We saw that vendors were building bespoke VNF managers, which is fine up to a point. But when you have a multi-vendor environment, as you do with the big players, it falls apart. Imagine the on-boarding involved. Our approach prevents vendor lock-in, simplifies the whole process and allows proper plug and play. By ‘giving’ Weaver to the community we felt we were sharing our learning and experience. It also makes sense for our partner ecosystem.
The other proven advantage of taking the ‘software’ view is that we can now upgrade several hundred servers across multiple regions in 20 minutes. Before, you would have to do it incrementally, typically targeting a specific region, upgrading during an off-peak period such as midnight on a Sunday, and then repeating the process across the other regions over several weeks.
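The contrast Gerry draws, one parallel fan-out across every region instead of a weeks-long region-by-region schedule, can be sketched in a few lines. This is purely illustrative: the region names, server names and `upgrade` step below are hypothetical stand-ins, not Weaver's actual API or Openet's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical inventory: regions mapped to VNF servers (illustration only).
REGIONS = {
    "eu-west": ["eu1", "eu2", "eu3"],
    "us-east": ["us1", "us2"],
    "apac": ["ap1", "ap2", "ap3", "ap4"],
}

def upgrade(server: str) -> str:
    # Stand-in for an in-situ patch: no VM teardown, no traffic migration.
    return f"{server}: patched in place"

def upgrade_all_regions(regions: dict) -> list:
    # Fan the upgrade out across every server in every region at once,
    # instead of scheduling one region per off-peak maintenance window.
    servers = [s for group in regions.values() for s in group]
    with ThreadPoolExecutor(max_workers=len(servers)) as pool:
        return list(pool.map(upgrade, servers))

results = upgrade_all_regions(REGIONS)
print(len(results))  # one result per server, all done in a single pass
```

The point of the sketch is the shape of the operation: because the VNF is treated as upgradeable software rather than a black-box appliance, the per-server step is cheap enough that all regions can run concurrently.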
DV: Where are we with virtualisation at the moment, are we still in the hype stage or are things happening faster than we thought?
GD: Well, here’s a thought. The original white paper was only published three years ago. Only 14 operators were participating at that time but membership has now grown to over 270 organisations. To be honest, the discussions about whether it is hype or not are gone. We attended a policy conference in Berlin and the feeling was that NFV is here, now, and it is a ‘must have’. We have gone way beyond the hype.
Take AT&T – they said they would virtualise five percent of their network by the end of 2015, and they are claiming six percent. Their next target is to virtualise 30 percent of the network by the end of 2016, and their ultimate goal is to virtualise 75 percent of the network by 2020, which is only four years away.