Last week, we attended the first OPNFV Summit, which brought together industry experts and members of the OPNFV community for the first time. There were 720 people registered for the event, with 60 sessions going on in multiple tracks. We caught a few of the key tracks, and here are our six key takeaways.
We welcome your feedback and input, so please drop us a line if you have specific questions to ask.
- Progress in the first year of OPNFV
- Telco and Cloud Convergence — some observations
- Use Cases — the voice of the end user
- NFV and Open Source – what it means
- Skills and Mindset gap in the NFV world
- What’s next for NFV?
- Daitan Group’s work with OPNFV
Progress in the first year of OPNFV
First, a quick bit of history: Open Platform for NFV (OPNFV) is a carrier-grade, integrated, open source platform designed to accelerate the introduction of new NFV products and services. First introduced in September 2014, it was established to fill the need for an open reference platform that validates key NFV concepts and accelerates the development, and ultimately the adoption, of NFV products and services. You can read more about the OPNFV organization here: https://www.opnfv.org/
Heather Kirksey, Director of OPNFV, described in the opening keynote the progress made by the OPNFV community in its first year. Since September 2014, 55 member companies have joined, including many of the world’s most important wireline, mobile and cable operators, along with network equipment vendors, software companies, semiconductor makers and startups. OPNFV now has an active and diverse community of developers and experts working together. In its first year, more than 40 projects have been approved and 1,790 contributions accepted, with over a hundred developers actively working on those projects. Ten OPNFV test-bed infrastructure labs have been built, with several more in the works. On June 4th, 2015, the first Arno 1.0.0 developer version was released, followed by the first service version on October 1st, 2015. So far, Arno has been downloaded more than 5,800 times. If you’re interested in the latest stats, you can see them on the OPNFV dashboard.
Naturally, Ms Kirksey was optimistic. But she also emphasized that no single company is going to solve the whole problem of scaling NFV. That will only be solved with a whole community effort.
There were lessons that Ms Kirksey pointed to that she, and the whole team, learned in the first year:
- Do something. We have to do the work, not write about it. Problems are solved not by writing papers and arguing over standards, but by building projects and writing code.
- This is hard. Changing networks that have been in existence for a long time is hard. Bootstrapping a community from scratch is hard. Figuring out what to do is hard. But not impossible.
- Finally, your community will amaze you. She found it very inspiring to see what the community is doing together, collaborating across companies, and across geographical borders.
Ms Kirksey also reported on the OPNFV-commissioned survey of its members. One key stat was that 86% of those surveyed agreed to some degree that OPNFV will accelerate adoption of NFV in the industry. The same survey showed that the biggest challenge the community faces is managing the competing — and competitive — agendas of the participating companies.
86% of those surveyed agreed that OPNFV will accelerate adoption of NFV
However, overall, it shows a generally positive outlook for the future of OPNFV in the minds of those that are tasked with making it a reality. And that optimism was reflected in all the panels we attended.
Telco and Cloud Convergence — some observations
What was clear to us from listening to multiple panels was just how much enterprise, cloud and telco are converging in terms of needs, and the massive opportunity that will result from leveraging that convergence. IDC estimates that there will be 50 billion connected devices by 2020, and 2ZB of annual data traffic globally by 2019. Far greater scale and agility will be required to make that transformation happen. Networks must transform at the same pace that the cloud is transforming.
Networks must transform at the same rate that the cloud is transforming
Chris Wright, Chief Technologist at Red Hat, briefed the audience on Red Hat’s work with Intel to accelerate NFV, for example. He pointed to key lessons our industry has to take on board, chief among them the agile mindset that Telco can learn from the enterprise.
Enterprise infrastructure has already stepped away from proprietary hardware and function-specific servers to a virtualized compute fabric. Further, the DevOps movement has fostered a practice of collaboration and communication between software developers and IT professionals, while automating software development and infrastructure changes. IT management could no longer scale by adding more people. It had to scale by adding automation. And that required a mindset change too — because teams had to ‘give up’ on having their own physical infrastructure, and instead rely on a virtualized infrastructure in the cloud.
The enterprise, in turn, can learn from Telco by understanding its critical constraints. Telco’s heritage of reliability, serviceability and interoperability on a global scale has to be acknowledged as the industry embraces more standardized compute fabrics and an NFV infrastructure. As the enterprise relies more and more on a virtualized compute fabric, that fabric becomes more critical, and the enterprise becomes bound by some of the same constraints that Telco has lived with. This is where Telco’s contributions will help the enterprise.
Use cases — the voice of the end user
The ‘Voice of the End User’ panel at the OPNFV Summit included leaders from Linx, Lucera, Merck, Google, AT&T and Orange. Each touched on their thoughts about killer apps and use case requirements that are emerging as OPNFV becomes a reality.
The common theme was the absolute requirement for maintaining “Telco grade” quality, security and consistency. When we move to a virtualized Telco environment, quality cannot be compromised. Customers leave because of poor quality, and we are not yet seeing the critical ‘5 9s’ (99.999%) of availability that’s needed. The end user may believe, for example, that VoIP could work, but the gap between that perception and the current reality needs to be narrowed.
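To put ‘5 9s’ in concrete terms, 99.999% availability allows only about five minutes of downtime per year. A quick back-of-the-envelope calculation makes the gap between availability classes vivid:

```python
# Back-of-the-envelope: how much downtime per year a given
# availability target actually permits.
def downtime_per_year(availability: float) -> float:
    """Return the allowed downtime, in minutes per year,
    for an availability expressed as a fraction (e.g. 0.99999)."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes
    return (1 - availability) * minutes_per_year

for label, availability in [("three 9s", 0.999),
                            ("four 9s", 0.9999),
                            ("five 9s", 0.99999)]:
    print(f"{label}: {downtime_per_year(availability):.2f} minutes/year")
# three 9s: 525.60 minutes/year
# four 9s: 52.56 minutes/year
# five 9s: 5.26 minutes/year
```

Each extra nine cuts the annual downtime budget by a factor of ten, which is why the jump from typical cloud SLAs to Telco-grade availability is so demanding.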
Jeff Mogul, Protocol Stylist at Google, reminded us that Google knows how to build high-reliability services and networks on top of low-reliability parts. They are also learning how to build a highly reliable network on top of low-reliability switches and links, and how to do that at scale. What they’ve learned from the SDN world is that it’s not just about separating the data plane and the control plane and de-valuing the switch vendors. It’s also about building a logically centralized control plane that functions as a distributed system, has high availability, and controls your whole network as ‘one thing.’ We can’t keep hiring smart people to control one box at a time; there aren’t enough of them, and it doesn’t scale. We have to get to a point where we can manage from a top-down, holistic view of a network, where the system description allows the network to manage itself. Gmail, for example, is managed as a whole system in this way.
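A toy illustration of the “manage the network as one thing” idea (our own sketch, not Google’s system): the operator declares desired state for the whole network, and a central reconciliation loop computes what each device must change, rather than someone configuring boxes one at a time.

```python
# Toy desired-state reconciliation. Real SDN controllers are distributed,
# highly available systems; this only sketches the declarative idea.
# Device names and settings below are hypothetical.
desired_state = {
    "switch-a": {"vlan": 100, "mtu": 9000},
    "switch-b": {"vlan": 100, "mtu": 9000},
}

actual_state = {
    "switch-a": {"vlan": 100, "mtu": 1500},
    "switch-b": {"vlan": 200, "mtu": 9000},
}

def reconcile(desired: dict, actual: dict) -> dict:
    """Return, per device, only the settings that must change
    to bring the network to the declared intent."""
    changes = {}
    for device, settings in desired.items():
        diff = {key: value for key, value in settings.items()
                if actual.get(device, {}).get(key) != value}
        if diff:
            changes[device] = diff
    return changes

print(reconcile(desired_state, actual_state))
# {'switch-a': {'mtu': 9000}, 'switch-b': {'vlan': 100}}
```

The point of the sketch is that the system description (the desired state) drives the network toward managing itself, instead of smart people pushing configuration one box at a time.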
Jacob Loveless is CEO of Lucera, a company that provides on-demand infrastructure solutions for financial services clients. Mr Loveless talked about the critical triumvirate of performance, speed, and security: none should be compromised in favor of another. Finance networks are extremely time sensitive, due to the high value of transactions. If you have 1ms of jitter on a financial network, it’s game over; it simply cannot be tolerated in financial markets. For Lucera, the answer is instrumentation, and keeping things as small and simple as possible. Instrumentation in software is the key to providing performance and resilience in finance network use cases, including ways to inspect the network to see precisely what is happening at any point in time.
Margaret Chiosi, Distinguished Network Architect at AT&T, talked about the challenges faced by carriers, including that of multi-tenancy, because carriers are supporting many companies. When AT&T’s network goes down, it brings down many companies, and that’s extremely painful. Ms Chiosi raised the question: does the OPNFV Platform need to be 5 9s, or the VNF? Or both? If there is a schism between what Telco is doing and what the Enterprise is doing, then no one will win. Telco and the Enterprise must evolve and learn together. If the applications built on the Platform are not 5 9s, then the Platform will need to make up the difference, and the result will not be a success. We all want to innovate on the same platform. At AT&T, there are absolutely stringent requirements for the data plane acceleration use case to meet performance requirements in a multi-tenant environment, because the aggregate data throughput demands are much higher than what an enterprise or web company might need.
Hal Stern, Executive Director of Applied Technology at Merck, talked about the rise of smart sensors and smart devices. He told us we should learn from how the set-top box experience didn’t anticipate the growth of AppleTV or tablets, and apply that learning to smart sensors such as health sensors and home sensors. The data from those sensors should be accessible from anywhere. Imagine, for example, family members accessing the smart health sensor data in another family member’s home. But privacy, security, and HIPAA compliance are going to be critical problems to solve there. Customer Premises Equipment (CPE) should be virtualized and pushed to the edge of the network, and that’s going to be the job of both the carrier and the enterprise. There’s a huge opportunity to co-create what those services are.
Philippe Lucas, SVP for International Standards at France Telecom/Orange, described evolving from VPNs to virtualized CPEs (Customer Premises Equipment) for home gateways as an important use case. The question they are looking at, as other providers are, is how complex these CPEs should be. Looking at the smart home movement tells us that these CPEs should have a lot of functionality. But they are thinking about how to split that functionality into two parts: what he called ‘stupid, basic connectivity’ at home, combined with more functionality-rich virtualized features that could be upgraded and evolved rapidly on the telco side, with those upgrades ideally happening over fibre, at times convenient to the user, to resolve latency issues. This could deliver, for example, a VPN between homes as a service from the Telco provider, with absolutely no configuration required on the user’s part. It might also enable small networks between two different homes — between children and their parents or other family members, for example — for something as simple as sharing and printing photos.
NFV and Open Source – what it means
OPNFV is built on open source, and there was a strong showing from the Apache Software Foundation and the Linux Foundation at the event, educating the audience on what it means to run an open source project.
Those foundations emphasized that the developer community needs to be the heart of the project. They reminded the audience of the huge value that comes from an open source approach. Open source allows, and indeed requires, an invigorated, passionate, committed and organized developer community. They believe (as we would expect) that open source results in better software, with better security (because there are more eyes on the code), and much more nimble development with frequent releases. While there should always be organizational oversight, the developer community leads the way. Vendor lock-in will just slow down development, and NFV needs to move very fast to succeed.
But there are still non-believers. The usual FUD claims that open source has no quality or quality control, that development is slow, and that the software has to be given away for ‘free.’
But this has been disproven time and again. Without open source we would not have Google, Amazon, Netflix, eBay, PayPal, Salesforce, LinkedIn, Twitter, Facebook or Uber. Open source is good for the end user and commoditization through open source is driving business innovation.
Open source is good for the end user and drives innovation
Open Source is not just a license, or code development. It’s about community and collaboration. And on the subject of licenses: OPNFV licensing is based on the Apache License, a liberal, business-friendly license. It requires attribution, includes a patent grant, and is easily re-used by other projects and organizations.
Offering a slightly more cautionary note, EMC pointed out that OPNFV is doing a great job, but is only solving part of the problem. With the huge amount of commercial investment already put into solutions such as instrumentation, monitoring, management and billing, they felt that it is not feasible to imagine OPNFV being successful without these types of features being available. And they felt it didn’t make sense to rebuild everything that already exists. EMC advocates the integration and coexistence of open source and commercial components, and to that end they also took the opportunity to announce that they were releasing RackHD as an open source version of their OnRack product.
Skills and Mindset gap in the NFV world
The Strategic Technology panel at the OPNFV summit included representatives from OpenStack, OpenDaylight, CoreOS, PLUMgrid, and Databricks. The subject that resonated for us on this panel was the skills and mindset gap, both on the carrier / operator side and on the enterprise applications development side. There is both a skill set shortage and a mindset gap industry-wide when trying to understand how applications are built on top of NFV networks.
On the enterprise applications development side, the growth of technologies like containers, for example, has required development teams to understand how to build on top of complex networks where they don’t have complete control over the whole network. To be successful, those teams have to think about the network and infrastructure they operate within. So for the enterprise developer community, it’s critical to move the OPNFV community learnings upstream and help solve new NFV use cases at the application level.
But there is also a problem around skills and mindset on the carrier / operator side too. Teams just don’t understand what’s involved. And that’s to be expected. Any time there’s a big technology shift, there’s always a skills gap. So one of the highest priority issues is that carriers / operators have problems finding people who understand how to manage infrastructure in this new way required by NFV.
Jonathan Bryce, Executive Director of the OpenStack Foundation, illustrated this gap by describing conversations he has with clients about integrating OpenStack technology, and the cultural challenges those companies face. These days, carrier / operator teams can no longer require three months’ notice before they deploy new technology. Technology needs to be deployed in a matter of hours, or even minutes, not months. This is a lot easier for startups and small companies than it is for large companies. To solve this, OpenStack has seen some of its clients focus heavily on cultural change, going so far as to put operational policy barriers in place if someone wants to, for example, provision a physical machine instead of a machine in the cloud. At the same time, companies are going further than policy changes, using education at all levels to make change happen and to get buy-in. But just because something is mandated doesn’t mean it will happen.
What’s next for NFV? Hopes, aspirations, and challenges
A “town hall” event brought together a group to talk about what’s next for NFV. They were universally optimistic, given the previous year’s progress, about what the next year could bring for OPNFV.
Here’s a quick summary of their thoughts.
Prodip Sen, CTO of NFV at Hewlett Packard: Mr Sen is looking to next year for growth in the community. He has seen a great deal of progress so far and hopes it continues. He hopes that next year will bring a release that is useful to users and can be used for something real: it’s not about whether the entire feature set is there, but whether that release is stable, robust, and usable. He is optimistic that will happen in the next year. He also hopes OPNFV can be a place where debate and confusion can be discussed and resolved together.
Lingli Deng, Research Engineer at China Mobile: Ms Deng has seen interesting progress in upstream communities trying new things, but wants to see more. She hopes that more users will join the OPNFV project, and that those users will provide more specifics about their use case demands so that companies like China Mobile can build real test beds for applications. Many people approached her at the Summit about doing exactly this, so she is optimistic it will happen. She doesn’t expect things to always go smoothly, and cautions that there will always be obstacles, but the team at China Mobile is motivated to make things happen. She tells us that China Mobile is very committed to open source and commercial NFV deployment; indeed, they are in the process of building a standalone nationwide cloud for NFV testing, evaluation and trials.
Morgan Richomme, NFV Architect at Orange: Mr Richomme is looking for more developers, more test cases, more labs, and more analytics to allow a move to a more robust and scalable solution. He’s seen so much progress in one year that he is optimistic for the next year, and for exciting things to come out of the lab and into the real world.
Margaret Chiosi, Distinguished Network Architect at AT&T: Ms Chiosi hopes for a more robust build structure in the coming year, and also points to the need to drill deeper on the use cases, traffic profiles and benchmarking needed for data plane acceleration. She sees that challenges remain in getting the community to agree on a common set of traffic profiles, for example, and that agreement is needed if there is to be a robust solution.
Sandra Rivera, VP and General Manager, Network Platforms Group, Intel: Ms Rivera sees the focus moving ahead as developer outreach; her hope is to triple the number of developers contributing to the OPNFV project in the next twelve months. Developers, she reminds us, are the ones who make it real and who will deliver something useful on which commercial solutions can be deployed, so that real network transformation can occur.
OPNFV at Daitan Group
At Daitan, we have spent years building projects that span the Telco and Cloud world, so it was really exciting to participate in the discussions around what’s next at the summit. In our own work for clients, we’ve already seen first-hand the growth in choices of tools and components such as OpenStack, OpenDaylight, and OpenvSwitch, as we have integrated usage of those components in projects for our clients.
And while standardization and interoperability traditionally take a long time to happen, they drive welcome change and bring new technologies to the forefront that allow specialty Telco and cloud service providers like us to accelerate software development.
We have no doubt that OPNFV’s traction will continue to grow, and we will be staying on top of developments as they happen. We expect to see new components and tools come online quickly, and we will be helping our clients take advantage of that. For some insights from last year, review our White Paper Attesting to the Benefits of NFV: Building integrated cloud communications services.
If you have specific questions about OPNFV, please drop us a line.