
David Davis of Actual Tech Media interviews our CTO, Dr. Srinidhi Varadarajan, about how IT is stuck choosing between fast but complex network hardware and simple-to-use but slow and expensive network virtualization. The next wave of networking is virtualized at OSI layer 3, giving you the benefits of both without their drawbacks.

Full video transcript:

Actual Tech Media (AM): Hi, I’m David Davis from Actual Tech Media. I’m excited to be joined by Srinidhi Varadarajan, better known as “Doctor V,” the CTO of Cloudistics. How are you doing?

Cloudistics (C): Thank you, David, my pleasure.

AM: Yeah, awesome. So today we’re talking about network virtualization. In traditional enterprise networks, you have virtualization hosts, those hosts have virtual switches connected to physical switches, and you have a whole network team who learns to manage, monitor, and troubleshoot a complex network infrastructure. When you deploy new applications, there is a lot of turn-up time related to the network. There are newer tools like distributed switching that have tried to make management a little easier, but there is still a lot of complexity, and when new applications need dedicated services like load balancing, DHCP, DNS, or firewall and security features, a lot has to happen in the background to make that work. I’m sure people out there are struggling with that network complexity, so I’m excited to learn more about network virtualization and what Cloudistics Ignite and its network virtualization can do.

So, would you start by telling us a little bit about how the traditional network compares to VMware’s NSX solution, which uses VXLAN, and how that compares to Cloudistics Ignite network virtualization?

C: Absolutely. Thanks, David. And you’re right: networking is probably the most complex component of a datacenter infrastructure. We’ve had it for 30 years, it has always been the domain of the arcane, and it will continue to be, because it really is that complex underneath. The worldwide network that has been set up, the internet, lets every component in it talk to every other. So if I look at a typical traditional network, there are some design principles that go into it, because the network connects everything in your infrastructure: it connects your storage, it connects your compute, it connects your virtualization. And when the network goes down, your whole system goes down, because there is no connectivity.

AM: Right

C: So, as network engineers, or even network designers, we are typically very risk averse, because the goal is to make sure the network never fails.

So, we have general-purpose networks that connect everything. They are not necessarily best suited for any one application, but they will be reliable and will continue to operate. Now, what would you want in an ideal world? In an ideal world you want a networking system where every application you deploy ends up in its own network. Then I can tailor the security to that application. I can even set up quality of service so that an application’s traffic gets preferential treatment, if the business determines that should be the case.

AM: Right

C: Getting to that ideal world means I have to change my network many times a day: 10, 20, 30 times a day. And if I did that in the underlying network, the physical network would simply fail. Some piece in there would not reach what is called “convergence,” you would end up losing connectivity to machines, and the result is a flaky network with many different services going down. So we can’t do that in the physical world.

Back in 2012, VMware started down the path of network virtualization, saying: “We’re virtualizing everything else. We virtualize compute, we virtualize servers, we virtualize storage, but networking is still physical.”

The idea of network virtualization is that I should be able to take a bunch of virtual machines and create the network between them entirely in software. It has almost nothing to do with the physical network: it is a completely different set of addresses, and I can create it on the fly from a user interface. That is really simple. All the heavy lifting is done under the covers, and these virtual networks can be deployed very rapidly; they can be built in a matter of minutes. It is very simple and very powerful. Just as server virtualization could spin up or tear down an entire machine in seconds, the idea of network virtualization was to bring that same power to the application. We buy the simplicity argument; it is actually a great argument, and it works very well in practice.

The problem with network virtualization, to this day, has been performance. You lose a lot of performance. Think of the traditional virtualization world: this is like going back to the nineties. If you wanted simplicity, you went virtual; if you wanted performance, you went physical. The same problem seems to plague network virtualization today, and the reason is the way NSX operates and how network virtualization is actually implemented. Say you have a virtual machine, and assume that this virtual machine is on a virtual network. The virtual machine sends a packet out, and this is a regular, full layer-2 Ethernet packet, the same packet that goes on a physical network. But to do network virtualization, that packet is made the payload of a bigger packet. Basically, you take this packet and put it inside a larger one, and the larger packet has its own IP header, its own UDP header, its own frame.

AM: So it’s encapsulated, like a GRE tunnel, essentially.

C: Exactly, it is encapsulated. VXLAN is another encapsulation mechanism like GRE, and STT is a third. All three techniques for network virtualization we have today rely on tunneling. Now, the problem with tunneling is, simply put, performance. Making that copy in order to do the encapsulation in software is very, very expensive. It works fine at one-gigabit line rate, but if you go to ten gigabits you see 50 to 60 percent overhead. If you go to 40 gigabits you can see 80 to 90 percent overhead. The system simply doesn’t scale.

AM: And you’re also increasing the size of all your packets.

C: And the network, in fact, will drop the packet, because it will not be able to transfer the larger packet.

AM: And then that fragmentation leads to decreased performance.

C: Decreased performance again, because I have to wait for both parts of the fragment and join them back together before I can deliver anything upstream.

So it is this stack that eventually robs the system of performance.
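To make the “bigger packet” point concrete, here is a small back-of-the-envelope sketch (not from the interview) of what VXLAN-style encapsulation adds to every packet; the 1500-byte MTU is an assumed, typical value:

```python
# Standard header sizes for VXLAN-style layer-2 tunneling; the physical MTU
# below is an assumed, typical value.
OUTER_ETH = 14    # outer Ethernet header (no 802.1Q tag)
OUTER_IP = 20     # outer IPv4 header, no options
OUTER_UDP = 8     # outer UDP header
VXLAN_HDR = 8     # VXLAN header
OVERHEAD = OUTER_ETH + OUTER_IP + OUTER_UDP + VXLAN_HDR  # 50 extra bytes per packet

PHYS_MTU = 1500   # assumed physical-network MTU

def encapsulated_size(inner_frame_bytes: int) -> int:
    """Size on the wire after the inner frame is wrapped in the tunnel."""
    return inner_frame_bytes + OVERHEAD

inner = 14 + PHYS_MTU                  # a full-size inner Ethernet frame: 1514 B
outer = encapsulated_size(inner)       # 1564 B on the wire
outer_ip_packet = outer - OUTER_ETH    # 1550 B, which no longer fits a 1500 B MTU
print(f"inner {inner} B -> outer {outer} B, needs fragmentation or jumbo frames:",
      outer_ip_packet > PHYS_MTU)
```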

So, three years ago, what we did, essentially, was look at this problem and ask: “Why are we doing encapsulation at layer 2?” Network virtualization should not have been done at that layer at all. It should have been done at layer 3. The simple way of looking at it is: I am trying to create a virtual IPv4 network, and my physical network is already IPv4, so I should be able to translate from one to the other, instead of treating the underlying network like a dial-up modem and encapsulating everything to send it through. And there lies the crux of our technology. We do network virtualization at layer 3: we make minor modifications to the header of the IP packet, particularly the source and destination addresses. What this gives us is the same capabilities of network virtualization that you get with NSX, but without any of the performance loss.
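As an illustration only, here is a toy model of the contrast between tunneling and rewriting addresses in place; the mapping table, addresses, and function names are invented for the example and are not Cloudistics’ actual AON implementation:

```python
# Illustrative only: a toy model of "rewrite the IP addresses in place" instead
# of wrapping the packet in a tunnel. The mapping table, addresses, and packet
# representation are invented; this is not Cloudistics' actual AON code.

# virtual (per-application) address -> address routable on the physical fabric
ADDRESS_MAP = {
    "10.1.0.5": "192.168.50.5",
    "10.1.0.9": "192.168.50.9",
}
REVERSE_MAP = {phys: virt for virt, phys in ADDRESS_MAP.items()}

def egress_rewrite(packet: dict) -> dict:
    """Rewrite src/dst as the packet leaves the host; the size never changes."""
    return {**packet,
            "src": ADDRESS_MAP[packet["src"]],
            "dst": ADDRESS_MAP[packet["dst"]]}

def ingress_rewrite(packet: dict) -> dict:
    """Restore the virtual addresses on the receiving host."""
    return {**packet,
            "src": REVERSE_MAP[packet["src"]],
            "dst": REVERSE_MAP[packet["dst"]]}

pkt = {"src": "10.1.0.5", "dst": "10.1.0.9", "len": 1514, "payload": b"hello"}
on_wire = egress_rewrite(pkt)
assert on_wire["len"] == pkt["len"]       # no encapsulation, no growth, no fragmentation
assert ingress_rewrite(on_wire) == pkt    # the receiving side sees the original packet
```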

AM: So, the sizes of the packets don’t change?

C: They remain exactly the same.

AM: There is no fragmentation?

C: There is no fragmentation, and the packets are fully interoperable with your existing routers as well, which is one of its biggest strengths: the packets look no different.

The second part of it is full line-rate performance. We operate at one gig on one-gig networks, ten gig on ten-gig networks, 40 gig on 40-gig networks, and a hundred gig on hundred-gig networks.

The other advantage is that you no longer have to choose: “If I want performance I go physical, and if I want simplicity I go virtual.” The same virtual networking technology gives you the performance of physical networks. What this lets us do is deploy virtual networking everywhere. We don’t just use virtual networking to connect applications together, we even use virtual networking to talk to storage. If you tried to do that today at 40 gigabits per second with NSX, the overhead means you are not going to get there, whereas with our technology everything is virtualized.

AM: So – complete network virtualization?

C: Complete network virtualization. In fact, the simplest demo we have is this: take a laptop on your wireless network and RDP into a Windows server that happens to be virtualized. The Windows server is running on a virtual network, and you cannot tell whether it is physical or virtual. The performance is exactly the same, the packets look exactly the same, and if you capture the packets anywhere along the wire, they look exactly like any other network’s packets.

AM: That’s awesome, that’s awesome!

C: The third thing we provide inside the system is called micro-segmentation. The idea is that you should be able to create very small networks that are tightly coupled to the application itself. Consider a web tier, an app tier, and a database tier, where the app tier happens to be JBoss. Ideally you would like a dedicated network for the JBoss tier that allows only the security privileges the JBoss application needs to run. In terms of a firewall, you open only the ports the application needs, instead of opening all sorts of ports and exposing yourself to attack. This gives you a zone defense.
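As a hypothetical illustration (not a Cloudistics API), a micro-segment policy for the JBoss tier boils down to a small, default-deny allow-list enforced wherever that application’s VMs run; the port numbers below are typical JBoss examples:

```python
# Hypothetical illustration, not a Cloudistics API: a micro-segment policy is a
# small, default-deny allow-list tied to one application tier and enforced
# wherever that tier's VMs run. Ports are typical JBoss examples.

JBOSS_SEGMENT = {
    "allow_inbound": {8080, 8443},    # the application's HTTP/HTTPS ports
    "allow_outbound": {5432},         # e.g. the database tier's PostgreSQL port
}

def permits(policy: dict, direction: str, port: int) -> bool:
    """Default deny: traffic passes only if its port is explicitly listed."""
    return port in policy.get(f"allow_{direction}", set())

assert permits(JBOSS_SEGMENT, "inbound", 8080)        # app traffic allowed
assert not permits(JBOSS_SEGMENT, "inbound", 22)      # everything else is dropped
```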

AM: Some people have called this “application defined networking”?

C: Exactly. That is exactly where micro-segmentation leads: application-defined networking, where each application puts out a profile of what it expects from the network and what it wants from a security perspective, and the network layer delivers on that directly. That is another thing we do very well in AON, our adaptive overlay network virtualization technique.

Lastly, networks also have a lot of services. Just setting up the network is only the first step of a long journey. You set the network up, and after that all these network services are needed: you typically need a DHCP service just to get addresses on the network, you need NAT, you need firewall profiles, you may need load balancers. One of the nice things about the Ignite implementation is that all of these services are built in. When you deploy your network, you just check a box and say, “Give me a DHCP server, here is an address range.” Check a box and say, “Deploy a firewall on this entire network, and this is the profile I want.” Any VM that runs in that network automatically has that firewall running under it. So if I take the earlier example of JBoss, I open only the ports I need for JBoss, and it doesn’t matter where the VM runs or on which hypervisor. Because the VM belongs to that particular network, which has a security profile, only the rules and ports that application needs to run are open.
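Here is a hypothetical, declarative sketch of that “check a box” idea, with the network and its built-in services described as one spec; the field names and values are invented for illustration and are not the actual Ignite interface:

```python
# Hypothetical, declarative sketch of the "check a box" idea: a virtual network
# and its built-in services described as one spec. Field names and values are
# invented for illustration; this is not the actual Ignite interface.

app_network_spec = {
    "name": "jboss-app-tier",
    "subnet": "10.1.0.0/24",
    "services": {
        "dhcp": {"enabled": True, "range": ("10.1.0.50", "10.1.0.200")},
        "nat": {"enabled": True},
        "firewall": {"enabled": True, "profile": "jboss-only"},  # only ports 8080/8443
        "load_balancer": {"enabled": False},
    },
}

def enabled_services(spec: dict) -> list:
    """List the services this virtual network brings up along with it."""
    return [name for name, cfg in spec["services"].items() if cfg["enabled"]]

print(enabled_services(app_network_spec))   # ['dhcp', 'nat', 'firewall']
```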

AM: So, the virtualization admin or application owner doesn’t have to fill out a form and give the network group a list of all the different services and firewall ports they need open; they can essentially do it themselves.

C: That’s correct.

AM: Self-service network configuration.

C: And yet the underlying physical network, which is managed by network administrators, does not see this complexity, and they can define their security policies independent of the applications being deployed on it. From a management perspective, it is much easier for them to build a network at the physical layer that serves everybody’s needs and let network virtualization take care of the specialized needs. The person who knows the application best is the application owner, and they can specify the rules themselves without ever having to drop down to the physical layer.

AM: So, what kind of application services would you be able to deploy? You said – DHCP. What else?

C: We deploy DHCP; we deploy network address translation, where you can specify your inside and outside ports; and you can specify distributed firewalls. There is no single choke point, and the firewall runs everywhere a VM belonging to that particular network runs. And load balancers: in a load-balanced network, new virtual machines that come onto the network automatically get added to the load balancer and expand the pool, so it is not another manual element you have to configure. It is all these little things that make a big difference in managing what a network is today.
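As a toy sketch of that “new VMs join the pool automatically” behavior, assuming a simple VM-started event hook (the class and method names are invented, not the actual Ignite mechanism):

```python
# Toy sketch of "VMs that join the network are added to the load-balancer pool
# automatically". The event hook and class names are invented; this is not the
# actual Ignite mechanism.

class LoadBalancerPool:
    def __init__(self, network_name: str):
        self.network_name = network_name
        self.members = []               # back-end addresses currently in the pool

    def on_vm_started(self, vm_ip: str, vm_network: str) -> None:
        """Called whenever a VM comes up; pool membership follows the network."""
        if vm_network == self.network_name and vm_ip not in self.members:
            self.members.append(vm_ip)  # the pool grows with no manual step

pool = LoadBalancerPool("jboss-app-tier")
pool.on_vm_started("10.1.0.51", "jboss-app-tier")
pool.on_vm_started("10.1.0.52", "jboss-app-tier")
print(pool.members)                     # ['10.1.0.51', '10.1.0.52']
```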

AM: Right. And what if I clone a virtual machine or that group of virtual machines, will all those application services follow the clone?

C: The application services will follow the clone, because a template gets built out of it that defines all the components, and the system follows that as well.

AM: Real software-defined networking.

C: Complete software-defined networking.

AM: Excellent. So what are some of the other benefits here?

C: The main business benefit, whether you look at it from a network virtualization perspective or a virtualization perspective in general, is simplicity. Just operational simplicity. Deploying a physical network with a series of services can take days to weeks in very agile shops, and in most companies that have to go through a series of policies it takes months. With this, it takes minutes. It is the sheer agility of being able to deploy a network in a span of seconds to minutes that makes it very simple and very powerful. And if you make a mistake, your physical network connectivity still continues: you can always destroy the virtual network and reconfigure it with no impact on general connectivity.

AM: So, tremendous simplicity and efficiency, not just for the network administrator but also for the application owner; greater agility for the business, since they can deploy new applications faster and get them up and running; and security, since those applications are secure.

C: That’s correct.

AM: And then there’s performance. The applications are going to perform at high speed with these network services in place.

C: That’s correct. They are going to operate pretty much at line rate. If you took the entire system and deployed it in the physical world, you would see no performance benefit from that exercise, and you would be giving up all the simplicity. That is what is nice about it: we have line-rate performance whether you are virtual or physical, and it operates exactly the same. In fact, our current SDN routers do network virtualization at one and a half terabits per second.

AM: Awesome, awesome! So, for people who want to learn more, I mean, I’m intrigued to see what Cloudistics network virtualization can do, what would you recommend?

C: I would recommend taking a look at our website, Cloudistics.com/demo. You can schedule a demo or even get access to our hardware in a virtual lab, where you can deploy your own networks and see how it all operates.

AM: Very nice. Well, thank you very much, Doctor V.

C: Thank you.

AM: For more information, visit Cloudistics.com/demo.
