Network virtualization for the Cloud: Open vSwitch study

Faced with the current reality of ten-thousand-node data centers and all the Big Data jazz, it seems like the network guys were slightly forgotten. We have plenty of hardware virtualization solutions, but until now the network was left on the outskirts of the cloud hype. Let’s see what we can use right now and whether it will get better in the future.

When people talk about network virtualization nowadays, one name immediately springs to mind: Nicira. They invented OpenFlow and Open vSwitch (OVS) and… were acquired by VMware.

Why Nicira? They essentially designed the current state of network virtualization. OpenFlow is implemented in physical hardware and OVS is used by a lot of people to drive the software network stack in virtualized environments.

But is it any good? Let’s see. If you open the OpenFlow specification, it looks simple: cut the hardware’s involvement off at the Ethernet level and implement all other features in software. We essentially write a program (handler) that matches some fields in a packet and acts according to simple rules: forward to a port, drop, or pass to another handler. But then, how do you install these handlers inside the switch? The solution is also not that complicated: you write another, more complex piece of software that runs on something generic (like a PC). It chooses handlers for particular flows by issuing commands to the switch; when the switch encounters something it has no handler for, it just passes it to this PC (the controller), and the controller either chooses a new handler for the switch or processes the packet internally.
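To make the model concrete, here is a minimal Python sketch of that match/action idea, including the table-miss path up to a controller. All names here are mine and purely illustrative; this is not code from any real OpenFlow implementation.

```python
# Sketch of the OpenFlow idea: match some packet fields, apply a
# simple action, and punt unknown flows to a controller.
FORWARD, DROP = "forward", "drop"

class FlowTable:
    def __init__(self, controller):
        self.rules = []          # list of (match_dict, action, port)
        self.controller = controller

    def install(self, match, action, port=None):
        self.rules.append((match, action, port))

    def process(self, packet):
        for match, action, port in self.rules:
            # a rule matches if every field it specifies agrees with the packet
            if all(packet.get(k) == v for k, v in match.items()):
                return (action, port)
        # table miss: ask the controller, which may install a new rule
        return self.controller(self, packet)

def learning_controller(table, packet):
    # toy controller: forward this destination to port 1 from now on
    table.install({"eth_dst": packet["eth_dst"]}, FORWARD, 1)
    return (FORWARD, 1)

table = FlowTable(learning_controller)
table.process({"eth_dst": "aa:bb", "eth_src": "cc:dd"})  # miss -> controller
table.process({"eth_dst": "aa:bb", "eth_src": "ee:ff"})  # handled by new rule
```

The important property is the split: the matching loop is dumb and fast (and can live in hardware), while all the policy sits in the controller.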

OpenFlow switch


What do we see here? There is an execution platform inside the hardware for running the network handlers, and a controller that chooses the handler for each flow. It looks very promising and flexible, and it can be implemented not only in hardware but also purely in software. The same guys did exactly that in OVS, so shall we peek inside? Yes, I answered myself, and downloaded the OVS source.

When I looked inside the code I was a little… how should I put it… surprised? The OVS code has everything but the kitchen sink inside: a serializable hash table, a JSON parser, VLAN tagging, several QoS implementations, an STP implementation, a Unix socket server, an RPC-over-SSL server and, the icing on the cake, their own database with a key/value + columnar storage engine. And everything is implemented from scratch (or so it seems).

Ok, they have enough shock value already, but how does this thing work? It turns out the operation is not that different from the hardware setup I described above: there is a kernel module instead of actual hardware, and the flow handlers are just functions inside that module. It looks like this.

Open vSwitch

The daemon uses netlink to talk to the kernel module and install the handlers, the database stores the configuration, and the controller talks to the daemon via OpenFlow or plain JSON.
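The kernel/userspace split is easy to model: the kernel module keeps a fast exact-match cache of handlers, and on a cache miss it makes an upcall to the userspace daemon, which decides what to do and installs a handler for subsequent packets. A toy Python sketch of that interaction; only the shape of it is real, all the names are mine:

```python
class Daemon:
    """Stand-in for the userspace daemon (think ovs-vswitchd)."""
    def upcall(self, flow_key):
        # consult configuration / OpenFlow rules; here just a toy decision
        return "forward:1" if flow_key[0] == "aa:bb" else "drop"

class Datapath:
    """Stand-in for the kernel module's fast path."""
    def __init__(self, daemon):
        self.cache = {}               # exact flow key -> action
        self.daemon = daemon

    def receive(self, flow_key):
        if flow_key in self.cache:    # fast path: handled "in kernel"
            return self.cache[flow_key]
        # miss: upcall to userspace (over netlink in the real thing)
        action = self.daemon.upcall(flow_key)
        self.cache[flow_key] = action # daemon installs a handler
        return action

dp = Datapath(Daemon())
dp.receive(("aa:bb", "cc:dd"))        # first packet of a flow: slow path
dp.receive(("aa:bb", "cc:dd"))        # the rest of the flow: cache hit
```

Only the first packet of each flow pays the round trip to userspace; everything after that stays on the fast path.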

So we have a software stack for the network; why is it good for virtualization?

Short answer: because everybody uses Linux. And when your hypervisor runs on Linux, why not use some of its capabilities for a nice network boost? But why OpenFlow/OVS?

The OVS docs describe it like this:

Open vSwitch is targeted at multi-server virtualization deployments, a landscape for which the previous stack is not well suited. These environments are often characterized by highly dynamic end-points, the maintenance of logical abstractions, and (sometimes) integration with or offloading to special purpose switching hardware.

But Linux has always had a good network stack, easily manageable and extensible. What are the advantages? There are some.

  • OVS can migrate the network state along with the bridge port (connected to a VM instance, for example). You will lose some packets, but the connections may stay intact.
  • OVS can use metadata to tag frames (VM instance UUID, for example).
  • OVS can install event triggers inside the daemon to notify the controller on the state change (VM reboot, for example).
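The third bullet is essentially a publish/subscribe loop between the daemon and the controller. Here is a hypothetical sketch of how such a trigger could look; none of these names come from the actual OVS interfaces:

```python
class PortEventDaemon:
    """Illustrative daemon that fires callbacks on port state changes."""
    def __init__(self):
        self.ports = {}               # port name -> last known state
        self.listeners = []           # controller callbacks

    def subscribe(self, callback):
        self.listeners.append(callback)

    def set_port_state(self, port, state, metadata=None):
        old = self.ports.get(port)
        self.ports[port] = state
        if old != state:              # only real changes fire the trigger
            for notify in self.listeners:
                notify(port, old, state, metadata or {})

events = []
d = PortEventDaemon()
d.subscribe(lambda port, old, new, meta: events.append((port, old, new, meta)))
d.set_port_state("vnet0", "up", {"vm_uuid": "1234-abcd"})  # VM comes up
d.set_port_state("vnet0", "down")                          # VM reboots
```

The metadata argument is where the second bullet comes in: the controller receives not just "a port went down" but "the port belonging to VM 1234-abcd went down".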

Does it justify a new kernel module + new DB engine + new RPC protocols? Maybe not. But you can’t get these capabilities from any other open source software anyway.

The only question I have right now is why Nicira did not implement STT in OVS, yet did implement it in its proprietary NVP software.

Hadoop on OpenStack Swift: experiments

Some time has passed since our initial post on the Hadoop over OpenStack Swift implementation. A couple of things have changed (Rackspace finally implemented range requests in their Cloudfiles library); others have remained the same (still no built-in support for Hadoop in OpenStack / Cloudfiles).

We got a lot of feedback and questions regarding the integration but did not always have the time or patience to address them properly; sorry for that. But one of our readers, Zheng Xu, did a great job of putting together a slide deck on the exact procedure.

But there are still some points I need to address regarding the procedure he assembled there. It mostly boils down to Cloudfiles: although the current Cloudfiles implementation has HTTP range support, our implementation uses our own code for it, so I really encourage either using our Cloudfiles distribution (with the patches) or changing our Hadoop code to use the new Rackspace one. Simple filesystem tasks will work as expected, but any MapReduce job that works with big files will fail without correct HTTP range support.
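For context on why range support is critical: each Hadoop map task reads only its own split of a large object, which over HTTP means a ranged GET. A small Python sketch of such a request; the URL, token, and helper names are placeholders of mine, not the actual Cloudfiles or Swift API:

```python
import urllib.request

def range_header(offset, length):
    # RFC 7233 byte range: inclusive start and end offsets
    return "bytes=%d-%d" % (offset, offset + length - 1)

def read_split(url, token, offset, length):
    # Fetch one input split of a big object with a ranged GET.
    req = urllib.request.Request(url)
    req.add_header("X-Auth-Token", token)
    req.add_header("Range", range_header(offset, length))
    with urllib.request.urlopen(req) as resp:
        # a server honoring the range answers 206 Partial Content;
        # a plain 200 means it ignored the header and sent the whole object
        if resp.status != 206:
            raise IOError("server ignored the Range header")
        return resp.read()

# e.g. a 64 MB split starting at the 128 MB mark of a large object:
# data = read_split("https://swift.example/v1/AUTH_t/c/bigfile",
#                   token, 128 * 2**20, 64 * 2**20)
```

Without working ranges, every map task would effectively download the entire file, which is exactly why big-file MapReduce jobs fail on a client with broken range support.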

I want to thank Zheng Xu for the effort and congratulate him on the success of his small experiment.