On this page, we provide detailed instructions on how to install and run Pwnetizer.

Overview

Our KVM-based OpenStack Pwnetizer prototype consists of the four software components listed below, totaling 3,796 lines of code (LOC).

  • Modified OpenStack – a cloning-aware OpenStack implementation. The modifications are minimal (194 Python lines of code on top of OpenStack Essex's original 151,227 LOC).
  • PwnetizerLibvirt – a modified Libvirt that runs on each host and orchestrates the entire cloning procedure. It required an extra 2,362 lines of C code on top of Libvirt's original 484,349 LOC.
  • PwnetizerServer – manages the networked share where VM disk files are stored and takes care of efficient disk cloning. It is materialized in 1,109 lines of C code.
  • PwnetizerClient – runs inside every VM and takes care of cloning detection as well as network reconfiguration. It is a small program, taking up only 131 lines of Java Code.

Pre-Configured VMs

We have created a 3-node OpenStack deployment based on VirtualBox VMs, which already have Pwnetizer VM Cloning configured in them. This is the easiest way to try Pwnetizer out. You can download it here: pwnetizer_VMs.ova.

VirtualBox Network Configuration
Before you import the VMs into VirtualBox, you must set up the network as follows:

1. Go to VirtualBox's Preferences and click on the Network tab.

2. Create adapter vboxnet0, set IPv4 Address: 172.16.0.254, IPv4 Network Mask: 255.255.0.0, IPv6 Network Mask Length: 0, disable DHCP Server.

3. Create adapter vboxnet1, set IPv4 Address: 10.0.0.1, IPv4 Network Mask: 255.0.0.0, IPv6 Network Mask Length: 0, disable DHCP Server.
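If you prefer the command line, the same host-only adapters can also be created with VBoxManage. This is only a sketch, assuming a fresh VirtualBox install where vboxnet0 and vboxnet1 do not exist yet; the dhcpserver remove commands only apply if VirtualBox created DHCP servers for the new adapters.

VBoxManage hostonlyif create

VBoxManage hostonlyif ipconfig vboxnet0 --ip 172.16.0.254 --netmask 255.255.0.0

VBoxManage dhcpserver remove --ifname vboxnet0

VBoxManage hostonlyif create

VBoxManage hostonlyif ipconfig vboxnet1 --ip 10.0.0.1 --netmask 255.0.0.0

VBoxManage dhcpserver remove --ifname vboxnet1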

Importing Pwnetizer VMs into VirtualBox
Download the pwnetizer_VMs.ova file, which contains all three pre-configured VMs. Then launch VirtualBox, go to File –> Import Appliance…, and follow the on-screen instructions.
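Alternatively, the appliance can be imported from the command line (run from the directory containing the downloaded file):

VBoxManage import pwnetizer_VMs.ova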
Using the Pwnetizer VMs
Here are some things that you should keep in mind when using the provided VMs:
  • Essex1_pwnetizer must be the first VM to be turned on and the last one to be shut down. You should only start Essex2_pwnetizer and Essex3_pwnetizer once Essex1_pwnetizer has finished booting up. Likewise, you should only turn off Essex1_pwnetizer once Essex2_pwnetizer and Essex3_pwnetizer are both turned off.
  • In all three VMs, the root user has securityondemand as its password. You can SSH into Essex1_pwnetizer by executing ssh root@172.16.0.1. The same can be done with Essex2_pwnetizer and Essex3_pwnetizer by using 172.16.0.2 and 172.16.0.3, respectively.
  • The OpenStack dashboard can be found at http://172.16.0.1. The dashboard's user is demo and its password is openstack.
  • You can run PwnetizerServer inside the Essex1_pwnetizer VM by executing /root/PwnetizerServer_v3/src/PwnetizerServer.
  • Because of the NFS share's configuration, Essex1 acts like a black hole. VMs can be migrated/cloned TO Essex1, but not FROM it. Given that VMs can be migrated/cloned TO and FROM Essex2 and Essex3, this shouldn't be much of a problem.
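For example, to log into the controller from your host machine and start PwnetizerServer, using the default IP and path listed above:

ssh root@172.16.0.1

cd /root/PwnetizerServer_v3/src/

./PwnetizerServer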

Now, you can skip to Step 4 of the Manual Installation to finish setting up your Pwnetizer test bed.

Manual Installation

If you wish to configure the Pwnetizer mod yourself, here are step-by-step instructions on how to do it.

Note: most of the files we provide are compressed in RAR format. To extract their contents, you need the unrar package on Linux (e.g. sudo apt-get install unrar on Ubuntu).

PREREQUISITES
There are two possible setups:
  1. Running multiple VirtualBox VMs on top of a single machine.
  2. Installing OpenStack on two or more physical hosts within the same LAN.

We recommend the first approach and assume that you already have an OpenStack deployment configured.

STEP 0: Getting Started
We highly recommend using Ubuntu or another Linux distribution to do this; we used Ubuntu 12.04 LTS throughout. We also assume that you will be downloading and extracting everything into your HOME directory, so replace "~" with your folder of choice if necessary.

Every OpenStack node should have its firewall disabled to avoid connectivity problems. Run the following command on every host/VM:

sudo ufw disable

STEP 1: Installing PwnetizerLibvirt and Modified KVM
You need to install the modified Libvirt (pwnetizer_libvirt_v3.rar) and modified qemu-kvm (qemu-kvm-0.11.0-modded.rar) on every single OpenStack node. You'll also need libvirt_configs.rar to correctly configure Libvirt after installing the modified version.

INSTALLING MODDED LIBVIRT

You should do this on every single OpenStack node.

Extract the code:

unrar x pwnetizer_libvirt_v3.rar

Modifying the code:

If your NFS share is not running on 172.16.0.1, you must update the PWNETIZER_NFS_SERVER variable in pwnetizer_libvirt_v3/src/qemu/qemu_pwnetizer.h before compiling the program.

All the database-related variables (e.g. DB_HOSTNAME) can be safely ignored, as they aren't used by default.
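For example, you can check the value that will be compiled in with grep (paths assume the archive was extracted into your HOME directory) and then change it with your editor of choice:

grep -n PWNETIZER_NFS_SERVER ~/pwnetizer_libvirt_v3/src/qemu/qemu_pwnetizer.h

nano ~/pwnetizer_libvirt_v3/src/qemu/qemu_pwnetizer.h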

Compile the code:

sudo apt-get update

sudo apt-get install build-essential

sudo apt-get install libxml2-dev gnutls-bin libgnutls-dev libdevmapper-dev python-dev libnl1 libnl-dev libyajl-dev libmysqlclient-dev

sudo apt-get install pkg-config

sudo apt-get install pm-utils

cd pwnetizer_libvirt_v3

./configure --prefix=/usr

sudo apt-get install make

make

The make command will likely emit various warnings. That's OK: gcc is a bit strict with qualifiers, but the code doesn't have any bugs that we are aware of.

Install PwnetizerLibvirt:

sudo make install


COPYING OVER GOOD LIBVIRT CONFIGURATION

You should do this on every single OpenStack node.

Extract the configuration files:

unrar x libvirt_configs.rar

Copy them over to the correct location:

cd libvirt_configs

sudo cp *.conf /etc/libvirt

Restart Libvirt to reflect the changes:

sudo service libvirt-bin restart


INSTALLING MODDED QEMU-KVM

You should do this on every single OpenStack node.

Extract the code:

unrar x qemu-kvm-0.11.0-modded.rar

Compile the code:

sudo apt-get update

sudo apt-get install libc6

sudo apt-get install zlib1g-dev libglib2.0-dev

cd qemu-kvm-0.11.0-modded

./configure --prefix=/usr

make

It is normal to get the following error with the configure command: "Error: libpci check failed. Disable KVM Device Assignment capability."

Install the modified KVM:

sudo make install

sudo reboot


END RESULT

You should do this on every single OpenStack node.

Executing "virsh version" should give the output shown below:

Compiled against library: libvir 0.9.9

Using library: libvir 0.9.9

Using API: QEMU 0.9.9

Running hypervisor: QEMU 0.11.0

If you get an error saying "virsh: /usr/lib/libvirt.so.0: version `LIBVIRT_PRIVATE_0.9.9' not found (required by virsh)", here's a fix: http://web.dit.upm.es/vnxwiki/index.php/Vnx-install-trobleshooting

STEP 2: Installing PwnetizerServer
You need to run an instance of PwnetizerServer (pwnetizerserver_v3.rar) on OpenStack's controller node or wherever the NFS share is installed.

PREPARATIONS

Extract the files:

unrar x pwnetizerserver_v3.rar

Prepare the mirroring folder:

Create a folder where the NFS mirror will be maintained. The default folder is /var/lib/nova/mirror/:

sudo mkdir -p /var/lib/nova/mirror/

Modify the code:

If (1) you plan on using a mirroring directory other than /var/lib/nova/mirror/ or (2) your NFS share is not located in /var/lib/nova/instances/, you must update the NFS_ROOT and NFS_MIRROR constants in ~/PwnetizerServer_v3/src/PwnetizerServer.c before compiling the program.

All the database-related variables (e.g. DB_HOSTNAME) can be safely ignored, as they aren't used by default.
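As in Step 1, grep is a quick way to locate these constants before editing (paths assume the HOME-directory layout used throughout this guide):

grep -n -e NFS_ROOT -e NFS_MIRROR ~/PwnetizerServer_v3/src/PwnetizerServer.c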


COMPILING IT

cd ~/PwnetizerServer_v3/src/

gcc -pthread -o PwnetizerServer PwnetizerServer.c `mysql_config --cflags` `mysql_config --libs`


RUNNING IT

cd ~/PwnetizerServer_v3/src/

sudo ./PwnetizerServer

Two log files will be generated while PwnetizerServer is running, which are especially useful when troubleshooting:

  • ~/PwnetizerServer_v3/src/log_communications.txt keeps a log of all connection attempts and PwnetizerServer's interactions with PwnetizerLibvirt.
  • ~/PwnetizerServer_v3/src/log_mirroring.txt keeps a log of all the file synchronization threads (i.e. rsync commands).

You should check that there are no errors in the aforementioned log files the first time you run PwnetizerServer.
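For example, you can watch the communication log in real time while triggering a cloning operation:

tail -f ~/PwnetizerServer_v3/src/log_communications.txt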

Step 3: Configuring OpenStack for cloning
CREATING A BACKUP

To create a backup of all necessary files, run the following command, substituting <BACKUP_DIRECTORY> with the directory where you want all original files to be stored. You should do this on every single OpenStack node:

cp /usr/lib/python2.7/dist-packages/nova/compute/manager.py \
   /usr/lib/python2.7/dist-packages/nova/compute/pwnetizer_stuff.py \
   /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py \
   /usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py \
   /etc/nova/nova.conf \
   /usr/lib/python2.7/dist-packages/nova/virt/libvirt.xml.template \
   <BACKUP_DIRECTORY>


SETTING UP THE CODE

You should do this on every single OpenStack node.

Download openstack_mod_v3.rar and extract its contents:

unrar x openstack_mod_v3.rar

Before you copy over the modified files, you may need to modify the code found in OpenStack_mod_v3/compute/pwnetizer_stuff.py (a quick way to inspect the relevant settings is shown after this list):

  • If the OpenStack private IP segment does not have 10.1.1.255 as its broadcast address, you must change the BROADCAST_IP constant accordingly.
  • If Nova DB is running on a server other than 172.16.0.1, you must replace every occurrence of 172.16.0.1 with the correct IP address.
  • If the [port, user, password, db] combination used by Nova DB is not [3306, "nova", "openstack", "nova"], you must change every occurrence of that combination to the correct values.
  • If the OpenStack NFS share is not mapped to /var/lib/nova/instances/, you must change the MAGIC_FILE constant accordingly.
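For example, the following command lists every line containing the defaults mentioned above, so you can review them in one pass:

grep -n -e BROADCAST_IP -e MAGIC_FILE -e 172.16.0.1 -e 3306 ~/OpenStack_mod_v3/compute/pwnetizer_stuff.py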

After all necessary changes are made, the files can be copied to their correct locations:

cd OpenStack_mod_v3

cp compute/manager.py /usr/lib/python2.7/dist-packages/nova/compute/manager.py

cp compute/pwnetizer_stuff.py /usr/lib/python2.7/dist-packages/nova/compute/pwnetizer_stuff.py

cp virt/libvirt/connection.py /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py

cp virt/libvirt/utils.py /usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py


MINOR CONFIGURATION CHANGES

You should do this on every single OpenStack node.

We require all VM instances to begin with a NATed interface. First, make sure that all hosts start the NAT gateway by default:

sudo virsh net-list --all

The command should return a listing like the sample below (formatting varies slightly between libvirt versions), with the default network listed as active and set to autostart:
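 Name                 State      Autostart
-----------------------------------------
 default              active     yes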

If not, run the following commands:

sudo virsh net-autostart default

sudo virsh net-start default

Now, we need to copy a modified XML instance description to assign NAT bridges to our VMs. Download libvirt.xml.rar and execute the following:

unrar x libvirt.xml.rar

cp libvirt.xml.template_original /usr/lib/python2.7/dist-packages/nova/virt/libvirt.xml.template

A restart is necessary for the changes to take effect:

sudo reboot


REVERTING BACK TO NORMAL OPENSTACK

If you want to run live VM migration, you will need to revert OpenStack to its original state. Here are the commands to copy the backup files back to their original locations. You should do this on every single OpenStack node:

cd <BACKUP_DIRECTORY>

cp manager.py /usr/lib/python2.7/dist-packages/nova/compute/manager.py

cp pwnetizer_stuff.py /usr/lib/python2.7/dist-packages/nova/compute/pwnetizer_stuff.py

cp connection.py /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py

cp utils.py /usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py

cp nova.conf /etc/nova/nova.conf

cp libvirt.xml.template /usr/lib/python2.7/dist-packages/nova/virt/libvirt.xml.template

sudo reboot

Step 4: PwnetizerClient
VM's internal network configuration

Every Pwnetizer VM comes with two network interfaces. To configure them properly, edit the VM's /etc/network/interfaces file so that it contains the following:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet dhcp

Then, execute the command shown below:

sudo /etc/init.d/networking restart

Use ifconfig to verify that both eth0 and eth1 are up and have valid IP addresses.


Every VM must be running an instance of PwnetizerClient (pwnetizerclient.jar) for its network configuration to be managed appropriately. PwnetizerClient takes two parameters:

  • NAT gateway IP - Each VM has two interfaces: one connected to the outside network and another one connected to a NAT gateway. The host's NAT gateway's IP is usually 192.168.122.1. To determine your host's NAT gateway's IP, execute ifconfig on the host and look at the virbr0 interface's IP.
  • NATed NIC - Since VMs have two network cards, we need to know which one is connected to the NAT gateway. In the example invocation below, the VM's NATed NIC is eth0. One way to identify both values is sketched right after this list.
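For instance, assuming libvirt's default NAT bridge (virbr0) on the host and its usual 192.168.122.0/24 subnet:

On the host, the NAT gateway IP is the address assigned to virbr0:

ifconfig virbr0

Inside the VM, the NATed NIC is the interface whose address falls within that subnet (e.g. 192.168.122.x):

ifconfig -a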

Once you know those two pieces of information, you can start PwnetizerClient with a command like the one below:

sudo java -jar pwnetizerclient.jar <NAT-gateway-IP> <NATed-NIC>

e.g. sudo java -jar pwnetizerclient.jar 192.168.122.1 eth0

Note: The VM must have Java installed for the program to execute (sudo apt-get install openjdk-6-jre).


We recommend that you use virt-manager's VNC console to connect to the VM and leave PwnetizerClient running on that console. To be able to log into the VM through VNC, you need to set a password for the login you plan to use. To do this, SSH into the VM and run the passwd command like so:

ssh -i demokey.pem ubuntu@<VM-IP>

sudo su

sudo passwd ubuntu

In this case, "ubuntu" is the login, "demokey.pem" is the SSH key set up by OpenStack, and <VM-IP> is the VM's internal IP address.

Now that the login has a password assigned to it, you must configure an SSH connection to the host through virt-manager and connect to the VM's console. Once you log into the VM, you can invoke the PwnetizerClient command and leave the program running.

FINAL STEP: Testing it out!
First, make sure that:
  • PwnetizerServer is running on the OpenStack controller (usually Essex1).
  • PwnetizerClient is running inside the VM that you want to clone. This should be done through the VM's VNC console (i.e. using virt-manager's GUI).

Cloning is triggered as if a VM were being migrated. The existence of /var/lib/nova/instances/clone_plz (i.e. MAGIC_FILE in pwnetizer_stuff.py) indicates that cloning should take place, while the absence of that file indicates that live migration should be carried out instead. Thus, you need to run the following commands on the OpenStack controller, as root (source is a shell builtin, so it cannot be invoked through sudo):

CLONING

source /root/OpenStackInstaller/demorc

touch /var/lib/nova/instances/clone_plz

nova live-migration <vm-name> <destination-compute-target>

e.g. nova live-migration migrateMe Essex1

MIGRATION

source /root/OpenStackInstaller/demorc

rm /var/lib/nova/instances/clone_plz

nova live-migration <vm-name> <destination-compute-target>

e.g. nova live-migration migrateMe Essex1

CLONING FAILED?

If a cloning operation fails, we recommend killing (ctrl+c) the PwnetizerServer process and restarting the Libvirt daemon on both hosts involved, using the following command:

sudo service libvirt-bin restart

Then, start PwnetizerServer again and trigger cloning once more.

DEBUGGING
These commands and files should help debug problems. You should analyze both OpenStack nodes involved in a failed cloning procedure when diagnosing a problem:

sudo cat /var/log/nova/nova-compute.log | grep ERROR

sudo cat /var/log/nova/nova-api.log | grep ERROR

sudo cat /var/log/nova/nova-scheduler.log | grep ERROR

~/PwnetizerServer_v3/src/log_communications.txt

~/PwnetizerServer_v3/src/log_mirroring.txt

/var/log/libvirt/libvirtd.log

FAQs
After telling OpenStack to delete a VM instance, the dashboard says “deleting” and never finishes. How do I get rid of that VM instance?

Appendix

Benchmarking Pwnetizer
Running Standard Cloud Computing Benchmarks
 