Dynamic iptables port-forwarding for NAT-ed libvirt networks

Libvirt is particularly awesome when it comes to managing virtual machines, their underlying storage and networks. However, if you happen to use NAT-ed networking and want to allow external access to services offered by your VMs, you have to do some manual work. The simplest way to get access is to set up some iptables rules for port-forwarding. So for quite a while, I had the following in my /etc/init.d/boot.local and things just worked:


function iptables_forward() {
    local NAME=$1
    read HOST_PORT FORWARD_PORT BRIDGE <<< "$2"
    # HOST_IP and FORWARD_IP are expected to be set elsewhere in the script.
    # Drop rules just in case they are already present:
    iptables -t nat -D PREROUTING -p tcp -d $HOST_IP --dport $HOST_PORT -j DNAT --to $FORWARD_IP:$FORWARD_PORT 2> /dev/null
    iptables -D FORWARD -o $BRIDGE -p tcp --dport $FORWARD_PORT -j ACCEPT 2> /dev/null
    echo "iptables_forward: Forward service $NAME from $HOST_IP:$HOST_PORT to $FORWARD_IP:$FORWARD_PORT"
    iptables -t nat -A PREROUTING -p tcp -d $HOST_IP --dport $HOST_PORT -j DNAT --to $FORWARD_IP:$FORWARD_PORT
    iptables -I FORWARD -o $BRIDGE -p tcp --dport $FORWARD_PORT -j ACCEPT
}

declare -A service
# Declare array of forwarding rules for VM services in the following form:
#   service["name"]="HOST_PORT FORWARD_PORT BRIDGE"

# libvirt 'default' network:
#service["devstack_dashboard"]="1011 80 virbr0"
#service["obs_api"]="1011 4040 virbr0"
#service["obs_webui"]="1022 80 virbr0"
#service["quickstart_crowbar"]="1030 3000 virbr0"
#service["quickstart_dashboard"]="1031 443 virbr0"
#service["quickstart_chef_webui"]="1032 4040 virbr0"
#service["quickstart_ssh"]="1033 22 virbr0"

# libvirt 'cloud' network:
service["cloud_crowbar"]="1100 3000 virbr1"
service["cloud_dashboard"]="1101 80 virbr1"
service["cloud_dashboard_ssl"]="1102 443 virbr1"

for key in ${!service[@]} ; do
    iptables_forward "$key" "${service[$key]}"
done

Pretty simple. However, with systemd things got more complicated. Not only is /etc/init.d/boot.local not evaluated anymore, systemd also likes to fight with libvirt over when to create bridges (and VLANs). Thus I had to manually invoke the script after libvirt was running. After re-reading libvirt’s awesome documentation, it was clear that this really rather belongs into a hook script. For qemu domains, the script has to be put in /etc/libvirt/hooks and named qemu. It has to comply with this rather simple interface:

/etc/libvirt/hooks/qemu VIR_DOMAIN ACTION ...

Where VIR_DOMAIN is the exact name of the libvirt domain (virtual machine) for which you want to add / remove iptables port-forwarding rules. ACTION is either "start", "stopped" or "reconnect". The hook could be something like this Python script:


"""Libvirt port-forwarding hook.

Libvirt hook for setting up iptables port-forwarding rules when using NAT-ed
__author__ = "Sascha Peilicke <saschpe@gmx.de>"
__version__ = "0.1.1"

import os
import json
import subprocess
import sys

CONFIG_PATH = os.path.dirname(os.path.abspath(__file__))
CONFIG_FILENAME = os.path.join(CONFIG_PATH, "qemu.json")
CONFIG_SCHEMA_FILENAME = os.path.join(CONFIG_PATH, "qemu.schema.json")
IPTABLES_BINARY = subprocess.check_output(["which", "iptables"]).strip()

def host_ip():
    """Returns the default route interface IP (if any).

    In other words, the public IP used to access the virtualization host. It
    is used as default public IP for guest forwarding rules should they not
    specify a different public IP to forward from.
    if not hasattr(host_ip, "_host_ip"):
        cmd = "ip route | grep default | cut -d' ' -f5"
        default_route_interface = subprocess.check_output(cmd, shell=True).decode().strip()
        cmd = "ip addr show {0} | grep -E 'inet .*{0}' | cut -d' ' -f6 | cut -d'/' -f1".format(default_route_interface)
        host_ip._host_ip = subprocess.check_output(cmd, shell=True).decode().strip()
    return host_ip._host_ip

def config(validate=True):
    """Returns the hook configuration.

    Assumes that the file /etc/libvirt/hooks/qemu.json exists and contains
    JSON-formatted configuration data. Optionally tries to validate the
    configuration if the 'jsonschema' module is available.

    Args:
        validate: Use JSON schema validation
    """
    if not hasattr(config, "_conf"):
        with open(CONFIG_FILENAME, "r") as f:
            config._conf = json.load(f)
        if validate:
            # Try schema validation but avoid hard 'jsonschema' requirement:
            try:
                import jsonschema
                with open(CONFIG_SCHEMA_FILENAME, "r") as f:
                    config._schema = json.load(f)
                jsonschema.validate(config._conf, config._schema)
            except ImportError:
                pass
    return config._conf

def iptables_forward(action, domain):
    """Set iptables port-forwarding rules based on domain configuration.

    Args:
        action: iptables rule action (one of '-I', '-A' or '-D')
        domain: Libvirt domain configuration
    """
    public_ip = domain.get("public_ip", host_ip())
    # Iterate over protocols (tcp, udp, icmp, ...)
    for protocol in domain["port_map"]:
        # Iterate over all public/private port pairs for the protocol
        for public_port, private_port in domain["port_map"][protocol]:
            args = [IPTABLES_BINARY,
                    "-t", "nat", action, "PREROUTING",
                    "-p", protocol,
                    "-d", public_ip, "--dport", str(public_port),
                    "-j", "DNAT", "--to", "{0}:{1}".format(domain["private_ip"], private_port)]
            subprocess.call(args)

            args = [IPTABLES_BINARY,
                    "-t", "filter", action, "FORWARD",
                    "-p", protocol,
                    "--dport", str(private_port),
                    "-j", "ACCEPT"]
            if "interface" in domain:
                args += ["-o", domain["interface"]]
            subprocess.call(args)

if __name__ == "__main__":
    vir_domain, action = sys.argv[1:3]
    domain = config().get(vir_domain)
    if domain is None:
        sys.exit(0)  # Not a domain we manage forwarding for
    if action in ["stopped", "reconnect"]:
        iptables_forward("-D", domain)
    if action in ["start", "reconnect"]:
        iptables_forward("-I", domain)

It has a very simple configuration file that is expected to live at /etc/libvirt/hooks/qemu.json:

    "cloud-admin": {
        "interface": "virbr1",
        "private_ip": "",
        "port_map": { 
            "tcp": [[1100, 3000]],
            "udp": [[1200, 163]]
    "cloud-node1": {
        "interface": "virbr1",
        "private_ip": "",
        "port_map": {
            "tcp": [[1101, 80],
                    [1102, 443]]

With that in place, iptables rules are added and removed when the domain is started / stopped. Pretty neat, huh? You can find the full code together with some tests and documentation on the libvirt-hook-qemu Github repository.
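To make the translation from config entry to iptables invocations concrete, here is a minimal standalone sketch. The `build_rules` helper and the sample IPs below are made up for this example; the real hook shells out directly via subprocess as shown above.

```python
def build_rules(action, domain, public_ip):
    """Build iptables argument vectors for one domain entry.

    Mirrors the hook's logic: per forwarded port, one DNAT rule in the
    nat table plus one ACCEPT rule in the filter table.
    """
    rules = []
    for protocol, pairs in domain["port_map"].items():
        for public_port, private_port in pairs:
            rules.append(["iptables", "-t", "nat", action, "PREROUTING",
                          "-p", protocol,
                          "-d", public_ip, "--dport", str(public_port),
                          "-j", "DNAT", "--to",
                          "{0}:{1}".format(domain["private_ip"], private_port)])
            accept = ["iptables", "-t", "filter", action, "FORWARD",
                      "-p", protocol, "--dport", str(private_port),
                      "-j", "ACCEPT"]
            if "interface" in domain:
                accept += ["-o", domain["interface"]]
            rules.append(accept)
    return rules

# Hypothetical sample entry (IPs invented for illustration):
sample = {"interface": "virbr1", "private_ip": "192.168.100.2",
          "port_map": {"tcp": [[1100, 3000]]}}
for rule in build_rules("-I", sample, "10.0.0.1"):
    print(" ".join(rule))
```

Running this prints the two rules that would be inserted when the domain starts; swapping "-I" for "-D" yields the matching delete commands.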

GoDaddy DynDNS for the poor

Recently, I bought a fresh new domain from godaddy.com for my personal homepage, which is hosted on a server at home. Admittedly, I didn’t really spend any time comparing customer satisfaction or the features they support, they just had the cheapest offer for a .me domain 🙂 So after getting used to their cluttered web interface, I discovered they don’t support dynamic DNS in any way. In such a case, you have several options:

1. Transfer the domain to a registrar that offers dynamic DNS service

The obvious though costly solution if you just bought the domain. It can also take months to complete, so that’s usually a non-option.

2. Use a CNAME and point it to your DynDNS provider-supplied domain

However, this means only a subdomain (usually www) would point to your dynamic DNS domain. As an example, this is how it would look in my case:

www.peilicke.me -> duff.i324.me

There’s a simple guide in the GoDaddy forum on how to achieve that with their web interface. But that means wasting the actual domain, which isn’t nice.

3. Use the name servers of your DynDNS provider

GoDaddy (and other registrars) allow you to replace the name servers, and you could use those of your DynDNS provider, given they allow it. This is how you could do it with dyndns.org. However, they started charging for that a while ago, too bad.

4. Do it yourself

You only need a script that discovers the public IP assigned to you by your ISP and a little screen-scraper that logs into GoDaddy’s ZoneFile Editor™ and fills in and submits the A record form. Turns out that other people already had (and solved) those issues, so this is how it could look:

#!/usr/bin/env python

import logging
import pif
import pygodaddy

logging.basicConfig(filename='godaddy.log', format='%(asctime)s %(message)s', level=logging.INFO)
client = pygodaddy.GoDaddyClient()

for domain in client.find_domains():
    dns_records = client.find_dns_records(domain)
    public_ip = pif.get_public_ip()
    logging.debug("Domain '{0}' DNS records: {1}".format(domain, dns_records))
    if public_ip != dns_records[0].value:
        client.update_dns_record(domain, public_ip)
        logging.info("Domain '{0}' public IP set to '{1}'".format(domain, public_ip))

Depending on where you want to run the script, you may need to fetch the dependencies. In my case it’s running on a Synology DiskStation 213. Underneath its amazing software stack is an embedded Linux with Busybox. Luckily, it already has a Python interpreter, so for me a wrapper script looks like this:

#!/bin/sh
ROOT_DIR=$(dirname $0)
cd $ROOT_DIR
if [ ! -d .venv27 ] ; then
  curl -O https://pypi.python.org/packages/source/v/virtualenv/virtualenv-1.9.tar.gz
  tar xvfz virtualenv-1.9.tar.gz
  python virtualenv-1.9/virtualenv.py .venv27
fi
source .venv27/bin/activate
pip install -q --upgrade pif pygodaddy
python godaddy.py  # the update script above (filename assumed)

Since your ISP will cut your connection every 24 hours (usually during the night), you want to run this script continuously by putting it into /etc/crontab. On the DiskStation, you have to use tabs, not spaces, and you have to restart crond after editing (it’s explained in the German wiki):

*/5 * * * * root /path/to/godaddy.sh

That’s it.

UPDATE: The code is now versioned on github: https://github.com/saschpe/godaddy-dyndns

openSUSE Board candidacy

Hello fellow geekos,

Let’s start with a little introduction for those who don’t know me. I’ve been involved in the openSUSE project for more than 3 years by now, while being among the top ten contributors to Factory for most of the time. I mainly develop the Python and Go stacks as well as OpenStack. Since I was part of the OBS team in the past, I still contribute to source services, OBS and osc from time to time. But I guess most contributors know me for my involvement in the reviewing process of Factory submissions and maintenance updates. After taking over this responsibility from darix and doing it alone for a while, I helped shape the review team we have today. As a result, the review process that was solely done by SUSE employees in the past shifted to a team that consists of both SUSE employees and community members. I also created and maintain the Continuous Integration infrastructure that allowed us to move formerly internal-only testing processes into the hands of the community. There, we’re testing the OBS, our OpenStack packages and YaST.

As a board member, I would like to continue on that route and help others open up more of the processes and tools and transition them into the wider openSUSE community. I don’t have an easy answer nor the authority to address challenging topics such as the foundation, but I believe in small, steady steps. So if you choose to vote for me, you shouldn’t expect just a talker or a regular meeting attendee; rather, you would get somebody who addresses our practical issues.

Ok, that was a little longer than intended, so without further ado, I hereby would like to announce my candidacy for the openSUSE Board.

OBS: Introducing the “refresh_patches” source service

As you know, RPM (and DEB and …) package building is a repetitive process and you want to automate it as much as possible. In the context of the Open Build Service (OBS), source services can help you with exactly that. Over time, the OBS community has implemented a whole range of source services. For instance, you can use them to fetch from git, mercurial or any other SCM repository. You can auto-update the spec file, generate changes entries and what not. Here’s what’s currently hosted on Github:

[Screenshot: openSUSE source service repositories on Github]

Without much ado, we’ve got another one today: obs-service-refresh_patches. Whenever you automatically update your package with source services, there is a chance that patches applied to the package break. This could be because your local fix was merged upstream and became obsolete, or because the upstream code changed and you have to rebase your patch. In the packaging context, most people use quilt to help with that, and that’s what this service is about. So let’s assume a package with the following _service file:

  <service name="tar_scm" mode="disabled">
    <param name="url">git://github.com/crowbar/barclamp-swift.git</param>
    <param name="scm">git</param>
    <param name="exclude">.git</param>
    <param name="versionformat">1.7+git.%ct.%h</param>
    <param name="revision">release/roxy/master</param>
  <service name="recompress" mode="disabled">
    <param name="file">crowbar-barclamp-swift-*git*.tar</param>
    <param name="compression">bz2</param>
  <service name="set_version" mode="disabled">;
    <param name="basename">crowbar-barclamp-swift</param>

So if you invoke the services locally with osc, it would fetch from a specific git repository, tar the whole thing up and adjust the Version: tag in the spec file:

saschpe@duff:% osc service dr
Found git://github.com/crowbar/barclamp-swift.git in /root/.obs/cache/tar_scm/repo/16116456bfcb1e0bf47d540a1e517c9450cd5569d5e423d29b18ca045c555939; updating ...
HEAD is now at 7860d00 Merge pull request #149 from MirantisDellCrowbar/bug/smoke/roxy/master
Created crowbar-barclamp-swift-1.7+git.1383238227.7860d00.tar
Compressed crowbar-barclamp-swift-1.7+git.1383238227.7860d00.tar to crowbar-barclamp-swift-1.7+git.1383238227.7860d00.tar.bz2
Detected version as 1.7+git.1383238227.7860d00
Updated first occurrence (if any) of Version in crowbar-barclamp-swift.spec to 1.7+git.1383238227.7860d00

Now let’s invoke refresh_patches by hand:

saschpe@duff:% /usr/lib/obs/service/refresh_patches
Patch pull-request-124.patch ok
Patch pull-request-148.patch refreshed
Patch fix-swift-defaults.patch ok
Patch suse-branding.patch ok
Applying patch hide-unneeded-options.patch
patching file crowbar_framework/app/views/barclamp/swift/_edit_attributes.html.haml
Hunk #1 succeeded at 11 (offset -5 lines).
Hunk #2 succeeded at 34 (offset -5 lines).
Hunk #3 FAILED at 47.
Hunk #4 succeeded at 65 (offset -14 lines).
1 out of 4 hunks FAILED -- rejects in file crowbar_framework/app/views/barclamp/swift/_edit_attributes.html.haml
Patch hide-unneeded-options.patch does not apply (enforce with -f)

So it told us all patches except the last are ok. Let’s fix that.

saschpe@duff:% vi hide-unneeded-options.patch
"hide-unneeded-options.patch" 43L, 3681C written

Now we can run it again:

saschpe@duff:% /usr/lib/obs/service/refresh_patches
Patch pull-request-124.patch ok
Patch pull-request-148.patch ok
Patch fix-swift-defaults.patch ok
Patch suse-branding.patch ok
Patch hide-unneeded-options.patch ok
Patch proposal-keystone-dep.patch refreshed
Finished refreshing patches for crowbar-barclamp-swift.spec

saschpe@duff:% osc st
?    crowbar-barclamp-swift-1.7+git.1383238227.7860d00.tar.bz2
M    crowbar-barclamp-swift.spec
M    hide-unneeded-options.patch

Everything refreshed, nice and tidy. As you can see, some patches were refreshed automatically during the first run, one broke and we fixed it by hand. After running again, everything was ok. So this is what you end up with in your local osc checkout:

saschpe@duff:% osc st
?    crowbar-barclamp-swift-1.7+git.1383238227.7860d00.tar.bz2
M    crowbar-barclamp-swift.spec
M    hide-unneeded-options.patch
M    pull-request-148.patch

Now we only have to issue osc addremove and osc build, and commit everything afterwards. We didn’t have to touch the spec file, we didn’t have to untar anything and we didn’t have to invoke quilt. Ah, and did I tell you that it auto-generates changes entries for you, too?

Index: crowbar-barclamp-swift.changes
--- crowbar-barclamp-swift.changes      (revision caef0f9f0d1cb92298ad184a4f2a1efe)
+++ crowbar-barclamp-swift.changes      (working copy)
@@ -1,3 +1,10 @@
+Fri Nov 08 14:19:31 UTC 2013 - speilicke@suse.com
+- Rebased patches:
+  + fix-swift-defaults.patch (manually)
+  + pull-request-148.patch (only offset)
 Fri Nov  8 10:24:11 UTC 2013 - speilicke@suse.com
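Under the hood, the service essentially drives quilt and inspects the output of patch(1) to decide whether a patch is ok, was refreshed, or is broken. A rough, hypothetical sketch of that classification logic (not the actual service code):

```python
def classify_patch_output(patch_output):
    """Classify the output of a single `patch --dry-run` invocation.

    Returns 'failed' if any hunk was rejected, 'refreshed' if hunks only
    applied with offsets or fuzz, and 'ok' if the patch applied cleanly.
    """
    if "FAILED" in patch_output:
        return "failed"
    if "offset" in patch_output or "fuzz" in patch_output:
        return "refreshed"
    return "ok"

# Example resembling the transcript above:
output = """patching file _edit_attributes.html.haml
Hunk #1 succeeded at 11 (offset -5 lines).
Hunk #3 FAILED at 47."""
print(classify_patch_output(output))  # failed
```

A 'refreshed' patch gets rewritten by quilt with the new offsets, which is why the .changes diff above distinguishes "(only offset)" from "(manually)".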

So if you want to give it a try, you can install the package obs-service-refresh_patches from the openSUSE:Tools OBS repository. Then you have to put the following at the end of your _service file (after the other services that modify sources):

<service name="refresh_patches" mode="disabled">
  <param name="changesgenerate">enable</param>

Now you are only one osc service disabledrun away from one less issue to care about. It is probably best to run this service only locally, i.e. with either mode="disabled" or mode="localonly". This way you can still check that the refreshed patches won’t break anything.

Happy packaging!

OBS 101: How to treat packages with multiple spec files

If you have an OBS package containing multiple spec files, you may have discovered that OBS only builds the spec file matching the OBS package name. If you want the other spec file(s) built as well, you should use a link; don’t use copypac!

For example, devel:languages:python / python-nose is an OBS package containing four (!) spec files. In this case, documentation building is separate because the doc building dependencies (python-Sphinx) would create a build cycle. A second set of packages is Python3-related, because devel:languages:python builds against both Python and Python3 at the moment. So we end up with the following list of spec files:


    python-nose.spec
    python-nose-doc.spec
    python3-nose.spec
    python3-nose-doc.spec

As you can see, only python-nose.spec is built, so we have to do the following (on a command line near you, given you have the rights to do it in the project):

    $ osc linkpac devel:languages:python python-nose \
                  devel:languages:python python-nose-doc
    $ osc linkpac devel:languages:python python-nose \
                  devel:languages:python python3-nose
    $ osc linkpac devel:languages:python python-nose \
                  devel:languages:python python3-nose-doc

Even though you end up with four OBS packages, you only have to change or fix python-nose, thanks to the links. This is better than using copypac (as I’ve seen done recently). Of course, you should only split a package into several spec files if there’s a very good reason for the extra work. Here are some:

  • To avoid build cycles
  • To off-load looong-running parts of a package build, like:
    • Running a testsuite ($PACKAGE-testsuite.spec)
    • Building documentation ($PACKAGE-doc.spec)
  • When building the same thing against a different set of (build) requirements, like:
    • Different $DYNAMIC_LANGUAGE interpreter versions
      (usually $INTERPRETER-$PACKAGE.spec)

Lastly, to make sure other people branch from the right package, you should set the "base" package as the devel package of the linked ones like this:

    $ osc changedevelrequest devel:languages:python python-nose-doc \
                             devel:languages:python python-nose
    $ osc changedevelrequest devel:languages:python python3-nose \
                             devel:languages:python python-nose
    $ osc changedevelrequest devel:languages:python python3-nose-doc \
                             devel:languages:python python3-nose

This way, people always end up branching from python-nose when they try to branch python-nose-doc (or python3-nose / python3-nose-doc).  Thanks to vuntz for reminding me of the last point!

On splitting strings

Splitting strings is cool, but most languages have subtle differences in how it is done. The three contenders are JavaScript, Python and Ruby. As an example, suppose you’re getting a string of the form "type_role_name" and you want to split it into type, role and name. The little twist here is that 'name' can also contain underscores. Let’s start in reverse order; it’s an easy job in Ruby:

irb> type,role,name = "user_admin_john_doe".split('_', 3)
=> ["user", "admin", "john_doe"]

Ruby’s split method wants to know how many pieces you want. Onwards with Python:

>>> type, role, name = 'user_admin_john_doe'.split('_', 2)
>>> (type, role, name)
('user', 'admin', 'john_doe')

I prefer this style slightly, as you express how many splits you want. The mindset here is that when you say ‘split twice’, you expect that the last element may contain the remainder of the string. By contrast, when saying ‘give me three pieces’, it is unclear whether you want the remainder or not. Besides that, both are equally cool. Last but not least, how to do it in JavaScript? Well, turns out it’s rather messy:

"user_admin_john_doe".split('_', 3)
["user", "admin", "john"]

Although this is somewhat similar to Ruby (i.e. ‘gimme three pieces’), it discards the remainder instead of keeping it in the last piece. Furthermore, JavaScript doesn’t support multiple assignment (also called iterable unpacking), so you have to store the result in an array first and assign individually. But more importantly, how do you get "john_doe"? Turns out that you have to fiddle with splitting off substrings:

str = "user_admin_john_doe"
type = str.slice(0, str.indexOf('_'));
str = str.slice(str.indexOf('_') + 1);
role = str.slice(0, str.indexOf('_'));
str = str.slice(str.indexOf('_') + 1);
name = str

Not exactly what I would call elegant, but it does the trick. I’d say it’s a draw between Python and Ruby, whereas JavaScript failed big time 🙂
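To round things off in Python terms: JavaScript’s truncating behaviour corresponds to splitting fully and then slicing, and Python additionally offers rsplit for when the variable-length part sits at the front. A quick sketch of all three behaviours:

```python
s = "user_admin_john_doe"

# What JavaScript's split('_', 3) gives you: the remainder is discarded
print(s.split('_')[:3])   # ['user', 'admin', 'john']

# Python keeps the remainder in the last piece instead
print(s.split('_', 2))    # ['user', 'admin', 'john_doe']

# rsplit counts splits from the right, keeping the remainder up front
print(s.rsplit('_', 1))   # ['user_admin_john', 'doe']
```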

Braindead Python packaging

As you all know, distributing and building packages with the openSUSE Build Service is easy and fun. The only party pooper is that you have to write a spec file to get your RPMs out there. Thanks to darix, we have a decent solution at least for Ruby packages: gem2rpm, a script which auto-generates RPM spec files for Ruby gems. Ever wondered why we don’t have something similar for Python? Well, I did so too. Thus, after half a week of hackery, I’d like to introduce py2pack, my take on braindead Python packaging. Here’s how it goes:

Let’s suppose you want to package zope.interface and you don’t know exactly how it is named or where to download it from. First of all, you can search for it and download the source tarball automatically once you’ve found the correct module:

$ py2pack search zope.interface
searching for module zope.interface...
found zope.interface-3.6.1
$ py2pack fetch zope.interface
downloading package zope.interface-3.6.1...
from http://pypi.python.org/packages/source/z/zope.interface/zope.interface-3.6.1.tar.gz

As a next step you may want to generate a package recipe for your distribution. For RPM-based distributions (let’s use openSUSE as an example), you want to generate a spec file named python-zopeinterface.spec:

$ py2pack generate zope.interface -t opensuse.spec -f python-zopeinterface.spec

The above command has the parameter "-t opensuse.spec", which is also the default (and thus optional). Well, this seems to imply that you can generate other file formats, and rumor has it that Debian support is on the way, too 🙂

Back to the topic, the source tarball and package recipe is all you need to generate the RPM file. This final step may depend on which distribution you use. Again, for openSUSE (and by using the openSUSE Build Service), the complete recipe becomes:

$ osc mkpac python-zopeinterface
$ cd python-zopeinterface
$ py2pack fetch zope.interface
$ py2pack generate zope.interface -f python-zopeinterface.spec
$ osc build
$ osc vc
$ osc commit

The first line uses osc, the Build Service command line tool, to generate a new package (preferably in your Build Service home project). The py2pack steps are known already. Finally, the package is tested (built locally), a changes (package changelog) file is generated (with ‘osc vc’) and the result is sent back to the Build Service for public consumption. However, depending on the Python module, you may have to adapt the generated spec file slightly. Py2pack is quite clever and tries to auto-generate as much as it can, but it depends on the metadata that the module provides. Thus, bad metadata implies mediocre results. To get further help about py2pack usage, issue the following command:

$ py2pack help

To demonstrate its capabilities, I re-packaged some Python packages that are in devel:languages:python. As you can see (when logged in to the Build Service), those packages now build for all RPM-based distros we currently offer in the Build Service. Currently, you can download py2pack from the Python Package Index or install the package python-py2pack from the devel:languages:python repository.
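To get a feeling for what "auto-generating from metadata" means, here is a toy sketch. The template and the generate_spec helper are made up for illustration and have nothing to do with py2pack’s real templates; they just show how missing upstream metadata translates into FIXMEs the packager has to resolve:

```python
SPEC_TEMPLATE = """\
Name:           python-{name}
Version:        {version}
Summary:        {summary}
License:        {license}
Url:            {url}
Source:         {name}-{version}.tar.gz
"""

def generate_spec(metadata):
    """Render a minimal spec file header from module metadata.

    Fields missing from the metadata fall back to FIXME placeholders,
    which is exactly where bad upstream metadata means manual work.
    """
    defaults = {"summary": "FIXME", "license": "FIXME", "url": "FIXME"}
    merged = dict(defaults, **metadata)
    return SPEC_TEMPLATE.format(**merged)

print(generate_spec({"name": "zope.interface", "version": "3.6.1"}))
```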

Happy Python packaging!