BitTorrent Sync on openSUSE

Recently, I discovered BitTorrent Sync, which seems to satisfy most of my file syncing demands. It’s encrypted client-side, cross-platform and works behind NATs and firewalls. While it is currently still proprietary (who cares, really), it is available for many devices. Besides the usual Windows / Mac binaries, you can find it in Android’s Play Store. Most interestingly, they provide ARM binaries. If you are a happy Synology NAS user, you can add the SynoCommunity package repository directly. That’s been the elevator pitch; check the community forums for more details.


First of all, you need to add Packman’s Essentials repository to install the btsync package. This is necessary due to the licensing terms of BitTorrent Sync, which don’t allow redistribution. Thus the btsync package runs a script during installation that downloads the btsync binary from BitTorrent’s servers (very much like the flash plugin installer on openSUSE). Either way, you’ll end up with btsync on your disk. For openSUSE Factory, the steps are (as root):

$ zypper ar
$ zypper refresh
$ zypper install btsync

On recent openSUSE releases, systemd allows starting daemons as non-root users. Running btsync under your own user rather than root avoids messing up file ownership and allows several people on the same machine to have their own distinct btsync configuration. So for the user saschpe (replace with your username), the commands are (as root):

$ systemctl start btsync@saschpe
$ systemctl enable btsync@saschpe


To allow parallel usage for multiple users, the btsync daemon listens on port 8888 + $YOUR_USER_ID by default. So if your user’s UID (check /etc/passwd) is 1000, you can find btsync’s web interface at http://localhost:9888. This is how it looks:
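The port arithmetic above is trivial, but here is a tiny sketch of it anyway (only the base port 8888 comes from the text; the helper function name is mine):

```python
import os

# Base port taken from the text above; each user's btsync web UI
# listens on 8888 plus that user's numeric UID.
BASE_PORT = 8888

def btsync_webui_port(uid):
    """Return the btsync web UI port for the given Unix user ID."""
    return BASE_PORT + uid

# The web UI URL for the current user:
print("http://localhost:{0}".format(btsync_webui_port(os.getuid())))
```

So UID 1000 lands on port 9888, UID 1001 on 9889, and so on.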


The credentials can be found in $HOME/.btsync/sync.conf. The interesting part is the auto-generated password (which you could change); the username will match your Unix user account. However, you may also want to change the listen address to something different. So this is the config section you want to adjust:

"webui" :
"listen" : "",
"login" : "saschpe",
"password" : "supersecretpasswordhere"


Sharing folders and further setup can be done directly from the web interface, no need to mess with the config file again. Have fun.

eBay’s funny mail restrictions vs. the plus extension

Since eBay recently lost a complete database dump, I thought let’s join the other 145 million guys and change my good old password. While doing that, I wanted to update my mail address too. BTW, I recently switched mail providers and I am very (very very) happy with it. Unfortunately though, eBay’s very (very very) secure e-mail address validation disallows having the eBay username as localpart. Wow, I thought, that is what I call secure by default! Funny side note: my old e-mail address also includes my eBay user name, but nobody complained about that some 10 years ago.

Well then I thought, let’s check support. After almost going nuts in their craptastic present-stupid-solutions-but-avoid-revealing-the-bloody-support-form manner, I went for the (free) hotline. So after a 28-minute patience test (which I obviously won), I was told by a very friendly lady that she isn’t allowed to change the mail address by hand. German data security laws, she said. Fuck that, I thought 🙂 She also recommended not using this mail address 😉

But that led me to suddenly remember that my provider supports the plus extension (or whatever it’s called in the RFCs). So I ended up trying a plus-extended address and it worked! So kudos to eBay for not disallowing plus signs in mail addresses (no sarcasm here, many services do). Even better, the next time they lose all their users’ data, spammers will only get the alias address and I can just block that. Nice!
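The trick works because providers that support subaddressing (that’s what RFC 5233 calls the plus extension) deliver anything after the ‘+’ in the localpart to the base mailbox. A minimal sketch of that normalization, with made-up addresses:

```python
def base_address(address):
    """Strip a plus extension from the localpart, e.g.
    'user+ebay@example.org' -> 'user@example.org'."""
    localpart, _, domain = address.partition("@")
    # Everything after the first '+' is the subaddress tag:
    base, _, _tag = localpart.partition("+")
    return "{0}@{1}".format(base, domain)

print(base_address("user+ebay@example.org"))  # user@example.org
```

So mail to the alias still reaches the base mailbox, but the alias itself can be filtered or blocked independently.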

Dynamic iptables port-forwarding for NAT-ed libvirt networks

Libvirt is particularly awesome when it comes to managing virtual machines, their underlying storage and networks. However, if you happen to use NAT-ed networking and want to allow external access to services offered by your VMs, you’ve got to do some manual work. The simplest way to get access is to set up a few iptables rules for port-forwarding. So for quite a while, I had the following in my /etc/init.d/boot.local and things just worked:


function iptables_forward() {
    # Usage: iptables_forward NAME "HOST_PORT FORWARD_PORT BRIDGE"
    # Assumes $HOST_IP and $FORWARD_IP are set globally.
    local NAME=$1
    read HOST_PORT FORWARD_PORT BRIDGE <<< "$2"
    # Drop rules just in case they are already present:
    iptables -t nat -D PREROUTING -p tcp -d $HOST_IP --dport $HOST_PORT -j DNAT --to $FORWARD_IP:$FORWARD_PORT 2> /dev/null
    iptables -D FORWARD -o $BRIDGE -p tcp --dport $FORWARD_PORT -j ACCEPT 2> /dev/null
    echo "iptables_forward: Forward service $NAME from $HOST_IP:$HOST_PORT to $FORWARD_IP:$FORWARD_PORT"
    iptables -t nat -A PREROUTING -p tcp -d $HOST_IP --dport $HOST_PORT -j DNAT --to $FORWARD_IP:$FORWARD_PORT
    iptables -I FORWARD -o $BRIDGE -p tcp --dport $FORWARD_PORT -j ACCEPT
}

declare -A service
# Forwarding rules for VM services, in the form "HOST_PORT FORWARD_PORT BRIDGE":

# libvirt 'default' network:
#service["devstack_dashboard"]="1011 80 virbr0"
#service["obs_api"]="1011 4040 virbr0"
#service["obs_webui"]="1022 80 virbr0"
#service["quickstart_crowbar"]="1030 3000 virbr0"
#service["quickstart_dashboard"]="1031 443 virbr0"
#service["quickstart_chef_webui"]="1032 4040 virbr0"
#service["quickstart_ssh"]="1033 22 virbr0"

# libvirt 'cloud' network:
service["cloud_crowbar"]="1100 3000 virbr1"
service["cloud_dashboard"]="1101 80 virbr1"
service["cloud_dashboard_ssl"]="1102 443 virbr1"

for key in ${!service[@]} ; do
    iptables_forward "$key" "${service[$key]}"
done

Pretty simple. However, with systemd things got more complicated. Not only is /etc/init.d/boot.local not evaluated anymore, but systemd likes to fight with libvirt over when to create bridges (and VLANs). Thus I had to manually invoke the script after libvirt was running. After re-reading libvirt’s awesome documentation, it was clear that this really rather belongs into a hook script. For qemu domains, the script has to be put in /etc/libvirt/hooks and named qemu. It has to comply with this rather simple interface:

/etc/libvirt/hooks/qemu VIR_DOMAIN ACTION ...

Where VIR_DOMAIN is the exact name of the libvirt domain (virtual machine) for which you want to add / remove iptables port-forwarding rules. ACTION is one of “start”, “stopped” or “reconnect”. The hook could be something like this Python script:


"""Libvirt port-forwarding hook.

Libvirt hook for setting up iptables port-forwarding rules when using NAT-ed
__author__ = "Sascha Peilicke <>"
__version__ = "0.1.1"

import os
import json
import subprocess
import sys

CONFIG_PATH = os.path.dirname(os.path.abspath(__file__))
CONFIG_FILENAME = os.path.join(CONFIG_PATH, "qemu.json")
CONFIG_SCHEMA_FILENAME = os.path.join(CONFIG_PATH, "qemu.schema.json")
IPTABLES_BINARY = subprocess.check_output(["which", "iptables"]).strip()

def host_ip():
    """Returns the default route interface IP (if any).

    In other words, the public IP used to access the virtualization host. It
    is used as default public IP for guest forwarding rules should they not
    specify a different public IP to forward from.
    """
    if not hasattr(host_ip, "_host_ip"):
        cmd = "ip route | grep default | cut -d' ' -f5"
        default_route_interface = subprocess.check_output(cmd, shell=True).decode().strip()
        cmd = "ip addr show {0} | grep -E 'inet .*{0}' | cut -d' ' -f6 | cut -d'/' -f1".format(default_route_interface)
        host_ip._host_ip = subprocess.check_output(cmd, shell=True).decode().strip()
    return host_ip._host_ip

def config(validate=True):
    """Returns the hook configuration.

    Assumes that the file /etc/libvirt/hooks/qemu.json exists and contains
    JSON-formatted configuration data. Optionally tries to validate the
    configuration if the 'jsonschema' module is available.

        validate: Use JSON schema validation
    """
    if not hasattr(config, "_conf"):
        with open(CONFIG_FILENAME, "r") as f:
            config._conf = json.load(f)
        if validate:
            # Try schema validation but avoid hard 'jsonschema' requirement:
            try:
                import jsonschema
                with open(CONFIG_SCHEMA_FILENAME, "r") as f:
                    config._schema = json.load(f)
                jsonschema.validate(config._conf, config._schema)
            except ImportError:
                pass  # Validation is skipped if 'jsonschema' is unavailable
    return config._conf

def iptables_forward(action, domain):
    """Set iptables port-forwarding rules based on domain configuration.

        action: iptables rule actions (one of '-I', '-A' or '-D')
        domain: Libvirt domain configuration
    """
    public_ip = domain.get("public_ip", host_ip())
    # Iterate over protocols (tcp, udp, icmp, ...)
    for protocol in domain["port_map"]:
        # Iterate over all public/private port pairs for the protocol
        for public_port, private_port in domain["port_map"].get(protocol):
            args = [IPTABLES_BINARY,
                    "-t", "nat", action, "PREROUTING",
                    "-p", protocol,
                    "-d", public_ip, "--dport", str(public_port),
                    "-j", "DNAT", "--to", "{0}:{1}".format(domain["private_ip"], str(private_port))]
            subprocess.call(args)

            args = [IPTABLES_BINARY,
                    "-t", "filter", action, "FORWARD",
                    "-p", protocol,
                    "--dport", str(private_port),
                    "-j", "ACCEPT"]
            if "interface" in domain:
                args += ["-o", domain["interface"]]
            subprocess.call(args)

if __name__ == "__main__":
    vir_domain, action = sys.argv[1:3]
    domain = config().get(vir_domain)
    if domain is None:
        sys.exit(0)  # Not a domain we manage
    if action in ["stopped", "reconnect"]:
        iptables_forward("-D", domain)
    if action in ["start", "reconnect"]:
        iptables_forward("-I", domain)

It has a very simple configuration file that is expected to live at /etc/libvirt/hooks/qemu.json:

    "cloud-admin": {
        "interface": "virbr1",
        "private_ip": "",
        "port_map": { 
            "tcp": [[1100, 3000]],
            "udp": [[1200, 163]]
    "cloud-node1": {
        "interface": "virbr1",
        "private_ip": "",
        "port_map": {
            "tcp": [[1101, 80],
                    [1102, 443]]
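To illustrate what the hook does with such a configuration, here is a small sketch that expands a port_map into (protocol, public port, private port) forwardings without actually calling iptables; the helper function is mine, the sample values mirror the config above:

```python
def expand_port_map(domain):
    """Yield (protocol, public_port, private_port) tuples
    for one domain's configuration."""
    for protocol, pairs in domain["port_map"].items():
        for public_port, private_port in pairs:
            yield (protocol, public_port, private_port)

# Mirrors the "cloud-node1" entry above:
sample = {"port_map": {"tcp": [[1101, 80], [1102, 443]]}}
print(list(expand_port_map(sample)))
# [('tcp', 1101, 80), ('tcp', 1102, 443)]
```

Each tuple becomes one DNAT rule in the nat table plus one ACCEPT rule in the filter table.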

With that in place, iptables rules are added and removed when the domain is started / stopped. Pretty neat, huh? You can find the full code together with some tests and documentation on the libvirt-hook-qemu Github repository.

GoDaddy DynDNS for the poor

Recently, I bought a fresh new domain from GoDaddy for my personal homepage that is hosted on a server at home. Admittedly, I didn’t really spend any time on customer satisfaction or the stuff they support; they just had the cheapest offer for a .me domain 🙂 So after getting used to their cluttered web interface, I discovered they don’t support dynamic DNS in any way. In such a case, you have several options:

1. Transfer the domain to a registrar that offers dynamic DNS service

The obvious though costly solution if you just bought the domain. It can also take months to complete, so that’s usually a non-option.

2. Use a CNAME and point it to your DynDNS provider-supplied domain

However, this means only a subdomain (usually www) would point to your dynamic DNS domain; in my case, the www subdomain would point to the provider-supplied DynDNS domain.

There’s a simple guide in the GoDaddy forum on how to achieve that with their web interface. But that means wasting the actual domain; not nice.

3. Use the name servers of your DynDNS provider

GoDaddy (and other registrars) allow you to replace the name servers, and you could use those of your DynDNS provider, given they allow it. However, some providers started charging for that a while ago, too bad.

4. Do it yourself

You only need a script that discovers the public IP assigned to you by your ISP, plus a little screen-scraper that logs into GoDaddy’s ZoneFile Editor™, fills in and submits the A record form. Turns out that other people already had (and solved) those issues, so this is how it could look:

#!/usr/bin/env python

import logging
import pif
import pygodaddy

logging.basicConfig(filename='godaddy.log', format='%(asctime)s %(message)s', level=logging.INFO)
client = pygodaddy.GoDaddyClient()

for domain in client.find_domains():
    dns_records = client.find_dns_records(domain)
    public_ip = pif.get_public_ip()
    logging.debug("Domain '{0}' DNS records: {1}".format(domain, dns_records))
    if public_ip != dns_records[0].value:
        client.update_dns_record(domain, public_ip)
        logging.info("Domain '{0}' public IP set to '{1}'".format(domain, public_ip))

Depending on where you want to run the script, you may need to fetch the dependencies. In my case, it’s running on a Synology DiskStation 213. Underneath its amazing software stack is an embedded Linux with Busybox. Luckily, it already has a Python interpreter, so for me a wrapper script looks like this:

ROOT_DIR=$(dirname $0)
cd $ROOT_DIR
if [ ! -d .venv27 ] ; then
  curl -O
  tar xvfz virtualenv-1.9.tar.gz
  python virtualenv-1.9/ .venv27
fi
source .venv27/bin/activate
pip install -q --upgrade pif pygodaddy

Since your ISP will cut your connection every 24 hours (usually during the night), you want to run this script periodically by putting it into /etc/crontab. On the DiskStation, you have to use tabs, not spaces, and you have to restart crond after editing (it’s explained in the German wiki):

*/5 * * * * root /path/to/

That’s it.

UPDATE: The code is now versioned on github:

openSUSE Board candidacy

Hello fellow geekos,

Let’s start with a little introduction for those who don’t know me. I’ve been involved in the openSUSE project for more than 3 years now, while being among the top ten contributors to Factory for most of that time. I mainly develop the Python and Go stacks as well as OpenStack. Since I was part of the OBS team in the past, I still contribute to source services, OBS and osc from time to time. But I guess most contributors know me for my involvement in the reviewing process of Factory submissions and maintenance updates. After taking over this responsibility from darix and doing it alone for a while, I helped shape the review team we have today. As a result, the review process that was done solely by SUSE employees in the past shifted to a team that consists of both SUSE employees and community members. I also created and maintain the Continuous Integration infrastructure that allowed us to move formerly internal-only testing processes into the hands of the community. There, we’re testing the OBS, our OpenStack packages and YaST.

As a board member, I would like to continue on that route and help others open up more processes and tools and transition them into the wider openSUSE community. I don’t have an easy answer, nor the authority, to address challenging topics such as the foundation, but I believe in small, steady steps. So if you choose to vote for me, you shouldn’t expect just a talker or a regular meeting attendee; rather, you would get our practical issues addressed.

Ok, that was a little longer than intended, so without further ado, I hereby would like to announce my candidacy for the openSUSE Board.

OBS: Introducing the “refresh_patches” source service

As you know, RPM (and DEB and …) package building is a repetitive process and you want to automate it as much as possible. In the context of the Open Build Service (OBS), source services can help you with exactly that. Over time, the OBS community has implemented a whole range of source services. For instance, you can use them to fetch from git, mercurial or any other SCM repository. You can auto-update the spec file or generate changes entries and what not. Here’s what’s currently hosted on Github:

github openSUSE services

Without much ado, we’ve got another one today, obs-service-refresh_patches. Whenever you automatically update your package with source services, there is a chance that patches applied to the package break. This could be because your local fix was merged upstream and became obsolete, or the upstream code changed and you have to rebase your patch. In the packaging context, most people just use quilt to help with that and that’s what this service is about. So let’s assume a package with the following _service file:

  <service name="tar_scm" mode="disabled">
    <param name="url">git://</param>
    <param name="scm">git</param>
    <param name="exclude">.git</param>
    <param name="versionformat">1.7+git.%ct.%h</param>
    <param name="revision">release/roxy/master</param>
  <service name="recompress" mode="disabled">
    <param name="file">crowbar-barclamp-swift-*git*.tar</param>
    <param name="compression">bz2</param>
  <service name="set_version" mode="disabled">;
    <param name="basename">crowbar-barclamp-swift</param>

So if you invoke the services locally with osc, it would fetch from a specific git repository, tar the whole thing up and adjust the Version: tag in the spec file:

saschpe@duff:% osc service dr
Found git:// in /root/.obs/cache/tar_scm/repo/16116456bfcb1e0bf47d540a1e517c9450cd5569d5e423d29b18ca045c555939; updating ...
HEAD is now at 7860d00 Merge pull request #149 from MirantisDellCrowbar/bug/smoke/roxy/master
Created crowbar-barclamp-swift-1.7+git.1383238227.7860d00.tar
Compressed crowbar-barclamp-swift-1.7+git.1383238227.7860d00.tar to crowbar-barclamp-swift-1.7+git.1383238227.7860d00.tar.bz2
Detected version as 1.7+git.1383238227.7860d00
Updated first occurrence (if any) of Version in crowbar-barclamp-swift.spec to 1.7+git.1383238227.7860d00

Now let’s invoke refresh_patches by hand:

saschpe@duff:% /usr/lib/obs/service/refresh_patches
Patch pull-request-124.patch ok
Patch pull-request-148.patch refreshed
Patch fix-swift-defaults.patch ok
Patch suse-branding.patch ok
Applying patch hide-unneeded-options.patch
patching file crowbar_framework/app/views/barclamp/swift/_edit_attributes.html.haml
Hunk #1 succeeded at 11 (offset -5 lines).
Hunk #2 succeeded at 34 (offset -5 lines).
Hunk #3 FAILED at 47.
Hunk #4 succeeded at 65 (offset -14 lines).
1 out of 4 hunks FAILED -- rejects in file crowbar_framework/app/views/barclamp/swift/_edit_attributes.html.haml
Patch hide-unneeded-options.patch does not apply (enforce with -f)

So it told us all patches except the last are ok. Let’s fix that.

saschpe@duff:% vi hide-unneeded-options.patch
"hide-unneeded-options.patch" 43L, 3681C written

Now we can run it again:

saschpe@duff:% /usr/lib/obs/service/refresh_patches
Patch pull-request-124.patch ok
Patch pull-request-148.patch ok
Patch fix-swift-defaults.patch ok
Patch suse-branding.patch ok
Patch hide-unneeded-options.patch ok
Patch proposal-keystone-dep.patch refreshed
Finished refreshing patches for crowbar-barclamp-swift.spec

saschpe@duff:% osc st
?    crowbar-barclamp-swift-1.7+git.1383238227.7860d00.tar.bz2
M    crowbar-barclamp-swift.spec
M    hide-unneeded-options.patch

Everything refreshed, nice and tidy. As you can see, pull-request-148.patch was refreshed during the first run, then hide-unneeded-options.patch failed to apply and we fixed it. After running again, everything was ok. So this is what you end up with in your local osc checkout:

saschpe@duff:% osc st
?    crowbar-barclamp-swift-1.7+git.1383238227.7860d00.tar.bz2
M    crowbar-barclamp-swift.spec
M    hide-unneeded-options.patch
M    pull-request-148.patch

Now we only have to issue osc addremove and osc build, and commit everything afterwards. We didn’t have to touch the spec file, we didn’t have to untar anything and we didn’t have to invoke quilt. Ah, and did I tell you that it autogenerates changes entries for you too:

Index: crowbar-barclamp-swift.changes
--- crowbar-barclamp-swift.changes      (revision caef0f9f0d1cb92298ad184a4f2a1efe)
+++ crowbar-barclamp-swift.changes      (working copy)
@@ -1,3 +1,10 @@
+Fri Nov 08 14:19:31 UTC 2013 -
+- Rebased patches:
+  + fix-swift-defaults.patch (manually)
+  + pull-request-148.patch (only offset)
 Fri Nov  8 10:24:11 UTC 2013 -

So if you want to give it a try, you can install the package obs-service-refresh_patches from the openSUSE:Tools OBS repository. Then you have to put the following at the end of your _service file (after the other services that modify sources):

<service name="refresh_patches" mode="disabled">
  <param name="changesgenerate">enable</param>

Now you are only one osc service disabledrun away from one less issue to care about. It is probably best to run this service only locally, i.e. with either mode=”disabled” or mode=”localonly”. This way you can still check that the refreshed patches won’t break anything.

Happy packaging!