GoDaddy DynDNS for the poor

Recently, I bought a fresh new domain from godaddy.com for my personal homepage, which is hosted on a server at home. Admittedly, I didn’t really spend any time comparing customer satisfaction or the features they support; they just had the cheapest offer for a .me domain :-) So after getting used to their cluttered web interface, I discovered they don’t support dynamic DNS in any way. In such a case, you have several options:

1. Transfer the domain to a registrar that offers dynamic DNS service

The obvious though costly solution if you just bought the domain. A transfer can also take months to complete, so that’s usually a non-option.

2. Use a CNAME and point it to your DynDNS provider-supplied domain

However, this means only a subdomain (usually www) would point to your dynamic DNS domain. As an example, this is how it would look in my case:

www.peilicke.me -> duff.i324.me

There’s a simple guide in the GoDaddy forum on how to achieve that with their web interface. But that means wasting the actual domain, which isn’t nice.

3. Use the name servers of your DynDNS provider

GoDaddy (and other registrars) allow you to replace the name servers, and you could use those of your DynDNS provider, given they allow it. This is how you could do it with dyndns.org. However, they started charging for that a while ago, too bad.

4. Do it yourself

You only need a script that discovers the public IP assigned to you by your ISP and a little screen-scraper that logs into GoDaddy’s ZoneFile Editor™ and fills in and submits the A record form. Turns out that other people already had (and solved) those issues, so this is how it could look:

#!/usr/bin/env python

import logging

import pif
import pygodaddy

logging.basicConfig(filename='godaddy.log', format='%(asctime)s %(message)s', level=logging.INFO)
GODADDY_USERNAME = "@USERNAME@"
GODADDY_PASSWORD = "@PASSWORD@"

client = pygodaddy.GoDaddyClient()
client.login(GODADDY_USERNAME, GODADDY_PASSWORD)
public_ip = pif.get_public_ip()  # one lookup is enough for all domains

for domain in client.find_domains():
    dns_records = client.find_dns_records(domain)
    logging.debug("Domain '{0}' DNS records: {1}".format(domain, dns_records))
    # Only touch the zone when there is a record and it is stale
    if dns_records and public_ip != dns_records[0].value:
        client.update_dns_record(domain, public_ip)
        logging.info("Domain '{0}' public IP set to '{1}'".format(domain, public_ip))
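The guard in that loop boils down to “only write when the record is stale”. For clarity, the check can be isolated into a small standalone helper; note that `needs_update` is my own name for this sketch, not part of pygodaddy:

```python
def needs_update(record_values, public_ip):
    """Return True when there is no A record yet, or when the first
    record differs from the ISP-assigned public IP."""
    if not record_values:
        return True
    return record_values[0] != public_ip

print(needs_update(["1.2.3.4"], "5.6.7.8"))  # stale record -> True
print(needs_update(["5.6.7.8"], "5.6.7.8"))  # up to date -> False
```

Keeping the update idempotent like this matters when the script runs every few minutes from cron: GoDaddy only sees a login and a form submit when the IP actually changed.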

Depending on where you want to run the script, you may need to fetch the dependencies. In my case, it’s running on a Synology DiskStation 213. Underneath its amazing software stack is an embedded Linux with BusyBox. Luckily, it already has a Python interpreter, so for me a wrapper script looks like this:

#!/bin/sh
OLD_PWD="$PWD"
ROOT_DIR=$(dirname "$0")
cd "$ROOT_DIR"
if [ ! -d .venv27 ] ; then
  curl -O https://pypi.python.org/packages/source/v/virtualenv/virtualenv-1.9.tar.gz
  tar xvfz virtualenv-1.9.tar.gz
  python virtualenv-1.9/virtualenv.py .venv27
fi
# "." instead of "source": BusyBox /bin/sh is not bash
. .venv27/bin/activate
pip install -q --upgrade pif pygodaddy
./godaddy.py
deactivate
cd "$OLD_PWD"

Since your ISP will cut your connection every 24 hours (usually during the night), you will want to run this script continuously by putting it into /etc/crontab. On the DiskStation, you have to use tabs, not spaces, and you have to restart crond after editing (it’s explained in the German wiki):

*/5 * * * * root /path/to/godaddy.sh

That’s it.

Hard disc benchmarks and data safety

Out of a discussion with darix regarding the optimal hard disc layout (using RAID levels and LVM), I happened to benchmark my setup with bonnie++. The machine runs openSUSE-11.3 with the 2.6.34.7 kernel. Storage-wise, it has two identical Samsung HD103UJ 1TB discs using Intel Matrix RAID. The main area of the discs uses RAID-0 to max out performance. To be blunt, I don’t believe in RAID-1 security at all. IMO, 99% of all disc failures are caused by human error. A big portion of the rest goes to the mainboard controller (your real single point of failure). The remainder can be covered by regular backups to external media. That said, here’s the disc layout:

As you can see, basically everything that matters runs on RAID-0. The last part of the discs is a RAID-1 backup partition that is only mounted occasionally (for writing backups). This provides moderate data safety (way better than RAID-1 for your root filesystem alone) but still depends on occasional external backups. However, the main advantage of this layout is that it is fast. Long story short, here are the numbers:

Maybe not as awesome as this HPC machine, but still quite nice :-)

What weather!

It is summer, it is warm and it is nice. Yet I’m hanging around at home trying to bring my thesis to perfection :-) Actually, it’s the coding part right now, and it’s the usual hunting of bugs. While I’m stuck on a particularly tough one right now, I stumbled upon this awesome movie trailer (thanks, Marc):

I’d say it’s at least somewhat related *grin*.

Productivity

Backups! Yep, that thing you ought to do regularly. Today’s the day for me to do it. Not being quite content with the available solutions out there, I tend to do it all by hand. This involves some tedious repetitions like packing directories. So here’s a little script that might be handy:

#!/bin/sh
#
# Create a packed tarball of a directory or file in the
# current working directory with an optional filename prefix.
#
DATE=`date +%Y%m%d-%H%M%S`
case "$#" in
"1")
  # Strip a trailing slash, replace spaces with underscores
  NAME=`echo "$1" | sed 's,/$,,;s/ /_/g'`-$DATE
  ;;
"2")
  NAME=`echo "$2-$1" | sed 's,/$,,;s/ /_/g'`-$DATE
  ;;
*)
  echo "Usage: pack FILE [PREFIX]"
  echo "Create a packed tarball of FILE with an optional filename PREFIX."
  exit 1
  ;;
esac
tar -cf "$NAME.tar" "$1" && bzip2 "$NAME.tar"

Just paste that into, for example, /usr/local/bin/pack and set the executable bit. Afterwards, if you want to pack a directory test, you can do a simple pack test, or provide a prefix for the resulting file with pack test $HOST-$USER, which results in something like festor-saschpe-test-20091120-122101.tar.bz2 on my machine. And don’t forget to listen to classy music like this while backing up your stuff:
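For illustration, the naming scheme the script implements (strip a trailing slash, replace spaces with underscores, then append an optional prefix and a timestamp) can be mimicked in a few lines of Python; `make_name` is a hypothetical helper of mine, not part of the script:

```python
import re
import time

def make_name(path, prefix=None):
    """Build the tarball base name the way the pack script does:
    optional prefix, then the path with its trailing slash stripped
    and spaces replaced by underscores, then a timestamp."""
    cleaned = re.sub(r"/$", "", path).replace(" ", "_")
    stamp = time.strftime("%Y%m%d-%H%M%S")
    base = "{0}-{1}".format(prefix, cleaned) if prefix else cleaned
    return "{0}-{1}".format(base, stamp)

print(make_name("test/", "festor-saschpe"))  # e.g. festor-saschpe-test-20091120-122101
```

The timestamp guarantees unique filenames, so repeated backups of the same directory never overwrite each other.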

That’s it, have a nice weekend.