Testing a Small Query Language in Python with Hypothesis

An experimental setup with three beakers, illustration from a book

This entry was intended to be cross-posted to the CERN Databases blog, but is currently pending review. Consider it a pre-release version.

Hypothesis is an implementation of property-based testing for Python, similar to QuickCheck in Haskell/Erlang and test.check in Clojure (among others). Basically, it allows the programmer to formulate invariants about their programs, and have an automated system attempt to generate counter-examples that invalidate them.

A Small Query Language

During my internship at CERN, I am developing a small (partial) two-way monitoring system to propagate alerts from filers to CERN’s incident management system. In the course of developing this monitor, I decided to invent a very minimal query/filtering language for logging events. It maps directly against Python objects using regular expressions (basically: “does object x have a property y matching regex z”?). The following is its grammar (written for the Grako parser-generator):

start = expression ;

expression
        =
        '(' expression ')' binary_operator '(' expression ')'
        | unary_operator '(' expression ')'
        | statement
       ;

binary_operator = 'AND' | 'OR';
unary_operator = 'NOT';
statement = field ':' "'" regex "'";
field =
      /[0-9A-Za-z_]+/
      ;

regex
    = /([^'])*/
    ;

An example (from a test configuration file) could be event_type:'disk.failed' (disk failures) or (source_type:'(?i)Aggregate') AND (NOT(source_name:'aggr0')) (log events from aggregates, but not aggr0).

The following invariants should hold, where q is any valid query:

  • NOT(NOT(q)) ≡ q
  • (q) AND (q) ≡ q
  • (q) OR (q) ≡ q
  • (q) OR (NOT(q)) is always True

In addition, the following properties should also hold:

  • key:'value' matches every object containing a property key with the exact value value, for any valid key and value (that is, a valid Python variable name for key and more or less any string for value)
  • key:'' matches every object that has an attribute key regardless of its value
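Before generating anything, the intended semantics of a single statement can be sketched in plain Python (reusing the objectview trick shown later in the post). This is a hypothetical, minimal matcher for illustration only – not the monitor's actual implementation:

```python
import re

class objectview(object):
    """Wrap a dict so its keys become attributes."""
    def __init__(self, d):
        self.__dict__ = d

def matches_object(statement, obj):
    # Minimal sketch of a single-statement matcher for "key:'regex'":
    # True iff obj has an attribute key whose value matches the regex.
    key, _, rest = statement.partition(":")
    pattern = rest.strip("'")
    if not hasattr(obj, key):
        return False
    return re.search(pattern, getattr(obj, key)) is not None

cat = objectview({"colour": "red"})
assert matches_object("colour:'red'", cat)   # property present and matching
assert matches_object("colour:''", cat)      # empty regex matches any value
assert not matches_object("size:'.*'", cat)  # missing attribute never matches
```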

Generating Examples

There are several types of inputs we need to generate to test the system. Let’s break them down:

  • objects with various fields
  • regular expressions
  • valid statements
  • valid queries

Let’s start from the top. As Python is a dynamic language, we can do crazy things, like dynamically generating objects from dictionaries. The following is a fairly common hack:

class objectview(object):
    def __init__(self, d):
        self.__dict__ = d

    def __repr__(self):
        return str(self.__dict__)

    def __str__(self):
        return str(self.__dict__)

This allows us to instantiate an object with (almost) arbitrary fields:

cat = objectview({'colour': 'red', 'fav_food_barcode': '1941230190'})
>>> cat.colour
'red'
>>> cat.fav_food_barcode
'1941230190'

Given this, we can just generate valid objects using the @composite decorator in Hypothesis:

@composite
def objects(draw):
    ds = draw(dictionaries(keys=valid_properties,
                           values=valid_values,
                           min_size=1))

    return objectview(ds)

Generating valid values is much simpler:

valid_values = text()

Any text string is a valid string value. Of course! Properties are a bit trickier though:

valid_properties = (characters(max_codepoint=91,
                               whitelist_categories=["Ll", "Lu", "Nd"])
                    .filter(lambda s: not s[0].isdigit()))

Variable names can't start with a number and have to be basically ASCII, so we slightly modify and filter the characters strategy.
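As a sanity check, the same constraint can be written as a plain predicate (an illustrative helper of my own, not part of the test suite):

```python
def is_valid_property(name):
    # Same rule as the filtered strategy above: non-empty, consisting of
    # letters, digits or underscores, and not starting with a digit.
    return (bool(name)
            and not name[0].isdigit()
            and all(c.isalnum() or c == "_" for c in name))

assert is_valid_property("fav_food_barcode")
assert not is_valid_property("9lives")
```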

Statements can be generated in much the same way, using composite strategies:

@composite
def statements(draw):
    # any valid key followed by a valid regex
    key = draw(valid_properties)
    regex = draw(regexes)

    return u"{key}:'{regex}'".format(key=key, regex=regex)

However, how do we produce regular expressions? Let’s start with some valid candidates:

regex_string_candidates = characters(blacklist_characters=[u'?', u'\\', u"'"])

Then we can generate regular expressions using Hypothesis’ back-tracking functionality through assume(), which causes it to discard bad examples (in this instance is_valid_regex() simply tries to compile the string as a Python regular expression, and returns False if it fails):

@composite
def regex_strings(draw):
    maybe_regex = draw(regex_string_candidates)
    assume(is_valid_regex(maybe_regex))
    return maybe_regex
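The validity check itself is little more than a wrapper around re.compile; something along these lines (a sketch matching the description above):

```python
import re

def is_valid_regex(candidate):
    # Try to compile the candidate as a Python regular expression,
    # reporting failure instead of raising.
    try:
        re.compile(candidate)
        return True
    except re.error:
        return False

assert is_valid_regex(u"(a)+b.*")
assert not is_valid_regex(u"(unclosed")
```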

But we can also use recursive generation strategies to produce more complex regular expressions:

regexes = recursive(regex_strings(), lambda subexps:
                    # match one or more
                    subexps.map(lambda re: u"({re})+".format(re=re)) |

                    # match zero or more
                    subexps.map(lambda re: u"({re})*".format(re=re)) |

                    # Append "match any following"
                    subexps.map(lambda re: u"{re}.*".format(re=re)) |

                    # Prepend "match any following"
                    subexps.map(lambda re: u".*{re}".format(re=re)) |

                    # Prepend start of string
                    subexps.map(lambda re: u"^{re}".format(re=re)) |

                    # Append end of string
                    subexps.map(lambda re: u"{re}$".format(re=re)) |

                    # Append escaped backslash
                    subexps.map(lambda re: u"{re}\\\\".format(re=re)) |

                    # Append escaped parenthesis
                    subexps.map(lambda re: u"{re}\(".format(re=re)) |

                    # Append dot
                    subexps.map(lambda re: u"{re}.".format(re=re)) |

                    # Match zero or one
                    subexps.map(lambda re: u"({re})?".format(re=re)) |

                    # Match five to six occurrences
                    subexps.map(lambda re: (u"({re})"
                                            .format(re=re)) + u"{5,6}") |

                    # concatenate two regexes
                    tuples(subexps, subexps).map(lambda res: u"%s%s" % res) |

                    # OR two regexes
                    tuples(subexps, subexps).map(lambda res: u"%s|%s" % res))

The same strategy also works for the highly recursive structure of the query language:

queries = recursive(statements(),
                    lambda subqueries:
                    subqueries.map(negated_query) |
                    tuples(subqueries, subqueries).map(ored_queries) |
                    tuples(subqueries, subqueries).map(anded_queries))

Read as: “a valid query is any statement, any valid query negated, or two valid queries AND:ed or OR:ed”.
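The helper functions used by the `queries` strategy might look like this (the names are from the strategy above; the bodies are my assumption, based on the grammar):

```python
def negated_query(q):
    # Wrap a query in NOT(...)
    return u"NOT({q})".format(q=q)

def ored_queries(qs):
    # Combine two queries with OR, parenthesised as the grammar requires
    left, right = qs
    return u"({0}) OR ({1})".format(left, right)

def anded_queries(qs):
    # Combine two queries with AND
    left, right = qs
    return u"({0}) AND ({1})".format(left, right)

assert negated_query(u"a:'b'") == u"NOT(a:'b')"
assert ored_queries((u"a:'b'", u"c:'d'")) == u"(a:'b') OR (c:'d')"
```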

Making Assertions

Finally, we assert properties much as we would in normal unit tests. For example, let’s verify that the empty regular expression matches anything:

@given(target=objects(), key=valid_properties)
def test_query_for_empty_regex_always_matches(target, key):
    q = "{key}:''".format(key=key)
    assert query.matches_object(q, target)

Hypothesis immediately finds a counter-example:

>       assert query.matches_object(q, target)
E       assert False
E        +  where False = <function matches_object at 0x7fa76dc6f5f0>("A:''", {u'B': u''})
E        +    where <function matches_object at 0x7fa76dc6f5f0> = query.matches_object

key = 'A'
q = "A:''"
target = {u'B': u''}

syncd/eql/test/test_hypothesis.py:188: AssertionError
---------- Hypothesis ----------
Falsifying example: test_query_for_empty_regex_always_matches(target={u'B': u''}, key=u'A')

An object which doesn’t have the specified property will not match the query, even if the query is looking for the empty string. Whether that is a problem depends on how we want to treat this edge case: if we really did want the empty regular expression to match even objects which do not have the key, this would be a genuine bug in the implementation. However, it makes more sense to require the object to have the property being checked, so this is a bad counter-example. We can exclude it by adding assume(hasattr(target, key)) to the test, causing Hypothesis to back-track on any example where the target object does not have the key:

@given(target=objects(), key=valid_properties)
def test_query_for_empty_regex_always_matches(target, key):
    assume(hasattr(target, key))

    q = "{key}:''".format(key=key)

    assert query.matches_object(q, target)

And now, the test passes.

The image is from “Chemistry: general, medical and pharmaceutical…” from 1894, courtesy of the Internet Archive Book Images

Shipping Out: One Month Later

Entrance to the CERN Computer Centre

This September, I started a one-year internship at CERN’s Database department (more specifically the IT-DB-IMS department, as they are clearly very fond of hierarchies). My assignment is related to logging and storage monitoring, and my primary task will be to provide a solution for automatic propagation and reporting of storage-related errors (think dead hard drives, power failures etc).

Shipping Out

Early morning August the 27th, I started my journey. Split between a suitcase and two backpacks, I brought the following with me on the plane:

  • Soft-shell jacket and rain gear
  • Clothes for at least ten days, vacuum packed (*)
  • Micro-fibre towel
  • Sleeping bag + ultra-light sleeping pad
  • Various toiletries
  • Computer with charger
  • 6 USB chargers with cables
  • Spare AA cells
  • Bluetooth keyboard
  • Bluetooth headphones
  • Running gear for at least two weeks
  • Swimming trunks and goggles
  • 3 single-board computers (two CHIPs, one Raspberry Pi)
  • Stuffed tiger, vacuum packed
  • USB microphone
  • A few USB thumb drives
  • External disk drive (for back-ups)
  • Various tools (including lock-picks, various special-purpose screwdrivers and a side cutter)
  • Thermos mug
  • Pyjamas
  • Mountaineering/hiking boots

(*) this turned out to be largely false – perhaps ten normal days, but not ten really hot days.

View from my office

After an uneventful (and very WiFi-less) flight of about two hours, I landed in Geneva, found my luggage after walking an impressive number of hallways, each one plastered with advertisements for luxury watches, various banking services, perfumes etc, and proceeded to board the bus to Ferney-Voltaire, the small French village near the Swiss border where I would be staying.

The process involved much frustration and about one hour of pacing the arrivals hall trying to find someone who could help me buy a ticket from the vending machine. It turned out the Right Thing To Do was to buy a ticket in the machine for a different company than the one who operated the bus I ended up taking. From there, the rest of the trip was mostly uncomplicated, as I had practised walking most of the way from the bus stop to the place I would be staying at on Google Maps in advance. Something which, I suppose, is evidence of the same combination of absolute neuroticism and ruthless pragmatism as vacuum-packing one’s stuffed animal for the journey.

During the first month of my stay, I was living without a mobile data plan. The experience reminds me of Cory Doctorow’s short story After the Siege (available online as part of Overclocked). Everything works mostly as normal (only worse) – until I leave the apartment. Then everything immediately stops working. Accidentally closed an email attachment when trying to read the email that contained it? Sorry, it’s now unavailable. The song I just added to my playlist? Gone. Too bad the EU hasn’t succeeded in outlawing roaming fees yet.

On the Betrayal of Things

The pensioned synchrotron from the official CERN Tour

Living in another country for any longer duration is a lot like being at the wrong end of a set of really evil unit tests. In life in general, as in programming, at any given point in time one holds a lot of preconceptions about how the world works – unreflected habits and assumptions. Living in another country immediately invalidates a non-empty, randomly selected subset of these assumptions. It is precisely the experience that Foucault describes in the preface to Les mots et les choses of the “laugh that shatters”; at first one laughs at the three different positions of dried beans in the supermarket, the consistent placement of the potato crisps next to the hard liquor, or the three different, non-consecutive candy aisles – until one realises that categories such as “baking supplies” and “candy” are entirely arbitrary, and only took on an air of ontological truth by virtue of being widely accepted conventions. There are, after all, several ways to skin a cat.

Ils sont fous, ces Gaulois

The culture shock I experience with French culture is admittedly a very mild one, but there are definitely subtle yet unsettling differences. Milk typically comes high-temperature-pasteurised, which gives it a different flavour than fresh milk. And everything is perfumed and/or tinted pink; toilet paper (!), trash bags – everything. Loose-leaf tea is not available anywhere, except in gift tins without any option for refills. Muesli is hard to come by, and the “healthy” granola/muesli mixes “without added sugar” contain little chocolate drops. Not to mention the entire pastries-for-breakfast thing. I’m still not over that.

If anything, French administrative culture is a confusing jumble of obstinate strictness and unregulated chaos. Applying for a bank account? Sign 200 papers and prove your residence with a phone or utilities bill, which is apparently the standard way of doing it. Going for a swim? Well, certain types of swimming trunks are forbidden but there is no standardised vocabulary for describing swimming trunks so you just have to figure out which ones are ok through trial and error. But at least they gave me the rebate for residents after I dutifully showed my phone bill at the reception (the de facto standard proof of residence). I may also have slammed my swimwear at the desk and asked “CECI SONT INTERDITS?!”. Lucky me, they weren’t.

How’s Work?

A fiber switch in the CERN Computer centre

I have never seen anyone suffer as badly from NIH syndrome as CERN does. For almost every given task, they have their own solution, subtly different from the industry standard: Mattermost instead of Slack, Terrible Exchange Web Mail That Probably Escaped From 2007 instead of Google Apps (the only instance where I wish they’d had more NIH and just gone with their own solution), self-hosted GitLab instead of GitHub, and a customised variant of Dropbox called “CERNBox”. Oh, and they have their own GNU/Linux distros. Two of them. And they are both RPM-based.

From an economic standpoint this makes sense. They are already running operations on huge server farms. Adding a few extra services is probably very cheap, both in terms of labour and other resources, especially compared to paying for a service offered by someone else. And in the long run, their involvement in several FLOSS replacements for common industry applications is most likely greatly improving them. I just wish they had put more effort into the UX of some of these apps, so that they didn’t all suffer from flakiness and the general “a cheaper copy of X” look and feel (well known from Libre/OpenOffice, which somehow manages to look even worse than its proprietary counterparts – all things considered, no small feat). Frankly, I like my FLOSS implementations as I like my tech products in general: either better than or at least as good as the things they are replacing, or nonexistent.

Other than that, the work is interesting, if a bit mundane. I learn a lot every day, but it is really hard to muster any real enthusiasm for a small Python daemon whose task is basically carting data from one API to another. The only real challenges I have found so far are implementing a query language and testing things really, really well, about which I might go into some detail in the future.

Scheduling Periodic Cleaning in macOS

On most of my machines, I treat the default Downloads folder as a staging area: it’s where things arrive before being filed away wherever they belong. Most files, however, are only useful for a very brief period of time and belong nowhere, really. The Downloads folders on my systems therefore tend to accumulate all sorts of meaningless gunk, as sorting and taking out the digital trash is precisely the sort of menial task we built computers to avoid having to do. At least it was, until I automated the process of taking out the trash. Here’s how.

Begin by setting up an Automator workflow to do the actual cleaning. You might be tempted to choose a workflow along the lines of “if file creation date is not within…”, but that will – for some reason – exclude files created today. The proper way of doing it is to set up a NOT-AND criterion: if a file is not created today and not created within the last 60 days, move it to Trash as seen below.

Screenshot of an Automator workflow for moving files in Downloads to Trash

The next step is to schedule the Automator workflow periodically using Launchd. If your computer is always on at a given time, you could use Cron, but Launchd has more advanced trigger options and will also make sure to reschedule your task, should your computer have been off or asleep at the time it should have run. In this instance that is not very important, as the script runs every hour, but if you were to – say – clean your Downloads folder every first Monday of the month, it suddenly becomes more so. In the script below, the workflow is called when it is first loaded (e.g. on login) as well as periodically every hour (at minute 0), which may or may not be excessive for your use case (it probably is for mine).

Change the path to your Automator workflow file below (mine is in ~/Documents and is called clean-downloads.workflow). It may be a good idea to avoid spaces in the file name. Save the Launchd configuration file as ~/Library/LaunchAgents/com.orgname.scriptname.plist (as you can see below, I used org.albin and cleandownloads for the organisation name and script/agent name respectively).

Once you are done, you may or may not need to load the script using launchctl load <path-to-script>, e.g. launchctl load ~/Library/LaunchAgents/org.albin.cleandownloads.plist.

If you want to do more in-depth editing of the Launchd script, I’d recommend using LaunchControl. See also this StackOverflow thread on creating Launchd tasks and where to place them. It also contains other ways of scheduling periodic tasks under macOS.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>Label</key>
	<string>org.albin.cleandownloads</string>
	<key>ProgramArguments</key>
	<array>
		<string>automator</string>
		<string>/Users/albin/Documents/clean-downloads.workflow</string>
	</array>
	<key>RunAtLoad</key>
	<true/>
	<key>StartCalendarInterval</key>
	<dict>
		<key>Minute</key>
		<integer>0</integer>
	</dict>
</dict>
</plist>

Please note that this script (for safety reasons) doesn’t actually delete any files – it just moves them to the Trash. But it pairs excellently with macOS Sierra’s feature for automatically purging old files from the Trash!

Shoebox Time Machine backup server with Raspberry Pi ("poor man's Time Capsule")

IKEA Kassett box with HDD sticking out, labelled "DATA"

It is possible and fairly easy to build an acceptable Time Machine backup server using a Raspberry Pi and a spare USB drive. Basically, the process is as follows:

  1. Install dependencies
  2. Format and set up the drive
  3. Download and compile netatalk
  4. Configure netatalk

1. Install dependencies

Just do sudo apt-get install build-essential libevent-dev libssl-dev libgcrypt-dev libkrb5-dev libpam0g-dev libwrap0-dev libdb-dev libtdb-dev libmysqlclient-dev avahi-daemon libavahi-client-dev libacl1-dev libldap2-dev libcrack2-dev systemtap-sdt-dev libdbus-1-dev libdbus-glib-1-dev libglib2.0-dev libio-socket-inet6-perl tracker libtracker-sparql-0.16-dev libtracker-miner-0.16-dev libgcrypt20-dev

2. Format and set up the drive

Some guides call for using an hfsplus-formatted drive. If you have one that you previously used with macOS, by all means – use it. If not, I would prefer a file system better adapted to GNU/Linux. I would recommend ext4, which is what most distros use, unless you have a particular reason to use something else.

First, use sudo fdisk /dev/sda to partition your hard drive (this assumes the drive is the only USB hard disk or pen drive attached to your Raspberry Pi – if it is not, make sure you pick the right one!).

  1. Press o to create a new partition table.
  2. Create a new partition using the n command. Choose primary (p) and use the suggested sizes (unless you have reason to change them). Choose the Linux filesystem, which should be the default.
  3. When you are back at the main menu, press w to write everything to disk and quit.

Now, format the drive as ext4: sudo mkfs.ext4 -L "Time Machine" /dev/sda1. When you are done, determine the unique identifier (UUID) of the new partition with blkid /dev/sda1 and copy the entire UUID="LOTS OF TEXT" part.

Create a mount point at /media/time_machine with sudo mkdir /media/time_machine.

Edit /etc/fstab and add a line like this: UUID="ca92ada0-e97a-44fc-badb-f0e934d775c9" /media/time_machine ext4 defaults,rw,user,auto 0 0

Mount it using sudo mount -a and try writing something to /media/time_machine (e.g. touch /media/time_machine/test.file). If you get permission errors, do sudo chown -R user /media/time_machine, where user is your user name (again, probably pi).

3. Download and compile netatalk

Netatalk is a free implementation of Apple’s network/file sharing protocols. It is available from the official repositories, but that version is ancient, so we need to compile it ourselves.

First, download it. You can either do it on your local machine and copy it to your Raspberry Pi using scp, or you can find a link on the download page and use wget <file link> to download it directly. Once you have it, just extract it with the usual tar xvf netatalk-3.1.9.tar.bz2 (assuming you have the same version).

You can follow the official guide, but basically it is a matter of running configure with the right flags.

First, see which version of tracker you got from apt-get (pkg-config --list-all | grep tracker) and insert it after --with-tracker-pkgconfig-version in the proposed command. In my case, I got 1.0:

./configure --with-init-style=debian-systemd  \
            --without-libevent  --without-tdb \
            --with-cracklib --enable-krbV-uam \
            --with-pam-confdir=/etc/pam.d     \
            --with-dbus-daemon=/usr/bin/dbus-daemon \
            --with-dbus-sysconf-dir=/etc/dbus-1/system.d \
            --with-tracker-pkgconfig-version=1.0

Verify that DHX2 is enabled (you need it for password authentication on recent versions of macOS, at least El Capitan):

UAMS:
         DHX     (PAM SHADOW)
         DHX2    (PAM SHADOW)

Then make && sudo make install it!

4. Configure netatalk

After that, update your /usr/local/etc/afp.conf:

[Global]
; Global server settings
vol preset = default_for_all_vol
log file = /var/log/netatalk.log
uam list = uams_dhx.so,uams_dhx2.so
mimic model = TimeCapsule6,106

[default_for_all_vol]
file perm = 0664
directory perm = 0774
cnid scheme = dbd
valid users = albin

[Time Machine]
path = /media/time_machine
time machine = yes

Substitute your own user name for albin at valid users, or omit the line entirely to allow all users. Remember, this is a user account on the server. If you are using a Raspberry Pi, this is probably the “pi” user with the “raspberry” password (but you really should create a user of your own and change the password of the default account!). mimic model doesn’t really do much, except giving your Raspberry Pi a nice Time Capsule icon in Finder.

Start up netatalk:

sudo service netatalk start

Keep track of the logs using sudo tail -f /var/log/netatalk.log, and try to browse and log in from your computer. The Raspberry Pi should appear in Finder, and you should be able to log in as your chosen user (e.g. pi).

When you are confident everything works, you can set up netatalk to start when the server does:

sudo systemctl enable netatalk

After this, you should be able to find and select your shoebox server as a target in Time Machine. Now, go and make a very big cup of tea as your computer runs its initial backup.

Time Machine: Backup Completed


Monoprice tablets and multiple screens under ElementaryOS (and other GNU/Linuxes)

I recently got a 10 x 6.25-inch Monoprice graphic drawing tablet via Massdrop. It turns out that these tablets are actually re-branded Huion devices, supported by recent kernels and by the Digimend driver. Mine is a Huion H610. ElementaryOS Freya’s kernel is apparently patched to contain the necessary drivers, and everything worked out of the box in terms of reading input.

However, the coordinates on the tablet are mapped to X11 coordinates. Since I have two monitors, this means that every coordinate on the tablet is mapped against a corresponding coordinate on the combined surface of both my screens, which makes navigation extremely awkward. This can be solved by modifying the coordinate transformation matrix. The easiest way to do this is to use the neat tool xrestrict, which simply prompts you to click on the desired screen and sets up the coordinate transformation matrix to restrict inputs to that screen for the tool you just clicked with. It is currently not available in the official repos, but building and running it was fairly straightforward.