
Converting a project to TypeScript 2.0

If there is one thing the TypeScript people should improve, it’s documentation. I just went through a small odyssey trying to convert a project to TypeScript 2.0 and the new way of getting definition files. Hint: look at how the Rust project does documentation.

Updating TypeScript

Go into your package.json, set TypeScript to version 2.x and run npm install. Done. Try to compile your project; it probably won’t work, because the compiler is a bit more strict. The error messages should give you an idea of what needs to change, so go and fix your code. Visual Studio Code uses the globally installed TypeScript version, so update that one as well by running npm install -g typescript - yes, install, not update, don’t ask why. I make this mistake every. single. time.
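For reference, the whole update looks something like this (the exact 2.x version number doesn’t matter much):

// package.json
"devDependencies": {
  "typescript": "^2.0.0"
}

npm install                 # update the compiler your build uses
npm install -g typescript   # update the one Visual Studio Code uses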

Use @types

You can now install type definitions via npm; typings/tsd are no longer needed. Installation goes like this:

npm install --save @types/react

All DefinitelyTyped definitions are available, so you might as well do this now. After installing all typings, remove the reference paths to the old definitions, try to build, and observe how TypeScript cannot resolve a single module. First, we have to tell TypeScript that we’re using node modules:

// tsconfig.json
{
  "compilerOptions": {
    "moduleResolution": "node"
  }
}

Now you might actually be able to compile - unless you use global non-standard functions like require or core-js shims. Remember how you had to load typings explicitly using a reference path before? That is no longer necessary, but it also means TypeScript has no idea what typings are available. When you import something, its typings are loaded automatically, but anything that should always be loaded has to be configured:

// tsconfig.json
{
  "compilerOptions": {
    "types": [
      "node", "mocha", "core-js"
    ]
  }
}

Done - now your project should work as usual, with one less tool required. This wasn’t actually hard, was it? It still took me around an hour to figure out, which a simple mention of these things in the release announcement or elsewhere could have prevented (search the title of this post - there is zero documentation about this).

Getting started with bhyve

I’ve been running a home server for a few years, but my upload is just too poor to do anything serious with it, so I got myself a cheap dedicated server. I installed FreeBSD, because let’s try bhyve, their new-ish hypervisor.

The default “frontend” to bhyve is quite complex, so I used vm-bhyve instead, which can definitely compete with Docker in ease of use.

So let’s install it. It is in ports, but the package is usually outdated, so make sure you install from source.

# if you don't have the ports tree yet
portsnap fetch extract
cd /usr/ports/sysutils/vm-bhyve && make install clean

If you plan to run anything other than FreeBSD, you’ll also need grub2-bhyve:

cd /usr/ports/sysutils/grub2-bhyve && make install clean

Some initial config:

mkdir /var/vm
zfs create -o mountpoint=/var/vm zroot/vm
echo 'vm_enable="YES"' >> /etc/rc.conf
echo 'vm_dir="zfs:zroot/vm"' >> /etc/rc.conf
vm init
cp /usr/local/share/examples/vm-bhyve/* /var/vm/.templates/

This is enough to be able to launch VMs, but we want networking as well.

echo 'net.inet.ip.forwarding=1' >> /etc/sysctl.conf
echo 'pf_enable="YES"' >> /etc/rc.conf
vm switch create public
vm switch add public em0
vm switch nat public on
pkg install dnsmasq
echo 'dnsmasq_enable="YES"' >> /etc/rc.conf
mv /usr/local/etc/dnsmasq.conf.bhyve /usr/local/etc/dnsmasq.conf
service dnsmasq start

vm-bhyve will add an include line to /etc/pf.conf; you might have to move it up a bit (check with pfctl -f /etc/pf.conf).
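The reason you may have to move it: pf requires translation rules (nat, rdr) to come before filter rules, and the vm-bhyve include contains NAT rules. Roughly - the include path below is a placeholder for whatever line vm-bhyve actually wrote:

# /etc/pf.conf
# macros, options, scrub rules ...
include "/path/written/by/vm-bhyve"   # NAT rules - must precede filter rules
pass in all
pass out all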

Now, we need an ISO, which vm-bhyve can download for us:

vm iso ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/amd64/ISO-IMAGES/10.3/FreeBSD-10.3-RELEASE-amd64-disc1.iso

If you want to download ISOs manually, just put them in /var/vm/.iso/. Let’s launch a VM:

vm create -t freebsd-zpool -s 50G freebsd1
vm install freebsd1 FreeBSD-10.3-RELEASE-amd64-disc1.iso
vm console freebsd1

Now, just go through the installer as usual. Easy!

Next step: figure out how to assign IPv6 addresses to VMs. Hopefully not too hard.

This Week in CloudFM 1

or: How to implement XML parsing in just 500 lines of Rust.

A weekly blog about my progress on CloudFM, an offline-first, multi-backend music player.

Not the best start for a series like this, but last week my SSD died. Then I wasted an entire evening trying to install openSUSE Tumbleweed (something something SecureBoot). Bottom line: I did some stuff, but not even close to what I wanted to achieve.

What’s new

  • hyperdav got all required functionality. I’m not particularly proud of the code; the response parsing using xml-rs in particular is extremely verbose (see the sketch after this list), even though about 90% of the body is ignored anyway. Maybe real XML support in serde will happen one day.

  • WebDAV indexing is now implemented. This change broke some parts of the app, since the URI format has changed to now always include the backend id.

  • All components are now dockerized. I want to do some form of automated deployment soon-ish. Not because it makes sense right now, but because playing with ops stuff is fun.
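To illustrate the verbosity: just pulling the href elements out of a WebDAV multistatus response already needs a little state machine. A simplified sketch, not the actual hyperdav code:

use xml::reader::{EventReader, XmlEvent};

fn hrefs(body: &str) -> Vec<String> {
    let mut in_href = false;
    let mut result = Vec::new();
    // xml-rs hands us a flat stream of events; we have to track where we are ourselves
    for event in EventReader::new(body.as_bytes()) {
        match event {
            Ok(XmlEvent::StartElement { name, .. }) if name.local_name == "href" => in_href = true,
            Ok(XmlEvent::Characters(text)) if in_href => result.push(text),
            Ok(XmlEvent::EndElement { name }) if name.local_name == "href" => in_href = false,
            Err(_) => break,
            _ => {} // everything else in the body is ignored
        }
    }
    result
}

fn main() {
    let body = "<multistatus><response><href>/music/a.mp3</href></response></multistatus>";
    println!("{:?}", hrefs(body));
}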

What’s next

In case my notebook decides to explode tomorrow, let’s set the goals a bit lower:

  • Make the web app usable for everyday listening - same as last week
  • Implement the UI to add ownCloud/Box.com backends, which will be stored as webdav behind the scenes


Over the last few months, I’ve been working on a next-generation music player for the age of “servers connected to the internet”, also known as the “cloud”. Because I am bad at naming things, I called it CloudFM.

You don’t actually need to click the link, because there’s nothing to see there. I’m mainly putting this out there because I want to regularly share progress I’ve made. But let’s start with what I’ve done so far.

Why I’m doing this

Long story short, I loved Rdio, then Rdio shut down. Turns out the alternatives aren’t as good as Rdio and a lot of them even require Flash (2016??). I switched to Subsonic, which works ok, but Rdio was just so much better in so many ways. So I’m building my own thing instead.

CloudFM is a music player that integrates a plethora of cloud services (think Spotify, YouTube, Dropbox, SoundCloud and more) into a single player interface. Since mobile internet is expensive and not always available, I want to make it as offline-capable as possible. And nowadays you have plenty of storage space on your phone while the tiny SSD in your notebook is constantly running out of space - so CloudFM will let you store your music on your phone and listen to it on your desktop.

Micro-services written in Rust

To be honest, I did not intend to use a micro-service architecture from the start. I actually wrote a monolithic server first, until I realized I am going to need a lot of this code in different places. For example, the indexing code has to run on the server, but also as part of a GUI desktop app. That is why I turned my server into a library that compiles to a couple of binaries:

  • indexd: The indexing daemon, it indexes music from various online services (and local files).
  • proxyd: Give it a track ID and it will respond with an audio file, no matter where it is stored. In the future, it will also do things like on-the-fly re-encoding of files, and more.
  • manager: A desktop app, to index and serve local files. Will probably use the excellent GTK bindings for Rust. Or maybe the Qt Quick bindings, because GTK isn’t actually that great on platforms other than Linux. Ideally, both.

Web app written in TypeScript/React/Redux

Initially, I started writing it in Elm. It was an interesting experiment, and there are a lot of things I like about the language, but the pros didn’t quite outweigh the cons. The short version: the language has a few shortcomings even in its domain (web apps), the ecosystem is rather small, and integrating existing JavaScript libraries and APIs is a lot of work.

Searching for an alternative, I decided to try TypeScript first. I treat it as a very powerful linter: if your code passes the linter (compiles), it’s very likely correct. Less edit-tab-reload-tab-edit, more instant feedback through your editor. While the type system is not as good as Rust’s, and Redux is not very TypeScript-friendly, I do not regret it at all, partially because of the awesome TypeScript integration in Visual Studio Code.

Choosing a front-end framework was a really straightforward process: I knew I wanted a native mobile app, because hybrid apps just aren’t that great. And since we basically require PouchDB for offline availability of the database, React and React Native are pretty much the only viable option. Together with Redux, we’ve got a pretty nice, Elm-like stack with a big ecosystem.

What works

It’s been a bit over a week since I started rewriting the web app, and here’s what it looks like:

[Screenshot of the CloudFM web app]

This is just a very early prototype; expect things to change, a lot. Many of the features you’d expect from any player aren’t there yet. The design will also definitely change in the future.

What’s not visible in the screenshot is that the music is not just local, but also from Jamendo. In the future, a lot more backends will follow.

What’s next

My goals for this week:

  • Continue work on my WebDAV client for Rust and then implement the WebDAV backend
  • Make the web app usable for everyday listening
  • Start working on a MusicBrainz client
  • Read Design for Hackers


I wrote a Jamendo API client in Rust today. And it was easy.

Yes, Rust is quite hard to learn. But after you’ve grasped the concepts it’s built around, like ownership and lifetimes, it all makes sense. Unlike JavaScript, which never made any sense.

Including npm modules in TypeScript

I’ve written quite a lot of JavaScript and Node.js code over the last few years. Unlike many, I think it’s a great language and server platform. However, the lack of static types does regularly introduce errors that are hard to find. That’s why I decided to use TypeScript for a new project.

The website describes it as a superset of JavaScript, with typings being optional. I was quite surprised to find out this is not true at all. I started by installing TypeScript, a webpack loader, and React with npm. Now we should be able to do this:

import * as React from "react";

ERROR in ./src/main.ts
(2,24): error TS2307: Cannot find module 'react'.

Uh, what? I just installed it, why can’t the compiler find it? After a bit of reading (the official docs are severely lacking in that regard, I have to say), I found out that typings are actually not optional. There are two solutions:

  1. Install the require typings and load modules with require(). This works because the typings for require define it as returning any, therefore disabling all type checks for it.

  2. Install the typings for react. This is the recommended approach; however, the typings have to be created by hand, so they don’t exist for all modules.

How do you install typings, you might ask. There is a project with the same name that provides a package manager for them, similar to npm. Install it via npm install -g typings, then you can install type definitions using typings install react --save --ambient. --save stores them in typings.json, which is like your package.json but for typings. --ambient is required for modules that export to the global namespace - I don’t yet know why it’s required for react, only that it won’t work without it.
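The whole dance, for reference:

npm install -g typings
typings install react --save --ambient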

After you’ve installed them, you need to add one special line to the top of your code:

/// <reference path='../typings/browser.d.ts'/>

The path is relative to your source file and tells TypeScript to load the definitions that you’ve installed via typings. Note that browser.d.ts is for browser projects; if you target Node.js, use main.d.ts instead.

Initially, I also had issues with multiple definitions. This is because TypeScript, by default, loads all .ts files. What we usually want is just a single file that includes the rest of our code. To fix this, we need to create a tsconfig.json in our project root:

  "files": [

Now, finally, we are able to use React from TypeScript. I feel like there is a lot that needs to be improved here, starting with the compiler error message.

Now, what if there are no typings for the module you’d like to use? As I mentioned earlier, you can use require() to completely bypass type checks by installing the typings for it and then doing:

import h = require('react-hyperscript-helpers');

Unfortunately, this is not possible with ES6 modules, so we don’t get the ability to destructure during import. I think creating typings is what you should be doing instead - that’s why you’re using TypeScript in the first place, right?

The Handbook tells you how to do it, here’s just a quick example, my react-hyperscript-helpers.d.ts:

declare namespace ReactHyperscriptHelpers {
  function h1(text: string): any;
}

declare module "react-hyperscript-helpers" {
  export = ReactHyperscriptHelpers;
}

As you can see, it defines a single function h1 that takes a string and returns any. Now, we can do this:

/// <reference path='./react-hyperscript-helpers.d.ts'/>
import { h1 } from "react-hyperscript-helpers";

h1('Hello, World!');

I don’t think it ever took me longer to get a Hello World up and running. Microsoft, please make this stuff easier to get started with.

Self-contained development environments using Nix

Nix is a package manager that works a bit differently. It allows you to install any version of any package alongside each other. Since you can’t, for example, have multiple python executables, nix-shell is used to give you an environment with all the dependencies you need.

Let’s say you have a Rust project. You create a default.nix in your project directory:

with import <nixpkgs> { };

rustPlatform.buildRustPackage rec {
  name = "my-project-${version}";
  version = "0.1";
  src = ./.;
  buildInputs = [ openssl pkgconfig ];
  depsSha256 = "160ar8jfzhhrg5rk3rjq3sc5mmrakysynrpr4nfgqkbq952il2zk";
}

This defines a Rust package in the current directory, with openssl as a dependency. Note that buildInputs only lists native dependencies; your crates are specified in your Cargo.toml as usual.

To build a package out of this, you can run nix-build default.nix (or just nix-build .). However, this will always build the project from a clean state, which we don’t really want during development. So instead, we do nix-shell ., which puts us in a new shell that not only has openssl and pkgconfig, but also all dependencies of rustPlatform, like rustc and cargo.
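In short:

nix-build .   # clean, reproducible build; the output lands in ./result
nix-shell .   # development shell with openssl, pkgconfig, rustc and cargo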

Now, what if we need a database? Well, we’d have to install that through the usual channels - right? Wrong! This is where things get really interesting: Nix has packages for pretty much every database, and nix-shell allows us to run custom commands when we enter a shell. This attribute is called shellHook:

rustPlatform.buildRustPackage rec {
  name = "my-project-${version}";
  # ...
  shellHook = ''
    ${couchdb}/bin/couchdb -a couchdb.ini &
  '';
}

This would start CouchDB every time we enter our development environment. And if you’re still using Make to run your build commands, consider specifying them in your shellHook instead:

shellHook = ''
  function ci {
    cargo build
    cargo test
  }
'';

You can of course use Nix on your continuous integration platform, like Travis, by setting the script to:

nix-shell default.nix --command ci
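A minimal .travis.yml sketch - this assumes you install Nix via the standard installer, the details may differ on your setup:

# .travis.yml
language: generic
before_install: curl https://nixos.org/nix/install | sh
script:
  - source $HOME/.nix-profile/etc/profile.d/nix.sh
  - nix-shell default.nix --command ci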

By using Nix, the environment on Travis is exactly the same as the one you use locally. No longer will you have issues because Travis hasn’t updated their sqlite version in the last 5 years.

Another Year, Another Static Site Generator

This year’s hotness: Hugo. Being the web hipster that I am, of course I switched. Not that I didn’t have a good reason: I had already written two or three posts with Middleman, so it felt really old and used.

On a serious note, Middleman does feel a bit limiting when you build more complex sites with it. But maybe using static site generators for anything other than simple blogs and documentation is just a bad idea. Hugo, at least, is certainly a lot better than Jekyll and its awful Liquid templating syntax.

CORS Proxying with nginx

CORS is a very advanced security technology designed to waste your time. It works fine for production environments, but oh Firefox, can I please just send some requests to that API to test my app? The answer is no, so here is how to configure nginx and make your local dev environment so much more secure:

server {
  listen 8080;

  location / {
    if ($request_method = 'OPTIONS') {
      add_header 'Access-Control-Allow-Origin' '*';
      add_header 'Access-Control-Allow-Credentials' 'true';
      add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
      add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
      add_header 'Content-Type' 'text/plain charset=UTF-8';
      add_header 'Content-Length' 0;
      return 204;
    }

    add_header 'Access-Control-Allow-Origin' '*' always;
    add_header 'Access-Control-Allow-Credentials' 'true' always;
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
    add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type' always;

    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://localhost:3000;  # placeholder - the API you actually want to talk to
  }
}

To make your life extra difficult, the creators decided you can’t use wildcards for Access-Control-Allow-Headers, so enjoy updating this config for every new header you want to use. Oh, and please don’t run this in production, please?


After reading about it so many times, I finally tried NixOS. Never heard of it? Definitely check out their website, but if you’re not into functional programming, you probably won’t understand what it is all about. At least I didn’t, until I tried it. And it blew my mind. But let me break it down for you.


NixOS is basically a regular Linux distro; it runs on the desktop as well as on the server. What makes NixOS special is the way it is configured. The entire system is based on a single configuration file at /etc/nixos/configuration.nix. NixOS configuration is written in the Nix programming language; programming knowledge is not required, but it makes things easier. Nix is also a package manager, but more on that later. Unlike with configuration management tools like Ansible, there is zero state in the NixOS configuration. If you remove a service from your NixOS configuration, it is gone; there is no uninstall step.

Traditional configuration management works by checking the system state and performing the required actions. For example, installing a service usually goes like this:

  • Manually write the configuration file
  • Ensure required packages are installed
  • Ensure the configuration is correct
  • Ensure the service exists
  • Ensure the service is started

On NixOS, you add this to your configuration:

services.syncthing =
  { enable = true;
    user = "jakob";
  };

Note the differences:

  • There is no installation step; enable = true; implies that.
  • There is no separate configuration; it can all be done in the NixOS configuration.

If you install something on NixOS, nothing is actually written to paths like /usr or /bin. Instead, every package and every service gets its own file system structure under /nix/store. The actual system that you are running is made of symlinks to these directories (and to each other). Why is this the best thing ever? Whenever you change something in your configuration, NixOS creates a new copy of your system, again made of links to /nix/store. These copies are called profiles, and they occupy almost no space. Whenever you’ve made an error in your configuration, you can just roll back to any previous state. This is especially great when you’ve made changes that leave you unable to boot: you can simply select an older profile during boot.
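Switching to a new profile and rolling back are each a single command:

nixos-rebuild switch              # build the new configuration and activate it
nixos-rebuild switch --rollback   # return to the previous profile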


Nix is a package manager and not specific to NixOS; it works on other Linux distributions, too. Its command-line syntax is very similar to other package managers, and installing something is as easy as:

nix-env -i git

But of course it’s not exactly like other package managers. First of all, it can operate in multi-user mode, which means you don’t need root access to install software. Just like NixOS, Nix uses profiles and is able to roll back installations.
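For example, when an upgrade breaks something:

nix-env --list-generations   # show previous states of your profile
nix-env --rollback           # go back to the last one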

Something that is probably unique to Nix is the ability to override packages and to create derivatives. By default, Nix uses binary packages, but you can make changes to a package and Nix will then compile it with your changes. To give you an example, you can use this to create your own version of vim with the plugins you need. This means you don’t have to manually manage and update plugins; Nix can do it for you. These overrides can be done in your system-wide configuration or on a per-user basis in ~/.nixpkgs/config.nix.
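The general shape, as a minimal ~/.nixpkgs/config.nix sketch - the package and patch here are made up for illustration, and a custom vim works the same way via packageOverrides:

{
  packageOverrides = pkgs: {
    # rebuild GNU hello with a local patch applied
    hello = pkgs.hello.overrideDerivation (old: {
      patches = (old.patches or []) ++ [ ./my-fix.patch ];
    });
  };
}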

There is also an incredible number of packages for Nix. Often, smaller Linux distributions have a hard time maintaining that many packages, but that doesn’t seem to be the case for Nix. I actually found that there are more packages in Nix than in Fedora - not based on the raw numbers, but on what I use.

Updates in Nix are based on channels, which are just releases, but you can mix them without problems. I wanted newer versions of some packages, so I downloaded the nixpkgs repository and ran:

> sudo -s nix-env -i rkt -f ~/devel/nixpkgs/
replacing old ‘rkt-0.8.0’
installing ‘rkt-0.10.0’

Yes, it’s that simple. What is also simple is installing non-free software. Nix has packages for all the drivers and Steam; you just need to allow them in your config:

nixpkgs.config.allowUnfree = true;


The way NixOS works offers many advantages, but there are problems. Any software that relies on standard paths does not work on NixOS. Any bash script that uses #!/bin/bash does not work on NixOS. The included packages have all been patched to ensure they work, but anything you get from elsewhere might not. Sometimes, when you just need to do this one thing, and do it quickly, NixOS can get in the way. I personally use Docker for anything that’s not Nix-compatible, but I’m also working on packaging a few things myself. This might also explain why there are so many packages: packaging is easy, and you need a package to really get anything working.

I also have to say, there is not a lot of information about NixOS on the web. I’ve been reading more of the nixpkgs source code than anything else, but that’s not a bad thing. I feel like it’s actually a strong point of Nix: the source is easy to understand and it is never outdated. But it is really not a system where you can just search for your problem and find the answer in some shitty forum.

One example of a problem I’ve had is setting the GTK+2 theme. It defaults to the very ugly Raleigh. But how do you change it? To set the theme, you need to point GTK2_RC_FILES at the theme path - which is hard on Nix because the regular /usr/share doesn’t exist. And there wasn’t a single mention of this problem on the web, which really surprised me. The solution is, you might have guessed it, just a little bit of configuration:

environment.variables =
  { GTK2_RC_FILES = "${pkgs.gnome_themes_standard}/share/themes/Adwaita/gtk-2.0/gtkrc"; };

That being said, there’s great documentation that covers a lot of topics.


I love the way the NixOS configuration works. I can write the configuration once and deploy it on any machine, or even to a container. Or do it the other way around: run the server configuration in a container on your workstation.

There is also nixops for deploying NixOS machines to various cloud providers.

To wrap this up - NixOS is awesome and solves a lot of problems, you should try it.