Jakob Gillich Personal blog

CORS Proxying with nginx

CORS is a very advanced security technology designed to waste your time. It works fine in production, but oh, Firefox, can I please just send a few requests to that API to test my app? The answer is no, so here is how to configure nginx and make your local dev environment so much more secure:

server {
  listen 8080;
  location / {
    if ($request_method = 'OPTIONS') {
      add_header 'Access-Control-Allow-Origin' '*';
      add_header 'Access-Control-Allow-Credentials' 'true';
      add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
      add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
      add_header 'Content-Type' 'text/plain; charset=UTF-8';
      add_header 'Content-Length' 0;
      return 204;
    }

    add_header 'Access-Control-Allow-Origin' '*' always;
    add_header 'Access-Control-Allow-Credentials' 'true' always;
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
    add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type' always;

    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:1234;
  }
}

To make your life extra difficult, the creators decided you can't use wildcards in Access-Control-Allow-Headers, so enjoy editing this config for every new header you want to use. Oh, and please don't run this in production, please?
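To check that the proxy behaves, you can simulate a browser's preflight request with curl (a sketch; this assumes nginx is listening on port 8080 as configured above and that your app runs on localhost:3000):

```shell
curl -i -X OPTIONS http://localhost:8080/ \
  -H 'Origin: http://localhost:3000' \
  -H 'Access-Control-Request-Method: GET'
```

A 204 response carrying the Access-Control-Allow-* headers means the preflight branch works; a normal GET through the proxy should return them as well.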

NixOS

After reading about it so many times, I finally tried NixOS. Never heard of it? Definitely check out their website, but if you're not into functional programming, you probably won't understand what it's all about. At least I didn't, until I tried it. And it blew my mind. But let me break it down for you.

NixOS

NixOS is basically a regular Linux distro; it runs on the desktop as well as on the server. What makes NixOS special is the way it is configured: the entire system is described by a single configuration file at /etc/nixos/configuration.nix. The configuration is written in the Nix programming language; programming knowledge is not required, but it makes things easier. Nix is also a package manager, but more about that later. Unlike configuration management tools like Ansible, there is zero state in the NixOS configuration. If you remove a service from your configuration, it is gone; there is no uninstall step.

Traditional configuration management works by checking the system state and performing the required actions. For example, installing a service usually goes like this:

  • Manually write the configuration file
  • Ensure required packages are installed
  • Ensure the configuration is correct
  • Ensure the service exists
  • Ensure the service is started
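With a tool like Ansible, those steps map onto explicit tasks, roughly like this (a sketch only; the syncthing service and file paths are made-up examples, and module names vary between Ansible versions):

```yaml
- name: ensure syncthing is installed
  package: name=syncthing state=present

- name: write the configuration file
  template: src=syncthing.xml.j2 dest=/home/jakob/.config/syncthing/config.xml

- name: ensure the service is enabled and started
  service: name=syncthing state=started enabled=yes
```

Every task describes a desired state, but the tool still has to inspect the machine and converge it; nothing tracks what you removed from the playbook.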

On NixOS, you add this to your configuration:

services.syncthing =
  { enable = true;
    user = "jakob";
  };
  • There is no installation step; enable = true; implies it.
  • There is no separate configuration file; everything is set in the NixOS configuration.

If you install something on NixOS, nothing is actually written to paths like /usr or /bin. Instead, every package and every service gets its own file system structure under /nix/store. The actual system that you are running is made of symlinks to these directories (and to each other). Why is this the best thing ever? Whenever you change something in your configuration, NixOS creates a new copy of your system, again made of links to /nix/store. These copies are called profiles, and they occupy almost no space. Whenever you've made an error in your configuration, you can just roll back to any previous state. This is especially great when you've made changes that leave you unable to boot: you can simply select an older profile in the boot menu.
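For example, on a standard NixOS install you can list the system generations and roll back like this (a sketch; the profile path is the NixOS default):

```shell
# list all system generations (profiles)
sudo nix-env --list-generations --profile /nix/var/nix/profiles/system

# activate the previous generation again
sudo nixos-rebuild switch --rollback
```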

Nix

Nix is a package manager and not specific to NixOS; it works on other Linux distributions, too. Its usage is very similar to other package managers; installing something is as easy as:

nix-env -i git

But of course it's not exactly like other package managers. First of all, it can operate in multi-user mode, which means you don't need root access to install software. Just like NixOS, Nix uses profiles and is able to roll back installations.

Something that is probably unique to Nix is the ability to override packages and to create derivatives. By default, Nix uses binary packages, but you can make changes to a package and Nix will then compile it with your changes. To give you an example, you can use this to create your own version of vim with exactly the plugins you need. This means you don't have to manually manage and update plugins; Nix can do it for you. These overrides can be done in your system-wide configuration or per user in ~/.nixpkgs/config.nix.
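As a sketch, such a vim override in ~/.nixpkgs/config.nix can look like this (the vim_configurable.customize mechanism is from nixpkgs; treat the exact attribute names as an assumption, they have changed between releases):

```nix
{
  packageOverrides = pkgs: {
    # a custom vim with the plugins I want, built and updated by Nix
    myVim = pkgs.vim_configurable.customize {
      name = "vim";
      vimrcConfig.vam.pluginDictionaries = [
        { names = [ "ctrlp" "fugitive" ]; }
      ];
    };
  };
}
```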

There is also an incredible number of packages for NixOS. Smaller Linux distributions often have a hard time maintaining so many packages, but that doesn't seem to be the case for Nix. I actually found that Nix has more of the packages I use than Fedora does - not based on the raw numbers, but on what I use.

Updates in Nix are based on channels, which are basically releases, but you can mix them without problems. I wanted newer versions of some packages, so I downloaded the nixpkgs repository and ran:

> sudo -s nix-env -i rkt -f ~/devel/nixpkgs/
replacing old ‘rkt-0.8.0’
installing ‘rkt-0.10.0’

Yes, it’s that simple. What is also simple is installing non-free software. Nix has packages for all the drivers and Steam, you just need to allow them in your config:

nixpkgs.config.allowUnfree = true;

Problems

The way NixOS works offers many advantages, but there are problems. Any software that relies on standard paths does not work on NixOS; any bash script that uses #!/bin/bash does not work either. The included packages have all been patched to ensure they work, but anything you get from elsewhere might not. Sometimes, when you just need to do this one thing, and do it quickly, NixOS can get in the way. I personally use Docker for anything that's not Nix-compatible, but I'm also working on packaging a few things myself. This might explain why there are so many packages: packaging is easy, and you need a package to really get anything working.

I also have to say, there is not a lot of information about NixOS on the web. I've been reading more of the nixpkgs source code than anything else, but that's not a bad thing. I feel like it's actually a strong point of Nix: the source is easy to understand and it is never outdated. But it is really not a system where you can just search for your problem and find the answer in some shitty forum.

One example of a problem I've had is setting the GTK+2 theme. It defaults to the very ugly Raleigh, but how do you change it? To set the theme, you need to point GTK2_RC_FILES at the theme path - which is hard on Nix because the regular /usr/share doesn't exist. And there wasn't a single mention of this problem on the web, which really surprised me. The solution is, you might have guessed it, just a little bit of configuration:

environment.variables =
  { GTK2_RC_FILES = "${pkgs.gnome_themes_standard}/share/themes/Adwaita/gtk-2.0/gtkrc"; };

That being said, there’s great documentation that covers a lot of topics.

Solutions

I love the way the NixOS configuration works. I can write the configuration once and deploy it on any machine, or even to a container. Or do it the other way around and run the server configuration in a container on your workstation.

There is also nixops to deploy NixOS machines to the various cloud providers.

To wrap this up: NixOS is awesome and solves a lot of problems. You should try it.

The state of video production on Linux

As a regular listener of the Linux Action Show and similar shows JB produces, I’ve always known video production on Linux isn’t the best experience ever. Never would I have imagined how bad things really are.

I had a really simple task: record a video, add a few titles, done. Doesn't sound hard, does it? Well, apparently it is - on Linux. I tried pretty much all the editing software that is available:

Pitivi

Pitivi looks like a modern GNOME 3 app, great. Adding titles was very simple; unfortunately, the app freezes every two clicks and you have to kill it.

Blender

Blender can actually do video editing, but Fedora does not compile it with ffmpeg support, so it supports zero formats. From what I've read it's really not the best editing software anyway, so I didn't bother building it myself.

Kdenlive

Great features, but some of them are hidden in odd places. It has the potential to be really great, but sadly I also experienced a lot of crashes - fewer than with Pitivi, but still, does anyone really work with this stuff?

OpenShot

OpenShot is missing some basic features any program should have. The actual video editing part is OK (yes, I had issues, but not as bad as with the others), but you cannot ever move any of the files used in a project, because it hard-codes a million paths in the project file: paths to Python, to the local configuration directory and even to the desktop. Open a project file after moving anything and OpenShot just crashes right away.

Lightworks

The only closed source app here. They announced they would go open source in 2011; I doubt it's ever going to happen. It seems to be pretty good software overall, but the free version is basically a demo - adding titles requires the paid version, which I'm not really willing to get at this point.

What now?

I was able to get the results I wanted with OpenShot. But I don't think I'm ever going to use it again, because it is impossible to share the project files with anyone. Renaming a single folder breaks your project; that's just not acceptable.

I have to agree with others that video production on Linux is nowhere near being viable, unless you buy Lightworks. Kdenlive is probably the best open source editor out there, but you still have to deal with it crashing, a lot.

Using GitHub Pages and Travis for automatically deployed web sites

Due to some issues with upgrading my Ghost installation, and considering that I’m not blogging that much anyway, I’ve decided to move this blog over to GitHub Pages using the Middleman static site generator.

To write a blog post, all I have to do is to push it to GitHub. Travis then builds it and pushes it to the gh-pages branch, from where GitHub serves it, and thanks to Cloudflare I even get free SSL. Zero cost, zero maintenance required.

Now, here is how to do that. Obviously you need a GitHub repository that contains your sources. This method works with pretty much anything that can output a static site; it is not limited to Middleman. Anyway, create your repository and add two files.

The first is .travis.yml:

language: ruby
script: bundle exec middleman build
sudo: false
after_success: |
  export PATH=$HOME/.local/bin:$PATH &&
  [ $TRAVIS_BRANCH = master ] &&
  [ $TRAVIS_PULL_REQUEST = false ] &&
  pip install --user ghp-import &&
  ghp-import -n build &&
  git push -fq https://${TOKEN}@github.com/${TRAVIS_REPO_SLUG}.git gh-pages
env:
  global:
    - secure: YOUR_ENCRYPTED_GITHUB_TOKEN

This tells Travis to build the site and use ghp-import to push the build folder back to GitHub. For authentication, you need a personal access token from GitHub, which you then encrypt using the Travis command line client (gem install travis) from within your repository:

travis encrypt TOKEN=YOUR_GITHUB_TOKEN

One more file you will need is a CNAME file. It tells GitHub Pages that incoming requests for your domain belong to that repository. All it contains is your domain, in my case:

www.jakobgillich.com

Note that this has to be in the root path of your gh-pages branch, in the case of Middleman you would put it under source/CNAME. Once you have created these two files, you can enable Travis for your repository and your site should be automatically deployed to GitHub Pages whenever you push to your repository.

To set up your domain, go to Cloudflare and add two DNS records to your domain:

CNAME   @     jgillich.github.io
CNAME   www   jgillich.github.io

You can also set SSL to Full, Cloudflare will then always communicate with GitHub over SSL.

At this point we are pretty much done, but one more thing: since we got free SSL from Cloudflare, there is no reason not to use it. You can force HTTPS by going to Page Rules and checking Always use https for the URL pattern http://*jakobgillich.com/*.

How To Install Redmine on CentOS 7

The following describes the process to get the Redmine project management application running on CentOS (RHEL) 7 with a MariaDB database.

Install dependencies:

# yum install @development mariadb-server mariadb-devel ruby ruby-devel ImageMagick ImageMagick-devel rubygem-rake rubygem-bundler

Enable and start MariaDB:

# systemctl enable mariadb
# systemctl start mariadb

Configure database:

# mysql
> CREATE DATABASE redmine CHARACTER SET utf8;
> CREATE USER 'redmine'@'localhost' IDENTIFIED BY 'my_password';
> GRANT ALL PRIVILEGES ON redmine.* TO 'redmine'@'localhost';
> exit

Create a user to run Redmine under:

# adduser redmine

Download and extract Redmine:

# curl -O http://www.redmine.org/releases/redmine-2.5.2.tar.gz
# tar xvf redmine-2.5.2.tar.gz
# mv redmine-2.5.2/ /home/redmine/
# mv /home/redmine/redmine-2.5.2 /home/redmine/redmine
# chown -R redmine:redmine /home/redmine/redmine

Set up Redmine:

# su redmine
$ cd ~/redmine

$ cp config/database.yml.example config/database.yml
$ vi config/database.yml # set user & password for production

$ bundle install --without development test
$ rake generate_secret_token
$ RAILS_ENV=production rake db:migrate
$ # load default data (optional):
$ RAILS_ENV=production rake redmine:load_default_data

$ mkdir -p tmp tmp/pdf public/plugin_assets
$ chown -R redmine:redmine files log tmp public/plugin_assets
$ chmod -R 755 files log tmp public/plugin_assets

To start Redmine with Ruby's built-in web server (WEBrick), run:

$ ruby script/rails server webrick -e production

You can also add -p PORTNUMBER to use a port other than the default (3000). If you want to access Redmine over the network, you have to add a firewall rule:

# firewall-cmd --add-port=3000/tcp --permanent
# firewall-cmd --reload

Gulp, Yet Another JavaScript Build Tool

Grunt is unarguably the most popular build tool for web projects. I like a lot of things about it, but dislike a few others like the configuration syntax and its performance.

So I've finally tried Gulp, and I'm impressed. Porting my Gruntfile was very easy because Gulp is just so simple to use; it took me maybe 30 minutes to port around 80 lines of Grunt configuration, resulting in 60 lines of gulp tasks. And it doesn't just run faster (build time down to 2 seconds from 3; more important is that the watcher reacts faster), it is also more readable.

There are a few differences of course; Gulp doesn’t try to be the best tool for everything. Tasks run in parallel by default, which doesn’t work very well when one of them wipes the build directory while another one writes into it. Workarounds are available though.
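One such workaround in gulp 3's syntax is to declare the cleanup task as a dependency, so it is guaranteed to finish before the dependent task starts (a sketch; the task names and the del module are examples, not from my Gulpfile):

```javascript
var gulp = require('gulp');
var del = require('del');

// wipe the build directory first
gulp.task('clean', function (cb) {
  del(['build'], cb);
});

// 'build' only runs after 'clean' has completed
gulp.task('build', ['clean'], function () {
  return gulp.src('src/**/*.js')
    .pipe(gulp.dest('build'));
});
```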

Still, if you know how streams in Node work, you almost know how to use Gulp. Grunt on the other hand is a complex beast that does everything you want, but requires you to have some patience while learning and running it.

CentOS 7

[root@centos7 ~]# yum install nginx
No package nginx available.

Yes, they are dead serious. But they ship nginx in RHSCL 1.1, which sadly isn’t available on CentOS yet.

Redirecting Ctrl+A with JavaScript

While building a simple paste service, I wanted to catch Ctrl+A and redirect it to only select a single element. This does exactly that:

document.addEventListener('keydown', function (event) {
    if(event.ctrlKey && event.keyCode == 65) {
        var range = document.createRange();
        range.selectNode(document.getElementsByTagName('main')[0]);
        window.getSelection().addRange(range);
        event.preventDefault();
    }
});

What it does:

  • Attach a keydown handler to document to get all key presses
  • Make sure Ctrl+A has been pressed (A is keyCode 65)
  • Add a selection to an element (<main> in this case)
  • Call preventDefault to stop the browser from selecting the whole page
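If you want, the check can be pulled out into a small predicate, which also makes room for the newer event.key property and for Cmd+A on a Mac (a sketch; metaKey and event.key support are my additions, not part of the original snippet):

```javascript
// true when the event represents Ctrl+A (or Cmd+A on a Mac)
function isSelectAll(event) {
  return (event.ctrlKey || event.metaKey) &&
    (event.key === 'a' || event.keyCode === 65);
}
```

The keydown handler then becomes `if (isSelectAll(event)) { … }`, and the predicate can be tested without a browser.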

StartSSL & Heartbleed

Until today, I used StartCom’s StartSSL for all of my domains. While their web interface is terrible, they are probably the only company that offers free certificates. But in reality, they are not really free (anymore).

You have surely heard of the Heartbleed bug, which allowed anyone to get access to private certificate keys, making encryption totally useless. Since I was affected by this as well, there were two things I had to do:

  1. Update OpenSSL. By the time I heard about the vulnerability, FreeBSD did already provide an updated port. Easy.
  2. Replace all certificates. It turns out that in order to generate a new certificate at StartSSL, the old one either has to expire or you have to revoke it - which costs $25.

Considering almost all of their customers need a new certificate, their free certificates effectively cost $25 now. I guess a lot of their customers aren't willing to pay this fee and would rather risk leaked keys. That's just irresponsible - if you are not able to offer free certificates that are actually secure, then don't offer them at all.

MProgress, a slim progress bar written in Vanilla JS

MProgress is a small JavaScript library I’ve been working on last weekend. If you know NProgress, my project is a much smaller (70 lines vs. 300+ lines) clone that has no dependency on jQuery. To be fair, it also has fewer features and only works on more modern (ES5-compatible) browsers.

I have never done a lot of DOM work without jQuery, so this has been quite interesting. I had to look up how to do really basic things like inserting an element, but I also realized that there are very few cases where I would ever prefer Vanilla JS over jQuery. For example, to remove an element from the DOM, you have to call removeChild on its parent:

el.parentNode.removeChild(el);

Now the same code in jQuery:

$(el).remove()

The latter is not only shorter, it is also easier to understand.
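That said, if you need it in more than one place, the vanilla version is trivial to wrap in a helper (a sketch, not part of MProgress; the guard against detached nodes is my addition):

```javascript
// a vanilla stand-in for jQuery's $(el).remove()
function remove(el) {
  // detached nodes have no parentNode, so do nothing for them
  if (el && el.parentNode) {
    el.parentNode.removeChild(el);
  }
}
```

Unlike jQuery's remove(), this does not clean up event handlers or data, but for simple cases that is fine.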

Anyway, in this case it made a lot of sense to write vanilla JavaScript: You can load it, show a progress bar and then start loading the huge chunk of minified scripts that your application consists of. Users with a slow (mobile) connection will then see a progress bar that informs them the page is being loaded instead of just a blank page.

To find out more, I’ve put a small page together that includes a live example of the library.