Tag Archives: HowTo

Building multiple inter-dependent autotools based projects

One of the main reasons for this blog is documenting things for my future self. This post is one of these.

Building an HTTP REST API in C++ is a rather unusual kind of challenge, but doing it with libhttpserver takes a lot of the lower level protocol burden (read: HTTP and below) away from you and lets you focus on the content part of your problem. This is what we did for a new project at work lately. But sometimes, when working with external dependencies, you hit bugs and corner cases and have to dig into those projects.

Building libhttpserver is almost straightforward if you have built autotools based projects before. You’ll notice, however, that it depends on a quite recent version of libmicrohttpd. So the task for hacking on libhttpserver is to build both libraries from source, without interfering with the rest of your system.

The usual way to build autotools based projects is like this:

% ./configure
% make

(It works exactly like this if building from an extracted source tarball. There’s another step in front of those, if you build from a Git working copy.)

Now, after the naïve way to build libmicrohttpd, how can we tell libhttpserver to pick that up as a dependency? What I do is “install” both to the same place in a tree reserved just for this purpose, using the --prefix option of ./configure with a directory I created beforehand. So for libmicrohttpd from Git it would look like this:

% ./bootstrap
% mkdir -p build && cd build
% ../configure --prefix=/home/adahl/build/sysroot/http
% make
% make install

After that I have the following file tree:

% tree ~/build/sysroot/http
/home/adahl/build/sysroot/http
├── include
│   └── microhttpd.h
├── lib
│   ├── libmicrohttpd.a
│   ├── libmicrohttpd.la
│   ├── libmicrohttpd.so -> libmicrohttpd.so.12.60.0
│   ├── libmicrohttpd.so.12 -> libmicrohttpd.so.12.60.0
│   ├── libmicrohttpd.so.12.60.0
│   └── pkgconfig
│       └── libmicrohttpd.pc
└── share
    ├── info
    │   ├── dir
    │   ├── libmicrohttpd.info
    │   ├── libmicrohttpd_performance_data.png
    │   └── libmicrohttpd-tutorial.info
    └── man
        └── man3
            └── libmicrohttpd.3

7 directories, 12 files

So let’s try to build libhttpserver from Git master now:

% ./bootstrap
% mkdir -p build && cd build
% ../configure --prefix=/home/adahl/build/sysroot/http

But that gives:

checking for microhttpd.h... no
configure: error: "microhttpd.h not found"

So just giving the prefix is not enough. We need to pass some directories to the preprocessor (for finding header files) and to the linker (to link against the other shared lib). You can do it like this:

% CPPFLAGS=-I/home/adahl/build/sysroot/http/include LDFLAGS=-L/home/adahl/build/sysroot/http/lib ../configure --prefix=/home/adahl/build/sysroot/http

Looking complicated? I bet. In our case we have only two projects, but what if there are even more? GNU Autotools has a nice feature which can help here though: Overriding Default Configuration Setting with config.site. In short: put those preprocessor, compiler, and linker flags into a special file in your install tree, and ./configure will pick it up automatically and use the settings. In my case, I create a file /home/adahl/build/sysroot/http/share/config.site and put in the following:

test -z "$LDFLAGS" && LDFLAGS=-L/home/adahl/build/sysroot/http/lib
test -z "$CPPFLAGS" && CPPFLAGS=-I/home/adahl/build/sysroot/http/include

We can now omit that stuff when calling ./configure, and everything is found, compiled, and linked together. We can tell the file is picked up by the very first line of the ./configure output:

% ../configure --prefix=/home/adahl/build/sysroot/http  
configure: loading site script /home/adahl/build/sysroot/http/share/config.site
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
…
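With configure succeeding, the rest of the libhttpserver build then follows the same pattern as before:

% make
% make install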

So that’s all for today, I hope it will help my future self to look up what that magic file is called, where it must be placed, and where the documentation for that can be found.

Update: I make use of this mechanism in some of my CMake superbuild projects, e.g. in glowing-tribble-build.

Moving to HGKeeper

Motivation

So, where do we come from? I started using the Mercurial version control system around 2009, if I remember correctly. I had used Subversion and SVK (does anyone remember that?) before and was curious about distributed version control. Back then Mercurial was better suited for platform independent use on Linux and Windows, and I still had to use the latter at work. Mercurial’s user interface was very much the same as Subversion’s with basically just push and pull added, so switching from Subversion to Mercurial was easy. Git was weird at the time, at least for me. Meanwhile we use Git at work exclusively and it got a lot better on Windows over time. However, for nostalgic reasons and to stay somewhat fluent in other VCS, I kept most of my private projects on Mercurial.

For “hosting” my own repos I used the Debian package mercurial-server, at least up to Debian 9 (stretch), but after upgrading that server to Debian 10 (buster) things started falling apart, and I looked around for a new hosting solution. For the record: I thought about converting all those repos to Git, but opted against it, because I have accumulated quite a number of repos, and although I did convert one or two already, I supposed it would be easier to switch the hosting than to convert each repo.

Speaking of hosting: I don’t need a huge forge for myself, just some rather simple solution for having server side “central” repos so I can easily work from different laptops or workstations. So I scanned over MercurialHosting in the Mercurial wiki and every self hosting solution seemed like cracking a nut with a sledgehammer, except HGKeeper.

Introducing HGKeeper

HGKeeper introduces itself like this in its own repo:

HGKeeper is an server for mercurial repositories. It provides access control for SSH access and public HTTP access via hgweb.

It was originally designed to be run in a container but recently support has been added to run it from an existing openssh-server.

SSH and simple HTTP is all I need, and running in a container suits me right, especially after I had started deploying things with Docker and Ansible and could use a little more practice with that. Running in a container is especially helpful when running things implemented in those fancy new languages like Go or Rust on an old-fashioned Linux like Debian. (For reference see for example Package managers all the way down on lwn about how modern languages create a dependency hell for classical Linux distributions.)

Running the HGKeeper Docker container itself was easy, however SSH access would go through a non-standard port, at least if I wanted to keep accessing the host machine through port 22.

The README promised HGKeeper can also be run together with OpenSSH running on the default port. But is it possible to do both or all of this? Run in a container, access HGKeeper through port 22, and keep access to the host on the same port? I reached out to Gary Kramlich, the author of HGKeeper, and that was a very nice experience. Let’s say I nerd sniped him somehow?!

Installing HGKeeper

So the goal is to run HGKeeper in a Docker container and access that through OpenSSH. While doing the setup I decided to go through an SSH server on a different machine, the one that’s exposed to the internet from my local network anyways, and where mercurial-server was installed before. So access from outside goes through OpenSSH on standard port 22 to hg.example.com, which is an alias for the virtual machine falbala.internal.example.com. That machine tunnels the traffic to another virtual machine, miraculix.internal.example.com, where HGKeeper runs in the Docker container, with SSH port 22022 exposed to the local network.

Preparations

We follow the HGKeeper README and prepare things on the Docker host (miraculix) first. I created a directory /srv/data/hgkeeper where all related data is supposed to live. In the subfolder host-keys I created the SSH host keys as suggested in section “SSH Host Keys”:

$ ssh-keygen -t rsa -b 4096 -o -f host-keys/ssh_host_rsa_key

The Docker container itself needs some preparation, so we run it once manually as suggested in section “Running in a Container”. The important part here is to pass the SSH public key of the client workstation from which you will first access the HG repos. I copied that from my laptop to /srv/data/hgkeeper/tmp/ before. The admin username passed here (alex) should also be adapted to your needs:

cd /srv/data/hgkeeper
docker run --rm \
    -v $(pwd)/repos:/repos \
    -v $(pwd)/tmp/id_rsa.pub:/admin-pubkey:ro \
    -e HGK_ADMIN_USERNAME=alex \
    -e HGK_ADMIN_PUBKEY=/admin-pubkey \
    -e HGK_REPOS_PATH=/repos \
    docker.io/rwgrim/hgkeeper:latest \
    hgkeeper setup

Setting up OpenSSH

As stated before, I tried to set up as many things as possible with Ansible. The preparation stuff above could probably also be done with Ansible, but I had that in place from playing around and did not bother at the time. It depends on your philosophy anyways whether you want to automate such sensitive tasks as creating crypto keys. However, from here on I have everything in a playbook, and will show those snippets to illustrate my setup. The OpenSSH config is for the host falbala and thus lives in falbala.yml, so see the first part here:

---
- hosts: falbala
  become: true

  vars:
    hg_homedir: /var/lib/hg

  tasks:
   - name: Add system user hg
     user:
       name: hg
       comment: Mercurial Server
       group: nogroup
       shell: /bin/sh
       system: yes
       create_home: yes
       home: "{{ hg_homedir }}"

That system needs a local user. You can name it as you like, but it was named hg on my old setup and I wanted to keep that, so I don’t have to change my working copies. The user needs to have a shell set, otherwise OpenSSH won’t be able to call commands needed later. I use the $HOME dir to put the SSH known_hosts file in it, so it does not clutter my global settings. Doing this manually on Debian would look like this:

% sudo adduser --system --home /var/lib/hg --shell /bin/sh hg

Next step is that known_hosts file. You can do this manually by logging into that hg user once and do a manual connection to the SSH server on the other machine like this:

$ sudo -i -u hg
$ ssh -p 22022 hg@miraculix.internal.example.com

For Ansible I prepared a known_hosts file and that was somewhat tricky due to the different port used. You can not just look into your present files for reference, because host and port are hashed in there, and the documentation (man 8 sshd) does not cover that part. I had to guess from ssh -v output. The file I came up with is named pubkeys/hgkeeper in my Ansible project and it looks like this:

[miraculix.internal.example.com]:22022 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDD8hrAkg7z1ao3Hq1w/4u9Khxc4aDUfJiKfbhin0cYRY7XrNIn3mix9gwajGWlV1m0P9nyXiNTW4E/Z
W0rgTF4I1PZs3dbh66dIAH7Jif4YLFj5VPj350TF5XeytyFjalecWBa36S1Y+UydIY/o/yC104D5Hg7M27bQzL+blqk1eIlY0aaM+faEuxFYHexK5fa+Xq150F6NswHdsVPCYOKu6t+myGHpe2+X6qhVNuftDOP5J
QO6BzxhN6MmG1arZ9dkeBb6Ry++R4o3soeV1k9uZ33jbJGnqFryvL3cyOPq7mVdoSffwqef1i4+0fNTGgO8U93w2An6z5fRjvPufA+VIVvFDwRoREFKvO1Q+WdeOSUWOl6QwVjKPrv0M3QnSnTJHpZpNlshOaDZyQ
NHLXLEO43vdbGr6rk7l9ApUcF34Y7eLWp42XktQLlzDitua009v7uNBAuIzKR3+UAWaFpj+CGl1jDm7a3n8kXlJjumVN5hfXo0lLz7n+G/Yd/U87dHftL0kiYcVRR4n1qMmhV5UL4lq0FNDBwwzRzSKyNw80mRoMH
RiKBBUTFXJApzlIAXiJ7g1JThM2rcNnskpyhZSrL38ses5Ns2GBOzEZsi51U+S5O91+KwHDTb10sxoJskUvIyJxCUILkOGZpbd4uWI+6tAWycP4QMT33MUHFEQ==
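Instead of hand-crafting that entry, ssh-keyscan should be able to produce it directly; just a sketch, assuming the HGKeeper container is already up and reachable on port 22022:

$ ssh-keyscan -p 22022 -t rsa miraculix.internal.example.com >> pubkeys/hgkeeper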

With that in place, it’s straightforward in the playbook:

   - name: Ensure user hg has .ssh dir
     file:
       path: "{{ hg_homedir }}/.ssh"
       state: directory
       owner: hg
       group: nogroup
       mode: '0700'

   - name: Ensure known_hosts entry for miraculix exists
     known_hosts:
       path: "{{ hg_homedir }}/.ssh/known_hosts"
       name: "[miraculix.internal.example.com]:22022"
       key: "{{ lookup('file', 'pubkeys/hgkeeper') }}"

   - name: Ensure access rights for known_hosts file
     file:
       path: "{{ hg_homedir }}/.ssh/known_hosts"
       state: file
       owner: hg
       group: nogroup

For the SSH daemon configuration two things are needed. First, if you use domain names instead of IP addresses, you have to set UseDNS yes in sshd_config. This snippet does it in Ansible:

   - name: Ensure OpenSSH does remote host name resolution
     lineinfile:
       path: /etc/ssh/sshd_config
       regexp: '^#?UseDNS'
       line: 'UseDNS yes'
       validate: /usr/sbin/sshd -T -f %s
       backup: yes
     notify:
       - Restart sshd

The second and most important part is matching the hg user, authenticating against the HGKeeper app running on the other host, and tunneling the traffic through. This is done with the following snippet, which contains the literal block to be added to /etc/ssh/sshd_config if you’re doing it manually:

   - name: Ensure SSH tunnel block to hgkeeper is present
     blockinfile:
       path: /etc/ssh/sshd_config
       marker: "# {mark} ANSIBLE MANAGED BLOCK"
       insertafter: EOF
       validate: /usr/sbin/sshd -T -C user=hg -f %s
       backup: yes
       block: |
         Match User hg
             AuthorizedKeysCommand /usr/bin/curl -q --data-urlencode 'fp=%f' --get http://miraculix.internal.example.com:8081/hgk/authorized_keys
             AuthorizedKeysCommandUser hg
             PasswordAuthentication no
     notify:
       - Restart sshd

You might have noticed two things: curl has to be installed, and sshd should be restarted after its config has changed. Here:

   - name: Ensure curl is installed
     package:
       name: curl
       state: present

  handlers:
    - name: Restart sshd
      service:
        name: sshd
        state: reloaded

Running HGKeeper Docker Container with Ansible

Running a Docker container from Ansible is quite easy. I just translated the call to docker run and its arguments from the HGKeeper documentation:

---
- hosts: miraculix
  become: true

  vars:
    data_basedir: /srv/data
    hgkeeper_data: "{{ data_basedir }}/hgkeeper"
    hgkeeper_host_keys: host-keys
    hgkeeper_repos: repos
    hgkeeper_ssh_port: "22022"

  tasks:
   - name: Setup docker container for HGKeeper
     docker_container:
       name: hgkeeper
       image: "docker.io/rwgrim/hgkeeper:latest"
       pull: true
       state: started
       detach: true
       restart_policy: unless-stopped
       volumes:
         - "{{ hgkeeper_data }}/{{ hgkeeper_host_keys }}:/{{ hgkeeper_host_keys }}:ro"
         - "{{ hgkeeper_data }}/{{ hgkeeper_repos }}:/{{ hgkeeper_repos }}"
       env:
         HGK_SSH_HOST_KEYS: "/{{ hgkeeper_host_keys }}"
         HGK_REPOS_PATH: "/{{ hgkeeper_repos }}"
         HGK_EXTERNAL_HOSTNAME: "miraculix.internal.example.com"
         HGK_EXTERNAL_PORT: "{{ hgkeeper_ssh_port }}"
       ports:
         - "8081:8080"   # http
         - "{{ hgkeeper_ssh_port }}:22222"   # ssh
       command: hgkeeper serve

Client Settings

Almost done. After trivially copying over my old repos from the old virtual machine to the new one, the server is ready. For the laptops and workstations almost nothing in my setup has to be changed, with one exception: the new setup needs agent forwarding in the SSH client config. That is simple, see the lines I added to ~/.ssh/config here:

Host hg.example.com hg
HostName hg.example.com
ForwardAgent yes

All in all this was a pleasant endeavor, both in working on the project itself and in the outcome I have now.

Running EAGLE 9.6 on Debian 10 (buster)

For some side project I wanted to look at the schematics of the Adafruit PowerBoost 500C, which happened to be made with Autodesk EAGLE.

Having run EAGLE on Debian 9 (stretch) for a while now without great hassle, I did not expect many difficulties, but I was wrong. First I downloaded the tarball from their download site. Don’t worry, there’s still the “free” version for hobbyists; however it’s not Free Software, but precompiled binaries for the amd64 architecture. Better than nothing.

After extracting, I tried to start it like this and got the first error:

alex@lemmy ~/opt/eagle-9.6.0 % ./eagle
terminate called after throwing an instance of 'std::runtime_error'
  what():  locale::facet::_S_create_c_locale name not valid
[1]    20775 abort      ./eagle

There’s no verbose or debug option, and according to the comments on the blog post “How to Install Autodesk EAGLE On Windows, Mac and Linux” the problem also affects other users. I vaguely remembered, somewhere deep in the back of my head, that I had already had this problem some time ago on another machine, and tried something not obvious at all. My system locale is German and looked like this before:

alex@lemmy ~ % locale
LANG=de_DE.UTF-8
LANGUAGE=
LC_CTYPE="de_DE.UTF-8"
LC_NUMERIC="de_DE.UTF-8"
LC_TIME="de_DE.UTF-8"
LC_COLLATE="de_DE.UTF-8"
LC_MONETARY="de_DE.UTF-8"
LC_MESSAGES="de_DE.UTF-8"
LC_PAPER="de_DE.UTF-8"
LC_NAME="de_DE.UTF-8"
LC_ADDRESS="de_DE.UTF-8"
LC_TELEPHONE="de_DE.UTF-8"
LC_MEASUREMENT="de_DE.UTF-8"
LC_IDENTIFICATION="de_DE.UTF-8"
LC_ALL=

alex@lemmy ~ % locale -a
C
C.UTF-8
de_DE.utf8
POSIX

As you can see, no English locale, so I added one. On Debian you do it like this. Result below:

% sudo dpkg-reconfigure locales

alex@lemmy ~/opt/eagle-9.6.0 % locale -a
C
C.UTF-8
de_DE.utf8
en_US.utf8
POSIX

This seems like a typical “works on my machine” problem from a US developer, huh? On the next try to start eagle you’ll see the locale problem is gone, and the next error appears:

alex@lemmy ~/opt/eagle-9.6.0 % ./eagle 
./eagle: symbol lookup error: /lib/x86_64-linux-gnu/libGLX_mesa.so.0: undefined symbol: xcb_dri3_get_supported_modifiers

That’s the problem if you don’t build from source against the libraries of the system. In that case EAGLE ships a shared object libxcb-dri3.so.0, which itself is linked against the libxcb.so.1 from my host system, and those don’t play well together. I found a solution to that in the forum thread “Can’t run EAGLE on Debian 10 (testing)”, and it is using the LD_PRELOAD trick:

LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libxcb-dri3.so.0 ./eagle

This works. Hallelujah.
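To avoid typing the LD_PRELOAD part every time, a small wrapper script can do it; just a sketch, assuming EAGLE is extracted to ~/opt/eagle-9.6.0:

#!/bin/sh
# start EAGLE with the host's libxcb-dri3 preloaded
export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libxcb-dri3.so.0
exec "$HOME/opt/eagle-9.6.0/eagle" "$@"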

Wireshark USB capture setup with groups and udev

Wireshark can capture not only network traffic, but also other things like USB traffic. I needed that today, and it needs some additional setup on Linux. There’s something in the Wireshark wiki on that topic, but I consider that not an elegant solution: USB capture setup.

The solution I use is basically one proposed on stackoverflow and uses a separate Linux system group and udev: usbmon (wireshark, tshark) for regular user.

On Debian you can do this:

addgroup usbmon
addgroup adahl usbmon

You have to log off and on again; check whether you are in that group with the command id.

Now create a new file /etc/udev/rules.d/75-usbmon.rules and put this into it:

SUBSYSTEM=="usbmon", GROUP="usbmon", MODE="640"

After doing modprobe usbmon your devices /dev/usbmon* should belong to the new usbmon group and you can start capturing things with Wireshark.
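For reference, the sequence I would expect after creating the rule looks roughly like this (sudo assumed where needed; if the usbmon module was already loaded, re-triggering udev may be necessary for the new group to take effect):

sudo udevadm control --reload-rules
sudo modprobe usbmon
ls -l /dev/usbmon*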

HowTo: Debug core dumps with ptxdist 2017.07.0

Debugging embedded projects is a little harder than debugging on your own computer. In many cases you cannot run gdb on the target, and even if you can use gdbserver1, this does not cover all use cases. For post mortem analysis (e.g. after a segmentation fault of your program) you want to examine so called core dumps. Given you successfully found out how to let your target create those2, copied one to your workstation and unpacked it, you still need to know how to analyze it.

With the release of ptxdist 2017.07.0 the handling of debug information changed. Quoting from the release announcement:

The debug symbol handling was reworked. The debug files are now named based
on build-ids and (optional) debug IPKGs can be created. They are not
installed by default but can be installed manually as needed. This is
useful to gdb on the target or with valgrind and perf.

In my BSP that debug info is put into /usr/lib/debug inside the root folder from which the target files are copied. It looks like this now:

% tree -a ./platform-foo/root/usr/lib/debug | head
platform-foo/root/usr/lib/debug
└── .build-id
    ├── 00
    │   └── ba20cb0e075c4dc0a792a9062b0864ced517b1.debug
    ├── 03
    │   └── 3b4fc351317376388fadb19fc63b4c8ab6c0d9.debug
    ├── 04
    │   ├── 01fe993aa2bed2155514c676d7001625732396.debug
    │   ├── 7bdbc5fd44a4444de24762c76a3313d1fda2c0.debug
    │   └── d0ecb6611e590a036cbdd5909cc5bfc9158af8.debug

You can imagine it is not possible anymore to load this manually; the debugger will have to find it out by itself. Getting this to work caused me some headaches, but this is how I got it to work: create a file ‘gdb-config’ with the following content:3

set debug-file-directory /home/adahl/Work/bsp/foo/platform-foo/root/usr/lib/debug
set sysroot /home/adahl/Work/bsp/foo/platform-foo/root

Note: the order of the commands is important, it does not work the other way round!

Then load your core dump:

% ./platform-foo/selected_toolchain/arm-v5te-linux-gnueabi-gdb -x ./tmp/gdb-config -e ./platform-foo/root/usr/local/bin/yourtool -c ./tmp/cores/2017-08-16/core 

So you run the gdb from your toolchain, load the previously crafted file of gdb commands with -x, give the path to your executable with -e, and finally pass the core dump file with -c. That’s it. You may now have a look at a backtrace and find out what caused the segfault …
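For example, once gdb has loaded the core dump like this, the usual commands apply:

(gdb) bt full
(gdb) info threads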

Update: I was pointed to an easier possibility to invoke the right gdb with the necessary options in the #ptxdist IRC channel (on freenode). The call from above then simply becomes:

% ptxdist gdb -e ./platform-foo/root/usr/local/bin/yourtool -c ./tmp/cores/2017-08-16/core 

No need to pick the correct gdb and find and configure the right directories, just add the path to your tool and your core file, ptxdist handles everything else. I bet this would also work with older ptxdist versions, where the debug symbols were placed somewhere else, but I didn’t try it. This however was also just added with ptxdist 2017.07.0:

There is also a new ptxdist command ‘gdb’ for remote debugging that sets up
the sysroot correctly and a wrapper script that can be used by graphical
development environments.

  1. we already had this topic here: KDevelop: Debuggen von Programmen, die root-Rechte benötigen (German)
  2. this would be content for another post
  3. of course you adapt the paths to the ones you use on your machine

Let a LED blink in different frequencies on Linux

There’s an embedded Linux board on my desk, where a LED is connected to some GPIO pin. Everything is set up properly through device tree and with a recent kernel 4.9.13 the usual LED trigger mechanisms work fine, so no problem using heartbeat or just switching the LED on and off.

Now I wanted to have the LED blink in certain patterns, and it turns out this is quite easy, given you know how. You have to enable LEDS_TRIGGER_TIMER in your kernel config first. Then go to the sysfs folder of your LED, here it is:

cd /sys/class/leds/status

Have a look at the available triggers:

$ cat trigger 
[none] timer oneshot mtd nand-disk heartbeat gpio default-on panic

Switch to the timer trigger:

$ echo timer > trigger

Now two new files appear, delay_on and delay_off. By default both contain the value 500, which lets the LED blink at 1 Hz. Without further looking into the trigger code or searching for documentation I assume those values are the on and off times in milliseconds. So to have the LED blink at a certain frequency, the following formula can be used:

f_LED = 1000 / ( delay_on + delay_off )

So to set my LEDs to blink at 2 Hz frequency, I set it up like this:

$ echo 250 > delay_on
$ echo 250 > delay_off
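The two delays don’t have to be equal; for a short flash once per second something like this should work:

$ echo 100 > delay_on
$ echo 900 > delay_off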

Happy blinking!

Connecting a Raspberry Pi to the Freifunk WLAN automatically

I am currently setting up several Raspberry Pi based mini nodes which are supposed to connect automatically to the Magdeburg Freifunk network for internet access.

There are plenty of guides on how to connect to a WPA encrypted WLAN. Unfortunately they are silent on the question of what to do with an unencrypted WLAN1.

In the end I found what I needed on Stack Exchange and in the man page of wpa_supplicant. Let me summarize the necessary configuration here.

I use a USB WLAN adapter from CLS with an external antenna; internally it is an RTL8191SU 802.11n WLAN adapter, which is supported out of the box.

The following is added to /etc/network/interfaces for the WLAN device (wlan0):

allow-hotplug wlan0
iface wlan0 inet dhcp
wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

iface default inet dhcp

The entry in /etc/wpa_supplicant/wpa_supplicant.conf then looks like this:

network={
    ssid="md.freifunk.net"
    key_mgmt=NONE
    id_str="default"
}

The entry key_mgmt=NONE, which most guides keep quiet about, makes wpa_supplicant use the WLAN unencrypted instead of establishing an encrypted connection, as is necessary for Freifunk. In a different Freifunk network the ESSID has to be adapted accordingly, of course.

After a reboot the Raspberry Pi now connects to the Freifunk network automatically, provided it is reachable.
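A reboot should not strictly be necessary; bringing the interface down and up by hand ought to work as well (untested here):

sudo ifdown wlan0
sudo ifup wlan0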

  1. for example a Freifunk WLAN

HowTo: ownCloud with lighttpd on Debian Jessie

Progress does not stand still, and the fact that someone™ just bought a Fairphone with Android on it was a welcome occasion to finally set up my own ownCloud instance. Debian Jessie is currently still »testing«, but already quite usable, and since it has ready-made packages for ownCloud 7 and a suitable VM was already running on the server at home, that is what I chose. A lighttpd with PHP was already running as well, and the rest was not that hard either.

In the current state of the package you still have to create the database for ownCloud yourself. I did that with phpmyadmin, whose Debian package integrates itself into lighttpd easily. You enter the credentials later in the ownCloud setup wizard.

The ownCloud package documents a way to use it with lighttpd, but there is no automatic integration as with phpmyadmin. I reported the incorrect path suggested in the documentation and added a few more things:

$HTTP["host"] == "owncloud.example.com" {
        url.redirect = (
                "^/.well-known/caldav$" => "/remote.php/caldav/",
                "^/.well-known/carddav$" => "/remote.php/carddav/"
        )

        alias.url += ( "" => "/usr/share/owncloud" )

        $HTTP["url"] =~ "^/data/" {
                url.access-deny = ("")
        }

        $HTTP["url"] =~ "^($|/)" {
                dir-listing.activate = "disable"
        }
} else $HTTP["host"] =~ "" {
        url.redirect = ( 
                "^/owncloud/.well-known/caldav$" => "/owncloud/remote.php/caldav/",
                "^/owncloud/.well-known/carddav$" => "/owncloud/remote.php/carddav/"
        )
        
        # Make ownCloud reachable under /owncloud
        alias.url += ( "/owncloud" => "/usr/share/owncloud" )
          
        # Taken from http://owncloud.org/support/distro-notes, section "lighttpd":
        # As .htaccess files are ignored by lighttpd, you have to secure the /data
        # folder by yourself, otherwise your owncloud.db database and user data is
        # publicly readable even if directory listing is off.
        $HTTP["url"] =~ "^/owncloud/data/" {
                url.access-deny = ("")
        }
        
        $HTTP["url"] =~ "^/owncloud($|/)" {
                dir-listing.activate = "disable"
        }
}

The lower part is taken over from the package’s suggestion, only the alias path is corrected. The upper part is for direct access via a separate subdomain instead of via the server’s hostname and a subfolder. Essentially both contain the same thing. Special attention should be paid to the redirects for the so-called well-known URLs according to RFC 5785, which considerably simplify access with certain calendar applications.
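Note that the snippet still has to be enabled in lighttpd. Assuming it is saved as /etc/lighttpd/conf-available/50-owncloud.conf (the file name is my choice here, not from the package), that would be roughly:

sudo lighty-enable-mod owncloud
sudo service lighttpd force-reload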

Postscript: This mini HowTo omits setting up encrypted access via HTTPS. I left that out because a) it is well described elsewhere and b) I have an nginx as a reverse proxy in front of the lighty here, which has nothing at all to do with this HowTo. ;-)

JavaMail and TLS: Turn on the security switch!

While talking to Ge0rg about the latest issues in Java TLS, we stumbled upon the question whether the JavaMail API has similar problems.

Naturally one would expect that Java’s SSL implementation is secure. However, this is not the case: Special care needs to be taken regarding Man-In-The-Middle attacks: While a certificate may turn out to be valid, you cannot be sure that it has the right origin!

The problem is known for a while and library maintainers are taking steps to avoid it. However, for compatibility reasons those features may need to be turned on.

For JavaMail version 1.5.2 the SSLNOTES.TXT says specifically:

— Server Identity Check

RFC 2595 specifies additional checks that must be performed on the server’s certificate to ensure that the server you connected to is the server you intended to connect to. This reduces the risk of “man in the middle” attacks. For compatibility with earlier releases of JavaMail, these additional checks are disabled by default. We strongly recommend that you enable these checks when using SSL. To enable these checks, set the “mail.<protocol>.ssl.checkserveridentity” property to “true”.

Here is the thing that most examples forget: You need to switch that feature on!

final Authenticator auth = ... // somewhere in your application
final Properties p = new Properties();

// add your JavaMail configuration here

// this is implied by the protocol "imaps"
p.put("mail.imap.starttls.enable", "true");

// not only check the certificate, but also make sure that we are
// connected to the right server.
p.put("mail.imap.ssl.checkserveridentity", "true");

try {
	Session session = Session.getDefaultInstance(p, auth);
	Store store = session.getStore();
	store.connect();

	// do something with the store
} catch (MessagingException e) {
	// do something meaningful(!) with the exception
}

// close the store when you are done

To use SSL at all, you need to turn it on, either by specifying “imaps” in the property mail.store.protocol or by setting mail.imap.starttls.enable to “true”. Replace imap accordingly for other protocols (e.g. smtp).

Update 2014-08-05: Inserted the Link to Georg’s blog post about latest issues in Java TLS.

HowTo: gitolite on Debian Wheezy

The net is of course full of HowTos about gitolite, but in this one I do not use the bleeding edge version from upstream but the Debian package. The documentation for gitolite 2.x was of some help for understanding and during installation, since Wheezy packages version 2.3.

Installation

It starts with installing the package gitolite. If dpkg does not ask to configure anything during installation, enter afterwards:

dpkg-reconfigure gitolite

There you are asked for an SSH public key for the admin user. In my case, for the test installation on the same machine, I entered the following path; otherwise you paste your public key into the input field:

/home/adahl/.ssh/id_rsa.pub

With that, the following is now located below /var/lib/gitolite1:

% ls -la /var/lib/gitolite
insgesamt 28
drwxr-xr-x  5 gitolite gitolite 4096 Jun 17 13:51 ./
drwxr-xr-x 58 root     root     4096 Jun 17 13:47 ../
drwx------  8 gitolite gitolite 4096 Jun 17 13:47 .gitolite/
-rw-r--r--  1 gitolite gitolite 4217 Jun 17 13:47 .gitolite.rc
-rw-------  1 gitolite gitolite    0 Jun 17 13:47 projects.list
drwx------  4 gitolite gitolite 4096 Jun 17 13:47 repositories/
drwx------  2 gitolite gitolite 4096 Jun 17 13:47 .ssh/

Since my public key is already deposited, I can proceed directly to the next step and clone the admin repo that was created during installation:

git clone gitolite@localhost:gitolite-admin

As you can see, I had named the dedicated user gitolite; for a production environment git is perhaps a better choice.

Very important: when changing the config, always take care not to lock yourself out. For the repository gitolite-admin you must keep write access for some key you actually have access to and which lies below keydir in the repo, otherwise there is no way to regain those rights without great pain.

Configuration

Next I adjusted the file /var/lib/gitolite/.gitolite.rc. This already takes into account that I want to use both git-daemon and gitweb and mirror the repos to another machine. That is, I changed the following entries:

$REPO_UMASK = 0027;
$GL_GITCONFIG_KEYS = "gitolite.mirror.*";
$REPO_BASE="/srv/repos/git";

Pay particular attention to the last line. The repositories gitolite-admin and testing already created in /var/lib/gitolite/repositories have to be moved to the changed path.2 For access via gitweb to work, I added the user www-data to the group git and loosened the permissions of the repository directory:3

sudo addgroup www-data git
sudo chmod g+rx /var/lib/gitolite/repositories
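For the record, moving the already existing repositories to the new path from above could look roughly like this (just a sketch; ownership is assumed to stay with the gitolite user):

sudo mkdir -p /srv/repos/git
sudo mv /var/lib/gitolite/repositories/* /srv/repos/git/
sudo chown -R gitolite:gitolite /srv/repos/git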

The actual configuration is then done, as usual with gitolite, via the admin repository after cloning it:

git clone gitolite@myfancyserver:gitolite-admin

How to do that in detail can be read in the documentation for gitolite v2; that is not Debian specific.

gitweb with lighttpd

For a quick overview in the web browser the package gitweb is installed. Presumably any web server can serve it; for lighttpd I created a file /etc/lighttpd/conf-available/80-gitweb.conf and enabled it with the mechanisms usual for this server. The content looks like this:4

# -*- depends: cgi -*-
server.modules += ("mod_setenv")
setenv.add-environment = ("GITWEB_CONFIG" => "/etc/gitweb.conf")
alias.url += ( "/gitweb" => "/usr/share/gitweb" )

$HTTP["url"] =~ "^/gitweb/" {
        server.indexfiles = ("gitweb.cgi")
        cgi.assign = ( ".cgi" => "" )
}

The module 10-cgi.conf has to be enabled as well:

sudo lighty-enable-mod cgi
sudo lighty-enable-mod gitweb

Some adjustments also have to be made in /etc/gitweb.conf. Here are my changed lines:

$projectroot = "/var/lib/gitolite/repositories";
$projects_list = "/var/lib/gitolite/projects.list";
@diff_opts = ('--find-renames', '--minimal');

Originally there was also a description of the mirroring here; I am postponing that and will put it into a separate article … O:-)

  1. the path can also be specified during dpkg-reconfigure.
  2. This step can of course be omitted if you keep the repositories in the default path.
  3. The permissions of the repositories created during installation are, however, still so restrictive that they cannot be displayed via gitweb. For gitolite-admin you don't want that anyway, and instead of testing you might create another test repository to see whether the permissions of newly created repos work out correctly.
  4. See also gitweb in the ArchWiki.