Tag Archives: Linux

Moving to HGKeeper

Motivation

So, where do we come from? I started using the Mercurial version control system around 2009, if I remember correctly. I had used Subversion and SVK (does anyone remember that?) before and was curious about distributed version control. Back then Mercurial was better suited for platform independent use on Linux and Windows, and I still had to use the latter at work. Mercurial’s user interface was very much the same as Subversion’s, with basically just push and pull added, so switching from Subversion to Mercurial was easy. Git was weird at the time, at least for me. Meanwhile we use Git exclusively at work, and it got a lot better on Windows over time. For nostalgic reasons, however, and to stay somewhat fluent in another VCS, I kept most of my private projects in Mercurial.

For “hosting” my own repos I used the Debian package mercurial-server, at least up to Debian 9 (stretch). After upgrading that server to Debian 10 (buster) things started falling apart, and I looked around for a new hosting solution. For the record: I thought about converting all those repos to Git, but decided against it. I have accumulated quite a number of repos, and although I had already converted one or two, I figured it would be easier to switch the hosting than to convert each repo.

Speaking of hosting: I don’t need a huge forge for myself, just a rather simple solution for server side “central” repos, so I can easily work from different laptops or workstations. So I scanned over MercurialHosting in the Mercurial wiki, and every self hosting solution seemed like cracking a nut with a sledgehammer, except HGKeeper.

Introducing HGKeeper

HGKeeper introduces itself like this in its own repo:

HGKeeper is a server for mercurial repositories. It provides access control for SSH access and public HTTP access via hgweb.

It was originally designed to be run in a container but recently support has been added to run it from an existing openssh-server.

SSH and simple HTTP is all I need, and running in a container suits me well, especially since I had started deploying things with Docker and Ansible and could use a little more practice with that. Running in a container is especially helpful when running things implemented in those fancy new languages like Go or Rust on an old fashioned Linux like Debian. (For reference see for example Package managers all the way down on LWN about how modern languages create a dependency hell for classical Linux distributions.)

Running the HGKeeper Docker container itself was easy; however, SSH access would go through a non-standard port, at least if I wanted to keep accessing the host machine through port 22.

The README promised that HGKeeper can also be run together with OpenSSH on its default port. But is it possible to do all of this at once: run in a container, access HGKeeper through port 22, and keep access to the host on the same port? I reached out to Gary Kramlich, the author of HGKeeper, and that was a very nice experience. Let’s say I nerd sniped him somehow?!

Installing HGKeeper

So the goal is to run HGKeeper in a Docker container and access it through OpenSSH. While doing the setup I decided to route everything through an SSH server on a different machine: the one that is exposed to the internet from my local network anyway, and where mercurial-server was installed before. Access from outside goes through OpenSSH on standard port 22 to hg.example.com, which is an alias for the virtual machine falbala.internal.example.com. That machine tunnels the traffic to another virtual machine, miraculix.internal.example.com, where HGKeeper runs in a Docker container with SSH port 22022 exposed to the local network.

Preparations

We follow the HGKeeper README and prepare things on the Docker host (miraculix) first. I created a directory /srv/data/hgkeeper where all related data is supposed to live. In the subfolder host-keys I created the SSH host keys as suggested in section “SSH Host Keys”:

$ ssh-keygen -t rsa -b 4096 -o -f host-keys/ssh_host_rsa_key

The Docker container itself needs some preparation, so we run it once manually as suggested in section “Running in a Container”. The important part here is to pass the SSH public key of the client workstation you will access the HG repos from first. I had copied that from my laptop to /srv/data/hgkeeper/tmp/ before. The admin username passed here (alex) should also be adapted to your needs:

cd /srv/data/hgkeeper
docker run --rm \
    -v $(pwd)/repos:/repos \
    -v $(pwd)/tmp/id_rsa.pub:/admin-pubkey:ro \
    -e HGK_ADMIN_USERNAME=alex \
    -e HGK_ADMIN_PUBKEY=/admin-pubkey \
    -e HGK_REPOS_PATH=/repos \
    docker.io/rwgrim/hgkeeper:latest \
    hgkeeper setup

Setting up OpenSSH

As stated before, I tried to set up as many things as possible with Ansible. The preparation steps above could probably also be done with Ansible, but I had them in place from playing around and did not bother at the time. It depends on your philosophy anyway whether you want to automate such sensitive tasks as creating crypto keys. From here on, however, I have everything in a playbook and will show snippets to illustrate my setup. The OpenSSH config is for the host falbala and thus lives in falbala.yml; see the first part here:

---
- hosts: falbala
  become: true

  vars:
    hg_homedir: /var/lib/hg

  tasks:
    - name: Add system user hg
      user:
        name: hg
        comment: Mercurial Server
        group: nogroup
        shell: /bin/sh
        system: yes
        create_home: yes
        home: "{{ hg_homedir }}"

That system needs a local user. You can name it as you like, but it was named hg in my old setup and I wanted to keep that, so I don’t have to change my working copies. The user needs to have a shell set, otherwise OpenSSH won’t be able to call the commands needed later. I use the $HOME dir to hold the SSH known_hosts file, so it does not clutter my global settings. Doing this manually on Debian would look like this:

% sudo adduser --system --home /var/lib/hg --shell /bin/sh hg

Next step is that known_hosts file. You can create it manually by logging into the hg user once and making a connection to the SSH server on the other machine like this:

$ sudo -i -u hg
$ ssh -p 22022 hg@miraculix.internal.example.com

For Ansible I prepared a known_hosts file, which was somewhat tricky due to the non-standard port. You cannot just look into your existing files for reference, because host and port are hashed in there, and the documentation (man 8 sshd) does not cover that part. I had to guess the format from ssh -v output. The file I came up with is named pubkeys/hgkeeper in my Ansible project, and it consists of this single line:

[miraculix.internal.example.com]:22022 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDD8hrAkg7z1ao3Hq1w/4u9Khxc4aDUfJiKfbhin0cYRY7XrNIn3mix9gwajGWlV1m0P9nyXiNTW4E/ZW0rgTF4I1PZs3dbh66dIAH7Jif4YLFj5VPj350TF5XeytyFjalecWBa36S1Y+UydIY/o/yC104D5Hg7M27bQzL+blqk1eIlY0aaM+faEuxFYHexK5fa+Xq150F6NswHdsVPCYOKu6t+myGHpe2+X6qhVNuftDOP5JQO6BzxhN6MmG1arZ9dkeBb6Ry++R4o3soeV1k9uZ33jbJGnqFryvL3cyOPq7mVdoSffwqef1i4+0fNTGgO8U93w2An6z5fRjvPufA+VIVvFDwRoREFKvO1Q+WdeOSUWOl6QwVjKPrv0M3QnSnTJHpZpNlshOaDZyQNHLXLEO43vdbGr6rk7l9ApUcF34Y7eLWp42XktQLlzDitua009v7uNBAuIzKR3+UAWaFpj+CGl1jDm7a3n8kXlJjumVN5hfXo0lLz7n+G/Yd/U87dHftL0kiYcVRR4n1qMmhV5UL4lq0FNDBwwzRzSKyNw80mRoMHRiKBBUTFXJApzlIAXiJ7g1JThM2rcNnskpyhZSrL38ses5Ns2GBOzEZsi51U+S5O91+KwHDTb10sxoJskUvIyJxCUILkOGZpbd4uWI+6tAWycP4QMT33MUHFEQ==
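By the way, instead of hand-crafting this entry, ssh-keyscan should be able to produce one directly; a sketch, run once against the container’s SSH port:

$ ssh-keyscan -t rsa -p 22022 miraculix.internal.example.com > pubkeys/hgkeeper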

With that in place, it’s straight forward in the playbook:

    - name: Ensure user hg has .ssh dir
      file:
        path: "{{ hg_homedir }}/.ssh"
        state: directory
        owner: hg
        group: nogroup
        mode: '0700'

    - name: Ensure known_hosts entry for miraculix exists
      known_hosts:
        path: "{{ hg_homedir }}/.ssh/known_hosts"
        name: "[miraculix.internal.example.com]:22022"
        key: "{{ lookup('file', 'pubkeys/hgkeeper') }}"

    - name: Ensure access rights for known_hosts file
      file:
        path: "{{ hg_homedir }}/.ssh/known_hosts"
        state: file
        owner: hg
        group: nogroup

For the SSH daemon configuration two things are needed. First, if you use domain names instead of IP addresses, you have to set UseDNS yes in sshd_config. This snippet does it in Ansible:

    - name: Ensure OpenSSH does remote host name resolution
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?UseDNS'
        line: 'UseDNS yes'
        validate: /usr/sbin/sshd -T -f %s
        backup: yes
      notify:
        - Restart sshd

The second and most important part is matching the hg user, authenticating against the HGKeeper app running on the other host, and tunneling the traffic through. This is done with the following snippet, which contains the literal block to be added to /etc/ssh/sshd_config if you’re doing it manually:

    - name: Ensure SSH tunnel block to hgkeeper is present
      blockinfile:
        path: /etc/ssh/sshd_config
        marker: "# {mark} ANSIBLE MANAGED BLOCK"
        insertafter: EOF
        validate: /usr/sbin/sshd -T -C user=hg -f %s
        backup: yes
        block: |
          Match User hg
              AuthorizedKeysCommand /usr/bin/curl -q --data-urlencode 'fp=%f' --get http://miraculix.internal.example.com:8081/hgk/authorized_keys
              AuthorizedKeysCommandUser hg
              PasswordAuthentication no
      notify:
        - Restart sshd

You might have noticed two things: curl has to be installed, and sshd should be restarted after its config has changed. Here:

    - name: Ensure curl is installed
      package:
        name: curl
        state: present

  handlers:
    - name: Restart sshd
      service:
        name: sshd
        state: reloaded
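The AuthorizedKeysCommand plumbing can also be smoke tested without a full SSH login; a sketch with a placeholder fingerprint (sshd would pass a real one via %f):

$ FP='SHA256:placeholder'
$ curl -q --data-urlencode "fp=$FP" --get http://miraculix.internal.example.com:8081/hgk/authorized_keys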

Running HGKeeper Docker Container with Ansible

Running a Docker container from Ansible is quite easy. I just translated the docker run call and its arguments from the HGKeeper documentation:

---
- hosts: miraculix
  become: true

  vars:
    data_basedir: /srv/data
    hgkeeper_data: "{{ data_basedir }}/hgkeeper"
    hgkeeper_host_keys: host-keys
    hgkeeper_repos: repos
    hgkeeper_ssh_port: "22022"

  tasks:
    - name: Setup docker container for HGKeeper
      docker_container:
        name: hgkeeper
        image: "docker.io/rwgrim/hgkeeper:latest"
        pull: true
        state: started
        detach: true
        restart_policy: unless-stopped
        volumes:
          - "{{ hgkeeper_data }}/{{ hgkeeper_host_keys }}:/{{ hgkeeper_host_keys }}:ro"
          - "{{ hgkeeper_data }}/{{ hgkeeper_repos }}:/{{ hgkeeper_repos }}"
        env:
          HGK_SSH_HOST_KEYS: "/{{ hgkeeper_host_keys }}"
          HGK_REPOS_PATH: "/{{ hgkeeper_repos }}"
          HGK_EXTERNAL_HOSTNAME: "miraculix.internal.example.com"
          HGK_EXTERNAL_PORT: "{{ hgkeeper_ssh_port }}"
        ports:
          - "8081:8080"   # http
          - "{{ hgkeeper_ssh_port }}:22222"   # ssh
        command: hgkeeper serve

Client Settings

Almost done. After trivially copying my old repos from the old virtual machine to the new one, the server is ready. For the laptops and workstations nothing in my setup had to be changed, except for one thing: the new setup needs agent forwarding in the SSH client config. That is simple; see the lines I added to ~/.ssh/config:

Host hg.example.com hg
    HostName hg.example.com
    ForwardAgent yes
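With that in place, a quick test from a client is cloning a repo through the new chain; a sketch, assuming the admin repo created by hgkeeper setup is reachable under the name hgkeeper:

$ hg clone ssh://hg@hg.example.com/hgkeeper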

All in all, this was a pleasant endeavor, both in working on the project itself and in the outcome I have now.

Running EAGLE 9.6 on Debian 10 (buster)

For some side project I wanted to look at the schematics of the Adafruit PowerBoost 500C, which happened to be made with Autodesk EAGLE.

Having run EAGLE on Debian 9 (stretch) for a while without great hassle, I did not expect many difficulties. I was wrong. First I downloaded the tarball from their download site. Don’t worry, there’s still the “free” version for hobbyists; however, it’s not Free Software, but precompiled binaries for the amd64 architecture. Better than nothing.

After extracting, I tried to start it like this and got the first error:

alex@lemmy ~/opt/eagle-9.6.0 % ./eagle
terminate called after throwing an instance of 'std::runtime_error'
  what():  locale::facet::_S_create_c_locale name not valid
[1]    20775 abort      ./eagle

There’s no verbose or debug option, and according to the comments on the blog post “How to Install Autodesk EAGLE On Windows, Mac and Linux” the problem also affects other users. I vaguely remembered, somewhere deep in the back of my head, that I’d had this problem some time ago on another machine, and tried something not obvious at all. My system locale is German and looked like this before:

alex@lemmy ~ % locale
LANG=de_DE.UTF-8
LANGUAGE=
LC_CTYPE="de_DE.UTF-8"
LC_NUMERIC="de_DE.UTF-8"
LC_TIME="de_DE.UTF-8"
LC_COLLATE="de_DE.UTF-8"
LC_MONETARY="de_DE.UTF-8"
LC_MESSAGES="de_DE.UTF-8"
LC_PAPER="de_DE.UTF-8"
LC_NAME="de_DE.UTF-8"
LC_ADDRESS="de_DE.UTF-8"
LC_TELEPHONE="de_DE.UTF-8"
LC_MEASUREMENT="de_DE.UTF-8"
LC_IDENTIFICATION="de_DE.UTF-8"
LC_ALL=

alex@lemmy ~ % locale -a
C
C.UTF-8
de_DE.utf8
POSIX

As you can see, no English locale, so I added one. On Debian you do it like this. Result below:

% sudo dpkg-reconfigure locales

alex@lemmy ~/opt/eagle-9.6.0 % locale -a
C
C.UTF-8
de_DE.utf8
en_US.utf8
POSIX

This seems like a typical “works on my machine” problem from a US developer, huh? On the next attempt to start eagle, the locale problem is gone and the next error appears:

alex@lemmy ~/opt/eagle-9.6.0 % ./eagle 
./eagle: symbol lookup error: /lib/x86_64-linux-gnu/libGLX_mesa.so.0: undefined symbol: xcb_dri3_get_supported_modifiers

That’s the problem when you don’t build from source against the libraries of the system: EAGLE ships a shared object libxcb-dri3.so.0, which itself is linked against the libxcb.so.1 from my host system, and those don’t play well together. I found a solution in the forum thread “Can’t run EAGLE on Debian 10 (testing)”, and it is using the LD_PRELOAD trick:

LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libxcb-dri3.so.0 ./eagle

This works. Hallelujah.
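To not type that every time, the call can be put into a tiny wrapper script; a sketch, assuming the install path from above:

#!/bin/sh
# preload the system libxcb-dri3 so EAGLE's bundled copy is not used
export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libxcb-dri3.so.0
exec "$HOME/opt/eagle-9.6.0/eagle" "$@"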

Performance Analysis on Embedded Linux with perf and hotspot

This is about profiling your applications on an embedded Linux target, or let’s say finding the spots of high CPU usage, which is a common concern in practice. For an extensive overview see Linux Performance by Brendan Gregg. Here we will focus on viewing flame graphs with a tool called hotspot, based on performance data recorded with perf. This proved helpful enough to solve most of the performance issues I had lately.

Installing the Tools

We have two sides here: the embedded Linux target and your Linux workstation host. On your computer you need to install hotspot. In Debian it is available from version 10 (buster) on. You can of course build it from source; I did that on Debian 9 (stretch) a while ago, and IIRC there are instructions for that upstream. Or you build it from the deb-src package from Debian unstable (sid) by following this BuildingTutorial.1

The embedded target side basically needs two things: some options set in the kernel config, and the userland tool perf. For ptxdist, here is what I did:

  • Add -fno-omit-frame-pointer to global CFLAGS and CXXFLAGS
  • Enable PTXCONF_KERNEL_TOOL_PERF

Note: I had to update my kernel from v4.9 to v4.14, otherwise I got build errors when building perf.

Configuring the Kernel

I won’t quote the whole kernel config here, but I have a diff of what I had to set to make perf record useful things. These options are probably important; at least I had those enabled in my debug sessions, and others might also be needed (a quick way to verify them on a running target follows the list):

  • CONFIG_KALLSYMS
  • CONFIG_PERF_EVENTS
  • CONFIG_UPROBES
  • CONFIG_STACKTRACE
  • CONFIG_FTRACE
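To double-check the running kernel on the target, the booted config can be inspected; a sketch, assuming CONFIG_IKCONFIG_PROC is enabled so /proc/config.gz exists:

$ zcat /proc/config.gz | grep -E 'KALLSYMS|PERF_EVENTS|UPROBES|STACKTRACE|FTRACE'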

Using it

For embedded use, I basically followed the instructions of upstream hotspot. You might however want to dive a little into the options of perf, because it is a very powerful tool. To record on the target, this is basically what I did to get samples from my daemon application mydaemon for 30 seconds:

perf record --call-graph dwarf --pid=$(pgrep mydaemon) sleep 30

This can produce quite a lot of data, so start with short times first so you don’t fill your filesystem. Luckily I had enough space on the flash memory of the embedded target available. Then just follow what the hotspot README says: copy the perf.data file and your kernel symbols to your host and call hotspot with the right options for your sysroot. This is the call I used (from a subfolder of my ptxdist BSP, where I copied those files to):

hotspot --sysroot ../../../platform-ncl/root --kallsyms kallsyms perf.data
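For reference, getting those two files from the target to the host could look roughly like this; hostname and paths are examples, and reading /proc/kallsyms may require root or kernel.kptr_restrict=0:

$ ssh root@target 'cat /proc/kallsyms' > kallsyms
$ scp root@target:perf.data .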

Happy performance analysing!

  1. I did not test that with hotspot.

Bridges with meaningful error messages

I just wanted to create a network bridge interface on an embedded Linux system. The first try was with the well-known brctl, and I got this:

$ brctl addbr br0
add bridge failed: Package not installed

Searching the web for this led me to an old blog post which comes to this conclusion:

So that’s the most silly way to say: you forgot to compile in bridge support into your kernel.

ip from busybox is a little more helpful on that:

$ /sbin/ip link add name br0 type bridge
ip: RTNETLINK answers: Operation not supported

And the “real” ip from iproute2 is just as helpful:

$ /usr/sbin/ip link add name br0 type bridge
RTNETLINK answers: Operation not supported

And yes, somehow they are all right; it is actually my fault:

# CONFIG_BRIDGE is not set

I’m going to change my kernel config now …
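For the record, in the kernel source tree that is quickly done; a sketch using the scripts/config helper shipped with the kernel, before the usual rebuild:

$ ./scripts/config --enable BRIDGE
$ make olddefconfig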

Do not change already released files!

tl;dr: Please, upstream developers: do not ever change what you have already published; make an additional version with your fix instead. That causes less trouble for people building your stuff.

As some of you might have noticed, I’m a little into embedded Linux software and contribute to some of the build systems around, mainly to buildroot (for fli4l) and to ptxdist (at work). This is a very special kind of fun, meaning constantly trying to fix things gone wrong. Today is one of those days where I’m stuck, because someone else fucked up their stuff.

Last week I built myself an image for testing iperf on a BeagleBone Black with the current buildroot master. This got me a tarball iperf-2.0.9.tar.gz from some mirror server. This worked.

Today I upgraded a ptxdist BSP from some older state, I think 2016.12.0, to the recent ptxdist 2017.06.0, which included an upgrade of the iperf package from 2.0.5 to 2.0.9. This got me a complaint about an invalid checksum. Those embedded build systems contain checksums for tarballs; buildroot uses mostly sha256, while ptxdist still uses md5. This is mostly to ensure transport integrity, but it also triggers when the upstream tarball changes. Which it should not.

So the checksums in buildroot master as of today still stem from buildroot changeset 2016.05-1497-g11cc12e from 2016-07-29:

# From https://sourceforge.net/projects/iperf2/files/
sha1    9e215f6af8edd97f947f2b0207ff5487845d83d4        iperf-2.0.9.tar.gz
# Locally computed:
sha256  a5350777b191e910334d3a107b5e5219b72ffa393da4186da1e0a4552aeeded6  iperf-2.0.9.tar.gz

Those are the very same as for the file I have locally. (Note: both ptxdist and buildroot download archives to the same shared folder here.) The md5sum of this file is 1bb3a1d98b1973aee6e8f171933c0f61, and ptxdist aborts with a warning that this sum does not match. In ptxdist the iperf package was last changed on 2016-12-19, also upgrading from iperf 2.0.5; there the changeset is ptxdist-2016.12.0-10-gd661f64 and the expected md5sum is 351b018b71176b8cb25f20eef6a9e37c. This is the same you can see today on sf.net, but why is it different from the one above?

To find out, I downloaded the file currently available on sf.net, which was last changed 2016-09-08, after buildroot included the package update. The great tool diffoscope showed me that a lot of the content between those two archives had changed. But why?
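In case you want to reproduce such a comparison, the diffoscope call itself is trivial; the file names here are just examples for the two differing downloads:

$ diffoscope iperf-2.0.9.tar.gz.buildroot-mirror iperf-2.0.9.tar.gz.sfnet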

Seems I was not the first one noticing: #20 Release file: iperf-2.0.9.tar.gz changed!!! And the maintainer set it to WONTFIX.

Now this is the point where I’m not sure whether to just get pissed, sigh deeply, or try to fix the mess for those build systems. The clean way would be upstream releasing a new tarball, either one of the two existing ones or even a new release named 2.0.9a or whatever.

What are possible solutions?

  • Wait for upstream to make a clean, new release. (And hope this doesn’t get changed in the future.)
  • Upgrade those hashes in buildroot. This obviously breaks old versions of buildroot.

According to the buildroot IRC channel, they want their package to be updated, even if older releases will break. And they said they have a fallback and use their own mirror, so that’s probably where my first package came from.

Update: buildroot accepted my patch updating those hashes quickly.

Let an LED blink at different frequencies on Linux

There’s an embedded Linux board on my desk where an LED is connected to some GPIO pin. Everything is set up properly through the device tree, and with a recent kernel 4.9.13 the usual LED trigger mechanisms work fine, so there’s no problem using heartbeat or just switching the LED on and off.

Now I wanted to have the LED blink with certain patterns, and it turns out this is quite easy, given you know how. You have to set CONFIG_LEDS_TRIGGER_TIMER in your kernel config first. Then go to the sysfs folder of your LED, here it is:

cd /sys/class/leds/status

Have a look at the available triggers:

$ cat trigger 
[none] timer oneshot mtd nand-disk heartbeat gpio default-on panic

Switch to the timer trigger:

$ echo timer > trigger

Now two new files appear: delay_on and delay_off. By default both contain the value 500, which lets the LED blink at 1 Hz. Without further looking into the trigger code or searching for documentation, I assume those values are the on and off times in milliseconds. So to have the LED blink at a certain frequency, the following formula can be used:

f_LED = 1000 / ( delay_on + delay_off )

So to make my LED blink at a 2 Hz frequency, I set it up like this:

$ echo 250 > delay_on
$ echo 250 > delay_off
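The two delays don’t have to be symmetric, so patterns are possible too. For example, a short 100 ms flash once per second, which is still 1 Hz according to the formula above:

$ echo 100 > delay_on
$ echo 900 > delay_off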

Happy blinking!

C/C++ developers please use -Wcast-align

Did you ever read the GCC documentation part Warning Options? If not, you may know the -Wall option. Yeah well, it enables a lot of options, but not literally all possible warnings. In my opinion -Wall should be the minimum set in every project, but there is more. You can also set -Wextra, which enables even more warnings, but as you might guess by now, still not all. One option in particular is missing, and the following describes why I consider it important to set: -Wcast-align

So what does the GCC doc say about it?

Warn whenever a pointer is cast such that the required alignment of the target is increased. For example, warn if a char * is cast to an int * on machines where integers can only be accessed at two- or four-byte boundaries.

In case you cannot imagine what this means, let me explain. There are, for example, 32-bit CPUs out there which access memory correctly only at 32-bit boundaries; this is, to my knowledge, by design. Let’s say you have some byte stream starting at an aligned memory address, and it contains bytes starting from 0, followed by 1, then 2 and so on, like this:

00 01 02 03 04 05 06 07

Now you set a uint32_t pointer to an unaligned address and dereference it. What would you expect? To help you a little, I have a tiny code snippet for demonstration:

#include <stdint.h>
#include <stdio.h>

int main( void )
{
    uint8_t     buf[8];
    uint32_t    *p32;
    size_t      i;

    /*  fill the buffer as described above  */
    for ( i = 0; i < sizeof(buf); i++ ) {
        buf[i] = i;
    }

    /*  read a 32 bit word from every offset, aligned or not;
     *  the last iterations deliberately read past the buffer  */
    for ( i = 0; i < sizeof(buf); i++ ) {
        p32 = (uint32_t *) &buf[i];
        printf( "0x%08X ", *p32 );
    }
    printf( "\n" );

    return 0;
}

The naïve assumption would be the following output:

0x03020100 0x04030201 0x05040302 0x06050403 0x07060504 0x15070605 0x02160706 0xC3021707

Note the last three containing some random bytes from memory behind our buffer! This is the output you get on a standard amd64 PC, which is little endian (compiled on Debian GNU/Linux with some GCC 4.9.x).

Now look at this output:

0x03020100 0x00030201 0x01000302 0x02010003 0x07060504 0x04070605 0x05040706 0x06050407

This comes from an embedded Linux target with an AT91SAM9G20 Arm CPU, which is ARM9E family and ARMv5TEJ architecture, or let’s just say armv5 or an older Arm CPU. It also runs little endian, and the code was compiled with a GCC 4.7.x cross compiler.

Those 32-bit integers look somehow reordered, as if the CPU shuffled the bytes of the word we point into. What actually happens is that these older cores perform the load at the rounded-down aligned address and rotate the result depending on the low address bits. If you’re not aware of this, it means silent data corruption on older Arm platforms! You can set the -Wcast-align option to let the compiler warn you; you may try this yourself with the above snippet and your favorite cross compiler. Note: the warning does not solve the corruption issue, it just tells you to fix your code.

Reading the FAQ by Arm itself on this topic, it’s not quite clear what the supposed behavior is, but one thing is clear: unaligned access is not supported on older Arm CPUs, up to the ARM9 family or ARMv5 architecture.

Another point is interesting: even if the CPU supports unaligned access, whether hard wired or as an optional feature you must switch on first, it comes with a performance penalty. And coming back to my PC: this is also true for other processor families like Intel or AMD, although on recent processors it might not be that bad.

So what could or should we do as software developers? Assuming there are still a lot of old processors out there, and architectures you might not know, and you never know where your code will end up: design your data structures and network protocols with word alignment in mind! If you have to deal with legacy stuff or bad protocols you cannot change, you still have some other possibilities; have a look at The ARM Structured Alignment FAQ or search the net on how to let your kernel handle this.

If you want to handle it in code, memcpy() is one possibility. Assume we want to access a 32-bit integer at offset 2; we could do it like this:

#include <string.h>

uint8_t     *buf;       /*  points into the received byte stream  */
uint32_t    myint;

/*  copy byte-wise; memcpy has no alignment requirements  */
memcpy( &myint, buf + 2, sizeof(uint32_t) );

And as said in the topic and above: turn on the -Wcast-align option!
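In a build this just means adding it next to the usual warning flags; compiler and file name here are examples, and on a strict-alignment target GCC then complains with something like “cast increases required alignment of target type”:

$ arm-linux-gnueabi-gcc -Wall -Wextra -Wcast-align -o unaligned unaligned.c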

(If you don’t want to be too scared about silent data corruption on your new IoT devices, with those cheap old processors and your freshly compiled board support package, you might not want to turn it on for all the existing software out there. You might get a little depressed …)

Update: There’s a chapter on the Linux kernel documentation on this: Unaligned Memory Accesses.

Update: Also see EXP36-C. Do not cast pointers into more strictly aligned pointer types of the SEI CERT C Coding Standard.

KDevelop: Debugging programs that require root privileges

I often work with KDevelop, and there I also like to use the integrated debugger, or rather the corresponding frontend for gdb. Today I was wrestling with a program that wants to open a listening socket on a privileged port. With KDevelop I could not start it directly with the necessary root privileges. To debug it anyway, you can work with gdbserver and remote debugging instead. It goes like this:

In the already created launch configuration, go to the Debug settings, where you can specify three files under “Remote Debugging”. You actually have to create two or three files here and fill them with the appropriate content. The first one is the gdb config script, where you enter the path to the executed binary once more. This should be exactly what the project compiles (with debug symbols in it, of course):

file /home/adahl/Work/build/tlue-gcc/src/tlue

The third one is the run gdb script; here you tell gdb where to connect to. In this case this will be a gdbserver listening on port 12345 on the same machine:

target remote localhost:12345

That leaves the question of what goes into the run shell script. If you leave it empty, you have to start the gdbserver manually before clicking “Debug” in KDevelop; on a console in the build folder of the program this could look like this:

sudo gdbserver localhost:12345 ./src/tlue

Or you create yet another file, this time a shell script, which executes the command above; this one goes into the second slot. That did not work for me right away, because sudo still asks for a password, which I cannot enter in KDevelop.
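For completeness, such a helper script would only be a few lines; a sketch, which only works if sudo is configured to not ask for a password here:

#!/bin/sh
# started by KDevelop as "run shell script"; gdb then connects to port 12345
cd /home/adahl/Work/build/tlue-gcc
exec sudo gdbserver localhost:12345 ./src/tlue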

Sometimes I’m in the mood to destruct my devices. Luckily some drivers may have an ioctl for that:

Most devices can perform operations beyond simple data transfers; user space must often be able to request, for example, that the device lock its door, eject its media, report error information, change a baud rate, or self-destruct.

(Linux Device Drivers, Third Edition, by Jonathan Corbet, Alessandro Rubini, and Greg Kroah-Hartman, published 2005 by O’Reilly®)

Automatically connecting a Raspberry Pi to the Freifunk WLAN

I am currently setting up several Raspberry Pi based mini nodes which are supposed to connect automatically to the Magdeburg Freifunk network for internet access.

There are plenty of howtos on connecting to a WPA-encrypted WLAN. Unfortunately they remain silent on the question of what to do with an unencrypted WLAN1.

I finally found what I needed on Stack Exchange and in the man page of wpa_supplicant. Let me summarize the necessary configuration here.

I use a USB WLAN adapter from CLS with an external antenna; internally this is an RTL8191SU 802.11n WLAN Adapter, which is supported out of the box.

In /etc/network/interfaces the following is appended for the WLAN device (wlan0):

allow-hotplug wlan0
iface wlan0 inet dhcp
wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

iface default inet dhcp

The entry in /etc/wpa_supplicant/wpa_supplicant.conf then looks like this:

network={
    ssid="md.freifunk.net"
    key_mgmt=NONE
    id_str="default"
}

The key_mgmt=NONE entry, which most howtos fail to mention, makes wpa_supplicant use the WLAN unencrypted instead of establishing an encrypted connection, as is necessary for Freifunk. In another Freifunk network the ESSID has to be adapted accordingly, of course.

After a reboot the Raspberry Pi now connects automatically to the Freifunk network, provided it is reachable.
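Whether the association actually worked can be checked at runtime through the wpa_supplicant control interface; a sketch, assuming wpa_cli is installed:

$ wpa_cli -i wlan0 status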

  1. for example a Freifunk WLAN