
Remote debugging with KDevelop and ptxdist

At work we use ptxdist for building firmware images for embedded devices. Our own software is built with CMake and my personal Linux desktop of choice is KDE with KDevelop as Integrated Development Environment. This is nothing new and I wrote about different aspects of that in the past. Today it’s about remote debugging.

Introduction

Writing C/C++ software and building it with CMake is well integrated in KDevelop. Running and debugging things on the same host (your personal computer or laptop) from within KDevelop with gdb is smooth. Things get interesting when you cross compile and the software runs on an embedded target that does not have enough space for the debugger plus debug symbols plus source fragments. You could use NFS, but every now and then we find ourselves in the position where we need to debug something on devices deployed in the field. What you usually do in this case is install gdbserver on the target and connect to it from the gdb on your host computer.

The command line way

Remote debugging on the command line is easy from within your ptxdist based Board Support Package (BSP) if you are comfortable enough with gdb or cgdb on the shell. The process is basically as described in HowTo: Debug core dumps with ptxdist 2017.07.0 already: you call ptxdist with the argument gdb. For remote debugging you would do this:

  1. Build gdbserver for your target by selecting it from the menu (like with your other target packages)
  2. Start gdbserver on the target e.g. like this:
    gdbserver 192.168.10.184:12345 mytool --my-arguments
  3. Start gdb on the host from your BSP workspace:
    ptxdist gdb
  4. In the gdb command line connect to gdbserver like this:
    target remote 192.168.10.184:12345
  5. Use gdb as usual (set breakpoints, run, inspect, etc.)

The magic of choosing the correct gdb suitable for the target and setting the necessary options for finding debug symbols and getting paths right is done by ptxdist in step 3. Maybe all this could go to ptxdist’s documentation, but it’s more or less straightforward for experienced developers.
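
Put together, such a session could look roughly like this (IP address, port and tool name are just the ones from the example above, adapt them to your setup):

# on the target
gdbserver 192.168.10.184:12345 mytool --my-arguments

# on the host, from the BSP workspace
ptxdist gdb
(gdb) target remote 192.168.10.184:12345
(gdb) break main
(gdb) continue
(gdb) bt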

Using KDevelop to edit CMake based projects in your ptxdist BSP

Before we come to debugging, we start with importing a CMake project from the BSP into KDevelop. Please make sure everything builds fine first: a ptxdist go should complete without errors, so everything is in place.

For demonstration I’ll import mosquitto, because it’s a CMake project. I first enabled PTXCONF_MOSQUITTO and let ptxdist build it once.
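
In case that is not done yet, something like this should do (assuming mosquitto is the package name in your BSP; enable it in menuconfig, then build and install it once):

ptxdist menuconfig
ptxdist targetinstall mosquitto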

Then in KDevelop I choose Open / Import Project … and a dialog pops up. The project file I select is /path/to/my/BSP/platform-xyz/build-target/mosquitto-2.0.5/CMakeLists.txt; I usually just select the main CMakeLists.txt of a project here.

CMake projects are usually built out of tree, so the source folders are not clobbered with build artefacts, and ptxdist does out-of-tree builds too. In the next dialog KDevelop asks for a build directory, but the suggested one is not the one ptxdist uses. We should select the one from ptxdist instead, which in this case is /path/to/my/BSP/platform-xyz/build-target/mosquitto-2.0.5-build; in ptxdist it always sits directly next to the source dir, just with -build appended. In that dialog KDevelop now tells us: „Using an already created build directory.“ and that’s exactly what we want here, because ptxdist already set this up with the necessary cross-compiling options.
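
So for this example the two relevant directories are both inside the platform directory of the BSP:

/path/to/my/BSP/platform-xyz/build-target/mosquitto-2.0.5        (source dir, its CMakeLists.txt was imported above)
/path/to/my/BSP/platform-xyz/build-target/mosquitto-2.0.5-build  (build dir, already configured by ptxdist)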

You could use that to actually code in that source tree now, and even let KDevelop build it locally. That’s a whole other can of worms however and out of scope of what we actually wanna show in this blog post (remote debugging).

Remote debugging with KDevelop

If you never used gdb from within KDevelop before, you should read Running programs in KDevelop and Debugging programs in KDevelop from the KDevelop documentation first.

So, create a Launch Configuration and name it as you like. Instead of using some project target as executable, select „Executable“ and use the binary from the BSP root folder, e.g. this in the case of mosquitto_pub (we just use it as an example now): /path/to/my/BSP/platform-xyz/root/usr/bin/mosquitto_pub

Then go to the Debug dialog of that Launch Configuration, which should look roughly like this:

Screenshot of KDevelop Launch Configurations dialog

As Debugger executable we select the one from the toolchain. The easiest way is to click through to your BSP folder, then further to platform-xyz/selected_toolchain, and from there choose the gdb. In my case I have
/path/to/my/BSP/platform-xyz/selected_toolchain/arm-v7a-linux-gnueabihf-gdb
here, which actually is in
/opt/OSELAS.Toolchain-2020.08.0/arm-v7a-linux-gnueabihf/gcc-10.2.1-clang-10.0.1-glibc-2.32-binutils-2.35-kernel-5.8-sanitized/bin

but it’s a lot easier to use that selected_toolchain symlink here.

Up to here that was the „easy“ part. The main thing I struggled with comes with those three scripts for Remote Debugging seen in the lower half of the dialog. You put those into three different files, which can reside anywhere you like. Let’s dive in.

Gdb config script

The Gdb config script is used to set up gdb; you can think of it as commands that run before the actual debug session starts. The KDevelop help, if you hover over it, says: „This script is sourced by gdb when the debugging starts.“ We will put in here the same things ptxdist uses when calling gdb, but we have to add some more. This is essential: if you don’t get this right, things will not work as expected.

set debug-file-directory /path/to/my/BSP/platform-xyz/root/usr/lib/debug
add-auto-load-safe-path /path/to/my/BSP/platform-xyz/root
set sysroot /path/to/my/BSP/platform-xyz/root
set substitute-path platform-xyz /path/to/my/BSP/platform-xyz
directory /path/to/my/BSP

The first four lines do the same as what ptxdist does when called with the gdb argument. The last line is exceptionally important, because KDevelop is not started from the BSP workspace directory as is the case when calling ptxdist gdb. Since ptxdist-2018.10.0 the debug symbols only contain paths relative to your BSP, so without that extra directory line the gdb started by KDevelop won’t find the source files and breakpoints won’t work. (You can set the breakpoints in KDevelop, but gdb does not find the source files as communicated by KDevelop; the breakpoints stay pending and execution is not interrupted. If you set the breakpoints manually through the GDB console, execution is interrupted, but KDevelop does not jump to the correct source lines. Neither is satisfying.)

(Debugging without that directory line probably works when calling ptxdist gdb from the BSP workspace folder, because gdb adds some paths automatically. See Specifying Source Directories in the gdb documentation, especially the section about the two special entries ‘$cdir’ and ‘$cwd’.)
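
If you are unsure whether your Gdb config script got picked up, you can check from the GDB console inside KDevelop with a few standard gdb commands:

(gdb) show directories
(gdb) show substitute-path
(gdb) info source

show directories should list your BSP path, show substitute-path the configured substitution, and info source (after a breakpoint was hit) tells you which source file gdb actually resolved.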

Run shell script

The Run shell script can be used to let KDevelop start the application on the remote target. This works with ssh for example, but it is only smooth if that connection can be made without typing a password. One example of what to put into such a script:

ssh root@192.168.10.185 'gdbserver 192.168.10.185:12345' /usr/bin/mosquitto_pub -h 192.168.10.74 -t 'hello' -m 'world'

Consider this optional. You might as well start gdbserver on the target manually with whatever options you need. (This is different for example if you want to attach to an already running process.)
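
For the password-less connection it is usually enough to copy your ssh key to the target once:

ssh-copy-id root@192.168.10.185

And if you want to attach to a process that is already running instead of starting it, a variant like this could go into the script (pidof is available on most targets, e.g. via busybox; names and addresses are again just the ones from the example):

ssh root@192.168.10.185 'gdbserver --attach :12345 $(pidof mosquitto_pub)'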

Run gdb script

This is executed by KDevelop on your behalf, usually to connect to the remote gdbserver. You could just as well type it into the GDB console in KDevelop. This is what you could put into such a script (the sleep gives the Run shell script a moment to actually start gdbserver on the target before gdb tries to connect):

shell sleep 3
target remote 192.168.10.185:12345

Aftermath

The whole thing caused me some headaches, especially because things changed when ptxdist introduced layers back in late 2018. However, it’s a lot more comfortable to debug in the same environment you code in, and especially the inspection of frame stacks and variables is much nicer in a graphical environment than on the console. I think it’s worth the effort, and maybe this HowTo helps someone else to set up something similar.

You can of course create those three files manually. What we did at work was write a shell script that creates them from a template, so we can recreate them with different settings for hosts, binary paths, etc. This is not complicated, and you can easily do that yourself.
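
A minimal sketch of such a generator could look like this (all paths, addresses and names are placeholder values taken from the examples above; adapt them or turn them into script arguments):

#!/bin/sh
# Generate the three files KDevelop needs for remote debugging.
# All values below are examples, adapt them to your BSP, target and tool.
BSP=/path/to/my/BSP
PLATFORM=platform-xyz
TARGET=root@192.168.10.185
PORT=12345
TOOL=/usr/bin/mosquitto_pub
OUT=./kdevelop-remote-debug

mkdir -p "$OUT"

# Gdb config script
cat > "$OUT/gdb-config" <<EOF
set debug-file-directory $BSP/$PLATFORM/root/usr/lib/debug
add-auto-load-safe-path $BSP/$PLATFORM/root
set sysroot $BSP/$PLATFORM/root
set substitute-path $PLATFORM $BSP/$PLATFORM
directory $BSP
EOF

# Run shell script
cat > "$OUT/run-on-target.sh" <<EOF
#!/bin/sh
ssh $TARGET "gdbserver :$PORT $TOOL \$@"
EOF
chmod +x "$OUT/run-on-target.sh"

# Run gdb script
cat > "$OUT/run-gdb" <<EOF
shell sleep 3
target remote ${TARGET#*@}:$PORT
EOF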

HowTo: Debug core dumps with ptxdist 2017.07.0

Debugging for embedded projects is a little harder than for your own computer. In many cases you can not run gdb on the target, and even if you can use gdbserver [1], this does not cover all use cases. For post mortem analysis (e.g. after a segmentation fault of your program) you want to examine so-called core dumps. Given you successfully found out how to let your target create those [2], copied one to your workstation and unpacked it, you still need to know how to analyze it.

With the release of ptxdist 2017.07.0 the handling of debug information changed. Quoting from the release announcement:

The debug symbol handling was reworked. The debug files are now named based
on build-ids and (optional) debug IPKGs can be created. They are not
installed by default but can be installed manually as needed. This is
useful to gdb on the target or with valgrind and perf.

In my BSP this debug info is put into /usr/lib/debug in the root folder from which the target files are copied. It now looks like this:

% tree -a ./platform-foo/root/usr/lib/debug | head
platform-foo/root/usr/lib/debug
└── .build-id
    ├── 00
    │   └── ba20cb0e075c4dc0a792a9062b0864ced517b1.debug
    ├── 03
    │   └── 3b4fc351317376388fadb19fc63b4c8ab6c0d9.debug
    ├── 04
    │   ├── 01fe993aa2bed2155514c676d7001625732396.debug
    │   ├── 7bdbc5fd44a4444de24762c76a3313d1fda2c0.debug
    │   └── d0ecb6611e590a036cbdd5909cc5bfc9158af8.debug
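
If you want to know which of those .debug files belongs to a given binary, you can read the build-id note from the stripped binary with the readelf from your toolchain, for example (the binary path is just an example):

% ./platform-foo/selected_toolchain/arm-v5te-linux-gnueabi-readelf -n ./platform-foo/root/usr/local/bin/yourtool

The output contains a line starting with "Build ID:"; the first two hex digits of that id are the subdirectory below .build-id and the remaining digits plus ".debug" form the file name, which is exactly how gdb locates the matching debug info.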

You can imagine it is not possible anymore to load this manually; the debugger will have to find it out by itself. Getting this to work caused me some headaches, but this is how I got it to work: create a file ‘gdb-config’ with the following content: [3]

set debug-file-directory /home/adahl/Work/bsp/foo/platform-foo/root/usr/lib/debug
set sysroot /home/adahl/Work/bsp/foo/platform-foo/root

Note: the order of the commands is important; it does not work the other way round!

Then load your core dump:

% ./platform-foo/selected_toolchain/arm-v5te-linux-gnueabi-gdb -x ./tmp/gdb-config -e ./platform-foo/root/usr/local/bin/yourtool -c ./tmp/cores/2017-08-16/core 

So you run the gdb from your toolchain, load the previously crafted file with gdb commands via -x, give the path to your executable with -e, and finally the core dump file with -c. That’s it. You may now have a look at a backtrace and find out what caused the segfault …
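
Inside gdb the usual commands for post mortem inspection apply, for example:

(gdb) bt
(gdb) info threads
(gdb) frame 2
(gdb) info locals
(gdb) list

bt prints the backtrace of the crashed thread, info threads lists all threads if the program is multi-threaded, frame selects one of the frames from the backtrace, and info locals and list show the local variables and the surrounding source of that frame.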

Update: in the #ptxdist IRC channel (on freenode) I was pointed to an easier way to invoke the right gdb with the necessary options. The call from above then simply becomes:

% ptxdist gdb -e ./platform-foo/root/usr/local/bin/yourtool -c ./tmp/cores/2017-08-16/core 

No need to pick the correct gdb or find and configure the right directories; just add the path to your tool and your core file, and ptxdist handles everything else. I bet this would also work with older ptxdist versions, where the debug symbols were placed somewhere else, but I didn’t try it. This, however, was also just added with ptxdist 2017.07.0:

There is also a new ptxdist command ‘gdb’ for remote debugging that sets up
the sysroot correctly and a wrapper script that can be used by graphical
development environments.

  [1] we already had this topic here: KDevelop: Debuggen von Programmen, die root-Rechte benötigen (German)
  [2] this would be content for another post
  [3] of course you adapt the paths to the ones you use on your machine

Do not change already released files!

tl;dr: Please, upstream developers: do not ever change what you already published, but make an additional version with your fix. This causes less trouble for people building your stuff.

As some of you might have noticed: I’m a little into embedded Linux software and contribute to some of the build systems around, mainly to buildroot (for fli4l) and to ptxdist (at work). This is a very special kind of fun, meaning constantly trying to fix things gone wrong. Today is one of those days where I’m stuck, because someone else fucked up his stuff.

Last week I built myself an image for testing iperf on a BeagleBone Black with the current buildroot master. This got me a tarball iperf-2.0.9.tar.gz from some mirror server. This worked.

Today I upgraded a ptxdist BSP from some older state, I think 2016.12.0, to the recent ptxdist 2017.06.0, which included an upgrade of the iperf package from 2.0.5 to 2.0.9. This got me a complaint about an invalid checksum. Those embedded build systems contain checksums for tarballs: buildroot mostly uses sha256, while ptxdist still uses md5. This is mostly to ensure transport integrity, but it also triggers when the upstream tarball changes. Which it should not.

So the checksums in today’s buildroot master are still the ones from buildroot changeset 2016.05-1497-g11cc12e from 2016-07-29:

# From https://sourceforge.net/projects/iperf2/files/
sha1    9e215f6af8edd97f947f2b0207ff5487845d83d4        iperf-2.0.9.tar.gz
# Locally computed:
sha256  a5350777b191e910334d3a107b5e5219b72ffa393da4186da1e0a4552aeeded6  iperf-2.0.9.tar.gz

Those are the very same as for the file I have locally. Note: both ptxdist and buildroot download archives into the same shared folder here. The md5sum of this file is 1bb3a1d98b1973aee6e8f171933c0f61, and ptxdist aborts with a warning that this sum does not match. In ptxdist the iperf package was last changed on 2016-12-19, also upgrading from iperf 2.0.5; there the changeset is ptxdist-2016.12.0-10-gd661f64 and the expected md5sum is 351b018b71176b8cb25f20eef6a9e37c. This is the same you can see today on sf.net, but why is it different from the one above?
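
To check which variant of the tarball you actually have locally, just recompute the sums yourself:

% md5sum iperf-2.0.9.tar.gz
% sha256sum iperf-2.0.9.tar.gz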

To find out, I downloaded the file currently available on sf.net, which was last changed 2016-09-08, after buildroot included the package update. The great tool diffoscope showed me that a lot of the content of those two archives differs. But why?
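
(The comparison itself is a one-liner; the second file name is just what I would call the freshly downloaded copy:)

% diffoscope iperf-2.0.9.tar.gz iperf-2.0.9-from-sf-2016-09-08.tar.gz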

Seems I was not the first one noticing: #20 Release file: iperf-2.0.9.tar.gz changed!!! And the maintainer set it to WONTFIX.

Now this is the point where I’m not sure whether to just get pissed, sigh deeply, or try to fix the mess for those build systems. The clean way would be upstream releasing a new tarball, either the one or the other, or even making a new release named 2.0.9a or whatever.

What are possible solutions?

  • Wait for upstream to make a clean, new release. (And hope this doesn’t get changed in the future.)
  • Upgrade those hashes in buildroot. This obviously breaks old versions of buildroot.

According to the buildroot IRC channel, they want their package to be updated, even if older releases will break. And they said they have a fallback and use their own mirror, so that’s where my first package may have come from.

Update: buildroot accepted my patch updating those hashes quickly.