Please note that there’s a vulnerability in a component contained in releases prior to v2.1.4. An update to v2.1.4 or later is strongly recommended!
Release 2.1.4: Fixing a security issue
The primary reason to create an intermediate release before the next version was to include a patch for libappimage. This patch fixes a vulnerability which could be exploited under rare circumstances in AppImageLauncher’s integration daemon, appimagelauncherd. The default configuration is secure; the attack vector only opens when ~/Downloads is included in the list of watched directories.
Shortly after reviewing the report (which was disclosed responsibly, i.e., the reporter contacted us privately), I fixed the reported issue (and a similar one I discovered myself) in libappimage. Then, I updated all projects under my maintainership which use libappimage (AppImageLauncher as well as appimaged) and informed distros shipping libappimage that they should update (or backport the patch).
The lack of sanitization of data used from untrusted files (AppImages) in libappimage and the resulting serious vulnerability in appimaged were both assigned CVE numbers. Those reports will be published soon. Furthermore, I will publish a detailed explanation of the issue, an analysis from AppImage’s perspective and how to prevent similar issues in the future. Thanks to Ryan Gonzales for discovering the vulnerability, sharing it with us responsibly and taking care of the CVE assignment.
The release further included a couple of minor improvements and translation updates.
Release 2.2.0: Replacing an annoying workaround with a less annoying one
The upcoming release drops the (infamous) AppImageLauncher FUSE filesystem, a workaround for one of the integration techniques AppImageLauncher uses, which allowed AppImages to run without triggering AppImageLauncher again and thus entering an infinite loop. This fixes a lot of issues people have had, removes a performance penalty, and frees all running AppImages from depending on a user-level daemon.
Some of you may know that AppImageLauncher integrates deeply into the system to be able to intercept every launch of an AppImage and provide its functionality. It takes responsibility for the lifecycle of an AppImage process using two technologies: the XDG MIME Applications system, which the system uses to determine which application should open non-executable files (used when launching AppImages from the browser after downloading them, double-clicking them in the file manager, etc.), and binfmt_misc, a facility of the Linux kernel to launch executable files with user-space applications. The latter can be used to launch any file, even non-binaries like .jar files, like any other application (e.g., ./some.jar). The kernel just needs to know how to identify the files (e.g., look for a specific extension (rather weak), or check for a unique file signature). In the case of AppImageLauncher, this makes sure the first run assistance and many other features can be provided on files which are already executable, too (e.g., when other tools are used to make AppImages or ELF binaries in general executable, or on later executions after an application was first tried via the “Run once” button).
Intercepting launches of an ELF binary using binfmt_misc creates a problem: you are launched every time the kernel sees such a binary, so once the integration is set up, you cannot simply execute an AppImage from AppImageLauncher as a subprocess like any other executable; that would create an infinite loop of AppImageLauncher executions. Therefore, you need another method to run AppImages from AppImageLauncher: you need to somehow bypass binfmt_misc. binfmt_misc itself doesn’t provide any easy way to accomplish this. To be fair, the creators of this kernel feature might never have thought of a use case like AppImageLauncher’s.
In the beginning, AppImageLauncher just shipped its own type 2 runtime. This runtime didn’t embed the magic bytes which trigger the integration, and using an environment variable called $TARGET_APPIMAGE, it could be used to run the payload of other type 2 AppImages. However, it’s a foreign runtime that doesn’t necessarily provide the same functionality as the one embedded in the AppImage; one can only hope that they’re compatible. The chances are really good (the official AppImage runtime is, by far, the most widely used implementation), but the spec doesn’t contain every detail of the runtime. In fact, it’s quite vague on some topics. There are some other implementations which claim to be spec-compliant but don’t use the same payload format as the official runtime.
Another problem, which, admittedly, could’ve been solved by updating the code, is that the old, deprecated type 1 runtime doesn’t support $TARGET_APPIMAGE or anything comparable out of the box. To be able to support type 1 AppImages by shipping a suitable runtime, I would’ve had to first implement the missing feature in the obsolete codebase. Even worse, the approach creates the same problems as the shipped type 2 runtime: the spec is vague and one can only hope for it to work. Finally, probably no Linux distribution would support shipping those runtimes in an AppImageLauncher package unless their code were included in the AppImageLauncher code repository (so far, the runtime was downloaded from AppImageKit’s release page).
Searching for an alternative method
As shipping runtimes is highly problematic, an alternative approach needed to be found. Of course, the runtime shipped inside the AppImages themselves should now be used. To solve the problem, I took a look at how binfmt_misc is used in other projects. Everybody who has ever made, e.g., .jar files executable and watched them run without invoking them like java -jar ...jar (i.e., by just calling ./...jar) has used a special “interpreter” registered in binfmt_misc. This mechanism is implemented in most distros’ Java packages. One can easily bypass this integration by not running ./...jar, but using a custom interpreter instead (e.g., mycustomjava -jar ...jar). Then, the kernel is never in charge of deciding how to execute the file.
Wouldn’t it be great to apply that approach to ELF binaries, and specifically AppImages? Instead of trying to run them directly (./...AppImage), one could just launch their interpreter on them, right?
Dynamically linked ELF binaries are run by the so-called loader. The kernel creates a user-space process for the loader and passes the path to the ELF binary and all parameters to it. The loader reads the ELF header, prepares everything (e.g., it loads all libraries the executable depends on so that the binary can use them), and then calls the application’s entry point. The loader itself is a statically linked binary (i.e., no loader is required to prepare its execution – the kernel can directly create a process).
On Debian, its derivatives, and many other distributions, the GNU libc loader is used. One can have multiple loaders on a single system, e.g., to run i386 binaries on an AMD64 system. The binary used for AMD64 binaries is called ld-linux-x86-64.so.2, and on most systems it is located at /lib64/ld-linux-x86-64.so.2. You can try it out yourself: to launch htop, for instance, you can type /lib64/ld-linux-x86-64.so.2 $(which htop).
The loader usually refuses to run statically linked executables, but the official AppImage runtime is linked dynamically (as it uses a few system libraries like libc.so.6), so, in theory, it should be possible to run it by calling /lib64/ld-linux-x86-64.so.2 ...AppImage. Well, as said, in theory. The AppImage specification suggests for type 1 and requires for type 2 to embed magic bytes: three bytes, consisting of the ASCII characters AI (for AppImage) and the unsigned numeric type number (\x01 for type 1 and \x02 for type 2), at an offset of eight bytes from the beginning of the file. (Note that type 1 AppImages can also be identified by checking for the magic numbers of both the ELF standard and the ISO 9660 filesystem, the payload format.) What doesn’t sound very harmful actually violates the ELF specification, as these bytes create invalid values in the ELF header. The loader complains about this (“ELF file ABI version invalid”) and exits with an error code. Surprisingly, this only occurs when calling AppImages with the statically linked loader binary, not when running them normally. Apparently, the kernel uses a slightly different method to create processes for dynamically linked ELF binaries, and that code path isn’t as strict about the header’s values. However, the cases in which loaders refuse to launch AppImages (because they strictly check the ELF header) are becoming more frequent. In Docker containers with recent distro releases, for example, AppImages cannot be executed any more. There are many bug reports in AppImageKit’s issue tracker on GitHub, and I run into the issue regularly myself, too. The only solution is to patch such AppImages and erase the magic bytes before using them. The AppImage team is informed, and we have ideas how to fix the issue in the long term: future AppImage types will use other methods to embed magic bytes. However, there are tons of existing AppImages which might, in the future, no longer run everywhere.
What helps make the loader run the AppImage runtime would also bypass binfmt_misc easily: just patch the AppImage and replace the magic bytes with null bytes. Problem solved? Not really. First of all, we wouldn’t even need to worry about the glibc loader binary any more, as such AppImages wouldn’t be run through AppImageLauncher anyway: binfmt_misc doesn’t intercept the calls if the magic bytes are not found. Second, there are strong reasons not to remove the magic bytes: AppImageLauncher would break its own integration, and future versions could not update the integration when, e.g., a new update of AppImageLauncher is installed. Furthermore, the AppImages wouldn’t be spec-compliant any more (remember: the type 2 spec requires these bytes), and other software could no longer recognize AppImages reliably.
In late 2018, I thought: what if we could provide something like a virtual file that is basically an AppImage minus the magic bytes? I looked for simple ways (e.g., using named pipes), but none of them really behaved like real files. Copying AppImages entirely into memory and patching them there is not practicable either (it consumes way too much RAM). As an experiment, I started to work on AppImageLauncherFS, a FUSE filesystem in which AppImages can be registered, creating virtual files which just forward the underlying AppImages’ data. The code recognizes the blocks in which the magic bytes reside and overwrites them with zeroes. Such virtual AppImages can be launched without triggering AppImageLauncher’s binfmt_misc integration, using the embedded runtime and thus eliminating all the disadvantages of shipping custom runtimes.
AppImageLauncherFS was soon integrated into AppImageLauncher and replaced the shipped runtime entirely; it was first included in release 1.0.0. This allowed AppImageLauncher to be shipped in distro packages more easily later on, and it’s now shipped as part of KDE Neon and Manjaro KDE. However, the approach has some significant disadvantages, too. If the filesystem process died, all AppImages running on the system would crash sooner or later, as the filesystem could no longer reply to I/O operations. The internal state keeping of the filesystem was also not too stable and led to a list of issues on GitHub. Making sure the filesystem was running whenever AppImageLauncher needed it proved problematic as well. The FUSE filesystem also broke electron-builder’s internal updater, as it didn’t support any write operations. Although a workaround was available and I offered electron-builder’s maintainers help with integrating it, they didn’t react. This led to some projects recommending their users not to use AppImageLauncher at all – they saw it as an obstacle and ignored its advantages. A minor issue was that the FUSE filesystem was essentially a performance bottleneck – it didn’t really use threading, and we had to run custom logic on every read of a file. The only really good thing was that the code was rock solid and ran for many months without crashing or leaking memory.
It was clear from the beginning that this FUSE filesystem was only a temporary workaround; future AppImage types should never again conflict with the ELF specification. I constantly looked for better solutions to get rid of the FUSE filesystem, as I didn’t want to invest the time needed to fix its problems and make other people’s software work with it.
A new and better solution
Last week, once again, I brainstormed on alternatives, inspired by a comment on a GitHub issue which claimed that AppImageLauncher “messed” with AppImage files and asked to stop that behavior. Of course, AppImageLauncher never modifies an AppImage (it only ever moves them around, and only with the user’s consent). So I clarified a few of their points. But I really didn’t want to have the same conversation yet again to justify how AppImageLauncher integrates itself into the system, how that creates issues that the FUSE filesystem could solve, etc. I’ve had this conversation a couple dozen times before, and it’s really tiring. I was really motivated to find an alternative solution which wouldn’t just prevent further discussions on the topic, but also fix all my and other people’s issues related to the virtual filesystem. There have been over a dozen issues on GitHub in my project, a couple more in other projects (e.g., electron-builder), and a lot more reports and speculations on the Internet, all related to the existence of the FUSE filesystem.
I first looked again at one of the ideas used before: using a patched runtime. Remember, the official type 2 runtime supports launching other AppImages out of the box. I wrote a tool which extracts an AppImage’s runtime into a virtual in-memory file, erases the magic bytes, and then uses the environment variable $TARGET_APPIMAGE, pointing to the AppImage we want to launch, to run the AppImage. Of course, binfmt_misc was not triggered, and the AppImage was launched. But, as explained before, $TARGET_APPIMAGE is not universal: it only works with the type 2 runtime, and only for AppImages built with the runtime released in mid 2018 or later. As AppImageLauncher needs to support all AppImages out there, this solution was a first step, but not perfect yet.
So, one problem solved, another created: the runtime needs to be made to use the original AppImage from which it was extracted, but one cannot easily tell it the path. I started to experiment with $LD_PRELOAD, building a special library which provides suitable replies when the runtime tries to figure out the path to its AppImage. I focused on type 2 first. Using strace and looking at the code, I noticed that the runtime only tries to read the symlink /proc/self/exe to find its own path. The logs and the source code of the runtime and its dependency squashfuse showed that all I had to do was hook into three libc functions (readlink(), open() and realpath()), recognize when /proc/self/exe was passed as a filename, and modify the calls appropriately to either return the real path or open the actual AppImage. After a lot of back and forth (I hadn’t used $LD_PRELOAD in production before), I managed to trick a runtime downloaded from the release page, whose magic bytes were erased, into launching a couple of type 2 AppImages.
I invested a lot more time to improve the code, then combined both approaches and published the result as a proof of concept on GitHub. It proved that it was possible to bypass binfmt_misc without needing a FUSE filesystem and without having to use a third-party runtime. The approach worked well for all type 2 AppImages on my system. Luckily, without any modifications, it also worked for old type 1 AppImages. After some more testing (and sharing the PoC on AppImage’s IRC channel), I started to work on its integration into AppImageLauncher. Soon, AppImageLauncherFS could be dropped from the codebase entirely.
I tested the integration on various other systems, making sure that upgrades work well, too (once the new version is installed, the new method is used, and AppImages running already would continue to run through the FUSE filesystem until the system was rebooted). I also asked users to test the new system, and for instance, one user on GitHub confirmed that the new method works fine on a Raspberry Pi 4. As it looked like everything worked as intended, I could close many issues on GitHub (see the central issue I created as a reference for all the other issues), and I’m confident that all users of AppImageLauncher (and AppImage creators who have such users) will be very happy about the change.
Now, of course, there are some disadvantages to the new method as well. For every invocation of an AppImage, we need a few hundred kiB of RAM to store the patched runtime during the lifetime of the process. Preloading a library is not the safest way to pass the path of the AppImage to the runtime, but it seems to work fine with new and old AppImages, including 5+ year old type 1 ones, and it might even work with future runtimes, as the chances are good that they’d use similar algorithms to detect their own path (Linux doesn’t provide any reliable way for a binary to detect its own path other than following the symlink /proc/self/exe). And the fact that the AppImage processes no longer depend on each other or on a user-level daemon is also a great advantage.
Since the last release, the translations were updated (thanks to all translators, keep up the good work!), and I removed some really old debug code. Furthermore, a little change to the issue templates, proposed in PR #345, was merged (thanks to @xerus2000).
Where to get the new AppImageLauncher
You can download up-to-date binaries for all platforms supported upstream (Debian and some RPM-based distributions with the classic AppImageLauncher, almost any somewhat recent distribution with AppImageLauncher Lite) from the GitHub release page. For Ubuntu 19.04 and later as well as derivatives, there’s a Personal Package Archive (PPA) which makes keeping AppImageLauncher up to date very easy.
If you encounter any issues, please don’t hesitate to open an issue on GitHub. This is the only way we, the developers, get a chance to analyze the problem and eventually provide a fix. Posting your problems in distribution-specific forums is not very useful, since we can’t monitor them all. So, please consider posting your issue on GitHub if you find any. You can also create issues for feature proposals there.