01 Issue
Bug · Closed · Swamp CLI
Assignees: None

#213 `swamp datastore setup` (filesystem→S3 migration) panics in deno Node TLS layer mid-migration

Opened by bixu · 5/2/2026

Summary

Running swamp datastore setup extension @swamp/s3-datastore --config '{...}' against a freshly-bootstrapped S3 bucket panics in deno's Node.js TLS compatibility layer (ext/node/ops/tls_wrap.rs:2018) shortly after starting the local→S3 data migration. The migration writes a partial set of files (93 / 7,847 in our case, ~1.2% complete) before aborting with SIGABRT (exit 134).

The repository ends up in a half-migrated state: the bucket exists and holds some objects, the local datastore stays filesystem-typed (the switch never completes), and no error is surfaced to the user beyond the raw stack trace.

Reproduction

  1. Bootstrap a fresh S3 bucket via swamp workflow run @swamp/bootstrap-s3-datastore --input '{"bucket_name": "<NAME>", "region": "<REGION>"}' — the infra job succeeds (bucket + IAM policy created).
  2. The configure job (which calls swamp datastore setup extension @swamp/s3-datastore) panics. Repro is reliable on a repo with a non-trivial local .swamp/ directory (~7,800 files in our case).

Stack trace

@aws-sdk/credential-provider-node - defaultProvider::fromEnv WARNING:
    Multiple credential sources detected: ...

INF datastore·setup Migrating data...

thread 'main' (9406001) panicked at ext/node/ops/tls_wrap.rs:2018:31:
called `Option::unwrap()` on a `None` value
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

thread 'main' (9406001) panicked at library/core/src/panicking.rs:230:5:
panic in a function that cannot unwind
stack backtrace:
   0: _uv_mutex_unlock
   1: <unknown>
   ...
   11: _uv_mutex_unlock
thread caused non-unwinding panic. aborting.

The tls_wrap.rs:2018 line is in deno's Node TLS compat layer, which the npm-bundled @aws-sdk/* packages go through for HTTPS requests to S3.

Environment

  • swamp: 20260501.234710.0-sha.f1687b62
  • bundled deno: 2.7.14 (stable, release, aarch64-apple-darwin) / V8 14.7.173.20-rusty / TS 5.9.2
  • @swamp/s3-datastore: 2026.04.28.4
  • @swamp/s3-datastore-bootstrap: 2026.04.22.3
  • macOS aarch64 (Darwin 25.4.0)
  • AWS region: eu-central-1
  • Local .swamp/ size: ~7,847 files across data/outputs/workflow-runs/bundles
  • Reproduces with both static env-var credentials and AWS_PROFILE-based SSO credentials

The setup logs a benign credential-source warning about both AWS_PROFILE and AWS_ACCESS_KEY_ID/SECRET being set. After unsetting the static creds the warning goes away but the panic still reproduces, so the credential conflict is not the cause.

Asks

  1. Wrap the Option::unwrap() at tls_wrap.rs:2018 so the underlying TLS failure surfaces as an error instead of a panic. Even a generic Option::ok_or(...) with the offending site's name would be a substantial improvement.
  2. Make the migration resumable. A failed migration should leave the datastore in a known state (still filesystem, OR fully switched), not a half-migrated bucket with the local config unchanged. A subsequent retry should pick up where the previous one stopped (sync semantics) rather than starting over.
  3. Pin or upgrade the bundled deno. If this is a known deno bug, document the deno version range that's safe and bump the bundled version.
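Ask 1 might look something like the following sketch. The names (`take_write_buffer`, `TlsWrapError`) are illustrative placeholders, not the actual deno internals; the point is only the `Option::unwrap()` → `Result` conversion so the caller sees a typed error instead of a process abort:

```rust
// Hypothetical sketch of ask 1: surface the missing buffer as a typed
// error instead of panicking. `take_write_buffer` and `TlsWrapError`
// are invented names standing in for the real tls_wrap.rs internals.

#[derive(Debug, PartialEq)]
enum TlsWrapError {
    // Names the offending site, so the failure is diagnosable from logs.
    MissingWriteBuffer,
}

fn take_write_buffer(buf: Option<Vec<u8>>) -> Result<Vec<u8>, TlsWrapError> {
    // Before: buf.unwrap() — a None here aborts the whole process.
    // After: a None becomes an error the caller can report and handle.
    buf.ok_or(TlsWrapError::MissingWriteBuffer)
}

fn main() {
    assert_eq!(take_write_buffer(None), Err(TlsWrapError::MissingWriteBuffer));
    assert_eq!(take_write_buffer(Some(vec![1, 2])), Ok(vec![1, 2]));
}
```
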
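For ask 2, sync semantics could be sketched as below: list the keys already present in the bucket and upload only the difference, so a retry resumes at the failure point instead of starting over. `plan_resume` is a hypothetical helper; the real S3 listing and upload calls are elided:

```rust
// Hypothetical sketch of ask 2 (resumable migration via sync semantics).
// Given the local file list and the set of keys already in the bucket,
// compute only the remaining uploads. The actual S3 API calls are elided.

use std::collections::HashSet;

fn plan_resume(local: &[String], remote: &HashSet<String>) -> Vec<String> {
    local
        .iter()
        .filter(|path| !remote.contains(*path)) // already migrated: skip
        .cloned()
        .collect()
}

fn main() {
    // e.g. 2 of 3 objects landed before a failure; a retry uploads only the rest.
    let local = vec![
        "report-bundles/a".to_string(),
        "vault-bundles/b".to_string(),
        "data/c".to_string(),
    ];
    let remote: HashSet<String> = local[..2].iter().cloned().collect();
    assert_eq!(plan_resume(&local, &remote), vec!["data/c".to_string()]);
}
```
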

Workaround

None confirmed. Will try swamp datastore sync --push against the half-migrated bucket once the datastore is forced to S3 manually, but that bypasses setup's atomic switch and isn't a documented path.

02 Bog Flow

Closed

5/5/2026, 12:54:21 PM

No activity in this phase yet.

03 Sludge Pulse

bixu commented 5/2/2026, 12:36:51 PM

Confirmed deterministic on retry. Emptied the bucket (279 objects + delete markers cleared) and re-ran swamp datastore setup extension @swamp/s3-datastore with the exact same config. The migration died at the same point — exit 134, identical stack trace ending in ext/node/ops/tls_wrap.rs:2018:31 — and the bucket again contains exactly 93 objects (same as the first run).

So the fault reproduces at the same byte/sequence boundary; it is not a transient TLS hiccup. That points to a code path that deterministically triggers the panic on this combination of swamp version, bundled deno version, and .swamp/ shape.

The 93 objects written before the panic span report-bundles/, vault-bundles/, and the first batch of workflow-runs/. This suggests the migration walks directories in a stable order and dies as it transitions to the next group (likely the start of the larger data/ tree, ~7,269 files).
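A minimal sketch of that stable-order hypothesis, assuming a sort-based walk (the migration's actual ordering is unknown): sorting the entry list means every run visits files in the same sequence, so a deterministic fault lands at the same object count each time.

```rust
// Illustrative only: a walk that sorts its entries is deterministic, so
// a fault tied to a specific entry always fires at the same position,
// regardless of the order in which the filesystem enumerated the files.

fn walk_order(mut entries: Vec<String>) -> Vec<String> {
    entries.sort(); // lexicographic: same input set -> same sequence every run
    entries
}

fn main() {
    let run1: Vec<String> = vec!["b/x".into(), "a/y".into(), "c/z".into()];
    let run2: Vec<String> = vec!["c/z".into(), "b/x".into(), "a/y".into()];
    // Different discovery order, identical walk sequence: the Nth object
    // (here, the 93rd) is the same on every run.
    assert_eq!(walk_order(run1), walk_order(run2));
}
```
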

Happy to run a RUST_BACKTRACE=1 capture or any targeted instrumentation if it would help isolate the offending request.

bixu commented 5/2/2026, 12:44:06 PM

More detail from a RUST_BACKTRACE=full --log-level trace run:

  1. RUST_BACKTRACE=full adds no useful symbols. The bundled deno binary at ~/.swamp/deno/deno is release-stripped — every frame resolves to _uv_mutex_unlock or <unknown>. Confirmed deno version 2.7.14 from the swamp·runtime·deno debug line.
  2. Reverse migration (S3 → filesystem) panics identically. Reproduced by running swamp datastore setup filesystem --path .swamp --skip-migration from a half-migrated S3 datastore — same tls_wrap.rs:2018:31 Option::unwrap() panic. So the bug isn't tied to upload direction; it triggers on any TLS-bearing migration over the AWS SDK Node TLS shim.
  3. Last log line before panic is consistently:
    12:43:36.778 INF datastore·setup Migrating data...
    12:43:38.003 DBG datastore·setup·extension Pushing data to remote datastore...
    ~1.2s after the migration start. The panic happens immediately after Pushing data to remote datastore..., but uploads do progress (we land at exactly 93 objects in the bucket every time before the panic — same point in the dir walk), so the panic isn't on the very first request.
  4. Stack offsets are stable across runs, which suggests a deterministic code path rather than a race.

If a debug-symbol deno build for the bundle would help isolate the offending request, I'm happy to repro against it. Otherwise the most useful single change from your side is probably the Option::unwrap() → Option::ok_or(...) swap, so the underlying TLS error surfaces.

stack72 commented 5/5/2026, 12:54:19 PM

@bixu

Root-caused to a Deno bug in node:tls: an Option::unwrap() on a None returned by ArrayBuffer::data() at ext/node/ops/tls_wrap.rs:2018, triggered during the TLS write/teardown path. Same root cause as #219 and #224 (part 1).

Filed upstream as denoland/deno#33713; fix merged 2026-05-01 as denoland/deno#33737. The fix won't land in a stable Deno release until v2.8.0 (~2 weeks). To unblock users now, swamp bundles a pinned Deno canary commit (19bd3d8b, 83 commits ahead of the fix commit, 0 behind) that contains the fix.

This addresses asks 1 and 3 from the original report.

Ask 2 — making the migration resumable so a partial failure leaves the datastore in a known state and a retry picks up where the previous run stopped — is tracked separately as #248. The Deno panic was the trigger you observed, but resumability matters for any partial failure (network blip, transient 5xx, etc.), which is why it deserves its own issue.

Closing the panic part as fixed. The Deno fix lands in the next swamp release; please re-open if you reproduce on a build that includes it.
