TransferX v1.3.4
Enterprise-grade chunked file transfer SDK for Node.js. Built around three pillars: reliability (retry with exponential backoff), resumability (crash-safe persistent sessions), and observability (strongly-typed event bus with live progress).
Semver patch releases are backwards-compatible; minor releases may add
new exports. Current latest: @transferx/sdk@1.3.4.
Installation
# Backblaze B2
npm install @transferx/sdk
# AWS S3 or Cloudflare R2
npm install @transferx/sdk @transferx/adapter-s3
Requires Node.js 18+ (uses native fetch and
crypto.subtle). The @transferx/sdk package
re-exports all factories (createB2Engine,
createS3Engine, createR2Engine) from a
single entry point — install the matching adapter package for S3/R2.
Quick Start
import * as fs from 'fs';
import {
createB2Engine,
makeUploadSession,
makeSessionId,
FileSessionStore,
} from '@transferx/sdk';
// 1. Create the engine — store is required
const { upload, bus, config, store } = createB2Engine({
b2: {
applicationKeyId: process.env.B2_APPLICATION_KEY_ID!,
applicationKey: process.env.B2_APP_KEY!,
bucketId: process.env.B2_BUCKET_ID!,
},
store: new FileSessionStore('./.transferx-sessions'),
});
// 2. Listen for events
bus.on('progress', ({ progress }) => {
process.stdout.write(
`\r${progress.percent.toFixed(1)}% ${(progress.speedBytesPerSec / 1e6).toFixed(2)} MB/s`
);
});
bus.on('session:done', ({ session }) =>
console.log('\n✓ Done', session.id)
);
// 3. Build session — makeSessionId() gives a stable crash-safe ID
const stat = fs.statSync('/path/to/video.mp4');
const session = makeUploadSession(
makeSessionId('/path/to/video.mp4', 'uploads/video.mp4', stat.size),
{ name: 'video.mp4', size: stat.size, mimeType: 'video/mp4', path: '/path/to/video.mp4' },
'uploads/video.mp4',
config,
);
await store.save(session);
const result = await upload(session);
// Always check result.state — 'done' | 'failed' | 'cancelled'
if (result.state !== 'done') throw new Error('Upload failed: ' + result.state);
Sessions
A TransferSession is the unit of work in TransferX.
It holds the file descriptor, all chunk metadata, current transfer
state, and the configuration that governs the upload. Sessions are
created once with makeUploadSession(), then passed to
engine.upload().
Session state machine
| State | Terminal? | Resumable? | Meaning |
|---|---|---|---|
| `created` | | | Session constructed, not yet started |
| `initializing` | | | Calling `initTransfer()` on the adapter |
| `queued` | | | Waiting in the scheduler queue |
| `running` | | | Chunks uploading in parallel |
| `paused` | | ✓ | Scheduler drained, upload suspended |
| `failed` | ✓ | ✓ | A chunk exhausted its retry budget; resumable |
| `done` | ✓ | | All chunks confirmed, `completeTransfer()` called |
| `cancelled` | ✓ | | User-initiated cancellation; `abortTransfer()` called |
Sessions in the paused, failed, or
running (crash recovery) state can be resumed with
engine.resumeSession(sessionId). The engine reconciles
remote provider state, then re-uploads only the chunks that still need
uploading.
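As a quick guard before calling resumeSession(), the resumable states listed above can be checked with a small helper. This is an illustrative sketch, not an SDK export; the SDK may provide its own predicate:

```typescript
// States from the state machine above. Only 'paused', 'failed', and
// 'running' (crash recovery) are accepted by engine.resumeSession().
type SessionState =
  | 'created' | 'initializing' | 'queued' | 'running'
  | 'paused' | 'failed' | 'done' | 'cancelled';

const RESUMABLE_STATES: ReadonlySet<SessionState> = new Set<SessionState>([
  'paused', 'failed', 'running', // 'running' only via crash recovery
]);

export function isResumable(state: SessionState): boolean {
  return RESUMABLE_STATES.has(state);
}

// Usage sketch:
// if (isResumable(session.state)) await engine.resumeSession(session.id);
```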
Chunks & Concurrency
Files are split into fixed-size chunks (default 10 MiB). Each chunk is uploaded independently. The Scheduler controls how many active chunk uploads run at once.
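The fixed-size split can be pictured with a short helper. This is a simplified sketch; the SDK's real ChunkMeta carries more fields (state, providerToken, checksum):

```typescript
// Simplified view of the fixed-size split described above.
interface ChunkSpan {
  index: number;   // 0-based chunk index
  offset: number;  // byte offset into the file
  length: number;  // chunk length; the last chunk may be shorter
}

export function splitIntoChunks(
  fileSize: number,
  chunkSize = 10 * 1024 * 1024, // default 10 MiB
): ChunkSpan[] {
  const chunks: ChunkSpan[] = [];
  for (let offset = 0, index = 0; offset < fileSize; offset += chunkSize, index++) {
    chunks.push({ index, offset, length: Math.min(chunkSize, fileSize - offset) });
  }
  return chunks;
}
```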
Chunk states
| State | Meaning |
|---|---|
| `pending` | Not yet started |
| `uploading` | In-flight HTTP upload |
| `done` | Confirmed by provider |
| `failed` | Attempt failed; may retry |
Concurrency config
const engine = createB2Engine({
b2: { ... },
config: {
concurrency: {
initial: 4, // start with 4 parallel uploads
min: 1, // never below 1
max: 16, // never above 16
adaptive: false, // disable auto-tuning
},
},
});
Retry
Each chunk has its own retry budget. Failed chunks are re-queued with
exponential backoff + full jitter to avoid thundering herd.
Non-retryable error categories (clientError,
auth, cancelled, fatal)
propagate immediately.
Backoff formula
Delay before attempt n (1-indexed):
base_delay = min(baseDelayMs × 2^(n-1), maxDelayMs)
actual_delay = base_delay + random(0, jitterMs)
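In code, the formula translates to something like the sketch below, using the default values from EngineConfig. The injectable random source is an illustration device, not part of the SDK:

```typescript
// Backoff formula from above. `attempt` is 1-indexed.
interface RetryPolicyLike {
  baseDelayMs: number;
  maxDelayMs: number;
  jitterMs: number;
}

export function backoffDelayMs(
  attempt: number,
  policy: RetryPolicyLike,
  random: () => number = Math.random, // injectable for deterministic tests
): number {
  // base_delay = min(baseDelayMs * 2^(n-1), maxDelayMs)
  const base = Math.min(policy.baseDelayMs * 2 ** (attempt - 1), policy.maxDelayMs);
  // actual_delay = base_delay + random(0, jitterMs)  (full jitter)
  return base + random() * policy.jitterMs;
}
```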
Rate-limit awareness
When the provider responds with HTTP 429, the engine reads the
Retry-After header and waits at least that duration
before re-queuing the chunk.
config: {
retry: {
maxAttempts: 5, // 1 initial + 4 retries
baseDelayMs: 500,
maxDelayMs: 30_000, // cap at 30s
jitterMs: 500,
},
}
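The Retry-After handling described earlier can be sketched as follows. Per the HTTP spec the header is either delta-seconds or an HTTP-date; the function name here is illustrative, not an SDK export:

```typescript
// Parse a Retry-After header into a wait in milliseconds.
// Returns undefined when the header is absent or unparseable.
export function retryAfterMs(
  header: string | null,
  now: number = Date.now(),
): number | undefined {
  if (!header) return undefined;
  const seconds = Number(header);       // delta-seconds form, e.g. "3"
  if (Number.isFinite(seconds) && seconds >= 0) return seconds * 1000;
  const date = Date.parse(header);      // HTTP-date form
  return Number.isNaN(date) ? undefined : Math.max(0, date - now);
}

// Per the docs, the engine waits at least this long before re-queuing,
// i.e. effectively max(backoff delay, retryAfterMs(header) ?? 0).
```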
Progress
The ProgressEngine computes an EMA-smoothed speed
estimate and the resulting ETA. Progress events fire at most every
progressIntervalMs milliseconds (default 200 ms).
bus.on('progress', ({ progress }) => {
console.log({
percent: progress.percent, // 0–100
transferred: progress.transferredBytes, // bytes
total: progress.totalBytes,
speedMBps: (progress.speedBytesPerSec / 1e6).toFixed(2),
etaSeconds: progress.etaSeconds, // undefined when speed === 0
});
});
Events
All events are dispatched synchronously via the
EventBus. Handlers are called in registration order.
Use bus.on() / bus.off().
| Event type | Payload fields | When fired |
|---|---|---|
| `session:created` | `session` | After session is initialised in the engine |
| `session:started` | `session` | All chunks queued, uploading begins |
| `session:paused` | `session` | `engine.pause(id)` called |
| `session:resumed` | `session` | `engine.resumeScheduler(id)` called |
| `session:reconciling` | `session` | Resume started; verifying remote provider state |
| `session:done` | `session` | All chunks confirmed, transfer complete |
| `session:failed` | `session`, `error` | Unrecoverable failure (chunk exhausted budget) |
| `session:cancelled` | `session` | User called cancel before completion |
| `chunk:started` | `session`, `chunk` | Chunk upload begins |
| `chunk:done` | `session`, `chunk` | Chunk confirmed by provider |
| `chunk:failed` | `session`, `chunk`, `error`, `willRetry` | Chunk attempt failed |
| `chunk:fatal` | `session`, `chunk`, `error` | Chunk exhausted retry budget |
| `progress` | `progress` | Throttled live progress snapshot |
| `log` | `level`, `message`, `data?` | Internal diagnostic messages |
Unsubscribing
const handler = ({ progress }) => console.log(progress.percent);
bus.on('progress', handler);
// ...later...
bus.off('progress', handler);
Session Stores
A session store (ISessionStore) persists sessions so they
survive process restarts. Two implementations are included.
MemorySessionStore (testing only)
In-process only. Sessions are lost if the process exits. Suitable for
unit tests and short-lived CLI tools. For production use
FileSessionStore instead.
import { MemorySessionStore } from '@transferx/core';
const store = new MemorySessionStore();
FileSessionStore
Writes each session as a JSON file in a directory. Writes are atomic:
data is first written to a .tmp file, then renamed — so a
crash mid-write cannot corrupt an existing session.
import { FileSessionStore } from '@transferx/core';
const store = new FileSessionStore('./.transferx-sessions');
// On restart — call restoreAllSessions() to resume any interrupted uploads
import { restoreAllSessions } from '@transferx/sdk';
const { resuming, skipped } = await restoreAllSessions(store, engine, { maxConcurrent: 4 });
console.log(`Resuming ${resuming.length} session(s), ${skipped.length} already done.`);
ISessionStore interface
interface ISessionStore {
save(session: TransferSession): Promise<void>;
load(id: string): Promise<TransferSession | undefined>;
delete(id: string): Promise<void>;
listAll(): Promise<TransferSession[]>;
}
Factory Functions
The engine factories (createB2Engine,
createS3Engine, createR2Engine) share the same
return shape (EngineHandle) and the same required
store plus optional config,
fileStatFn, and onCompleted options. They
wire together the adapter, UploadEngine,
EventBus, and a session store so you never touch the
low-level classes.
createB2Engine(opts)
import { createB2Engine } from '@transferx/sdk';
const engine = createB2Engine({
b2: {
applicationKeyId: '...', // B2 keyID
applicationKey: '...', // B2 secret
bucketId: '...', // target bucket
},
store: ISessionStore, // required — use FileSessionStore for production
config?: Partial<EngineConfig>,
onCompleted?: (meta: CompletedUploadMeta) => Promise<void>,
fileStatFn?: (path: string) => Promise<{ mtimeMs: number }>,
});
createS3Engine(opts)
import { createS3Engine } from '@transferx/sdk';
const engine = createS3Engine({
s3: {
bucket: 'my-bucket',
region: 'us-east-1',
credentials: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
},
},
store: ISessionStore, // required
config?: Partial<EngineConfig>,
onCompleted?: (meta: CompletedUploadMeta) => Promise<void>,
fileStatFn?: (path: string) => Promise<{ mtimeMs: number }>,
});
createR2Engine(opts)
import { createR2Engine } from '@transferx/sdk';
const engine = createR2Engine({
r2: {
accountId: process.env.CF_ACCOUNT_ID!,
bucket: 'my-r2-bucket',
credentials: {
accessKeyId: process.env.R2_ACCESS_KEY_ID!,
secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
},
},
store: ISessionStore, // required
config?: Partial<EngineConfig>,
onCompleted?: (meta: CompletedUploadMeta) => Promise<void>,
fileStatFn?: (path: string) => Promise<{ mtimeMs: number }>,
});
Return value — EngineHandle
{
upload(session: TransferSession): Promise<TransferSession>;
resumeSession(sessionId: string): Promise<TransferSession>;
pause(sessionId: string): void;
resumeScheduler(sessionId: string): void;
cancel(sessionId: string): Promise<void>;
getSession(sessionId: string): Promise<TransferSession | null>;
bus: EventBus;
config: EngineConfig;
store: ISessionStore;
}
fileStatFn enables stale-file detection: if the file's mtimeMs has changed
since the session was created, resumeSession() will
reject rather than silently upload stale data. Defaults to
fs.stat when path is set in the session's
FileDescriptor.
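A custom fileStatFn only needs to resolve an object with mtimeMs. The sketch below simply wraps fs.promises.stat, which the docs say is the default behaviour; treat it as a configuration illustration rather than the SDK's actual implementation:

```typescript
import { promises as fsp } from 'node:fs';
import { createB2Engine, FileSessionStore } from '@transferx/sdk';

const engine = createB2Engine({
  b2: { /* ... */ },
  store: new FileSessionStore('./.transferx-sessions'),
  // Stale-file guard: resumeSession() compares this mtimeMs against the one
  // recorded at session creation and rejects on mismatch.
  fileStatFn: async (path: string) => {
    const stat = await fsp.stat(path);
    return { mtimeMs: stat.mtimeMs };
  },
});
```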
makeUploadSession()
import { makeUploadSession } from '@transferx/sdk';
const session = makeUploadSession(
id: string, // unique, stable identifier
file: FileDescriptor, // { name, size, mimeType, path? }
targetKey: string, // object key in the bucket
config: EngineConfig, // from engine.config
);
FileDescriptor
| Field | Type | Required | Description |
|---|---|---|---|
| `name` | `string` | ✓ | Filename, used in upload headers |
| `size` | `number` | ✓ | Total byte size |
| `mimeType` | `string` | ✓ | Content-Type for upload |
| `path` | `string` | | Absolute local path (optional; required for the Node.js reader) |
UploadEngine
Low-level orchestrator. Use createB2Engine(),
createS3Engine(), or createR2Engine() unless
you need a custom adapter.
import { UploadEngine } from '@transferx/core';
const engine = new UploadEngine({
adapter: myAdapter,
store: myStore,
bus: myBus,
config?: Partial<EngineConfig>,
readerFactory?: (file: FileDescriptor) => IChunkReader,
});
// Methods
await engine.upload(session); // upload from scratch — returns final TransferSession
await engine.resumeSession(sessionId); // crash recovery — reconcile + re-upload pending chunks
engine.pause(sessionId); // pause in-process scheduler
engine.resumeScheduler(sessionId); // resume in-process scheduler
await engine.cancel(sessionId); // cancel active or persisted session
await engine.getSession(sessionId); // load session (null if not found)
EventBus
import { EventBus } from '@transferx/core';
const bus = new EventBus();
bus.on('session:done', handler); // subscribe
bus.off('session:done', handler); // unsubscribe
bus.emit(event); // dispatch (used internally)
bus.clear('progress'); // remove all handlers for one type
bus.clearAll(); // remove all handlers
If a handler throws, the error is caught and a log event
is emitted instead — other handlers still run.
EngineConfig
| Field | Type | Default | Description |
|---|---|---|---|
| `chunkSize` | `number` | `10485760` | Bytes per chunk (10 MiB) |
| `checksumVerify` | `boolean` | `true` | SHA-256 per chunk before upload |
| `progressIntervalMs` | `number` | `200` | Max frequency of progress events |
| `retry.maxAttempts` | `number` | `5` | Total attempts per chunk (incl. first) |
| `retry.baseDelayMs` | `number` | `500` | Base exponential backoff delay |
| `retry.maxDelayMs` | `number` | `30000` | Backoff ceiling |
| `retry.jitterMs` | `number` | `500` | Max random jitter per delay |
| `concurrency.initial` | `number` | `4` | Starting parallel chunk count |
| `concurrency.min` | `number` | `1` | Adaptive lower bound |
| `concurrency.max` | `number` | `16` | Adaptive upper bound |
| `concurrency.adaptive` | `boolean` | `true` | Enable adaptive concurrency tuning |
Errors
All transfer errors are instances of TransferError. The
category field controls retry behaviour.
| Category | Retryable | Typical cause |
|---|---|---|
| `network` | ✓ | Connection reset, DNS failure, timeout |
| `rateLimit` | ✓ | HTTP 429 — respects Retry-After |
| `serverError` | ✓ | HTTP 500–599 |
| `checksum` | ✓ | Local SHA-256 mismatch before upload |
| `unknown` | ✓ | Conservative; retried by default |
| `clientError` | ✗ | HTTP 400–499 (except 429) — our bug |
| `auth` | ✗ | HTTP 401/403 — needs credential refresh |
| `cancelled` | ✗ | User-initiated cancellation |
| `duplicateUpload` | ✗ | Upload already active for the same `targetKey` — call `resumeSession()` instead |
| `fatal` | ✗ | Unrecoverable internal state |
import { TransferError, ErrorCategory } from '@transferx/core';
bus.on('session:failed', ({ session, error }) => {
if (error.category === 'auth') {
// Refresh credentials and recreate the engine
}
});
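For app-level logging it can help to mirror the retryability column of the table above as a lookup. The engine applies this logic internally; the helper below is a sketch for your own code, not an SDK export:

```typescript
// Retryable TransferError categories, per the table above.
const RETRYABLE = new Set([
  'network', 'rateLimit', 'serverError', 'checksum', 'unknown',
]);

export function isRetryableCategory(category: string): boolean {
  return RETRYABLE.has(category);
}

// Usage sketch, e.g. inside a chunk:failed handler:
// bus.on('chunk:failed', ({ chunk, error, willRetry }) => {
//   console.warn(`chunk ${chunk.index}: ${error.category} (retry=${willRetry})`);
// });
```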
Types
import type {
// Session
TransferSession,
SessionState,
FileDescriptor,
// Chunks
ChunkMeta,
ChunkState,
// Config
EngineConfig,
RetryPolicy,
ConcurrencyPolicy,
// Events
TransferEvent,
TransferProgress,
IEventBus,
// Errors
TransferError,
ErrorCategory,
// Stores
ISessionStore,
// Adapter
ITransferAdapter,
ChunkUploadResult,
} from '@transferx/core';
// SDK-level types (from '@transferx/sdk')
import type { CompletedUploadMeta, EngineHandle } from '@transferx/sdk';
// SDK-level helpers
import { makeSessionId, restoreAllSessions } from '@transferx/sdk';
S3 Adapter
@transferx/adapter-s3 provides a production-ready adapter
for AWS S3 multipart uploads built on
@aws-sdk/client-s3 v3.
Installation
npm install @transferx/adapter-s3
Options
| Option | Type | Required | Description |
|---|---|---|---|
| `bucket` | `string` | Yes | Target S3 bucket name. |
| `region` | `string` | No | AWS region (default: `us-east-1`). |
| `credentials.accessKeyId` | `string` | Yes | AWS access key ID. |
| `credentials.secretAccessKey` | `string` | Yes | AWS secret access key. |
| `credentials.sessionToken` | `string` | No | Temporary session token (STS). |
| `endpoint` | `string` | No | Custom endpoint URL (e.g. S3-compatible stores). |
| `forcePathStyle` | `boolean` | No | Use path-style addressing instead of virtual-hosted. |
| `timeoutMs` | `number` | No | Per-request timeout in milliseconds (default: `120000`). |
| `onLog` | `LogCallback` | No | Structured log callback for debug / info / warn / error events. |
Usage (recommended — via SDK factory)
import { createS3Engine, makeUploadSession } from '@transferx/sdk';
import * as fs from 'fs';
const { upload, bus, config, store } = createS3Engine({
s3: {
bucket: 'my-bucket',
region: 'us-east-1',
credentials: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
},
},
});
const stat = fs.statSync('./video.mp4');
const session = makeUploadSession(
'upload-001',
{ name: 'video.mp4', size: stat.size, mimeType: 'video/mp4', path: './video.mp4' },
'uploads/video.mp4',
config,
);
await store.save(session);
const result = await upload(session);
// Always check result.state — 'done' | 'failed' | 'cancelled'
if (result.state !== 'done') throw new Error('Upload failed: ' + result.state);
Notes
- ETags are stored verbatim (including surrounding double-quotes) as `providerToken` on each chunk. This is required by `CompleteMultipartUpload`.
- Resume — `getRemoteState()` calls `ListParts` and follows pagination via `IsTruncated` / `NextPartNumberMarker`, so interrupted uploads of any size resume correctly.
- Retries — SDK-level retries are disabled (`maxAttempts: 1`). The TransferX engine's retry loop owns all back-off logic.
R2 Adapter
R2Adapter (also exported from
@transferx/adapter-s3) wraps S3Adapter with
Cloudflare R2 defaults so you only need to supply your
account ID, bucket, and credentials.
Installation
npm install @transferx/adapter-s3
Options
| Option | Type | Required | Description |
|---|---|---|---|
| `accountId` | `string` | Yes | Cloudflare account ID — used to build the R2 endpoint URL. |
| `bucket` | `string` | Yes | R2 bucket name. |
| `credentials.accessKeyId` | `string` | Yes | R2 API token access key ID. |
| `credentials.secretAccessKey` | `string` | Yes | R2 API token secret access key. |
| `timeoutMs` | `number` | No | Per-request timeout in milliseconds (default: `120000`). |
| `onLog` | `LogCallback` | No | Structured log callback. |
endpoint,
region ("auto"), and
forcePathStyle are set automatically. Do not pass them
directly when using R2Adapter.
Usage (recommended — via SDK factory)
import { createR2Engine, makeUploadSession } from '@transferx/sdk';
import * as fs from 'fs';
const { upload, bus, config, store } = createR2Engine({
r2: {
accountId: process.env.CF_ACCOUNT_ID!,
bucket: 'my-r2-bucket',
credentials: {
accessKeyId: process.env.R2_ACCESS_KEY_ID!,
secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
},
},
});
const stat = fs.statSync('./video.mp4');
const session = makeUploadSession(
'upload-001',
{ name: 'video.mp4', size: stat.size, mimeType: 'video/mp4', path: './video.mp4' },
'uploads/video.mp4',
config,
);
await store.save(session);
const result = await upload(session);
// Always check result.state — 'done' | 'failed' | 'cancelled'
if (result.state !== 'done') throw new Error('Upload failed: ' + result.state);
Notes
- R2 ignores the `region` value; the adapter passes `"auto"` to satisfy the AWS SDK requirement.
- The endpoint is constructed as `https://<accountId>.r2.cloudflarestorage.com`.
- All resume, retry, and timeout behaviour is identical to `S3Adapter` — see the S3 Adapter section above.
makeSessionId()
Generate a deterministic, crash-safe session ID from the upload's
inputs. The same filePath, targetKey, and
fileSizeBytes always produce the same 24-hex ID, so a
restart with the same file will resolve to the existing persisted
session automatically.
import { makeSessionId } from '@transferx/sdk';
const id = makeSessionId(filePath, targetKey, fileSizeBytes);
// e.g. "a3f2c8e1b0d94f2a67c19e4b" — stable 24-hex SHA-256-based ID
If you call engine.upload() twice with a session whose
id already exists in the store (same file, same target),
the engine throws a duplicateUpload error instead of
silently overwriting. Call resumeSession(id) to recover.
Signature
| Parameter | Type | Description |
|---|---|---|
| `filePath` | `string` | Absolute local path to the source file |
| `targetKey` | `string` | Object key on the storage provider |
| `fileSizeBytes` | `number` | File size in bytes (from `stat.size`) |
Returns a string — 24 hex characters (96 bits,
SHA-256-derived).
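The shape of the scheme is easy to reproduce: hash the three inputs and keep 24 hex characters. The SDK's exact input encoding is internal; the NUL separator below is an assumption borrowed from the downloader's documented scheme, so this sketch will not necessarily match the IDs makeSessionId() produces:

```typescript
import { createHash } from 'node:crypto';

// Illustrative re-implementation: deterministic, 24 hex chars (96 bits)
// from a SHA-256 digest. The real separator/encoding may differ.
export function makeSessionIdSketch(
  filePath: string,
  targetKey: string,
  fileSizeBytes: number,
): string {
  return createHash('sha256')
    .update(`${filePath}\0${targetKey}\0${fileSizeBytes}`)
    .digest('hex')
    .slice(0, 24);
}
```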
onCompleted Callback & CompletedUploadMeta
Supply onCompleted to any factory to receive a clean
structured payload after every successful upload — no need to parse
raw bus events. Errors thrown inside this callback are caught and
re-emitted as log events; they do not
fail or cancel the upload.
const engine = createB2Engine({
b2: { /* ... */ },
store: new FileSessionStore('.sessions'),
onCompleted: async (meta) => {
await db.media.create({
storageKey: meta.remoteKey,
fileSizeBytes: meta.fileSizeBytes,
durationMs: meta.durationMs,
manifestChecksum: meta.manifestChecksum,
});
},
});
CompletedUploadMeta fields
| Field | Type | Description |
|---|---|---|
| `sessionId` | `string` | Stable session ID |
| `localPath` | `string \| undefined` | Absolute local file path |
| `remoteKey` | `string` | Object key on the storage provider |
| `fileSizeBytes` | `number` | File size in bytes |
| `mimeType` | `string` | MIME type |
| `createdAt` | `number` | Epoch ms when session was created |
| `completedAt` | `number` | Epoch ms when provider confirmed all parts |
| `durationMs` | `number` | `completedAt − createdAt` in milliseconds |
| `chunkCount` | `number` | Total parts uploaded |
| `manifestChecksum` | `string \| undefined` | SHA-256 of sorted per-chunk checksums; `undefined` if `checksumVerify` is off |
| `session` | `TransferSession` | Full session for advanced use |
restoreAllSessions()
Call this once at startup to recover any uploads that were interrupted
by a process crash or restart. It scans the store, skips
done / cancelled sessions, and resumes every
other session via the engine — throttled by
maxConcurrent.
import {
createB2Engine, FileSessionStore, restoreAllSessions
} from '@transferx/sdk';
const store = new FileSessionStore('./.transferx-sessions');
const engine = createB2Engine({ b2: { /* ... */ }, store });
const { resuming, skipped } = await restoreAllSessions(store, engine, {
maxConcurrent: 4, // at most 4 resumeSession() calls in-flight at once
});
console.log(`Resuming ${resuming.length}, ${skipped.length} already done.`);
Parameters
| Parameter | Type | Description |
|---|---|---|
| `store` | `ISessionStore` | Store to scan |
| `engine` | `EngineHandle` | Engine to resume sessions with |
| `options.maxConcurrent` | `number` | Max parallel resumes. Default: `4` |
Returns { resuming: string[]; skipped: string[] }.
Custom Adapter
Implement ITransferAdapter to use any S3-compatible or
proprietary storage backend.
import type {
ITransferAdapter, TransferSession,
ChunkMeta, ChunkUploadResult,
} from '@transferx/core';
export class S3Adapter implements ITransferAdapter {
/** Called once before any chunk. Return a stable upload ID. */
async initTransfer(session: TransferSession): Promise<string> {
const { UploadId } = await this.s3.createMultipartUpload({
Bucket: this.bucket,
Key: session.targetKey,
});
return UploadId!; // AWS SDK v3 responses use PascalCase field names
}
/** Called for each chunk. Return a providerToken (etag / part ID). */
async uploadChunk(
session: TransferSession,
chunk: ChunkMeta,
data: Uint8Array,
sha256Hex: string,
): Promise<ChunkUploadResult> {
const res = await this.s3.uploadPart({
Bucket: this.bucket,
Key: session.targetKey,
UploadId: session.providerSessionId!,
PartNumber: chunk.index + 1, // S3 is 1-indexed
Body: data,
});
return { providerToken: res.ETag! };
}
/** Called after all chunks succeed. Finalise the multipart upload. */
async completeTransfer(
session: TransferSession,
chunks: ChunkMeta[],
): Promise<void> {
await this.s3.completeMultipartUpload({
Bucket: this.bucket,
Key: session.targetKey,
UploadId: session.providerSessionId!,
MultipartUpload: {
Parts: chunks.map(c => ({
PartNumber: c.index + 1,
ETag: c.providerToken!,
})),
},
});
}
/** Best-effort cleanup — must never throw. */
async abortTransfer(session: TransferSession): Promise<void> {
await this.s3.abortMultipartUpload({
Bucket: this.bucket,
Key: session.targetKey,
UploadId: session.providerSessionId!,
}).catch(() => {});
}
}
Crash Resume
Use FileSessionStore and save the session before starting
the upload. On restart, call restoreAllSessions() — it
scans the store and resumes every non-terminal session, throttled to a
configurable concurrency cap.
// ─── startup.ts ───────────────────────────────────────────
import { createB2Engine, FileSessionStore, restoreAllSessions } from '@transferx/sdk';
const store = new FileSessionStore('./.transferx-sessions');
const engine = createB2Engine({ b2: { /* ... */ }, store });
// Resume any incomplete sessions — throttled to 4 concurrent
const { resuming, skipped } = await restoreAllSessions(store, engine, { maxConcurrent: 4 });
console.log(`Resuming ${resuming.length} session(s), ${skipped.length} already done.`);
resumeSession() queries the provider for already-uploaded
parts (if the adapter supports getRemoteState()), marks
confirmed chunks as done, and re-uploads only what remains. Orphaned
in-flight chunks (state uploading at crash time) are
automatically reset to pending.
Downloads — Overview
@transferx/downloader is an IDM-class parallel HTTP
download engine. It splits large files into equal-size chunks and
fetches them concurrently using HTTP Range requests, then
stitches them back into a single output file using pwrite-based
streaming writes.
| Feature | Details |
|---|---|
| Parallel connections | Up to 8 simultaneous Range requests (configurable) |
| Crash resume | JSON session persisted to disk; resumes from the last complete chunk boundary |
| Retry | Per-chunk exponential back-off with full jitter; 4xx errors throw immediately |
| Progress | EMA-smoothed `bytesPerSec`, `percent`, `eta`, throttled to 250 ms |
| Adaptive concurrency | Sliding-window error-rate controller auto-adjusts connection ceiling |
| Server fallback | Falls back to single-stream if server does not support Range |
| Stale detection | Validates ETag → Last-Modified → Content-Length before resuming |
Downloads — Quick Start
import { createDownloader } from "@transferx/sdk";
const task = createDownloader({
url: "https://example.com/large-file.zip",
outputPath: "/tmp/large-file.zip",
});
task.on("progress", (p) => {
const pct = p.percent?.toFixed(1) ?? "?";
const mbps = (p.speedBytesPerSec / 1024 / 1024).toFixed(1);
console.log(`${pct}% ${mbps} MB/s`);
});
await task.start();
console.log("Done!");
Pause · Resume · Cancel
const task = createDownloader({ url, outputPath });
const done = task.start(); // returns Promise<DownloadSession>
task.pause(); // queues drain — in-flight chunks finish naturally
task.resume(); // re-activates scheduling
await task.cancel(); // persists "cancelled" session; resumable later
Downloads — Crash Resume
Session data is stored as JSON at
<storeDir>/<sessionId>.json where:
- `storeDir` defaults to `~/.transferx/sessions`
- `sessionId` is derived as `sha256(url + "\0" + outputPath).slice(0, 16)` (hex)
On next launch with the same url /
outputPath:
- The session file is loaded and staleness is checked against the server (ETag → Last-Modified → Content-Length).
- If the server resource has changed, a `staleSession` error is thrown.
- Otherwise, only chunks with status `pending` or `failed` are re-downloaded.
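The documented ID derivation is straightforward to reproduce, which is handy for locating a session file on disk:

```typescript
import { createHash } from 'node:crypto';

// sessionId = sha256(url + "\0" + outputPath).slice(0, 16), hex, as
// documented above. Useful for computing <storeDir>/<sessionId>.json.
export function downloadSessionId(url: string, outputPath: string): string {
  return createHash('sha256')
    .update(url + '\0' + outputPath)
    .digest('hex')
    .slice(0, 16);
}
```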
Downloads — Configuration
const engine = new DownloadEngine({
config: {
concurrency: 4, // parallel connections (default: 8)
chunkSize: 8 << 20, // 8 MiB chunk size (default: 4 MiB)
progressIntervalMs: 500, // progress event throttle (default: 250 ms)
headers: { Authorization: "Bearer <token>" },
retry: {
maxAttempts: 5, // (default: 5)
baseDelayMs: 500, // (default: 500 ms)
maxDelayMs: 30_000, // (default: 30 s)
jitterMs: 200, // (default: 200 ms)
},
},
storeDir: "/var/lib/myapp/dl-sessions",
});
Downloads — API Reference
createDownloader(opts)
| Field | Type | Required | Description |
|---|---|---|---|
| `url` | `string` | ✓ | HTTP(S) URL to download |
| `outputPath` | `string` | ✓ | Absolute path for the output file |
| `config` | `Partial<DownloadConfig>` | | Override any engine defaults |
| `storeDir` | `string` | | Session store directory |
Returns a DownloadTask.
DownloadTask
| Method / Property | Description |
|---|---|
| `start(): Promise<DownloadSession>` | Begin (or resume) the download. Rejects with `DownloadError` on failure. |
| `pause()` | Stop dispatching new chunks; in-flight chunks finish naturally. |
| `resume()` | Re-enable dispatching after a pause. |
| `cancel(): Promise<void>` | Abort and persist a resumable cancelled session. |
| `on(event, handler)` | Subscribe to `progress`, `completed`, `error`, `log`. |
| `off(event, handler)` | Unsubscribe. |
| `getSession()` | Returns the current in-memory `DownloadSession` (or `null` before `start()`). |
| `id` | The 16-hex-char session ID. |
DownloadProgress
| Field | Type | Notes |
|---|---|---|
| `sessionId` | `string` | |
| `bytesDownloaded` | `number` | Bytes written so far |
| `totalBytes` | `number \| null` | `null` if no Content-Length |
| `percent` | `number \| null` | 0–100; `null` when `totalBytes` is `null` |
| `speedBytesPerSec` | `number` | EMA-smoothed; 0 when stalled |
| `eta` | `number \| null` | Remaining seconds; `null` when unknown |
| `chunksTotal` | `number` | |
| `chunksDone` | `number` | |
| `chunksFailed` | `number` | |
Error Categories
| Category | Cause | Retried? |
|---|---|---|
| `network` | DNS / TCP failure | Yes |
| `timeout` | Request timed out | Yes |
| `serverError` | 5xx response | Yes |
| `clientError` | 4xx response | No |
| `rangeError` | 416 Range Not Satisfiable | No |
| `auth` | 401 / 403 | No |
| `notFound` | 404 | No |
| `disk` | I/O failure | No |
| `staleSession` | Server resource changed since session was saved | No |
| `unknown` | Unexpected error | No |
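When start() rejects, the table above determines whether another attempt is worthwhile. The helper below mirrors its retryable rows for app-level decisions; it is a sketch for your own code, not a downloader export:

```typescript
// Categories the downloader retries per chunk, per the table above. If a
// download still fails overall, these are the ones worth re-attempting later.
const RETRYABLE_DOWNLOAD = new Set(['network', 'timeout', 'serverError']);

export function shouldRestartLater(category: string): boolean {
  return RETRYABLE_DOWNLOAD.has(category);
}

// Usage sketch:
// try { await task.start(); }
// catch (err: any) {
//   if (err.category === 'staleSession') { /* delete session, start fresh */ }
//   else if (shouldRestartLater(err.category)) { /* schedule another attempt */ }
// }
```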
Architecture
Data flow for a single chunk
UploadEngine.upload()
└─ Chunker.split(file) → ChunkMeta[]
└─ adapter.initTransfer(session) → providerSessionId
└─ Scheduler.push(chunkTask) × N
└─ Chunker.read(chunk) → Uint8Array
└─ sha256(data) → hex
└─ adapter.uploadChunk(session, chunk, data, sha256)
→ ChunkUploadResult { providerToken }
└─ [on failure] RetryEngine.shouldRetry() → delay
└─ Scheduler.pushFront(chunkTask) ← high-priority
└─ completeTransfer() when all chunks done
TransferX · MIT License · 313 tests passing