
@fetchkit/chaos-proxy
Proxy and CLI for injecting network chaos (latency, failures, drops, rate-limiting) into API requests. Configurable via YAML.
Chaos Proxy is a proxy server for injecting configurable network chaos (latency, failures, connection drops, rate-limiting, etc.) into any HTTP or HTTPS traffic. Use it via CLI or programmatically to apply ordered middleware (global and per-route) and forward requests to your target server, preserving method, path, headers, query, and body.
Configure chaos in a chaos.yaml file, point your client at the proxy, and requests (e.g., GET /api/users) are forwarded to your target server.

Install:
npm install chaos-proxy
npx chaos-proxy --config chaos.yaml [--verbose]
- --config <path>: YAML config file (default ./chaos.yaml)
- --verbose: print loaded middlewares and request logs

Programmatic usage:
import { loadConfig, startServer, registerMiddleware } from "chaos-proxy";
// Register custom middleware before starting the server
registerMiddleware('customDelay', (opts) => async (ctx, next) => { await new Promise((r) => setTimeout(r, opts.ms)); await next(); });
const cfg = loadConfig("chaos.yaml");
const server = await startServer(cfg, { port: 5001 });
// Send requests to http://localhost:5001
// Shutdown the server when done
await server.close();
Chaos Proxy supports full runtime reloads without process restart.
Send POST /reload with an application/json body (same shape as chaos.yaml):

curl -X POST http://localhost:5000/reload \
-H "Content-Type: application/json" \
-d '{
"target": "http://localhost:4000",
"port": 5000,
"global": [
{ "latency": { "ms": 120 } },
{ "failRandomly": { "rate": 0.05, "status": 503 } }
],
"routes": {
"GET /users/:id": [
{ "failNth": { "n": 3, "status": 500 } }
]
}
}'
{
"ok": true,
"version": 2,
"reloadMs": 4
}
Error responses:
- 400: invalid config/payload (runtime state is unchanged)
- 409: reload already in progress
- 415: unsupported content type

An error response looks like:
{
"ok": false,
"error": "Config must include a string \"target\" field",
"version": 2,
"reloadMs": 1
}
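Since a 400 response leaves runtime state unchanged, a client may want to validate a payload before POSTing it. Here is a minimal pre-flight sketch; only the string-target rule is documented above, and the other checks are assumptions:

```javascript
// Pre-flight validation mirroring the documented 400 error.
// Only the `target` rule is documented; the port/global checks are guesses.
function validateReloadPayload(payload) {
  const errors = [];
  if (typeof payload.target !== "string") {
    errors.push('Config must include a string "target" field');
  }
  if (payload.port !== undefined && !Number.isInteger(payload.port)) {
    errors.push('"port" must be an integer');
  }
  if (payload.global !== undefined && !Array.isArray(payload.global)) {
    errors.push('"global" must be an array of middleware nodes');
  }
  return { ok: errors.length === 0, errors };
}
```

Rejecting a bad payload locally avoids a round trip and keeps the running proxy untouched.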
startServer(...) returns a server object with:
- reloadConfig(newConfig)
- getRuntimeVersion()

Configuration reference (chaos.yaml):
- target (string, required): Upstream API base URL
- port (number, optional): Proxy listen port (default 5000)
- global: Ordered array of middleware nodes applied to every request
- routes: Map of path or method+path to an ordered array of middleware nodes

Single-option middlewares accept a shorthand value (e.g., latency: 100).

Example:
target: "http://localhost:4000"
port: 5000
global:
- latency: 100
- failRandomly:
rate: 0.1
status: 503
- bodyTransform:
request: "(body, ctx) => { body.foo = 'bar'; return body; }"
response: "(body, ctx) => { body.transformed = true; return body; }"
routes:
"GET /users/:id":
- failRandomly:
rate: 0.2
status: 503
"/users/:id/orders":
- failNth:
n: 3
status: 500
Chaos Proxy uses Koa Router for path matching, supporting named parameters (e.g., /users/:id), wildcards (e.g., *), and regex routes.
- "GET /api/*" matches any GET request under /api/.
- "GET /users/:id" matches GET requests like /users/123.

Rule inheritance:
- If a method is specified (e.g., GET /path), the rule only applies to that method. If no method is specified (e.g., /path), the rule applies to all methods for that path.
- More specific routes take precedence (e.g., /users/:id over /users/*).

Built-in middlewares:
- latency(ms) — delay every request
- latencyRange(minMs, maxMs, seed?) — random delay (deterministic when seed is set)
- fail({ status, body }) — always fail
- failRandomly({ rate, status, body, seed? }) — fail with probability (deterministic when seed is set)
- failNth({ n, status, body }) — fail every nth request
- dropConnection({ prob, seed? }) — randomly drop the connection (deterministic when seed is set)
- rateLimit({ limit, windowMs, key }) — rate limiting (by IP, header, or custom key)
- cors({ origin, methods, headers }) — enable and configure CORS headers. All options are strings.
- throttle({ rate, chunkSize, burst, key }) — throttle bandwidth per request to a specified rate (bytes per second), with optional burst capacity and chunk size. The key option allows per-client throttling. (Implemented natively, not using koa-throttle.)
- bodyTransform({ request?, response? }) — parse and mutate request and/or response bodies with custom functions.
- headerTransform({ request?, response? }) — parse and mutate request and/or response headers with custom functions.

For randomness-based middlewares (latencyRange, failRandomly, dropConnection), you can set an optional seed to make behavior reproducible across runs (useful for CI and local debugging).
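To illustrate how a seed makes the randomness-based middlewares reproducible, here is a small standalone sketch using the mulberry32 PRNG. The library's actual generator is not documented; this only demonstrates the idea:

```javascript
// mulberry32: a tiny deterministic PRNG, seeded with a 32-bit integer.
function mulberry32(seed) {
  let a = seed >>> 0;
  return function () {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Seeded failRandomly-style decision: returns true when a request should
// fail. Same seed + same rate => identical fail/pass sequence on every run.
function makeFailRandomly(rate, seed) {
  const rand = mulberry32(seed);
  return () => rand() < rate;
}
```

With a fixed seed in CI, a flaky-looking test failure can be replayed locally with the exact same sequence of injected errors.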
The rateLimit middleware restricts how many requests a client can make in a given time window. It uses koa-ratelimit under the hood.
Options:
- limit: Maximum number of requests allowed per window (e.g., 100)
- windowMs: Time window in milliseconds (e.g., 60000 for 1 minute)
- key: How to identify clients (default is IP, but can be a header name or a custom function)

How it works:
- Once a client (identified by key) exceeds limit, further requests from that key receive a 429 Too Many Requests response until the window resets.
- Clients can be identified by IP (the default), by a header (e.g., Authorization), or by any custom logic.

Example:
global:
- rateLimit:
limit: 100
windowMs: 60000
key: "Authorization"
This configuration limits clients to 100 requests per minute, identified by their Authorization header.
This helps test client retry logic under rate-limited conditions.
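The windowed counting described above can be sketched as a standalone fixed-window limiter. The real middleware delegates to koa-ratelimit, so this is only an illustration of the behavior:

```javascript
// Fixed-window rate limiter sketch. `now` is injectable so the window
// logic can be tested without real time passing.
function makeRateLimiter(limit, windowMs, now = Date.now) {
  const windows = new Map(); // key -> { start, count }
  return function allow(key) {
    const t = now();
    const w = windows.get(key);
    if (!w || t - w.start >= windowMs) {
      windows.set(key, { start: t, count: 1 }); // fresh window for this key
      return true;
    }
    w.count += 1;
    return w.count <= limit; // false would map to a 429 response
  };
}
```

Each key gets its own counter, so keying by the Authorization header (as in the example above) limits each client independently.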
The cors middleware enables Cross-Origin Resource Sharing (CORS) for your proxied API. By default, it allows all origins (*), methods (GET,POST,PUT,DELETE,OPTIONS), and headers (Content-Type,Authorization). You can customize these by providing string values:
Options:
- origin: Allowed origin(s) as a string (e.g., "https://example.com").
- methods: Allowed HTTP methods as a comma-separated string (e.g., "GET,POST").
- headers: Allowed headers as a comma-separated string (e.g., "Authorization,Content-Type").

Example:
global:
- cors:
origin: "https://example.com"
methods: "GET,POST"
headers: "Authorization,Content-Type"
This configuration restricts CORS to requests from https://example.com using only GET and POST methods, and allows the Authorization and Content-Type headers.
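The way these string options map onto response headers can be sketched as follows, using the defaults stated above. The actual middleware likely sets additional headers (e.g., for preflight requests); this is illustrative only:

```javascript
// Translate the documented cors options (all strings) into response
// headers, falling back to the documented defaults.
function corsHeaders(opts = {}) {
  return {
    "Access-Control-Allow-Origin": opts.origin ?? "*",
    "Access-Control-Allow-Methods": opts.methods ?? "GET,POST,PUT,DELETE,OPTIONS",
    "Access-Control-Allow-Headers": opts.headers ?? "Content-Type,Authorization",
  };
}
```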
The throttle middleware limits the bandwidth of responses to simulate slow network conditions.
Options:
- rate: The average rate (in bytes per second) to allow (e.g., 1024 for 1 KB/s).
- chunkSize: The size of each chunk to send (in bytes). Smaller chunks can simulate more granular throttling (default 16384).
- burst: The maximum burst size (in bytes) that can be sent at once (default 0, meaning no burst).
- key: How to identify clients for per-client throttling (default is IP, but can be a header name or a custom function).

Example:
global:
- throttle:
rate: 1024 # 1 KB/s
chunkSize: 512 # 512 bytes per chunk
burst: 2048 # allow bursts up to 2 KB
key: "Authorization"
This configuration throttles responses to an average of 1 KB/s, sending data in 512-byte chunks, with bursts up to 2 KB, identified by the Authorization header.
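The pacing arithmetic behind rate, chunkSize, and burst can be sketched as a pure function that computes when each chunk would be sent. This mirrors the documented options, not the middleware's internals:

```javascript
// Given a body size, rate (bytes/s), chunkSize, and burst, compute a send
// schedule: each entry says how many bytes go out and at what time (ms).
function chunkSchedule(totalBytes, { rate, chunkSize, burst = 0 }) {
  const schedule = [];
  let sent = 0;
  let at = 0;
  while (sent < totalBytes) {
    const size = Math.min(chunkSize, totalBytes - sent);
    schedule.push({ at, size });
    sent += size;
    // Bytes within the burst allowance go out immediately; the remainder
    // is paced at `rate` bytes per second.
    if (sent > burst) at = ((sent - burst) / rate) * 1000;
  }
  return schedule;
}
```

For the example above (rate 1024, chunkSize 512, burst 2048), the first 2 KB leaves immediately and the rest is paced at 1 KB/s.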
The bodyTransform middleware allows you to parse and mutate both the request and response bodies using custom transformation functions. You can specify a request and/or response transform, each as either a JavaScript function string (for YAML config) or a real function (for programmatic usage). Backward compatibility with the old transform key is removed—use the new object shape only.
How it works:
- The request body is parsed into ctx.request.body. If a request transform is provided, it is called with the parsed body and Koa context, and its return value replaces ctx.request.body.
- If a response transform is provided, it is called with the response body (ctx.body) and Koa context, and its return value replaces ctx.body.

Example (YAML):
global:
- bodyTransform:
request: "(body, ctx) => { body.foo = 'bar'; return body; }"
response: "(body, ctx) => { body.transformed = true; return body; }"
This configuration adds a foo: 'bar' property to every JSON request body and a transformed: true property to every JSON response body.
Note:
For maximum flexibility, the request and response options in bodyTransform can be specified as JavaScript function strings in your YAML config. This allows you to define custom transformation logic directly in the config file. Be aware that evaluating JS from config can introduce security and syntax risks. Use with care and only in trusted environments.
bodyTransform works by buffering request/response bodies so they can be mutated before forwarding/sending. This buffering behavior is an explicit tradeoff of enabling body transformation.
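The function-string form implies a compilation step that turns the config string into a callable. One minimal, illustrative way to do that (the library's actual evaluation and any sandboxing are not documented) is the Function constructor:

```javascript
// Illustrative only: compile a "(body, ctx) => ..." config string into a
// callable function. This executes arbitrary JS from config, which is
// exactly the risk the note above warns about.
function compileTransform(src) {
  return new Function(`return (${src});`)();
}

const transform = compileTransform(
  "(body, ctx) => { body.foo = 'bar'; return body; }"
);
```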
If you call startServer programmatically, you can also pass real functions instead of strings:
import { startServer, bodyTransform } from 'chaos-proxy';
startServer({
target: 'http://localhost:4000',
port: 5000,
global: [
bodyTransform({
request: (body, ctx) => {
body.foo = 'bar';
return body;
},
response: (body, ctx) => {
body.transformed = true;
return body;
}
})
]
});
The headerTransform middleware allows you to parse and mutate both the request and response headers using custom transformation functions. You can specify a request and/or response transform, each as either a JavaScript function string (for YAML config) or a real function (for programmatic usage).
How it works:
- If a request transform is provided, it is called with a copy of the request headers and Koa context, and its return value replaces ctx.request.headers.
- If a response transform is provided, it is called with a copy of the response headers and Koa context, and its return value replaces ctx.response.headers.

Example (YAML):
global:
- headerTransform:
request: "(headers, ctx) => { headers['x-added'] = 'foo'; return headers; }"
response: "(headers, ctx) => { headers['x-powered-by'] = 'chaos'; return headers; }"
This configuration adds an x-added: foo header to every request and an x-powered-by: chaos header to every response.
Note:
For maximum flexibility, the request and response options in headerTransform can be specified as JavaScript function strings in your YAML config. This allows you to define custom transformation logic directly in the config file. Be aware that evaluating JS from config can introduce security and syntax risks. Use with care and only in trusted environments.
If you call startServer programmatically, you can also pass real functions instead of strings:
import { startServer, headerTransform } from 'chaos-proxy';
startServer({
target: 'http://localhost:4000',
port: 5000,
global: [
headerTransform({
request: (headers, ctx) => {
headers['x-added'] = 'foo';
return headers;
},
response: (headers, ctx) => {
headers['x-powered-by'] = 'chaos';
return headers;
}
})
]
});
The presets/ directory contains ready-to-use YAML configurations for common chaos scenarios. Each preset applies its middleware stack globally (to every route) via the global array.
| Preset | What it simulates |
|---|---|
| mobile-3g.yaml | High-latency, bandwidth-limited mobile connection with occasional drops |
| flaky-backend.yaml | Unstable upstream: intermittent 503s, connection drops, and latency jitter |
| burst-errors.yaml | Periodic error bursts: every 5th request fails, plus a 10% background error rate |
| timeout-storm.yaml | Timeout storm: requests take 1–8s, frequent connection drops, and 504s |
Copy the preset file, set target and port to match your service, then run:
npx chaos-proxy --config presets/mobile-3g.yaml
Because chaos-proxy uses a single config file, combining presets means merging their global arrays manually into one file. For example, to layer flaky-backend behaviour on top of mobile-3g:
target: "http://localhost:4000"
port: 5000
global:
# from mobile-3g
- latencyRange:
minMs: 100
maxMs: 300
- throttle:
rate: 51200
chunkSize: 1024
burst: 10240
# from flaky-backend
- failRandomly:
rate: 0.05
status: 503
- dropConnection:
prob: 0.02
Middleware executes top-to-bottom, so put latency first if you want the added delay to precede error injection.
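The top-to-bottom ordering follows the standard Koa-style composition pattern, where each middleware wraps the ones after it. A minimal sketch of that pattern (not chaos-proxy's exact code):

```javascript
// Koa-style compose: middlewares run in array order, each wrapping the
// ones after it via next().
function compose(middlewares) {
  return function (ctx) {
    function dispatch(i) {
      if (i === middlewares.length) return Promise.resolve();
      return Promise.resolve(middlewares[i](ctx, () => dispatch(i + 1)));
    }
    return dispatch(0);
  };
}
```

Placing latency before failRandomly in the array therefore delays even the requests that go on to fail.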
You can register custom middleware factories using registerMiddleware(name, factory). Once registered, your middleware can be referenced by name in any config, including reload payloads.
import { registerMiddleware, startServer } from 'chaos-proxy';
// Register before starting the server
registerMiddleware('customLogger', (opts) => {
const prefix = opts.prefix ?? '[chaos]';
return async (ctx, next) => {
console.log(`${prefix} ${ctx.method} ${ctx.url}`);
await next();
};
});
const server = await startServer({
target: 'http://localhost:4000',
port: 5000,
global: [
{ customLogger: { prefix: '[myapp]' } }
],
});
Then reference it in YAML:
global:
- customLogger:
prefix: "[myapp]"
Or in a reload payload:
{
"target": "http://localhost:4000",
"global": [
{ "customLogger": { "prefix": "[spike]" } }
]
}
Rules:
- The factory receives the opts object from config and must return a Koa async (ctx, next) middleware function.

Have questions, want to discuss features, or share examples? Join the Fetch-Kit Discord server:
MIT