Documentation
Complete guide to using stable-request for production-grade HTTP workflows
Installation
npm install @emmvish/stable-request
Requirements: Node.js 14+ (ES Modules)
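Because the package ships as ES Modules, load it with import syntax; in a CommonJS file, require() will not load an ESM-only package and a dynamic import() is the usual workaround. A minimal sketch (whether a CommonJS entry point is also published is an assumption to verify against the package itself):
// ESM project ("type": "module" in package.json, or an .mjs file)
import { stableRequest } from '@emmvish/stable-request';
// CommonJS project: use a dynamic import inside an async function
async function loadStableRequest() {
  const { stableRequest } = await import('@emmvish/stable-request');
  return stableRequest;
}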
Core Functions
stableRequest()
Execute a single HTTP request with built-in retry logic, circuit breaker, caching, and observability hooks.
Signature
function stableRequest<RequestDataType = any, ResponseDataType = any>(
options: STABLE_REQUEST<RequestDataType, ResponseDataType>
): Promise<ResponseDataType | false>
Basic Usage
import { stableRequest, RETRY_STRATEGIES, REQUEST_METHODS } from '@emmvish/stable-request';
const userData = await stableRequest({
// Define the HTTP request configuration
reqData: {
hostname: 'api.example.com', // Target API hostname
path: '/users/123', // Endpoint path
method: REQUEST_METHODS.GET, // HTTP method
headers: { 'Authorization': 'Bearer token' } // Custom headers
},
resReq: true, // Return the response data (not just true/false)
attempts: 3, // Maximum retry attempts including initial request
wait: 1000, // Base wait time between retries (1 second)
retryStrategy: RETRY_STRATEGIES.EXPONENTIAL, // Double wait time after each retry
jitter: 500, // Add ±500ms randomness to prevent thundering herd
// Custom response validation - return false to trigger retry
responseAnalyzer: async ({ data }) => {
return data.status === 'success'; // Only accept responses with success status
},
// Error handler called on each failed attempt
handleErrors: async ({ errorLog }) => {
console.error('Request failed:', errorLog);
}
});
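With resReq: true the promise resolves to the response data, but the declared return type is Promise<ResponseDataType | false> (see the signature above): the call resolves to false when resReq is false, or when finalErrorAnalyzer suppresses the final error instead of throwing. A short guard keeps the result safe to use:
if (userData === false) {
  // No usable data was returned (e.g. the failure was suppressed instead of thrown)
  console.warn('User data unavailable, falling back to defaults');
} else {
  console.log('Loaded user:', userData);
}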
STABLE_REQUEST Interface
Complete parameter reference for the stableRequest function options:
| Parameter | Type | Default | Required | Description |
|---|---|---|---|---|
| `reqData` | `REQUEST_DATA<RequestDataType>` | - | Yes | Request configuration including hostname, path, method, headers, body, query, timeout, protocol, port, signal |
| `responseAnalyzer` | `(options: ResponseAnalysisHookOptions<RequestDataType, ResponseDataType>) => boolean \| Promise<boolean>` | `() => true` | No | Custom function to validate response content. Return false to trigger a retry, true if the response is acceptable |
| `resReq` | `boolean` | `false` | No | If true, returns response data. If false, returns true on success or false on failure |
| `attempts` | `number` | `1` | No | Maximum number of attempts (including the initial request). Must be ≥ 1 |
| `performAllAttempts` | `boolean` | `false` | No | If true, performs all attempts even if one succeeds (useful for testing) |
| `wait` | `number` | `1000` | No | Base wait time in milliseconds between retry attempts |
| `maxAllowedWait` | `number` | `60000` | No | Maximum allowed wait time between retries (caps the backoff calculation) |
| `retryStrategy` | `RETRY_STRATEGY_TYPES` | `FIXED` | No | Retry backoff strategy: FIXED, LINEAR, or EXPONENTIAL |
| `jitter` | `number` | `0` | No | Random delay variation in milliseconds. If non-zero, applies randomized jitter to retry delays to prevent thundering herd |
| `logAllErrors` | `boolean` | `false` | No | If true, logs all error attempts to console |
| `handleErrors` | `(options: HandleErrorHookOptions<RequestDataType>) => any \| Promise<any>` | `() => console.log` | No | Custom error handler called for each failed attempt |
| `logAllSuccessfulAttempts` | `boolean` | `false` | No | If true, logs all successful attempts to console |
| `handleSuccessfulAttemptData` | `(options: HandleSuccessfulAttemptDataHookOptions<RequestDataType, ResponseDataType>) => any \| Promise<any>` | `() => console.log` | No | Custom handler called for each successful attempt |
| `maxSerializableChars` | `number` | `1000` | No | Maximum characters to include when serializing objects in logs |
| `finalErrorAnalyzer` | `(options: FinalErrorAnalysisHookOptions<RequestDataType>) => boolean \| Promise<boolean>` | `() => false` | No | Analyzes the final error after all retries are exhausted. Return true to suppress the error (the call resolves to false); return false to throw |
| `trialMode` | `TRIAL_MODE_OPTIONS` | `{ enabled: false }` | No | Enables trial mode for testing without making real API calls. Configure failure probabilities and latency ranges |
| `hookParams` | `HookParams` | `{}` | No | Custom parameters to pass to hook functions (responseAnalyzerParams, handleErrorsParams, handleSuccessfulAttemptDataParams, finalErrorAnalyzerParams) |
| `preExecution` | `RequestPreExecutionOptions` | `{}` | No | Pre-execution hook configuration for dynamic request modification before execution |
| `commonBuffer` | `Record<string, any>` | `undefined` | No | Shared buffer for storing/accessing data across requests, hooks, and workflow phases |
| `cache` | `CacheConfig` | `undefined` | No | Response caching configuration with TTL, cache control, status codes, and custom key generation |
| `executionContext` | `ExecutionContext` | `undefined` | No | Execution context for logging and traceability (workflowId, phaseId, branchId, requestId) |
| `circuitBreaker` | `CircuitBreakerConfig \| CircuitBreaker` | `undefined` | No | Circuit breaker configuration or instance to prevent cascade failures and system overload |
| `statePersistence` | `StatePersistenceConfig` | `undefined` | No | State persistence configuration for saving/loading workflow state to external storage (Redis, MongoDB, File System) |
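The examples above only exercise hostname, path, method, and headers, but the reqData row lists several more fields. A hedged sketch of a fuller POST configuration using those field names; the exact shapes (for example, how protocol and query values are expressed) are assumptions to verify against the REQUEST_DATA type:
const controller = new AbortController();
const created = await stableRequest<{ name: string }, { id: string }>({
  reqData: {
    protocol: 'https', // assumed value format
    hostname: 'api.example.com',
    port: 443,
    path: '/users',
    method: REQUEST_METHODS.POST,
    headers: { 'Content-Type': 'application/json' },
    body: { name: 'Ada' }, // request payload
    query: { notify: 'true' }, // query string parameters
    timeout: 5000, // per-attempt timeout in milliseconds (assumed unit)
    signal: controller.signal // external cancellation
  },
  resReq: true,
  attempts: 3
});
console.log('Created user:', created === false ? 'failed' : created.id);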
stableApiGateway()
Execute multiple HTTP requests either sequentially or concurrently with unified configuration.
Signature
function stableApiGateway<RequestDataType = any, ResponseDataType = any>(
requests: API_GATEWAY_REQUEST<RequestDataType, ResponseDataType>[],
options: API_GATEWAY_OPTIONS<RequestDataType, ResponseDataType>
): Promise<API_GATEWAY_RESPONSE<ResponseDataType>[]>
Concurrent Execution
import { stableApiGateway } from '@emmvish/stable-request';
// Define multiple requests with unique IDs
const requests = [
{ id: 'users', requestOptions: {
reqData: { path: '/users' }, resReq: true
}},
{ id: 'orders', requestOptions: {
reqData: { path: '/orders' }, resReq: true
}},
{ id: 'products', requestOptions: {
reqData: { path: '/products' }, resReq: true
}}
];
const results = await stableApiGateway(requests, {
concurrentExecution: true, // Execute all requests simultaneously
// Common configuration applied to all requests
commonRequestData: {
hostname: 'api.example.com', // Shared hostname
headers: { 'X-API-Key': 'secret' } // Shared authentication
},
commonAttempts: 2, // Each request retries up to 2 times
commonWait: 500, // 500ms wait between retries
stopOnFirstError: false // Continue executing even if some requests fail
});
// Process results - each result has the request ID and success status
results.forEach(result => {
console.log(`${result.id}:`, result.success ? result.data : result.error);
});
Sequential Execution
// Execute requests one at a time in order
const results = await stableApiGateway(requests, {
concurrentExecution: false, // Wait for each request to complete before starting next
stopOnFirstError: true, // Abort remaining requests if one fails
commonRequestData: { hostname: 'api.example.com' },
commonAttempts: 3 // Retry each request up to 3 times
});
Request Grouping
Apply different configurations to groups of requests:
// Organize requests by assigning groupId to apply group-specific settings
const requests = [
{ id: 'critical-1', groupId: 'critical', requestOptions: { reqData: { path: '/critical/1' }, resReq: true } },
{ id: 'critical-2', groupId: 'critical', requestOptions: { reqData: { path: '/critical/2' }, resReq: true } },
{ id: 'optional-1', groupId: 'optional', requestOptions: { reqData: { path: '/optional/1' }, resReq: true } },
];
const results = await stableApiGateway(requests, {
concurrentExecution: true,
commonAttempts: 1, // Default for requests without a group
commonRequestData: { hostname: 'api.example.com' },
// Define group-specific configurations
requestGroups: [
{
id: 'critical', // Matches groupId in requests
commonConfig: {
commonAttempts: 5, // Critical requests retry more
commonWait: 1000,
commonRetryStrategy: RETRY_STRATEGIES.EXPONENTIAL // Aggressive retry strategy
}
},
{
id: 'optional', // Less critical requests
commonConfig: {
commonAttempts: 2, // Fewer retries for optional data
commonWait: 500
}
}
]
});
API_GATEWAY_OPTIONS Interface
Complete parameter reference for the stableApiGateway function options:
| Parameter | Type | Default | Required | Description |
|---|---|---|---|---|
| `commonRequestData` | `Partial<REQUEST_DATA<RequestDataType>>` | `{}` | No | Common request configuration applied to all requests (hostname, headers, protocol, etc.) |
| `commonAttempts` | `number` | `1` | No | Default number of retry attempts for all requests |
| `commonHookParams` | `HookParams` | `{}` | No | Default hook parameters for all requests |
| `commonPerformAllAttempts` | `boolean` | `false` | No | Default performAllAttempts setting for all requests |
| `commonWait` | `number` | `1000` | No | Default wait time between retries for all requests |
| `commonMaxAllowedWait` | `number` | `60000` | No | Default maximum wait time for all requests |
| `commonRetryStrategy` | `RETRY_STRATEGY_TYPES` | `FIXED` | No | Default retry strategy for all requests |
| `commonJitter` | `number` | `0` | No | Default jitter setting for all requests |
| `commonLogAllErrors` | `boolean` | `false` | No | Default error logging setting for all requests |
| `commonLogAllSuccessfulAttempts` | `boolean` | `false` | No | Default success logging setting for all requests |
| `commonMaxSerializableChars` | `number` | `1000` | No | Default max chars for serialization in logs |
| `commonTrialMode` | `TRIAL_MODE_OPTIONS` | `{ enabled: false }` | No | Default trial mode configuration for all requests |
| `commonResponseAnalyzer` | `(options: ResponseAnalysisHookOptions<RequestDataType, ResponseDataType>) => boolean \| Promise<boolean>` | `() => true` | No | Default response analyzer for all requests |
| `commonResReq` | `boolean` | `false` | No | Default resReq value for all requests |
| `commonFinalErrorAnalyzer` | `(options: FinalErrorAnalysisHookOptions<RequestDataType>) => boolean \| Promise<boolean>` | `() => false` | No | Default final error analyzer for all requests |
| `commonHandleErrors` | `(options: HandleErrorHookOptions<RequestDataType>) => any \| Promise<any>` | `() => console.log` | No | Default error handler for all requests |
| `commonHandleSuccessfulAttemptData` | `(options: HandleSuccessfulAttemptDataHookOptions<RequestDataType, ResponseDataType>) => any \| Promise<any>` | `() => console.log` | No | Default success handler for all requests |
| `commonPreExecution` | `RequestPreExecutionOptions` | `() => {}` | No | Default pre-execution hook configuration for all requests |
| `commonCache` | `CacheConfig` | `undefined` | No | Default cache configuration for all requests |
| `commonStatePersistence` | `StatePersistenceConfig` | `undefined` | No | Default state persistence configuration for all requests |
| `concurrentExecution` | `boolean` | `true` | No | If true, executes all requests concurrently. If false, executes sequentially |
| `requestGroups` | `RequestGroup<RequestDataType, ResponseDataType>[]` | `[]` | No | Array of request group configurations for applying settings to specific groups |
| `stopOnFirstError` | `boolean` | `false` | No | If true, stops executing remaining requests after first error (sequential mode) or stops launching new requests (concurrent mode) |
| `sharedBuffer` | `Record<string, any>` | `undefined` | No | Shared buffer accessible by all requests for data exchange |
| `maxConcurrentRequests` | `number` | `undefined` | No | Maximum number of concurrent requests (concurrent mode only) |
| `rateLimit` | `RateLimitConfig` | `undefined` | No | Rate limiting configuration (maxRequests, windowMs) |
| `circuitBreaker` | `CircuitBreakerConfig` | `undefined` | No | Circuit breaker configuration shared across all requests |
| `executionContext` | `Partial<ExecutionContext>` | `undefined` | No | Execution context for logging and traceability |
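Most common* options simply set a default for the corresponding per-request option. As a brief illustration, a gateway-wide response analyzer and error handler can be attached once instead of repeating them on every request; this sketch reuses hook fields shown elsewhere in this guide (data, errorLog), whose exact option shapes should be checked against the hook types:
await stableApiGateway(requests, {
  commonRequestData: { hostname: 'api.example.com' },
  commonResReq: true,
  commonAttempts: 2,
  // Used by every request that does not define its own responseAnalyzer
  commonResponseAnalyzer: async ({ data }) => data?.status !== 'degraded',
  // Called for every failed attempt of every request
  commonHandleErrors: async ({ errorLog }) => {
    console.error('Gateway request failed:', errorLog);
  }
});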
stableWorkflow()
Orchestrate complex multi-phase API workflows with support for sequential, concurrent, mixed, non-linear, and branching execution patterns.
Signature
function stableWorkflow<RequestDataType = any, ResponseDataType = any>(
phases: STABLE_WORKFLOW_PHASE<RequestDataType, ResponseDataType>[],
options: STABLE_WORKFLOW_OPTIONS<RequestDataType, ResponseDataType>
): Promise<STABLE_WORKFLOW_RESULT<ResponseDataType>>
Basic Multi-Phase Workflow
import { stableWorkflow, REQUEST_METHODS } from '@emmvish/stable-request';
// Define workflow as a sequence of phases
const phases = [
{
id: 'authentication', // Phase 1: Authenticate
requests: [
{
id: 'login',
requestOptions: {
reqData: { path: '/auth/login', method: REQUEST_METHODS.POST },
resReq: true
}
}
]
},
{
id: 'fetch-data', // Phase 2: Fetch data (runs after auth)
concurrentExecution: true, // Requests within this phase run in parallel
requests: [
{ id: 'profile', requestOptions: { reqData: { path: '/profile' }, resReq: true }},
{ id: 'orders', requestOptions: { reqData: { path: '/orders' }, resReq: true }},
{ id: 'settings', requestOptions: { reqData: { path: '/settings' }, resReq: true }}
]
},
{
id: 'process-data', // Phase 3: Process the fetched data
requests: [
{ id: 'analytics', requestOptions: {
reqData: { path: '/analytics', method: REQUEST_METHODS.POST },
resReq: false
}}
]
}
];
const result = await stableWorkflow(phases, {
workflowId: 'user-data-sync', // Unique workflow identifier
commonRequestData: { hostname: 'api.example.com' }, // Shared config for all requests
commonAttempts: 3, // Default retry attempts for all requests
stopOnFirstPhaseError: true, // Stop workflow if any phase fails
logPhaseResults: true // Log each phase completion
});
// Access workflow execution summary
console.log(`Workflow completed: ${result.success}`);
console.log(`Total requests: ${result.totalRequests}`);
console.log(`Successful: ${result.successfulRequests}`);
console.log(`Failed: ${result.failedRequests}`);
console.log(`Execution time: ${result.executionTime}ms`);
STABLE_WORKFLOW_PHASE Interface
Complete parameter reference for defining individual workflow phases:
| Parameter | Type | Default | Required | Description |
|---|---|---|---|---|
| `id` | `string` | auto-generated | No | Unique identifier for the phase. Auto-generated if not provided |
| `requests` | `API_GATEWAY_REQUEST[]` | - | Yes | Array of requests to execute in this phase |
| `concurrentExecution` | `boolean` | `false` | No | If true, executes all requests in this phase concurrently. If false, executes sequentially |
| `stopOnFirstError` | `boolean` | `false` | No | If true, stops executing remaining requests in this phase after first error |
| `markConcurrentPhase` | `boolean` | `false` | No | Mark this phase for concurrent execution in mixed execution mode. Used with enableMixedExecution |
| `maxConcurrentRequests` | `number` | `undefined` | No | Maximum number of concurrent requests for this phase. Overrides workflow-level setting |
| `rateLimit` | `RateLimitConfig` | `undefined` | No | Rate limiting configuration for this phase. Overrides workflow-level setting |
| `circuitBreaker` | `CircuitBreakerConfig` | `undefined` | No | Circuit breaker configuration for this phase. Overrides workflow-level setting |
| `maxReplayCount` | `number` | `undefined` | No | Maximum number of times this phase can be replayed in non-linear workflows |
| `allowReplay` | `boolean` | `true` | No | Whether this phase can be replayed via phase decision hook |
| `allowSkip` | `boolean` | `true` | No | Whether this phase can be skipped via phase decision hook |
| `phaseDecisionHook` | `(options: PhaseDecisionHookOptions) => PhaseExecutionDecision \| Promise<PhaseExecutionDecision>` | `undefined` | No | Hook function to determine what action to take after this phase completes. Used for non-linear workflows. Returns a decision with action: CONTINUE, SKIP, JUMP, REPLAY, or TERMINATE |
| `commonConfig` | `Partial<API_GATEWAY_OPTIONS>` | `undefined` | No | Phase-level configuration containing all common* properties from the API_GATEWAY_OPTIONS interface (e.g., commonAttempts, commonWait, commonRetryStrategy, commonRequestData, etc.). These settings apply to all requests in this phase and override workflow-level common settings |
| `branchId` | `string` | `undefined` | No | Branch identifier when this phase is part of a branched workflow |
| `statePersistence` | `StatePersistenceConfig` | `undefined` | No | State persistence configuration for this phase. Allows saving/loading phase state to external storage |
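Several of these settings are most useful in combination. As an illustration, a sketch of a phase that throttles its own requests and tightens retry defaults, using only property names from the table above (the endpoint paths are placeholders):
const throttledPhase = {
  id: 'bulk-import',
  concurrentExecution: true, // run this phase's requests in parallel
  maxConcurrentRequests: 4, // phase-level cap, overrides the workflow setting
  rateLimit: { maxRequests: 20, windowMs: 10000 }, // phase-level rate limit
  allowReplay: false, // decision hooks may not replay this phase
  commonConfig: {
    commonAttempts: 2, // applies to every request in this phase
    commonWait: 750
  },
  requests: [
    { id: 'import-1', requestOptions: { reqData: { path: '/import/1' }, resReq: true } },
    { id: 'import-2', requestOptions: { reqData: { path: '/import/2' }, resReq: true } }
  ]
};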
STABLE_WORKFLOW_OPTIONS Interface
Complete parameter reference for the stableWorkflow function options. Extends all API_GATEWAY_OPTIONS plus workflow-specific parameters:
| Parameter | Type | Default | Required | Description |
|---|---|---|---|---|
| `workflowId` | `string` | `'workflow-{timestamp}'` | No | Unique identifier for this workflow execution |
| `stopOnFirstPhaseError` | `boolean` | `false` | No | If true, stops workflow execution after first phase error |
| `logPhaseResults` | `boolean` | `false` | No | If true, logs each phase result to console |
| `concurrentPhaseExecution` | `boolean` | `false` | No | If true, all phases execute concurrently. If false, phases execute sequentially |
| `enableBranchExecution` | `boolean` | `false` | No | Enables branch-based workflow execution with independent branches |
| `branches` | `STABLE_WORKFLOW_BRANCH<RequestDataType, ResponseDataType>[]` | `[]` | No | Array of workflow branches (when enableBranchExecution is true) |
| `enableMixedExecution` | `boolean` | `false` | No | Enables mixed execution mode where phases can be marked for concurrent execution using markConcurrentPhase |
| `enableNonLinearExecution` | `boolean` | `false` | No | Enables non-linear execution with phase decision hooks (JUMP, SKIP, REPLAY, TERMINATE) |
| `maxWorkflowIterations` | `number` | `1000` | No | Maximum total phase executions to prevent infinite loops in non-linear workflows |
| `statePersistence` | `StatePersistenceConfig` | `undefined` | No | State persistence configuration for workflow recovery and distributed execution |
| `handlePhaseCompletion` | `(options: HandlePhaseCompletionHookOptions<ResponseDataType>) => any \| Promise<any>` | `() => console.log` | No | Hook called after each phase completes successfully. Receives workflowId, branchId, phaseResult, params, sharedBuffer |
| `handlePhaseError` | `(options: HandlePhaseErrorHookOptions<ResponseDataType>) => any \| Promise<any>` | `() => console.log` | No | Hook called when a phase encounters an error |
| `handlePhaseDecision` | `(options: HandlePhaseDecisionHookOptions<ResponseDataType>) => any \| Promise<any>` | `() => {}` | No | Hook called when a phase makes a non-linear decision (JUMP, SKIP, REPLAY, TERMINATE) |
| `handleBranchCompletion` | `(options: {workflowId, branchId, branchResults, success}) => any \| Promise<any>` | `() => console.log` | No | Hook called when a branch completes. Receives workflowId, branchId, branchResults, success |
| `handleBranchDecision` | `(decision: BranchExecutionDecision, branchResult: BranchExecutionResult<ResponseDataType>) => any \| Promise<any>` | `() => {}` | No | Hook called when a branch makes a decision |
| `maxSerializableChars` | `number` | `1000` | No | Maximum characters for serialization in logs |
| `workflowHookParams` | `WorkflowHookParams` | `{}` | No | Custom parameters passed to workflow-level hooks (handlePhaseCompletionParams, handlePhaseErrorParams, handlePhaseDecisionParams, handleBranchDecisionParams) |
| `commonRequestData` | `Partial<REQUEST_DATA>` | `{}` | No | Common request configuration applied to all phases |
| `commonAttempts` | `number` | `1` | No | Default retry attempts for all requests |
| `commonWait` | `number` | `1000` | No | Default wait time between retries |
| `commonRetryStrategy` | `RETRY_STRATEGY_TYPES` | `FIXED` | No | Default retry strategy |
| `commonCache` | `CacheConfig` | `undefined` | No | Default cache configuration |
| `commonStatePersistence` | `StatePersistenceConfig` | `undefined` | No | Default state persistence for all phases |
| `circuitBreaker` | `CircuitBreakerConfig` | `undefined` | No | Circuit breaker shared across workflow |
| `rateLimit` | `RateLimitConfig` | `undefined` | No | Rate limiting configuration |
| `maxConcurrentRequests` | `number` | `undefined` | No | Maximum concurrent requests |
| `sharedBuffer` | `Record<string, any>` | `undefined` | No | Shared buffer for data exchange across phases |
| `requestGroups` | `RequestGroup[]` | `[]` | No | Request group configurations |
| `commonHookParams` | `HookParams` | `{}` | No | Default hook parameters |
| `commonPerformAllAttempts` | `boolean` | `false` | No | Default performAllAttempts setting |
| `commonMaxAllowedWait` | `number` | `60000` | No | Default maximum wait time |
| `commonJitter` | `number` | `0` | No | Default jitter setting |
| `commonLogAllErrors` | `boolean` | `false` | No | Default error logging |
| `commonLogAllSuccessfulAttempts` | `boolean` | `false` | No | Default success logging |
| `commonMaxSerializableChars` | `number` | `1000` | No | Default max serialization chars |
| `commonTrialMode` | `TRIAL_MODE_OPTIONS` | `{ enabled: false }` | No | Default trial mode |
| `commonResponseAnalyzer` | `Function` | `() => true` | No | Default response analyzer |
| `commonResReq` | `boolean` | `false` | No | Default resReq value |
| `commonFinalErrorAnalyzer` | `Function` | `() => false` | No | Default final error analyzer |
| `commonHandleErrors` | `Function` | `() => console.log` | No | Default error handler |
| `commonHandleSuccessfulAttemptData` | `Function` | `() => console.log` | No | Default success handler |
| `commonPreExecution` | `RequestPreExecutionOptions` | `() => {}` | No | Default pre-execution hook |
| `executionContext` | `Partial<ExecutionContext>` | `undefined` | No | Execution context for logging |
Configuration Cascading
Understand how configuration options cascade and override from higher levels to lower levels, allowing you to set defaults globally while customizing specific requests.
Cascading Principles
Configuration follows a hierarchical override pattern where:
- Lower-level configurations override higher-level ones
- More specific settings take precedence over general settings
- Request-level options always have the highest priority
API Gateway Configuration Cascading
In stableApiGateway, configuration flows from gateway options to individual requests:
Gateway Options (common*)
↓
Request Group Options (for matching groupId)
↓
Individual Request Options
↓
Final Configuration Applied
Cascading Hierarchy
- Gateway-Level (Lowest Priority): `common*` options apply to all requests
  - `commonRequestData` - Shared request configuration (hostname, headers, etc.)
  - `commonAttempts` - Default retry attempts
  - `commonWait` - Default wait time between retries
  - `commonRetryStrategy` - Default retry strategy
  - `commonJitter` - Default jitter setting
  - ...and all other `common*` properties
- Request Group-Level (Medium Priority): Settings for specific request groups override gateway defaults
- Request-Level (Highest Priority): Individual request options override all others
Example: API Gateway Cascading
const requests = [
{
id: 'user-1',
groupId: 'critical', // Belongs to 'critical' group
requestOptions: {
reqData: { path: '/users/1' },
attempts: 7, // Request-level: Highest priority
resReq: true
}
},
{
id: 'user-2',
groupId: 'critical', // Uses group settings
requestOptions: {
reqData: { path: '/users/2' },
resReq: true // Will use group's 5 attempts
}
},
{
id: 'analytics',
groupId: 'optional', // Belongs to 'optional' group
requestOptions: {
reqData: { path: '/analytics' },
resReq: true // Will use group's 1 attempt
}
},
{
id: 'logs', // No groupId assigned
requestOptions: {
reqData: { path: '/logs' },
resReq: true // Will use gateway's 3 attempts
}
}
];
await stableApiGateway(requests, {
// Gateway-Level (applies to all) - Lowest priority
commonRequestData: {
hostname: 'api.example.com',
headers: { 'X-API-Key': 'secret' }
},
commonAttempts: 3, // Default for all
commonWait: 1000,
commonRetryStrategy: RETRY_STRATEGIES.EXPONENTIAL,
// Request Group-Level (overrides gateway defaults) - Medium priority
requestGroups: [
{
id: 'critical',
commonConfig: {
commonAttempts: 5, // Critical requests: 5 attempts
commonWait: 2000 // Longer wait for critical data
}
},
{
id: 'optional',
commonConfig: {
commonAttempts: 1, // Optional requests: single attempt
commonWait: 500 // Shorter wait for non-critical data
}
}
]
});
// Final Configuration Applied (showing precedence):
// user-1: 7 attempts, 1000ms wait (request-level attempts, gateway wait)
// user-2: 5 attempts, 2000ms wait (group-level overrides)
// analytics: 1 attempt, 500ms wait (group-level overrides)
// logs: 3 attempts, 1000ms wait (gateway-level defaults)
Workflow Configuration Cascading
In stableWorkflow, configuration flows through multiple layers:
Workflow Options (common*)
↓
Branch Configuration (in branched workflows)
↓
Phase Configuration (commonConfig property)
↓
Request Group Options (for matching groupId)
↓
Individual Request Options
↓
Final Configuration Applied
Cascading Hierarchy
- Workflow-Level (Lowest Priority): `common*` options in workflow options apply to all phases and requests
- Branch-Level: In branched workflows, branch configuration overrides workflow defaults
- Phase-Level: The phase `commonConfig` property overrides workflow/branch settings for that phase
- Request Group-Level: Request group settings override phase/workflow defaults
- Request-Level (Highest Priority): Individual request options override all others
Example: Workflow Cascading
const phases = [
{
id: 'critical-phase',
requests: [
{
id: 'auth',
requestOptions: {
reqData: { path: '/auth' },
attempts: 10, // Request-level: Highest priority
resReq: true
}
},
{
id: 'validate',
requestOptions: {
reqData: { path: '/validate' },
resReq: true // Uses phase-level: 7 attempts
}
}
],
// Phase-Level Configuration (overrides workflow defaults)
commonConfig: {
commonAttempts: 7, // Critical phase gets more retries
commonWait: 3000,
commonRetryStrategy: RETRY_STRATEGIES.EXPONENTIAL
}
},
{
id: 'data-fetch',
concurrentExecution: true, // Requests run in parallel
requests: [
{
id: 'users',
groupId: 'important', // Belongs to 'important' group
requestOptions: {
reqData: { path: '/users' },
resReq: true // Uses group-level: 5 attempts
}
},
{
id: 'products',
requestOptions: {
reqData: { path: '/products' },
resReq: true // Uses workflow-level: 3 attempts
}
}
]
}
];
await stableWorkflow(phases, {
workflowId: 'config-cascade-demo',
// Workflow-Level (applies to all phases) - Lowest priority
commonRequestData: {
hostname: 'api.example.com',
headers: { 'Authorization': 'Bearer token' }
},
commonAttempts: 3, // Default for all requests
commonWait: 1000,
commonRetryStrategy: RETRY_STRATEGIES.LINEAR,
// Request Group-Level (overrides workflow defaults)
requestGroups: [
{
id: 'important',
commonConfig: {
commonAttempts: 5, // Important requests: more retries
commonWait: 2000
}
}
]
});
// Final Configuration Applied (showing precedence chain):
// auth: 10 attempts, 3000ms wait (request > phase)
// validate: 7 attempts, 3000ms wait (phase > workflow)
// users: 5 attempts, 2000ms wait (group > workflow)
// products: 3 attempts, 1000ms wait (workflow defaults)
Example: Branched Workflow Cascading
const branches = [
{
id: 'payment-branch',
phases: [
{
id: 'authorize',
requests: [
{
id: 'check-funds',
requestOptions: {
reqData: { path: '/payment/authorize' },
attempts: 8, // Request-level: Highest priority
resReq: true
}
}
],
commonConfig: {
commonAttempts: 6, // Phase-level overrides branch
commonWait: 2000
}
}
],
commonConfig: {
commonAttempts: 4, // Branch-level overrides workflow
commonWait: 1500
}
},
{
id: 'notification-branch',
phases: [
{
id: 'send-email',
requests: [
{
id: 'email',
requestOptions: {
reqData: { path: '/notify/email' },
resReq: true // Uses branch-level: 2 attempts
}
}
]
}
],
commonConfig: {
commonAttempts: 2, // Branch-level for notifications
commonWait: 500
}
}
];
await stableWorkflow([], {
workflowId: 'branched-cascade',
enableBranchExecution: true, // Enable branched workflow mode
branches,
// Workflow-Level (applies to all branches) - Lowest priority
commonAttempts: 3,
commonWait: 1000,
commonRequestData: {
hostname: 'api.example.com'
}
});
// Final Configuration Applied (5-level precedence):
// check-funds: 8 attempts, 2000ms wait (request > phase > branch > workflow)
// email: 2 attempts, 500ms wait (branch > workflow)
Configuration Precedence Rules
When the same configuration property is defined at multiple levels:
API Gateway Priority (High to Low)
Request Options
↓
Request Group Options
↓
Gateway common* Options
Workflow Priority (High to Low)
Request Options
↓
Request Group Options
↓
Phase commonConfig
↓
Branch commonConfig (in branched workflows)
↓
Workflow common* Options
State Buffers
Overview
Buffers provide a mechanism for sharing data across different parts of your workflows without relying on global variables. There are two types of buffers:
- `commonBuffer` - Request-level buffer for individual requests in `stableRequest`
- `sharedBuffer` - Gateway/Workflow-level buffer shared across all requests in `stableApiGateway` and `stableWorkflow`
The Override Rule
Important: When both commonBuffer and sharedBuffer are present, sharedBuffer takes precedence and completely overrides commonBuffer for that execution context.
commonBuffer - Request-Level State
Used in stableRequest to maintain state across retry attempts and hooks for a single request.
Use Cases
- Store authentication tokens obtained during pre-execution
- Track retry-specific metadata
- Pass data between request hooks (preExecution, responseAnalyzer, handleErrors)
- Accumulate information across multiple retry attempts
Example: Using commonBuffer
await stableRequest({
reqData: {
hostname: 'api.example.com',
path: '/protected-resource'
},
resReq: true,
attempts: 3,
// Initialize buffer with tracking state
commonBuffer: {
attemptCount: 0, // Track retry attempts
authToken: null, // Store auth token across retries
errors: [] // Accumulate error history
},
// Pre-execution: Modify request based on buffer state
preExecution: {
preExecutionHook: async ({ inputParams, commonBuffer }) => {
// Access and modify the buffer (shared across all hooks and retries)
commonBuffer.attemptCount++;
// Add dynamic authentication if not already present
if (!commonBuffer.authToken) {
commonBuffer.authToken = await getAuthToken();
}
// Inject auth token and attempt count into request
const reqData = {
...inputParams.reqData,
headers: {
'Authorization': `Bearer ${commonBuffer.authToken}`,
'X-Attempt': commonBuffer.attemptCount.toString()
}
};
return { reqData }; // Return modified request data
}
},
// Response analyzer: Use buffer to detect and handle auth expiry
responseAnalyzer: async ({ data, status, commonBuffer }) => {
if (status === 401) {
// Auth token expired, clear it for retry (will re-fetch in preExecution)
delete commonBuffer.authToken;
return false; // Trigger retry
}
return true; // Success - accept response
},
// Error handler: Accumulate error history in buffer for debugging
handleErrors: async ({ error, commonBuffer, attempt }) => {
commonBuffer.errors.push({
attempt,
error: error.message,
timestamp: new Date().toISOString()
});
console.log(`Error history:`, commonBuffer.errors);
}
});
sharedBuffer - Gateway/Workflow-Level State
Used in stableApiGateway and stableWorkflow to share state across multiple requests and phases.
Use Cases
- Share authentication tokens across all requests
- Pass data from one phase to another in workflows
- Accumulate results across multiple requests
- Implement workflow-level state machines
- Track global metrics (total processed items, errors, etc.)
- Store checkpoint data for workflow resumption
Example: API Gateway with sharedBuffer
const requests = [
{
id: 'login',
requestOptions: {
reqData: {
hostname: 'api.example.com',
path: '/auth/login',
method: REQUEST_METHODS.POST,
body: { username: 'user', password: 'pass' }
},
resReq: true // Expect response data
}
},
{
id: 'get-profile',
requestOptions: {
reqData: {
hostname: 'api.example.com',
path: '/user/profile'
},
resReq: true,
preExecution: {
preExecutionHook: async ({ inputParams, commonBuffer }) => {
// Use token from previous 'login' request (stored in sharedBuffer)
const reqData = {
...inputParams.reqData,
headers: {
'Authorization': `Bearer ${commonBuffer.authToken}`
}
};
return { reqData }; // Return modified request with auth
}
}
}
},
{
id: 'get-orders',
requestOptions: {
reqData: {
hostname: 'api.example.com',
path: '/user/orders'
},
resReq: true,
preExecution: {
preExecutionHook: async ({ inputParams, commonBuffer }) => {
// Reuse same token from sharedBuffer
const reqData = {
...inputParams.reqData,
headers: {
'Authorization': `Bearer ${commonBuffer.authToken}`
}
};
return { reqData };
}
}
}
}
];
// Declare the shared buffer up front so it can be inspected after execution
const sharedBuffer = {
authToken: null, // Will be populated by login request
userProfile: null, // Will be populated by profile request
totalOrders: 0 // Will be populated by orders request
};
await stableApiGateway(requests, {
concurrentExecution: false, // Sequential: login must complete first
// sharedBuffer is accessible to ALL requests in the gateway
sharedBuffer,
// Common success handler: runs after each successful request
// (inside hooks, the buffer is exposed via the commonBuffer parameter)
commonHandleSuccessfulAttemptData: async ({ data, commonBuffer, executionContext }) => {
if (executionContext.requestId === 'login') {
commonBuffer.authToken = data.token; // Store token for subsequent requests
console.log('Auth token stored in sharedBuffer');
} else if (executionContext.requestId === 'get-profile') {
commonBuffer.userProfile = data; // Store profile data
} else if (executionContext.requestId === 'get-orders') {
commonBuffer.totalOrders = data.length; // Store order count
}
}
});
// After execution, sharedBuffer contains all accumulated data
console.log('Total orders:', sharedBuffer.totalOrders);
Example: Workflow with sharedBuffer
const phases = [
{
id: 'authentication',
requests: [{
id: 'auth',
requestOptions: {
reqData: {
hostname: 'api.example.com',
path: '/auth/login',
method: REQUEST_METHODS.POST,
body: { username: 'user', password: 'pass' }
},
resReq: true // Expect auth token in response
}
}]
},
{
id: 'fetch-data',
concurrentExecution: true, // Parallel execution for efficiency
requests: [
{
id: 'users',
requestOptions: {
reqData: { path: '/users' },
resReq: true,
preExecution: {
preExecutionHook: async ({ inputParams, commonBuffer }) => {
// Use token from Phase 1 (stored in sharedBuffer)
const reqData = {
...inputParams.reqData,
headers: {
'Authorization': `Bearer ${commonBuffer.authToken}`
}
};
return { reqData };
}
}
}
},
{
id: 'products',
requestOptions: {
reqData: { path: '/products' },
resReq: true,
preExecution: {
preExecutionHook: async ({ inputParams, commonBuffer }) => {
// Same token shared across all Phase 2 requests
const reqData = {
...inputParams.reqData,
headers: {
'Authorization': `Bearer ${commonBuffer.authToken}`
}
};
return { reqData };
}
}
}
}
]
},
{
id: 'process',
requests: [{
id: 'analytics',
requestOptions: {
reqData: {
path: '/analytics',
method: REQUEST_METHODS.POST,
body: {} // Will be populated with Phase 2 data
},
resReq: false,
preExecution: {
preExecutionHook: async ({ inputParams, commonBuffer }) => {
// Use accumulated data from previous phases
const reqData = {
...inputParams.reqData,
body: {
userCount: commonBuffer.users?.length || 0,
productCount: commonBuffer.products?.length || 0
},
headers: {
'Authorization': `Bearer ${commonBuffer.authToken}`
}
};
return { reqData };
}
}
}
}]
}
];
await stableWorkflow(phases, {
workflowId: 'data-sync',
commonRequestData: { hostname: 'api.example.com' },
// sharedBuffer accessible across ALL phases (persists throughout workflow)
sharedBuffer: {
authToken: null, // Populated in Phase 1
users: [], // Populated in Phase 2
products: [] // Populated in Phase 2
},
// Store successful response data in sharedBuffer after each phase
handlePhaseCompletion: async ({ phaseResult, sharedBuffer }) => {
if (phaseResult.phaseId === 'authentication') {
const authResponse = phaseResult.responses.find(r => r.requestId === 'auth');
if (authResponse?.success) {
sharedBuffer.authToken = authResponse.data.token; // Store for Phase 2 & 3
}
} else if (phaseResult.phaseId === 'fetch-data') {
phaseResult.responses.forEach(response => {
if (response.requestId === 'users' && response.success) {
sharedBuffer.users = response.data; // Store for Phase 3
} else if (response.requestId === 'products' && response.success) {
sharedBuffer.products = response.data; // Store for Phase 3
}
});
}
}
});
Override Behavior: sharedBuffer vs commonBuffer
When a request has both commonBuffer in its options and is executed within a context that provides sharedBuffer (API Gateway or Workflow), the sharedBuffer completely replaces commonBuffer.
Example: Understanding the Override
const requests = [
{
id: 'request-1',
requestOptions: {
reqData: { hostname: 'api.example.com', path: '/data1' },
resReq: true,
// This commonBuffer will be IGNORED (overridden by gateway's sharedBuffer)
commonBuffer: {
source: 'request-level',
value: 'ignored'
},
preExecution: {
preExecutionHook: async ({ inputParams, commonBuffer }) => {
// The commonBuffer parameter here actually refers to the gateway's sharedBuffer
console.log(commonBuffer.source); // Outputs: 'gateway-level'
console.log(commonBuffer.value); // Outputs: 'active'
// Modifications affect the gateway's sharedBuffer (shared across all requests)
commonBuffer.modifiedBy = 'request-1';
return { reqData: inputParams.reqData };
}
}
}
},
{
id: 'request-2',
requestOptions: {
reqData: { hostname: 'api.example.com', path: '/data2' },
resReq: true,
preExecution: {
preExecutionHook: async ({ inputParams, commonBuffer }) => {
// Can see modifications from request-1 (same sharedBuffer reference)
console.log(commonBuffer.modifiedBy); // Outputs: 'request-1'
return { reqData: inputParams.reqData };
}
}
}
}
];
await stableApiGateway(requests, {
concurrentExecution: false,
// This sharedBuffer OVERRIDES all request-level commonBuffers
sharedBuffer: {
source: 'gateway-level',
value: 'active'
}
});
// Override Rule: sharedBuffer (gateway/workflow level) > commonBuffer (request level)
Standalone Request vs Gateway Context
// Scenario 1: Standalone stableRequest - commonBuffer is used
await stableRequest({
reqData: { hostname: 'api.example.com', path: '/data' },
resReq: true,
commonBuffer: {
mode: 'standalone',
counter: 0
},
preExecution: {
preExecutionHook: async ({ inputParams, commonBuffer }) => {
// commonBuffer refers to this request's own commonBuffer in this context
console.log(commonBuffer.mode); // 'standalone'
commonBuffer.counter++;
return { reqData: inputParams.reqData };
}
}
});
// Scenario 2: Same request config in API Gateway - sharedBuffer overrides
await stableApiGateway([{
id: 'req1',
requestOptions: {
reqData: { hostname: 'api.example.com', path: '/data' },
resReq: true,
commonBuffer: {
mode: 'standalone', // IGNORED
counter: 0 // IGNORED
},
preExecution: {
preExecutionHook: async ({ inputParams, commonBuffer }) => {
// commonBuffer refers to gateway's sharedBuffer
console.log(commonBuffer.mode); // 'gateway'
commonBuffer.counter++;
return { reqData: inputParams.reqData };
}
}
}
}], {
sharedBuffer: {
mode: 'gateway',
counter: 100 // This is what gets used
}
});
Retry Strategies
Automatically retry failed requests with sophisticated backoff strategies.
Fixed Delay
Constant wait time between each retry attempt.
await stableRequest({
reqData: { hostname: 'api.example.com', path: '/data' },
attempts: 5, // Try up to 5 times
wait: 1000, // Wait timing: 1s, 1s, 1s, 1s (constant)
retryStrategy: RETRY_STRATEGIES.FIXED // Same delay between each retry
});
Linear Backoff
Incrementally increasing delays.
await stableRequest({
reqData: { hostname: 'api.example.com', path: '/data' },
attempts: 5,
wait: 1000, // Wait timing: 1s, 2s, 3s, 4s (linear growth)
retryStrategy: RETRY_STRATEGIES.LINEAR // Delay increases by 'wait' each retry
});
Exponential Backoff
Exponentially growing delays (recommended for most use cases).
await stableRequest({
reqData: { hostname: 'api.example.com', path: '/data' },
attempts: 5,
wait: 1000, // Wait timing: 1s, 2s, 4s, 8s (exponential, 4 retries after the initial attempt)
retryStrategy: RETRY_STRATEGIES.EXPONENTIAL, // Delay doubles each retry
maxAllowedWait: 30000 // Cap maximum delay at 30 seconds
});
Jitter
Add randomness to prevent thundering herd problems.
await stableRequest({
reqData: { hostname: 'api.example.com', path: '/data' },
attempts: 5,
wait: 1000,
retryStrategy: RETRY_STRATEGIES.EXPONENTIAL,
jitter: 500 // Add ±500ms random variation to each wait
}); // Prevents all clients retrying at same time
Custom Response Validation
Retry based on response content, not just HTTP status.
await stableRequest({
reqData: { hostname: 'api.example.com', path: '/job/status' },
resReq: true,
attempts: 10, // Poll up to 10 times
wait: 2000, // Wait 2s between polls
responseAnalyzer: async ({ data }) => {
// Retry (return false) until job is complete
return data.status === 'completed'; // Return true to accept response
}
});
Circuit Breaker Pattern
Prevent cascade failures and system overload with built-in circuit breakers.
Circuit Breaker States
- CLOSED: Normal operation, requests flow through
- OPEN: Too many failures detected, requests blocked immediately
- HALF_OPEN: Testing if service recovered, limited requests allowed
Basic Usage with API Gateway
import { stableApiGateway } from '@emmvish/stable-request';
const requests = [
{ id: 'req1', requestOptions: { reqData: { path: '/users' }, resReq: true, attempts: 3 } },
{ id: 'req2', requestOptions: { reqData: { path: '/orders' }, resReq: true, attempts: 3 } },
{ id: 'req3', requestOptions: { reqData: { path: '/products' }, resReq: true, attempts: 3 } }
];
await stableApiGateway(requests, {
commonRequestData: { hostname: 'unreliable-api.example.com' },
circuitBreaker: {
failureThresholdPercentage: 50, // Open circuit if 50%+ requests fail
minimumRequests: 10, // Need 10+ requests to calculate rate
recoveryTimeoutMs: 60000, // Wait 60s before testing recovery
successThresholdPercentage: 70, // Need 70%+ success to close circuit
halfOpenMaxRequests: 3 // Allow 3 test requests in HALF_OPEN
}
});
Shared Circuit Breaker with API Gateway
Circuit breaker config is shared across all requests in an API Gateway, providing centralized failure tracking:
import { stableApiGateway } from '@emmvish/stable-request';
// Define circuit breaker config
const apiCircuitBreakerConfig = {
failureThresholdPercentage: 50,
minimumRequests: 5,
recoveryTimeoutMs: 60000,
successThresholdPercentage: 70,
halfOpenMaxRequests: 3
};
// All requests in this gateway share the same circuit breaker state
const requests = [
{ id: 'users', requestOptions: { reqData: { path: '/users' }, resReq: true } },
{ id: 'orders', requestOptions: { reqData: { path: '/orders' }, resReq: true } },
{ id: 'products', requestOptions: { reqData: { path: '/products' }, resReq: true } }
];
await stableApiGateway(requests, {
commonRequestData: { hostname: 'api.example.com' },
circuitBreaker: apiCircuitBreakerConfig, // Shared state across all requests
commonAttempts: 3
});
Workflow-Level Circuit Breaker
await stableWorkflow(phases, {
commonRequestData: { hostname: 'api.example.com' },
circuitBreaker: { // Config shared across all phases
failureThresholdPercentage: 50,
minimumRequests: 10,
recoveryTimeoutMs: 60000,
successThresholdPercentage: 70,
halfOpenMaxRequests: 3
},
commonAttempts: 3
});
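Per the STABLE_REQUEST interface, an individual stableRequest also accepts a circuitBreaker option, either a config object or an existing CircuitBreaker instance. A minimal sketch using the config form; whether breaker state is shared across separate calls when only a plain config is passed is an assumption to verify, so reuse a CircuitBreaker instance if you need shared state:
await stableRequest({
  reqData: { hostname: 'flaky-api.example.com', path: '/report' },
  resReq: true,
  attempts: 5,
  wait: 1000,
  circuitBreaker: {
    failureThresholdPercentage: 50,
    minimumRequests: 5,
    recoveryTimeoutMs: 30000,
    successThresholdPercentage: 70,
    halfOpenMaxRequests: 2
  }
});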
Response Caching
Cache responses to reduce load and improve performance.
Basic Caching
await stableRequest({
reqData: { hostname: 'api.example.com', path: '/data' },
resReq: true,
cache: {
enabled: true, // Enable response caching
ttl: 300000 // Cache for 5 minutes (300,000ms)
}
});
Shared Cache Instance
// Create a shared cache config for multiple requests
const cacheConfig = { enabled: true, ttl: 300000 };
// First request: fetches from API and caches response
await stableRequest({
reqData: { hostname: 'api.example.com', path: '/users' },
resReq: true,
cache: cacheConfig // Use shared cache instance
});
// Second request: returns cached data (within TTL), no API call is made
await stableRequest({
reqData: { hostname: 'api.example.com', path: '/users' },
resReq: true,
cache: cacheConfig // Same shared cache config, so the cached response is reused
});
Workflow-Level Caching
await stableWorkflow(phases, {
commonCache: {
enabled: true, // Cache enabled for all requests
ttl: 300000 // 5-minute cache shared across workflow
}
});
Rate Limiting & Concurrency Control
Rate Limiting
Control request rates to respect API quotas.
await stableApiGateway(requests, {
rateLimit: {
maxRequests: 100, // Maximum 100 requests
windowMs: 60000 // Per 60-second window
}
});
Concurrency Limiting
Control the maximum number of simultaneous requests.
await stableApiGateway(requests, {
concurrentExecution: true, // Enable parallel execution
maxConcurrentRequests: 5 // But limit to 5 simultaneous requests
});
Combined Rate & Concurrency Limiting
await stableApiGateway(requests, {
concurrentExecution: true,
maxConcurrentRequests: 10, // Max 10 parallel requests at a time
rateLimit: {
maxRequests: 100, // And max 100 requests
windowMs: 60000 // Per minute (rate limit)
}
});
Workflow Execution Patterns
Sequential & Concurrent Phases
Sequential Phases (Default)
Phases execute one after another.
const result = await stableWorkflow(phases, {
concurrentPhaseExecution: false // Default: phases run sequentially
});
Concurrent Phases
All phases execute simultaneously.
const result = await stableWorkflow(phases, {
concurrentPhaseExecution: true // All phases start at once
});
Concurrent Requests within Phase
const phases = [
{
id: 'fetch-data',
concurrentExecution: true, // Requests within phase run in parallel
requests: [...] // While phases themselves may be sequential
}
];
Mixed Execution Mode
Mark specific phases for concurrent execution while others run sequentially.
const phases = [
{
id: 'auth',
requests: [...] // Phase 1: Runs first (sequential)
},
{
id: 'fetch-users',
markConcurrentPhase: true, // Phase 2: Concurrent group starts
requests: [...]
},
{
id: 'fetch-products',
markConcurrentPhase: true, // Phase 3: Runs with Phase 2 in parallel
requests: [...]
},
{
id: 'process',
requests: [...] // Phase 4: Runs after Phases 2 & 3 complete
}
];
const result = await stableWorkflow(phases, {
enableMixedExecution: true // Enable marking phases for concurrency
});
Non-Linear Workflows
Implement conditional logic with phase decision hooks.
Available Actions
- `CONTINUE`: Proceed to next phase normally
- `SKIP`: Skip to a specific phase
- `JUMP`: Jump backwards to re-execute a phase
- `REPLAY`: Re-execute current phase
- `TERMINATE`: End workflow immediately
import { PHASE_DECISION_ACTIONS } from '@emmvish/stable-request';
const phases = [
{
id: 'validate',
requests: [{ id: 'check', requestOptions: { reqData: { path: '/validate' }, resReq: true } }],
phaseDecisionHook: async ({ phaseResult, sharedBuffer }) => {
if (!phaseResult.success) {
return { action: PHASE_DECISION_ACTIONS.TERMINATE }; // End workflow early
}
return { action: PHASE_DECISION_ACTIONS.CONTINUE }; // Proceed to next phase
}
},
{
id: 'process',
requests: [{ id: 'process', requestOptions: { reqData: { path: '/process' }, resReq: true } }],
maxReplayCount: 3, // Allow up to 3 replays of this phase
phaseDecisionHook: async ({ phaseResult, sharedBuffer, replayCount }) => {
if (phaseResult.failedRequests > 0 && replayCount < 3) {
return { action: PHASE_DECISION_ACTIONS.REPLAY }; // Retry this phase (replayCount++)
}
return { action: PHASE_DECISION_ACTIONS.CONTINUE }; // Success or max replays reached
}
},
{
id: 'finalize',
requests: [{ id: 'finalize', requestOptions: { reqData: { path: '/finalize' }, resReq: true } }]
}
];
const result = await stableWorkflow(phases, {
enableNonLinearExecution: true, // Enable conditional logic
maxWorkflowIterations: 1000 // Prevent infinite loops
});
Branched Workflows
Execute independent branches of work, each with its own phases.
const branches = [
{
id: 'user-service', // Branch 1: User operations
phases: [
{ id: 'validate-user', requests: [{ id: 'validate', requestOptions: { reqData: { path: '/users/validate' }, resReq: true } }] },
{ id: 'update-profile', requests: [{ id: 'update', requestOptions: { reqData: { path: '/users/update', method: REQUEST_METHODS.POST }, resReq: true } }] }
]
},
{
id: 'inventory-service', // Branch 2: Inventory operations
phases: [
{ id: 'check-stock', requests: [{ id: 'check', requestOptions: { reqData: { path: '/inventory/check' }, resReq: true } }] },
{ id: 'reserve-items', requests: [{ id: 'reserve', requestOptions: { reqData: { path: '/inventory/reserve', method: REQUEST_METHODS.POST }, resReq: true } }] }
]
},
{
id: 'payment-service', // Branch 3: Payment operations
phases: [
{ id: 'authorize-payment', requests: [{ id: 'authorize', requestOptions: { reqData: { path: '/payment/authorize', method: REQUEST_METHODS.POST }, resReq: true } }] },
{ id: 'capture-payment', requests: [{ id: 'capture', requestOptions: { reqData: { path: '/payment/capture', method: REQUEST_METHODS.POST }, resReq: true } }] }
]
}
];
const result = await stableWorkflow([], {
enableBranchExecution: true, // Enable branched workflow mode
branches, // Define independent branches
concurrentBranchExecution: true, // Run all branches in parallel
commonRequestData: { hostname: 'api.example.com' },
handleBranchCompletion: async ({ branchId, success, branchResults }) => {
console.log(`Branch ${branchId}: ${success ? 'SUCCESS' : 'FAILED'}`);
}
});
Branch Decision Hooks
const branches = [
{
id: 'critical-service',
phases: [{
id: 'critical-operation',
requests: [{ id: 'critical', requestOptions: { reqData: { path: '/critical' }, resReq: true } }],
phaseDecisionHook: async ({ phaseResult }) => {
if (!phaseResult.success) {
return {
action: PHASE_DECISION_ACTIONS.TERMINATE, // Terminate this branch
};
}
return { action: PHASE_DECISION_ACTIONS.CONTINUE }; // Continue branch execution
}
}]
}
];
const result = await stableWorkflow([], {
enableBranchExecution: true,
branches,
handleBranchDecision: async ({ branchId, decision }) => {
console.log(`Branch ${branchId} decision:`, decision); // Monitor branch decisions
}
});
Observability & Hooks
Comprehensive hooks for monitoring, logging, and debugging.
Request-Level Hooks
await stableRequest({
reqData: { hostname: 'api.example.com', path: '/data' },
// Called on each failed attempt (before retry)
handleErrors: async ({ error, errorLog, attempt, totalAttempts }) => {
logger.error(`Attempt ${attempt}/${totalAttempts} failed:`, errorLog);
},
// Called on each successful attempt (including retries)
handleSuccessfulAttemptData: async ({ data, status, attempt }) => {
logger.info(`Attempt ${attempt} succeeded with status ${status}`);
},
// Validate responses (return false to retry even on 2xx status)
responseAnalyzer: async ({ data, status, headers }) => {
return status === 200 && data.status === 'success'; // Custom validation logic
},
// Handle final error after all retries exhausted
finalErrorAnalyzer: async ({ error, allErrors }) => {
logger.error('All attempts failed:', allErrors);
return false; // false = throw error, true = suppress
}
});
Workflow-Level Hooks
await stableWorkflow(phases, {
// Called after each phase completes (success or failure)
handlePhaseCompletion: async ({
phaseResult, // Phase execution results
sharedBuffer, // Shared workflow state
params // Custom parameters
}) => {
logger.info(`Phase ${phaseResult.phaseId} completed:`, {
success: phaseResult.success,
totalRequests: phaseResult.totalRequests,
executionTime: phaseResult.executionTime
});
},
// Called when phase encounters error
handlePhaseError: async ({ error, phaseResult, params }) => {
logger.error(`Phase ${phaseResult.phaseId} failed:`, error);
},
// Called for non-linear workflow decisions
handlePhaseDecision: async ({ workflowId, decision, phaseResult }) => {
console.log(`Phase decision: ${decision.action}`); // CONTINUE, SKIP, JUMP, REPLAY, TERMINATE
if (decision.targetPhaseId) {
console.log(`Target: ${decision.targetPhaseId}`);
}
},
// Called when branch needs to take a decision
handleBranchDecision: async ({ workflowId, branchId, branchResults, success }) => {
console.log(`Branch ID: ${branchId}`);
},
// Called upon branch completion
handleBranchCompletion: async ({ workflowId, branchResult }) => {
console.log(`Branch ${branchResult.branchId} completed`);
},
// Pass custom parameters to workflow hooks
workflowHookParams: {
handlePhaseCompletionParams: { environment: 'production' },
handlePhaseErrorParams: { severity: 'high' }
},
});
Pre-Execution Hook
Dynamically modify requests before they are executed. The pre-execution hook allows you to inspect and transform request data based on the current execution state, making it ideal for adding dynamic headers, authentication, or conditional request modifications.
Usage
await stableRequest({
reqData: { hostname: 'api.example.com', path: '/data' },
resReq: true,
preExecution: {
preExecutionHook: async ({ inputParams, commonBuffer }) => {
// Modify request data based on buffer state
const reqData = { ...inputParams.reqData, hostname: commonBuffer.hostname };
return { reqData }; // Return modified request data
}
},
commonBuffer: { hostname: 'abc.com' } // Initial buffer state
});
Trial Mode
Test without making real API calls using probabilistic success/failure simulation.
Basic Trial Mode
await stableRequest({
reqData: { hostname: 'api.example.com', path: '/data' },
resReq: true,
attempts: 3,
trialMode: {
enabled: true, // Enable trial mode (no real API calls)
successProbability: 0.7, // 70% chance of simulated success
trialModeData: { mock: 'data' } // Mock response data
}
});
A/B Testing with Trial Mode
// Test feature flags with different success probabilities
const runTest = async (featureName, successRate) => {
const result = await stableRequest({
reqData: { hostname: 'api.example.com', path: '/feature-test' },
resReq: true,
attempts: 5,
trialMode: {
enabled: true,
successProbability: successRate, // Variable success rate for testing
trialModeData: { feature: featureName, enabled: true }
}
});
return result;
};
// Test different configurations without real API calls
await runTest('feature-A', 0.9); // 90% success rate
await runTest('feature-B', 0.5); // 50% success rate
State Persistence
Persist workflow state to external storage (databases, Redis, file systems) for resilience, recovery, and distributed execution.
Configuration
State persistence is configured using the StatePersistenceConfig interface:
interface StatePersistenceConfig {
persistenceFunction: (options: StatePersistenceOptions) => Promise<Record<string, any>> | Record<string, any>;
persistenceParams?: any; // Custom parameters passed to your persistence function
loadBeforeHooks?: boolean; // Load state before executing hooks (default: false)
storeAfterHooks?: boolean; // Store state after hook execution (default: false)
}
interface StatePersistenceOptions {
executionContext: ExecutionContext; // Context about current execution
params?: any; // Your custom persistenceParams
buffer: Record<string, any>; // The state buffer to store/load
}
interface ExecutionContext {
workflowId: string; // Unique workflow identifier
phaseId?: string; // Current phase ID (if in a phase)
branchId?: string; // Current branch ID (if in a branch)
requestId?: string; // Current request ID (if applicable)
}
How It Works
The persistence function is called in two modes:
- LOAD Mode: When `buffer` is empty/null, return the stored state
- STORE Mode: When `buffer` contains data, save it to your storage
// Your persistence function
const myPersistenceFunction = async ({ executionContext, params, buffer }) => {
if (buffer && Object.keys(buffer).length > 0) {
// STORE MODE: Save the buffer to storage
await myStorage.save(executionContext.workflowId, buffer);
return {};
} else {
// LOAD MODE: Return stored state
const stored = await myStorage.load(executionContext.workflowId);
return stored || {}; // Return empty object if nothing stored
}
};
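The same LOAD/STORE contract can be backed by the file system for local development. A minimal sketch (the directory layout and JSON serialization here are illustrative choices, not part of the library):
import { promises as fs } from 'fs';
import path from 'path';

const STATE_DIR = './workflow-state'; // hypothetical local state directory

const persistToFile = async ({ executionContext, buffer }) => {
  const file = path.join(STATE_DIR, `${executionContext.workflowId}.json`);
  if (buffer && Object.keys(buffer).length > 0) {
    // STORE MODE: write the buffer as JSON
    await fs.mkdir(STATE_DIR, { recursive: true });
    await fs.writeFile(file, JSON.stringify(buffer, null, 2));
    return {};
  }
  // LOAD MODE: return previously stored state, or an empty object
  try {
    return JSON.parse(await fs.readFile(file, 'utf8'));
  } catch {
    return {};
  }
};
// Usable wherever a StatePersistenceConfig is accepted, e.g.:
// statePersistence: { persistenceFunction: persistToFile, loadBeforeHooks: true, storeAfterHooks: true }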
Checkpoint-Based Global Persistence
Track workflow progress with global checkpoints that persist completed phases:
import Redis from 'ioredis';
const redis = new Redis();
async function createCheckpoint({ executionContext, params, buffer }) {
const { workflowId, phaseId } = executionContext;
const { ttl = 86400 } = params || {};
const checkpointKey = `checkpoint:${workflowId}`;
if (buffer && Object.keys(buffer).length > 0) {
// STORE: Save checkpoint with completed phases
const existingData = await redis.get(checkpointKey);
const existing = existingData ? JSON.parse(existingData) : {};
const checkpointData = {
...existing,
completedPhases: [...new Set([...(existing.completedPhases || []), ...(buffer.completedPhases || [])])],
lastPhase: phaseId || existing.lastPhase,
lastUpdated: new Date().toISOString(),
progress: buffer.progress || existing.progress || 0,
processedRecords: buffer.recordsProcessed || existing.processedRecords || 0
};
await redis.setex(checkpointKey, ttl, JSON.stringify(checkpointData));
console.log(`Checkpoint saved: ${phaseId} (Progress: ${checkpointData.progress}%)`);
} else {
// LOAD: Return checkpoint data
const data = await redis.get(checkpointKey);
return data ? JSON.parse(data) : { completedPhases: [], processedRecords: 0 };
}
return {};
}
// Use global checkpoint for all phases
await stableWorkflow(phases, {
workflowId: 'migration-12345',
enableNonLinearExecution: true, // Enable conditional logic for skipping
sharedBuffer: {
completedPhases: [], // Track which phases completed
progress: 0
},
commonStatePersistence: {
persistenceFunction: createCheckpoint, // Checkpoint function (LOAD/STORE mode)
persistenceParams: { ttl: 7200 }, // Custom params: 2 hours TTL
loadBeforeHooks: true, // Load state before hooks execute
storeAfterHooks: true // Save state after hooks execute
}
});
Resume workflows from checkpoints with automatic phase skipping:
// Resume a workflow from last saved state
async function resumeWorkflow(workflowId: string) {
const phases = [
{
id: 'phase-1',
requests: [...],
phaseDecisionHook: async ({ phaseResult, sharedBuffer }) => {
// Check if this phase already completed (recovery scenario)
if (sharedBuffer.completedPhases?.includes('phase-1')) {
console.log('Phase-1 already completed, skipping...');
return {
action: PHASE_DECISION_ACTIONS.SKIP, // Skip completed phase
skipToPhaseId: 'phase-2' // Jump to next incomplete phase
};
}
// Verify success before marking complete
if (phaseResult.success) {
sharedBuffer.completedPhases = [...(sharedBuffer.completedPhases || []), 'phase-1'];
console.log('📊 Phase-1 completed, saving checkpoint...');
return { action: PHASE_DECISION_ACTIONS.CONTINUE };
}
return { action: PHASE_DECISION_ACTIONS.TERMINATE }; // Fail workflow on error
},
statePersistence: {
persistenceFunction: persistToDatabase, // Your own database-backed persistence function (same LOAD/STORE contract)
persistenceParams: { db },
loadBeforeHooks: true, // Load before decision hook
storeAfterHooks: true // Save after decision hook
}
},
{
id: 'phase-2',
requests: [...],
phaseDecisionHook: async ({ phaseResult, sharedBuffer }) => {
if (sharedBuffer.completedPhases?.includes('phase-2')) {
console.log('Phase-2 already completed, skipping...');
return {
action: PHASE_DECISION_ACTIONS.SKIP,
skipToPhaseId: 'phase-3' // Next incomplete phase in the full workflow (not shown here)
};
}
if (phaseResult.success) {
sharedBuffer.completedPhases = [...(sharedBuffer.completedPhases || []), 'phase-2'];
return { action: PHASE_DECISION_ACTIONS.CONTINUE };
}
return { action: PHASE_DECISION_ACTIONS.TERMINATE };
},
statePersistence: {
persistenceFunction: persistToDatabase,
persistenceParams: { db },
loadBeforeHooks: true,
storeAfterHooks: true
}
}
];
const result = await stableWorkflow(phases, {
workflowId,
enableNonLinearExecution: true, // Required for SKIP action
sharedBuffer: { completedPhases: [] } // Initialize buffer
});
return result;
}
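Once a checkpoint exists, resuming only requires re-running with the same workflowId; loadBeforeHooks reloads completedPhases before each decision hook, so finished phases are skipped automatically. For example (the ID is illustrative and matches the earlier checkpoint example):
const resumed = await resumeWorkflow('migration-12345');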
Best Practices
Comprehensive guide to building production-ready HTTP workflows using stable-request's full feature set.
1. Implement Progressive Retry Strategies
Use exponential backoff with jitter for most scenarios to prevent overwhelming services and avoid thundering herd problems.
import { stableRequest, RETRY_STRATEGIES } from '@emmvish/stable-request';
await stableRequest({
reqData: {
hostname: 'api.example.com',
path: '/data',
timeout: 5000 // Always set timeouts to prevent hanging
},
attempts: 5, // 5 attempts = 1 initial + 4 retries
wait: 1000, // Start with 1s delay
retryStrategy: RETRY_STRATEGIES.EXPONENTIAL, // 1s, 2s, 4s, 8s growth
jitter: 500, // ±500ms randomization prevents thundering herd
maxAllowedWait: 30000, // Cap maximum wait at 30s
logAllErrors: true // Track all failures for debugging
});
2. Deploy Circuit Breakers for External Dependencies
Prevent cascading failures by adding a circuit breaker to every external API call. Use the API Gateway (stableApiGateway) to share a single circuit breaker across related requests.
import { stableApiGateway } from '@emmvish/stable-request';
// Define circuit breaker config per external service
const paymentServiceBreakerConfig = {
failureThresholdPercentage: 50, // Open circuit after 50% failure rate
minimumRequests: 10, // Need 10+ requests to calculate rate
recoveryTimeoutMs: 60000, // Wait 60s before testing recovery (HALF_OPEN)
successThresholdPercentage: 80, // Need 80% success to close circuit
halfOpenMaxRequests: 3 // Allow 3 test requests in HALF_OPEN state
};
// All payment requests share the same circuit breaker state
const paymentRequests = [
{ id: 'charge', requestOptions: { reqData: { path: '/charge' }, resReq: true, attempts: 3 } },
{ id: 'refund', requestOptions: { reqData: { path: '/refund' }, resReq: true, attempts: 3 } },
{ id: 'validate', requestOptions: { reqData: { path: '/validate' }, resReq: true, attempts: 3 } }
];
await stableApiGateway(paymentRequests, {
commonRequestData: { hostname: 'payment.api.com' },
circuitBreaker: paymentServiceBreakerConfig // Shared circuit state for all payment requests
});
3. Leverage Request Grouping for Different SLAs
Separate requests by criticality with different retry policies and timeouts.
await stableApiGateway(requests, {
requestGroups: [
{
id: 'critical', // High-priority requests
commonConfig: {
commonAttempts: 7, // More retries for critical requests
commonWait: 2000,
commonRetryStrategy: RETRY_STRATEGIES.EXPONENTIAL,
commonJitter: 500,
commonMaxAllowedWait: 30000
}
},
{
id: 'optional', // Low-priority requests
commonConfig: {
commonAttempts: 1, // Single attempt for optional data
commonWait: 500
}
}
]
});
4. Enable Intelligent Caching for Read Operations
Cache GET requests with an appropriate TTL and a bounded number of cache entries.
const apiCacheConfig = {
enabled: true,
ttl: 300000, // Cache for 5 minutes
maxSize: 500, // Store max 500 cache entries
excludeMethods: ['POST', 'PUT', 'PATCH', 'DELETE'] // Only cache safe methods
};
await stableRequest({
reqData: { hostname: 'api.example.com', path: '/users/profile', method: REQUEST_METHODS.GET },
resReq: true,
cache: apiCacheConfig // Shared cache across requests
});
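Because the cache is shared across requests, a second GET within the 5-minute TTL can be served from the same cache, while writes always bypass it thanks to excludeMethods. A short sketch reusing apiCacheConfig (the hostname and request body are illustrative):
// Repeat GET within the TTL: can be answered from the shared cache
const profile = await stableRequest({
  reqData: { hostname: 'api.example.com', path: '/users/profile', method: REQUEST_METHODS.GET },
  resReq: true,
  cache: apiCacheConfig
});
// POST is listed in excludeMethods, so it is never cached
await stableRequest({
  reqData: { hostname: 'api.example.com', path: '/users/profile', method: REQUEST_METHODS.POST, body: { theme: 'dark' } },
  resReq: true,
  cache: apiCacheConfig
});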
5. Build Comprehensive Observability
Track all request attempts, failures, and performance metrics. The example below assumes metrics and logger are your own instrumentation clients; they are not part of stable-request.
await stableWorkflow(phases, {
// Common error handler: track all failed attempts
commonHandleErrors: async ({ error, errorLog, attempt, executionContext }) => {
metrics.increment('api.error', {
endpoint: executionContext.requestId, // Which request failed
attempt: attempt.toString() // Which attempt (1, 2, 3...)
});
logger.error('Request failed', { attempt, error: errorLog });
},
// Phase completion handler: track phase performance
handlePhaseCompletion: async ({ phaseResult }) => {
metrics.histogram('phase.duration', phaseResult.executionTime); // Track timing
logger.info('Phase completed', {
phaseId: phaseResult.phaseId,
success: phaseResult.success // Log success/failure
});
}
});
6. Master Configuration Cascading
Leverage hierarchical configuration: define workflow-wide defaults and override them at the group, phase, or request level where needed.
await stableWorkflow(phases, {
// Workflow-level defaults (apply to all requests) - Lowest priority
commonAttempts: 3,
commonWait: 1000,
// Group-level overrides (apply to matching groupId) - Medium priority
requestGroups: [{
id: 'critical',
commonConfig: {
commonAttempts: 7 // Critical requests get 7 attempts
}
}]
});
// Final cascade: Request > Group > Phase > Workflow (highest to lowest priority)
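To illustrate the top of the cascade, an individual request can override every shared default directly in its requestOptions. A sketch (the request id and endpoint are illustrative):
const criticalRequests = [{
  id: 'charge-card',
  requestOptions: {
    reqData: { path: '/charge' },
    resReq: true,
    attempts: 10 // Request-level value wins over the group (7) and workflow (3) defaults
  }
}];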
7. Use Shared Buffers for State Management
Share data across phases without global variables.
await stableWorkflow(phases, {
sharedBuffer: { configVersion: null, apiKey: null }, // Initialize workflow state
handlePhaseCompletion: async ({ phaseResult, sharedBuffer }) => {
if (phaseResult.phaseId === 'fetch-config') {
// Store config data for use in subsequent phases
sharedBuffer.configVersion = phaseResult.responses[0].data.version;
sharedBuffer.apiKey = phaseResult.responses[0].data.apiKey;
}
}
});
8. Implement Smart Pre-Execution Hooks
Dynamically modify requests based on runtime conditions.
await stableRequest({
reqData: { hostname: 'api.example.com', path: '/data' },
resReq: true,
commonBuffer: { authToken: null, tokenExpiry: null }, // Track auth state
preExecution: {
preExecutionHook: async ({ inputParams, commonBuffer, attempt }) => {
const now = Date.now();
// Refresh auth token if expired or missing
if (!commonBuffer.authToken || now >= commonBuffer.tokenExpiry) {
const tokenResponse = await getAuthToken();
commonBuffer.authToken = tokenResponse.token;
commonBuffer.tokenExpiry = now + (tokenResponse.expiresIn * 1000);
}
// Inject auth token and metadata into request
const reqData = {
...inputParams.reqData,
headers: {
'Authorization': `Bearer ${commonBuffer.authToken}`,
'X-Attempt': attempt.toString() // Track retry attempt number
}
};
return { reqData }; // Return modified request
}
}
});
9. Leverage Non-Linear Workflows
Implement conditional execution with phase decision hooks.
import { PHASE_DECISION_ACTIONS } from '@emmvish/stable-request';
const phases = [{
id: 'process-data',
requests: [{ id: 'process', requestOptions: { reqData: { path: '/process' }, resReq: true } }],
maxReplayCount: 3, // Allow up to 3 replays
phaseDecisionHook: async ({ phaseResult, replayCount }) => {
// Retry phase if some requests failed and under replay limit
if (phaseResult.failedRequests > 0 && replayCount < 3) {
return { action: PHASE_DECISION_ACTIONS.REPLAY }; // Retry this phase
}
// Accept partial success (80%+ success rate)
if (phaseResult.successfulRequests >= phaseResult.totalRequests * 0.8) {
return { action: PHASE_DECISION_ACTIONS.CONTINUE }; // Proceed to next phase
}
// Fail workflow if below threshold
return { action: PHASE_DECISION_ACTIONS.TERMINATE }; // Stop workflow
}
}];
await stableWorkflow(phases, {
enableNonLinearExecution: true, // Enable conditional logic
maxWorkflowIterations: 1000 // Prevent infinite loops
});
10. Deploy Branched Workflows
Execute independent service calls in parallel branches.
const branches = [
{
id: 'payment-service', // Branch 1: Payment flow
phases: [
{ id: 'authorize-payment', requests: [{ id: 'auth', requestOptions: { reqData: { path: '/payment/authorize', method: REQUEST_METHODS.POST }, resReq: true } }] },
{ id: 'capture-payment', requests: [{ id: 'capture', requestOptions: { reqData: { path: '/payment/capture', method: REQUEST_METHODS.POST }, resReq: true } }] }
]
},
{
id: 'notification-service', // Branch 2: Notifications (independent)
markConcurrentBranch: true, // Run concurrently with payment
phases: [
{ id: 'send-email', requests: [{ id: 'email', requestOptions: { reqData: { path: '/notify/email', method: REQUEST_METHODS.POST }, resReq: true } }] },
{ id: 'send-sms', requests: [{ id: 'sms', requestOptions: { reqData: { path: '/notify/sms', method: REQUEST_METHODS.POST }, resReq: true } }] }
]
}
];
await stableWorkflow([], {
enableBranchExecution: true, // Enable branched workflow mode
branches, // Define independent branches
concurrentBranchExecution: true, // Execute all branches in parallel
commonRequestData: { hostname: 'api.example.com' } // Shared config across branches
});
11. Implement State Persistence
Enable recovery from failures with state persistence.
import Redis from 'ioredis';
const redis = new Redis();
// Persistence function handles both LOAD and STORE operations
const persistToRedis = async ({ executionContext, buffer }) => {
const key = `workflow:${executionContext.workflowId}`;
if (buffer && Object.keys(buffer).length > 0) {
// STORE MODE: Save state to Redis with 2-hour expiry
await redis.setex(key, 7200, JSON.stringify(buffer));
return {};
}
// LOAD MODE: Retrieve state from Redis
const data = await redis.get(key);
return data ? JSON.parse(data) : {}; // Return empty if no state found
};
await stableWorkflow(phases, {
commonStatePersistence: {
persistenceFunction: persistToRedis, // Persistence handler
loadBeforeHooks: true, // Load state before hooks execute
storeAfterHooks: true // Save state after hooks execute
}
});
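This persistToRedis handler is also reused in the complete example under practice 15 below.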
12. Use Trial Mode for Testing
Test workflow logic without real API calls.
await stableWorkflow(phases, {
commonTrialMode: {
enabled: true, // Enable trial mode (no real API calls)
successProbability: 0.7, // 70% of requests succeed in simulation
trialModeData: { mock: 'data' } // Mock response data returned
}
});
13. Validate Responses Beyond HTTP Status
Implement custom response validation.
await stableRequest({
  reqData: { hostname: 'api.example.com', path: '/jobs/status' }, // Illustrative job-status endpoint
  responseAnalyzer: async ({ data, status }) => {
    if (status !== 200) return false; // Reject non-200 status (retry)
    if (data.status === 'processing') return false; // Job still running (retry)
    if (data.status === 'completed') return true; // Job complete (accept response)
    throw new Error(`Unexpected status: ${data.status}`); // Fail immediately
  }
});
14. Configure Rate Limiting & Concurrency
Respect API quotas and prevent resource exhaustion.
await stableApiGateway(requests, {
concurrentExecution: true, // Enable parallel execution
maxConcurrentRequests: 10, // But limit to 10 simultaneous
rateLimit: {
maxRequests: 100, // Maximum 100 requests
windowMs: 60000 // Per 60-second window (rate limit)
}
});
15. Production-Ready Complete Example
This example combines the patterns above into a single production-grade workflow.
import {
stableWorkflow,
RETRY_STRATEGIES,
REQUEST_METHODS,
PHASE_DECISION_ACTIONS,
} from '@emmvish/stable-request';
// Shared circuit breaker for payment service
const paymentBreakerConfig = {
failureThresholdPercentage: 50,
minimumRequests: 10,
recoveryTimeoutMs: 60000
};
// Shared cache for GET requests
const apiCacheConfig = { enabled: true, ttl: 300000, excludeMethods: ['POST', 'PUT', 'PATCH', 'DELETE'] };
const result = await stableWorkflow([], {
workflowId: 'order-processing',
enableBranchExecution: true, // Enable branched workflow
branches: [
{
id: 'payment',
phases: [{
id: 'charge',
requests: [{
id: 'payment',
requestOptions: {
reqData: {
path: '/payment/charge',
method: REQUEST_METHODS.POST,
timeout: 10000 // 10-second timeout
},
resReq: true,
attempts: 7, // 7 attempts for critical payment
retryStrategy: RETRY_STRATEGIES.EXPONENTIAL,
jitter: 500, // Random delay variation
preExecution: {
preExecutionHook: async ({ inputParams, commonBuffer }) => {
// Inject dynamic order amount from shared buffer
const reqData = {
...inputParams.reqData,
body: { amount: commonBuffer.orderAmount }
};
return { reqData };
}
}
}
}],
statePersistence: {
persistenceFunction: persistToRedis, // Enable state persistence
loadBeforeHooks: true,
storeAfterHooks: true
},
circuitBreaker: paymentBreakerConfig, // Shared circuit breaker
commonConfig: {
commonCache: apiCacheConfig // Shared cache for GET requests
}
}]
}
],
sharedBuffer: { orderAmount: 100, completedPhases: [] }, // Workflow state
commonRequestData: { hostname: 'api.example.com' },
maxConcurrentRequests: 10, // Concurrency limit
rateLimit: { maxRequests: 100, windowMs: 60000 }, // Rate limit
commonHandleErrors: async ({ error, attempt, executionContext }) => {
console.error('Request error:', { request: executionContext.requestId, attempt });
}
});