# Chain Pattern

How a handler can schedule its own follow-up cue.
## What chaining is
A handler finishes its work and, before exiting, schedules the next cue. The next cue could be a retry, a follow-up agent turn, a verification pass, or a completely different task. The pattern looks like:

```
cue_A fires → handler_A runs → handler_A creates cue_B
                                        ↓
cue_B fires → handler_B runs → ...
```
This is how agent loops actually loop in production: no long-running process, no in-memory state, no "sleep 3 hours." Each step is a cue. Each cue is a crash-safe scheduling primitive with its own outcome record.
## What the worker injects
Starting in `cueapi-worker` 0.2.1, handlers receive credentials in the environment automatically:

| Variable | Value |
|---|---|
| `CUEAPI_API_KEY` | The worker's CueAPI API key |
| `CUEAPI_BASE_URL` | The CueAPI base URL (self-host aware) |
No separate auth setup, no secret plumbing in the YAML. The handler can POST to `$CUEAPI_BASE_URL/v1/cues` with `Authorization: Bearer $CUEAPI_API_KEY`.
See Environment Variables for the full list.
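Before chaining, a handler can assemble the request target and auth header from the injected variables in one place. A minimal sketch — the helper name `cueapi_request_parts` is illustrative, not part of the worker:

```python
import os

def cueapi_request_parts():
    """Build the cue-creation URL and auth headers from the variables the
    worker injects. Raises KeyError if run outside a cueapi-worker
    environment, which fails fast instead of POSTing to nowhere."""
    base_url = os.environ["CUEAPI_BASE_URL"]
    api_key = os.environ["CUEAPI_API_KEY"]
    url = f"{base_url}/v1/cues"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    return url, headers
```

Centralizing this in one helper keeps the examples below from repeating the same environment lookups.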
## Minimal chain — bash
```bash
#!/bin/bash
# handler_A: does its work, then schedules handler_B to run in 1 hour.
python3 do_work.py
status=$?

if [ $status -eq 0 ]; then
  at_time=$(date -u -v+1H +"%Y-%m-%dT%H:%M:%SZ")  # macOS; on Linux use: date -u -d '+1 hour'
  curl -s -X POST "$CUEAPI_BASE_URL/v1/cues" \
    -H "Authorization: Bearer $CUEAPI_API_KEY" \
    -H "Content-Type: application/json" \
    -d "{
      \"name\": \"follow-up-${CUEAPI_EXECUTION_ID}\",
      \"at\": \"$at_time\",
      \"worker\": true,
      \"payload\": {
        \"task\": \"handler-b\",
        \"parent_execution_id\": \"$CUEAPI_EXECUTION_ID\"
      }
    }"
fi

exit $status
```

## Chain in Python
```python
import json
import os
import urllib.request
from datetime import datetime, timedelta, timezone

def schedule_next(task: str, delay_minutes: int, payload: dict):
    at = (datetime.now(timezone.utc) + timedelta(minutes=delay_minutes)).isoformat()
    body = json.dumps({
        "name": f"{task}-{os.environ['CUEAPI_EXECUTION_ID']}",
        "at": at,
        "worker": True,
        "payload": {"task": task, **payload},
    }).encode()
    req = urllib.request.Request(
        f"{os.environ['CUEAPI_BASE_URL']}/v1/cues",
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ['CUEAPI_API_KEY']}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())

# ... do work ...
schedule_next(
    task="verify-result",
    delay_minutes=30,
    payload={"parent_execution_id": os.environ["CUEAPI_EXECUTION_ID"]},
)
```

## Carrying context across the chain
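The same helper also covers the retry case mentioned above: on failure, the handler schedules itself again with an increasing delay. A minimal sketch — `retry_count` is a convention this example invents, not a CueAPI field; it simply rides along in the payload like any other state:

```python
def next_retry(payload: dict, max_retries: int = 5, base_delay_minutes: int = 5):
    """Compute the follow-up cue for a failed run, with exponential backoff.

    Returns (delay_minutes, new_payload) to hand to a scheduling call like
    schedule_next above, or None once the retry budget is exhausted.
    """
    retries = payload.get("retry_count", 0)
    if retries >= max_retries:
        return None  # give up; the outcome record documents the failure
    delay = base_delay_minutes * (2 ** retries)  # 5, 10, 20, 40, 80 minutes
    return delay, {**payload, "retry_count": retries + 1}
```

Because each retry is its own cue with its own outcome record, a stuck task shows up as a visible chain of failed executions rather than a silent in-process loop.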
CueAPI cues are stateless — every payload is delivered fresh. To pass state between steps:
- Small state (IDs, flags, short strings): put it in the payload. It round-trips through CueAPI untouched.
- Large state: keep it in your own storage (S3, KV, Postgres) and pass a reference in the payload. CueAPI's payloads are bounded; use them for coordination, not as a blob store.
- Parent execution linking: set `parent_execution_id` in the payload. Outcome queries on the parent will show the full lineage in your dashboard (and can be joined in your own analytics).
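To make the large-state bullet concrete, here is a sketch of a payload that passes a storage reference instead of the data itself. The task name, field names, and S3 path are all made up for illustration:

```python
import json
import os

# The payload carries only coordination fields: a pointer to the blob in
# your own storage, plus the lineage link. CueAPI never sees the blob.
payload = {
    "task": "summarize-report",  # illustrative task name
    "report_ref": "s3://example-bucket/runs/report.json",  # reference, not data
    "parent_execution_id": os.environ.get("CUEAPI_EXECUTION_ID", "exec_unknown"),
}

# Stays small and JSON-serializable; the next handler dereferences
# report_ref against its own storage before doing any work.
body = json.dumps(payload)
```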
## Combining with the outcome file
The chain and the outcome file are independent. A handler can do both in the same run:
```python
import json
import os

# ... work happens ...

# Report evidence for the current execution.
with open(os.environ["CUEAPI_OUTCOME_FILE"], "w") as f:
    json.dump({
        "success": True,
        "external_id": "pr_42",
        "result_url": "https://github.com/org/repo/pull/42",
    }, f)

# Schedule the follow-up.
schedule_next(task="monitor-pr", delay_minutes=15, payload={"pr_number": "42"})
```

The current execution closes cleanly with verified evidence, and the next cue is on the schedule before the handler exits.
## Related
- Outcome File — report evidence for the current execution
- Environment Variables — all injected `CUEAPI_*` vars
- Agent Turn Scheduling — the broader pattern
- Argus-style Workflow — a concrete chaining example