Samples
End-to-end playbooks that combine the portal, Agent Network, and CLI into realistic developer workflows.
Overview
This page is a set of complete playbooks, not a pile of isolated commands.
These samples are CLI-first. If you want the shortest CLI setup path, start with Getting Started.
Each sample walks through a full product workflow:
- what you are building
- what to set up (CLI, coding agent, or portal)
- what you run in the CLI
- what you should expect to see
The CLI still does most of the work. The portal and Agent Network show up where the product actually expects them.
Jump to: Sample 1 | Sample 2 | Sample 3 | Sample 4 | Sample 5 | Sample 6 | Sample 7
How to use these samples
A few practical notes before you start:
- Every create command returns an ID. Save it before you move to the next step.
- If you want to script the sequence, add --json and capture .id with jq.
Example:
agent_id=$(archastro --json create agent -n "Support Agent" -k support-agent \
-i "You help users solve billing and support problems clearly." | jq -r '.id')
If you prefer to work more manually, you can also run archastro describe ... or archastro list ... after each step and copy the ID you need.
These flags appear several times below:
- --skip-welcome-message keeps the thread creation step quiet so your test begins with the message you send on purpose
- --wait keeps the CLI attached long enough to show the result of the message or action you just triggered
- --json is a global CLI flag, so these examples place it before the verb: archastro --json create ...
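The capture pattern is easy to sanity-check without the CLI at all. This is a minimal sketch: a mocked JSON payload (the id value is made up) stands in for the output of archastro --json create ..., and jq pulls out the ID exactly as the samples below do.

```shell
# Mocked stand-in for `archastro --json create agent ...` output.
# The id value here is invented for illustration.
response='{"id":"agt_123","name":"Support Agent"}'

# Same extraction the samples use: -r strips the JSON quotes.
agent_id=$(echo "$response" | jq -r '.id')
echo "$agent_id"
```

Once the real command is in place, only the left-hand side of the pipe changes; the jq filter stays the same.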
Sample 1: Create one working support agent
What you are building
A single agent inside one company that can answer one test request, then pick up a routine for automatic follow-up behavior.
This is the smallest slice of ArchAstro that still shows the full loop:
- one project
- one agent
- one live session
- one thread
- one incoming message
Prerequisites
- The CLI is installed and authenticated (archastro auth login).
- A project is linked (archastro init).
Run in the CLI
archastro auth login
archastro init
agent_id=$(archastro --json create agent -n "Support Agent" -k support-agent \
-i "You help users solve billing and support problems clearly." | jq -r '.id')
session_id=$(archastro --json create agentsession --agent "$agent_id" \
--instructions "Answer support questions clearly, ask one clarifying question if needed, and summarize the next action." | jq -r '.id')
archastro exec agentsession "$session_id" \
-m "A customer says their invoice failed and wants to know what to try next."
archastro describe agentsession "$session_id" --follow
user_id=$(archastro --json create user --system-user -n "Support Test User" | jq -r '.id')
thread_id=$(archastro --json create thread -t "Support test thread" \
--owner-type agent --owner-id "$agent_id" --skip-welcome-message | jq -r '.id')
archastro create threadmember --thread "$thread_id" --user-id "$user_id"
archastro create threadmessage --thread "$thread_id" --user-id "$user_id" \
-c "Can you help me figure out why my invoice keeps failing?" --wait
routine_id=$(archastro --json create agentroutine --agent "$agent_id" \
-n "billing-triage" \
-e message.created \
-t script \
--script "{ handled: true }" | jq -r '.id')
archastro activate agentroutine "$routine_id"
Here -k support-agent gives the agent a stable lookup key you can search for and reuse later.
--system-user creates a bot-style non-login user for testing or automation. Use clear names for these identities so they are easy to recognize in thread history and operational review, and do not use them as a shortcut around the approvals or human checks your deployment expects.
If you need that identity to call APIs directly later, create a dedicated system-user token for it and treat that token like any other service credential: name it, track it, and revoke it when the workflow is done.
What to check
- the session replies like an agent, not just a saved object
- the thread now has a test conversation in it
- the routine is active and ready to react to future thread events
What this confirms
- agents keep their own identity over time
- sessions are the quickest way to prove the agent can think and respond
- threads and messages are where that behavior shows up in the product
- routines are the bridge from one-off testing to ongoing behavior
Sample 2: Move the setup into reviewable config
What you are building
The same agent setup, but moved into project config so the team can review, sync, and redeploy it instead of recreating it by hand.
This is where you move from exploration to something the team can keep in source control.
Prerequisites
Use the same project from Sample 1.
Run in the CLI
archastro configs init
archastro configs kinds
archastro configs sample agent
archastro configs sync
archastro configs deploy
What to check
- a local configs/ directory in the project
- a pulled-down view of the config objects the project knows about
- a clean configs deploy path for reviewable changes
What this confirms
- the CLI is not just for one-off object creation
- ArchAstro has a config layer for repeatable setup
- once a pattern works, move it out of ad hoc commands and into tracked config
Sample 3: Run a scheduled workflow with a script in the middle
What you are building
A project-wide job that runs on a schedule, calls a workflow, and uses a script node for the company-specific logic in the middle.
This is the right pattern when the work belongs to the project, not to one named agent.
Prerequisites
Create a workflow config (use archastro configs sample workflow as a starting point) with a script node for the custom logic. Deploy it with archastro configs deploy and note the workflow config ID.
The three pieces:
- workflow = the visible process
- script = the custom logic inside that process
- automation = the schedule or trigger that starts it
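The authoritative schema comes from archastro configs sample workflow; the fragment below is only an illustrative sketch of how those three pieces might look in one config, and every field name in it is an assumption, not the real schema.

```yaml
# Hypothetical sketch only -- field names are assumptions, not the real schema.
# Run `archastro configs sample workflow` for the authoritative starting point.
kind: Workflow
key: daily-support-summary
name: Daily support summary
nodes:
  - name: summarize-tickets
    type: script            # the company-specific logic in the middle
    script: |
      { handled: true }
```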
Run in the CLI
automation_id=$(archastro --json create automation \
-n "Daily support summary" \
-t scheduled \
--schedule "0 9 * * 1-5" \
--config-id <workflow_config_id> | jq -r '.id')
archastro activate automation "$automation_id"
archastro list automations
archastro describe automation "$automation_id"
archastro list automationruns --automation "$automation_id"
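The schedule string is standard five-field cron syntax: "0 9 * * 1-5" fires at 09:00 on Monday through Friday. A quick way to read any cron expression is to label its fields:

```shell
# Label the five cron fields in the schedule used above.
set -f                       # disable globbing so the bare * fields stay literal
schedule="0 9 * * 1-5"
set -- $schedule             # split on whitespace into positional parameters
echo "minute=$1 hour=$2 day_of_month=$3 month=$4 day_of_week=$5"
# -> minute=0 hour=9 day_of_month=* month=* day_of_week=1-5
```

Here day_of_week 1-5 is Monday through Friday, and the two * fields mean "every day of the month" and "every month".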
What to check
- one named automation attached to your workflow config
- an active project-wide job in the automation list
- run history you can inspect after the schedule fires
What this confirms
- routines are for one agent's behavior
- automations are for project-wide jobs
- workflows and scripts become more useful when something repeatable starts them
Sample 4: Test a notification flow in a sandbox
What you are building
A notification or email flow you can trigger safely without touching production users or production mail.
This is the right place to test the parts of your app that need production-like behavior before they touch real users or real mail.
Prerequisites
Deploy a workflow or automation that sends a notification. The sandbox will capture emails instead of delivering them, so you can test the full flow safely.
Run in the CLI
sandbox_id=$(archastro --json create sandbox -n "Notification Test" -s notification-test | jq -r '.id')
archastro activate sandbox "$sandbox_id"
archastro list sandboxes
archastro describe sandbox "$sandbox_id"
user_id=$(archastro --json create user --system-user -n "Sandbox Notification User" | jq -r '.id')
thread_id=$(archastro --json create thread -t "Sandbox notification test" \
--user "$user_id" --skip-welcome-message | jq -r '.id')
archastro create threadmember --thread "$thread_id" --user-id "$user_id"
archastro create threadmessage --thread "$thread_id" --user-id "$user_id" \
-c "Trigger the sandbox notification path." --wait
archastro list sandboxmails --sandbox "$sandbox_id"
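If you want to script the final check, the same --json plus jq pattern applies to list commands. The mail field names below (to, subject) are assumptions about the payload shape, mocked here so the filter itself is easy to verify before pointing it at real output:

```shell
# Mocked stand-in for `archastro --json list sandboxmails --sandbox "$sandbox_id"`.
# The .to and .subject field names are assumptions about the real payload.
mails='[{"to":"user@example.com","subject":"Invoice retry instructions"}]'

# Print one "recipient  subject" line per captured mail.
echo "$mails" | jq -r '.[] | "\(.to)  \(.subject)"'
# -> user@example.com  Invoice retry instructions
```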
What to check
- the CLI is operating in the sandbox context after archastro activate sandbox
- the thread and message exist inside the test boundary
- captured email appears in sandboxmails instead of touching production
What this confirms
- sandboxes are not a toy environment; they are where realistic testing becomes believable
- the same CLI loop still works, but the boundary changes
- notification flows are much easier to trust once you can inspect captured output safely
Sample 5: Coordinate a rollout across two companies
What you are building
A shared rollout room between two companies: each side keeps its own private agents, users, and knowledge, but both sides collaborate through one shared team and one shared thread.
This is the Agent Network story in practical form.
Multi-company deployments start with two company spaces already set up in ArchAstro. The steps here begin once those company boundaries exist and the shared rollout work is ready to start. If you want to enable this setup, work with the ArchAstro team first at hi@archastro.ai.
Prerequisites
- Both company spaces are provisioned (contact hi@archastro.ai for multi-company setup).
- A shared team and shared thread exist for the rollout.
- Each side has decided which agents and people participate.
Each company keeps its private space. The shared team and thread are the only crossing point.
Run in the CLI
archastro list teams
archastro describe team <shared_team_id>
archastro list threads
archastro describe thread <shared_thread_id>
archastro list threadmembers --thread <shared_thread_id>
operator_id=$(archastro --json create user --system-user -n "Rollout Operator" | jq -r '.id')
archastro create threadmember --thread <shared_thread_id> --user-id "$operator_id"
archastro create threadmessage --thread <shared_thread_id> --user-id "$operator_id" \
-c "Company A completed staging validation. Company B can start the rollout window review." --wait
archastro list threadmessages --thread <shared_thread_id> --full
What to check
- one shared team and one shared thread you can inspect directly
- one shared conversation that both companies can use without flattening everything into one tenant
- visible participants and message history in the shared layer
What this confirms
- Agent Network is not abstract architecture; it becomes a concrete collaboration room
- the collaboration surface is intentionally small
- CLI still matters in cross-company work because it lets you inspect, join, and operate the shared thread directly
Sample 6: Debug a cross-company integration by impersonating the support agent
What you are building
A realistic debugging loop: an engineer whom Company A has explicitly approved to work in its support app uses the shared rollout thread, plus Company A's support agent, to diagnose a broken acme-billing-webhooks integration.
This is the kind of flow that makes ArchAstro feel different:
- the companies stay separate
- the rollout thread is shared
- the support agent keeps its own private tools, skills, and knowledge
- the developer can still debug from the same attached surface the live agent uses
Prerequisites
- A shared rollout team and thread exist (from Sample 5).
- Company A's support agent is a participant in the shared thread.
- Company A has granted operator access to their ArchAstro app for this rollout.
- Troubleshooting knowledge is connected to the support agent.
- The relevant skill and tool are linked to the agent.
Run in the CLI
archastro describe thread <shared_thread_id>
archastro list threadmembers --thread <shared_thread_id>
archastro impersonate start <company_a_agent_id>
archastro impersonate status
archastro impersonate list tools
archastro impersonate list skills
archastro list contextsources
archastro list contextingestions --status failed
archastro impersonate run tool search --input '{"query":"acme billing webhooks retry validation"}'
archastro create threadmessage --thread <shared_thread_id> --user-id <operator_user_id> \
-c "Search results point to webhook retry validation as the likely blocker. Please confirm the retry path before the rollout window." --wait
archastro impersonate stop
What to check
- the shared thread clearly shows who is collaborating
- impersonation reflects the support agent's attached skills and tools
- the search result comes from Company A's approved troubleshooting corpus
- the thread gets a concrete next step instead of vague back-and-forth
What this confirms
- Agent Network is not just shared chat; it supports debugging work across company lines
- impersonation connects the live agent surface to the local coding/debugging loop only after the owning company has deliberately authorized that workflow
- knowledge, tools, and cross-company collaboration all meet in one operational flow
Sample 7: Deploy a real agent from a template
What you are building
A production-ready agent deployed from a single YAML file. This is the recommended workflow once you understand the basic model from Samples 1-2.
One file defines everything: identity, tools, routines, and installations. One command deploys it. One test proves it works.
Write the agent template
Create configs/agents/security-reviewer.yaml:
kind: AgentTemplate
key: security-reviewer
name: Security Reviewer
identity: |
  You are a security code reviewer for our engineering team.
  When asked to review code, check for:
  - hardcoded secrets or credentials
  - SQL injection or command injection risks
  - missing input validation
  - overly permissive access controls
  Be specific about file paths and line numbers. Suggest fixes, not just problems.
tools:
  - kind: builtin
    builtin_tool_key: search
    status: active
  - kind: builtin
    builtin_tool_key: knowledge_search
    status: active
  - kind: builtin
    builtin_tool_key: integrations
    status: active
routines:
  - name: Respond in conversations
    description: Join threads and respond to messages
    handler_type: preset
    preset_name: participate
    event_type: thread.session.join
    event_config:
      thread.session.join: {}
    status: active
  - name: Memory extraction (opt-in)
    description: Extracts and stores key facts after conversations when this routine is enabled
    handler_type: preset
    preset_name: auto_memory_capture
    event_type: thread.session.leave
    event_config:
      thread.session.leave:
        subject_is_agent: true
    status: active
installations:
  - kind: memory/long-term
    config: {}
  - kind: archastro/thread
    config: {}
Validate and deploy
archastro configs validate --kind AgentTemplate --file configs/agents/security-reviewer.yaml
archastro deploy agent configs/agents/security-reviewer.yaml --name "Security Reviewer"
One command creates the agent with all tools, routines, and installations provisioned.
Test it
# Quick direct test
session_id=$(archastro --json create agentsession --agent <agent_id> \
--instructions "Review code for security issues." | jq -r '.id')
archastro exec agentsession "$session_id" \
-m "Review this function: def login(user, password): query = f'SELECT * FROM users WHERE name={user}'"
Test in a real conversation
thread_id=$(archastro --json create thread -t "Security review" \
--owner-type agent --owner-id <agent_id> --skip-welcome-message | jq -r '.id')
user_id=$(archastro --json create user --system-user -n "Engineer" | jq -r '.id')
archastro create threadmember --thread "$thread_id" --user-id "$user_id"
archastro create threadmessage --thread "$thread_id" --user-id "$user_id" \
-c "Can you review our auth module for SQL injection risks?" --wait
Test in a sandbox first
For production agents, deploy to a sandbox before going live:
# Switch to sandbox, deploy, and test
archastro activate sandbox staging
archastro deploy agent configs/agents/security-reviewer.yaml --name "Security Reviewer"
# test in sandbox...
# When ready, switch back to production and deploy
archastro activate sandbox
# (select production from the interactive prompt, or deactivate the sandbox)
archastro deploy agent configs/agents/security-reviewer.yaml --name "Security Reviewer"
What to check
- Agent responds with specific, actionable security feedback
- Agent cites file paths and line numbers when reviewing code
- Memory extraction routine (opt-in) stores key facts between conversations when enabled
- The same YAML file deploys identically to sandbox and production
Need something clearer?
Tell us where this page still falls short.
If a step is confusing, a diagram is misleading, or a workflow needs a better example, send feedback directly and we will tighten it.