Introduction
Akshi is a Rust-based agent runtime that sandboxes AI agents in WebAssembly, routes inference between local and cloud models, and connects agents via mesh networking.
What Akshi does
- Declarative agents – Define agent behavior in TOML: goal, file pattern, LLM prompt, output schema. No code needed for most use cases.
- Sandboxed execution – Agents run as WebAssembly modules with explicit capability grants. No ambient filesystem or network access unless configured.
- Inference routing – A built-in router dispatches LLM calls to local models (via Ollama) or cloud providers (Anthropic, OpenRouter) based on prompt complexity.
- Mesh networking – Agents discover and communicate with each other over a lightweight peer-to-peer mesh, enabling multi-agent workflows without a central orchestrator.
- Developer CLI – The `akshi` binary handles scaffolding, running, monitoring, and deploying agents from a single command.
Who this is for
Akshi is aimed at developers who want to run autonomous AI agents with strong isolation guarantees and flexible model routing, whether on a single laptop or across a cluster of machines.
How to use this book
| Section | What you will find |
|---|---|
| Getting Started | Installation, quickstart, building your first agent |
| Architecture | Runtime internals, sandbox model, broker design |
| Configuration | Reference for runtime.toml and CLI flags |
| SDK Reference | Agent-side API (logging, inference, database, MCP tools) |
| Operations | Deployment, monitoring, troubleshooting |
If you are new to Akshi, start with Getting Started. It takes about five minutes to go from zero to a running agent.
Getting Started
This section walks you from a fresh install to a running agent in a few minutes. Work through the pages in order or jump to the one you need.
Setup path
- Installation – Install the `akshi` CLI via the curl installer or build from source. Covers supported platforms and environment variables.
- Quickstart – Run `akshi quickstart` to scaffold a default config, build a sample agent, and start the runtime. Verify everything works through the dashboard and CLI.
- First Agent – Configure an agent from scratch in TOML and run it under the runtime with a custom configuration.
- Starter Agents – Explore the four built-in templates (log-monitor, code-reviewer, test-runner, research-lead) that demonstrate common agent patterns you can adapt for your own use cases.
Installation
Curl installer (recommended)
The fastest way to install Akshi is with the official install script:
```sh
curl -fsSL https://akshi.dev/install.sh | sh
```
The script downloads the correct binary for your platform and places it in `/usr/local/bin` by default.
Environment variables
| Variable | Default | Description |
|---|---|---|
| `AKSHI_INSTALL_DIR` | `/usr/local/bin` | Directory to install the `akshi` binary into |
| `AKSHI_VERSION` | `latest` | Pin a specific release version (e.g. `0.4.2`) |
| `AKSHI_ALLOW_DOWNGRADE` | unset | Set to `1` to allow installing an older version than the one currently installed |
Example – install a specific version to a custom directory:
```sh
AKSHI_INSTALL_DIR=~/.local/bin AKSHI_VERSION=0.4.2 \
  curl -fsSL https://akshi.dev/install.sh | sh
```
Supported platforms
| OS | Architecture | Status |
|---|---|---|
| macOS | x86_64 | Supported |
| macOS | aarch64 (Apple Silicon) | Supported |
| Linux | x86_64 | Supported |
Windows support is not yet available. WSL2 with a supported Linux distribution works as a workaround.
Build from source
Building from source requires a recent stable Rust toolchain and the `wasm32-wasip1` target.
```sh
# Clone the repository
git clone https://github.com/AkshiSystems/akshi.git
cd akshi

# Build the host CLI and runtime
cargo build --release

# Build the agent Wasm module
cargo build --target wasm32-wasip1 -p akshi-agent --release
```
The host binary is at `target/release/akshi`. The agent Wasm module is at `target/wasm32-wasip1/release/akshi_agent.wasm`.
If you do not already have the Wasm target installed:
```sh
rustup target add wasm32-wasip1
```
Verify the installation
```sh
akshi --version
```
You should see output like:
```
akshi 0.4.2
```
If the command is not found, make sure the install directory is on your `PATH`.
Next steps
Continue to the Quickstart to launch your first runtime.
Quickstart
This guide gets a working Akshi runtime running in about five minutes. You should already have the `akshi` CLI installed (see Installation).
1. Scaffold and start
```sh
akshi quickstart
```
This single command:
- Creates a default `runtime.toml` configuration in the current directory.
- Builds a sample agent to WebAssembly.
- Starts the runtime with that agent loaded.
You will see log output as the runtime boots and the agent begins executing.
2. Open the dashboard
While the runtime is running, open the web dashboard:
```
http://127.0.0.1:3210
```
The dashboard shows agent status, inference routing decisions, and live log output.
3. Check status
In a separate terminal:
```sh
akshi status
```
This prints the state of the runtime and each loaded agent (running, idle, errored).
4. View logs
Stream runtime and agent logs:
```sh
akshi logs -f
```
Press Ctrl-C to stop following.
5. Stop the runtime
```sh
akshi stop
```
This gracefully shuts down all agents and the runtime process.
What just happened
The quickstart command created a minimal setup with:
- A `runtime.toml` with a declarative agent configuration — goal, file pattern, LLM prompt, and output schema are all defined in TOML.
- A pre-built agent Wasm module that reads the declarative config and handles file watching, inference calls, and result storage automatically.
No Rust toolchain or custom code was needed. The agent behavior is entirely driven by the TOML configuration.
To understand each piece in detail, continue to Your First Agent where you configure an agent from scratch.
Your First Agent
Akshi agents are configured declaratively in TOML. You describe what the agent should do — its goal, which files to watch, and the LLM prompt — and the runtime handles execution, sandboxing, and inference routing. No Rust toolchain or SDK knowledge required.
1. Create a workspace
```sh
mkdir -p workspace/my-agent
```
2. Add something for the agent to analyze
```sh
echo "ERROR 2024-03-17 10:42:01 Connection to database timed out after 30s" > workspace/my-agent/app.log
echo "INFO 2024-03-17 10:42:05 Retry succeeded, connection restored" >> workspace/my-agent/app.log
echo "WARN 2024-03-17 10:43:12 Memory usage at 89% — approaching limit" >> workspace/my-agent/app.log
```
3. Write the configuration
Create `config.toml`:
```toml
node_id = "my-first-node"

[dashboard]
port = 3210

[router]
ollama_url = "http://127.0.0.1:11434/api/generate"
ollama_model = "llama3.2"
enable_remote = true

[[agents]]
name = "my-agent"
wasm_path = "target/wasm32-wasip1/release/agent.wasm"
workspace_dir = "./workspace/my-agent"
fuel_limit = 200000000

# Declarative agent configuration
goal = "Analyze log files and classify incidents by severity."
file_pattern = "*.log"
prompt = """You are a log analysis assistant.
For each log line, classify it as [CRITICAL], [WARNING], or [INFO].
Return one finding per line with a brief explanation."""
store_table = "findings"
store_columns = "file TEXT, severity TEXT, summary TEXT, ts INTEGER"
schedule_ms = 5000
max_iterations = 1
```
The key declarative fields:
| Field | Description |
|---|---|
| `goal` | Human-readable description of what the agent does |
| `file_pattern` | Glob pattern for files the agent processes (e.g., `*.log`, `*.diff`) |
| `prompt` | System prompt sent to the LLM along with file contents |
| `store_table` | SQLite table where the agent stores its output |
| `store_columns` | Schema for the output table |
| `schedule_ms` | How often (in ms) the agent checks for new files |
| `max_iterations` | Number of processing cycles (0 = run forever) |
4. Run
```sh
akshi run -c config.toml
```
The runtime loads the pre-built agent WASM module, reads your declarative configuration, and begins processing files in the workspace.
5. Verify
Open the dashboard at http://127.0.0.1:3210 or check from the terminal:
```sh
# Check agent status
akshi status

# Stream logs
akshi logs -f

# Query the findings database directly
sqlite3 ~/.akshi/agents/my-agent/state.db "SELECT * FROM findings;"
```
You should see the agent classify each log line by severity.
6. Stop
```sh
akshi stop
```
How it works
The pre-built `akshi-agent` binary (compiled to WASM) reads the declarative fields from your TOML config at runtime:

- Watch — scans `workspace_dir` for files matching `file_pattern`
- Analyze — sends file contents + `prompt` to the inference router
- Store — parses the LLM response for severity markers and inserts into `store_table`
- Report — posts findings to the dashboard API
This means you can create entirely different agents — log monitors, code reviewers, research assistants — just by changing the TOML configuration.
Going further
- More templates: see Starter Agents for log-monitor, code-reviewer, test-runner, and research-lead configurations
- Custom agents: if you need logic beyond declarative config, see the Rust WASM Agent guide for writing custom agents with the SDK
- Configuration reference: see Agent Entry for the full list of agent configuration fields
Starter Agents
Akshi ships with four agent templates that demonstrate common patterns. Use them as starting points for your own agents.
Create any template with:
```sh
akshi create agent <name> --template <template>
```
log-monitor
```sh
akshi create agent my-log-monitor --template log-monitor
```
Watches `*.log` files in its workspace directory for new entries. When new
lines appear, the agent sends them to the inference broker for severity
classification (info, warning, error, critical). Classified findings are posted
back to the runtime log stream with structured annotations.
Use case: Automated log triage. Point it at application or system logs and let the LLM surface the entries that matter.
Key patterns demonstrated:
- File-watching via workspace directory polling
- Batched inference calls with structured output parsing
- Severity-based filtering logic
code-reviewer
```sh
akshi create agent my-code-reviewer --template code-reviewer
```
Watches for `*.diff` files in its workspace. When a new diff appears, the agent
reads it, sends the patch to the inference broker with a code-review prompt,
and writes review comments back as a structured output file.
Use case: Lightweight automated code review. Drop a `git diff` output into the workspace and get feedback without leaving the terminal.
Key patterns demonstrated:
- File-triggered agent execution
- Multi-turn prompt construction (system prompt + diff context)
- Structured output (review comments with line references)
test-runner
```sh
akshi create agent my-test-runner --template test-runner
```
Watches for test output files (e.g., JUnit XML, TAP, or plain-text test logs). When a new result file appears, the agent parses it and classifies each failure as a likely flake, environment issue, or genuine regression using the inference broker. Results are written to the agent database.
Use case: CI failure triage. Feed test output into the workspace and get a categorized breakdown of what failed and why.
Key patterns demonstrated:
- Parsing structured and semi-structured test output
- Classification with confidence scores
- Database writes via `db_exec`
research-lead
```sh
akshi create agent my-research-lead --template research-lead
```
Takes query files (plain text with a research question) from its workspace. For each query, the agent:
- Uses the MCP search tool to find relevant sources.
- Fetches and scrapes the top URLs.
- Synthesizes findings into a journal-style dossier written back to the workspace.
Use case: Automated research briefs. Drop a question file in and receive a sourced summary document.
Key patterns demonstrated:
- MCP tool invocation (search, fetch)
- Multi-step agent workflow (search, retrieve, synthesize)
- Long-form structured output generation
Customizing templates
Each template is a standard Rust project. After scaffolding, you own the code and can modify it freely:
- Change the file patterns the agent watches.
- Adjust the prompts sent to the inference broker.
- Add additional SDK calls (database, logging, MCP tools).
- Tune `fuel_limit` and workspace permissions in `runtime.toml`.
See the First Agent guide for details on the build and configuration workflow.
Architecture
Akshi ships as a single Rust binary (`akshi`) that runs agents, routes inference, brokers secrets, enforces policy, and serves a dashboard — all in one process.
Runtime model
The runtime uses a supervisor tree inspired by Erlang/OTP. Each agent gets its own OS thread (sync mode) or Tokio task (async mode). Sync mode is the default and the only mode shipped today — it keeps the execution model simple and deterministic.
When you run `akshi run`, the process:

1. Loads `runtime.toml` and resolves agent definitions.
2. Spawns the supervisor tree.
3. Starts the dashboard HTTP server (default port 3210).
4. Launches each agent in its own supervised thread.
If an agent panics or exceeds its fuel budget, the supervisor restarts it according to the configured restart policy.
Five subsystems
| Subsystem | Responsibility |
|---|---|
| Supervisor | Lifecycle management, restart policies, health tracking |
| Protocol Router | Inference routing — picks local or cloud provider per request |
| Secrets Broker | Credential injection at the sandbox boundary |
| Policy Engine | Approval gates, risk scoring, spend limits |
| Discovery Service | mDNS/DHT peer discovery, mesh transport selection |
These subsystems are composed inside the single binary. There are no separate daemons or sidecar processes.
Request pipeline
A typical agent action flows through this pipeline:
```
Incoming request
  → Protocol Router (select inference provider)
  → Supervisor (locate target agent thread)
  → WASM Sandbox (execute agent code)
  → Host capabilities (db, journal, http_fetch, infer, …)
  ← Response back through the chain
```
The sandbox boundary is the security perimeter. All host capability calls cross this boundary and are subject to capability checks, broker interception, and policy evaluation.
Dashboard
The built-in HTTP server exposes:
- A web dashboard for monitoring agents, approving actions, and viewing logs.
- REST API endpoints for programmatic access.
- SSE streams for real-time event feeds.
The dashboard binds to `0.0.0.0:3210` by default. Override with the `dashboard.port` setting in `runtime.toml` or the `AKSHI_DASHBOARD_PORT` environment variable.
What is not in the runtime
- No orchestrator process. Agents coordinate through the journal and A2A protocol, not a central controller.
- No external database required. State lives in per-agent Automerge documents on the local filesystem.
- No container runtime. WASM sandboxing replaces container-based isolation for Tier 1 agents.
WASM Sandbox
Every agent runs inside a WebAssembly sandbox powered by Wasmtime. The sandbox is the primary security boundary — it controls what an agent can access and how much computation it can consume.
Execution tiers
| Tier | Isolation | Status |
|---|---|---|
| Tier 1 | Pure WASM (Wasmtime) | Shipped, default |
| Tier 2 | Native process + OS-level isolation | Planned |
| Tier 3 | Native process + hardware isolation | Planned |
Tier 1 is the only execution tier available today. Agents compile to the `wasm32-wasip2` target and run inside Wasmtime with capability-gated host imports.
Capability-gated host imports
Agents declare the host capabilities they need in their configuration. The runtime exposes these capabilities as WASM host imports:
| Capability | Purpose |
|---|---|
| `db` | Key-value storage |
| `journal` | Append-only structured log |
| `mcp` | Model Context Protocol tool calls |
| `http_fetch` | Outbound HTTP requests (broker-mediated) |
| `infer` | LLM inference requests |
| `config` | Read agent configuration |
| `a2a` | Agent-to-agent messaging |
| `websocket` | WebSocket connections |
Deny-by-default: if an agent calls a capability it did not declare, the call fails immediately. There is no prompt, no fallback — undeclared capabilities are hard errors.
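In behavioral terms, the gate works like the following sketch (a minimal Python illustration of deny-by-default, not the runtime's Rust implementation; the capability names come from the table above):

```python
class CapabilityError(Exception):
    """Raised when an agent calls a host import it did not declare."""

def make_host_gate(declared):
    """Return a checker enforcing deny-by-default capability access."""
    declared = frozenset(declared)

    def check(capability):
        # Undeclared capabilities are hard errors: no prompt, no fallback.
        if capability not in declared:
            raise CapabilityError(f"capability '{capability}' not declared")
        return True

    return check

# An agent that declared only db, journal, and infer:
check = make_host_gate({"db", "journal", "infer"})
```

Calling `check("infer")` succeeds, while `check("http_fetch")` raises immediately because the capability was never declared.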
Fuel metering
Each agent gets a fuel budget that limits computation per execution cycle. Fuel maps roughly to WASM instructions executed.
- Default budget: 100,000,000 (100M) fuel units per invocation.
- Configurable: set `fuel_limit` in the agent entry in `runtime.toml`.
- Exhaustion: when fuel runs out, the sandbox traps and the supervisor handles the restart.
Fuel prevents runaway agents from monopolizing CPU. It is not a billing mechanism — see Economics for spend tracking.
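For example, to give a heavyweight agent double the default budget, set `fuel_limit` in its agent entry (a `runtime.toml` fragment using the same field as the First Agent example):

```toml
[[agents]]
name = "heavy-agent"
wasm_path = "target/wasm32-wasip1/release/agent.wasm"
# Double the default 100M budget; the sandbox traps when this is exhausted.
fuel_limit = 200000000
```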
What the sandbox prevents
- Filesystem access: agents cannot read or write the host filesystem directly.
- Network access: all outbound requests go through the broker.
- System calls: WASM has no `exec`, no signals, no process control.
- Memory isolation: each agent gets its own linear memory; no shared state between agents except through explicit host capabilities.
Inference Routing
Akshi routes each LLM inference request to either a local model or a cloud provider. The routing decision is automatic, per-request, and based on a lightweight scoring model called Akshi Route.
Providers
| Provider | Type | Notes |
|---|---|---|
| Ollama | Local | Runs on the same machine or local network |
| Anthropic | Cloud | Direct API access |
| OpenRouter | Cloud gateway | Access to multiple model families |
Agents do not choose their provider. They call the `infer` host capability with a prompt, and the router picks the best destination.
How routing works
Akshi Route is a logistic scoring model with 7 input features (prompt length, tool complexity, context size, and others). It produces a score between 0 and 1.
- Score >= threshold → route to cloud provider.
- Score < threshold → route to local Ollama.
- Default threshold: 0.55 (configurable in route profile).
The idea: simple requests stay local (fast, free, private), while complex requests go to more capable cloud models.
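The decision can be sketched as follows. The feature values, weights, and bias here are hypothetical (Akshi Route's actual 7 features and coefficients are not listed in this book); only the logistic form and the 0.55 default threshold come from the text above:

```python
import math

def route(features, weights, bias, threshold=0.55):
    """Score a request with a logistic model and pick a destination.

    The inputs are illustrative -- the real model uses 7 features
    whose definitions and weights are not documented here.
    """
    z = bias + sum(w * x for w, x in zip(weights, features))
    score = 1.0 / (1.0 + math.exp(-z))  # squash to (0, 1)
    return ("cloud" if score >= threshold else "local", score)

# Hypothetical weights for prompt length, tool complexity, context size:
dest, score = route([0.9, 0.7, 0.4], [0.8, 1.2, 0.5], bias=-1.0)
```

With these made-up numbers the score lands above the threshold, so the request is routed to a cloud provider; a near-zero feature vector would stay local.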
Fallback chain
The router uses a three-tier fallback to determine routing parameters when a full route profile is not configured:
- TinyLocal — hardcoded minimal profile for local inference.
- Profile — user-defined route profile from `runtime.toml`.
- Heuristic — feature-based scoring via the logistic model.
If local inference fails (Ollama unavailable, model not loaded), the router automatically falls back to a cloud provider.
Circuit breaker
Cloud providers can go down. The router includes a circuit breaker:
- Trigger: 3 consecutive failures to a cloud provider.
- Open duration: 30 seconds (requests skip the failed provider).
- Recovery: after the open period, one probe request tests the provider.
This prevents cascading failures when a cloud API has an outage.
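A simplified sketch of the breaker state machine, using the numbers above (3 consecutive failures, 30-second open period, one probe on recovery); the router's actual implementation may differ:

```python
import time

class CircuitBreaker:
    """Skip a provider after repeated failures (illustrative sketch)."""

    def __init__(self, max_failures=3, open_secs=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.open_secs = open_secs
        self.clock = clock
        self.failures = 0
        self.opened_at = None  # None = circuit closed

    def allow_request(self):
        if self.opened_at is None:
            return True
        if self.clock() - self.opened_at >= self.open_secs:
            return True   # half-open: allow a single probe request
        return False      # open: skip this provider

    def record_success(self):
        self.failures = 0
        self.opened_at = None

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.max_failures:
            self.opened_at = self.clock()
```

Injecting a fake clock makes the open/probe/recover cycle easy to verify without real waiting.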
Configuration
Route behavior is controlled through route profiles in runtime.toml. You can
set the scoring threshold, preferred providers, model overrides, and fallback
behavior per profile. See Router Configuration
for details.
Secrets Broker
The secrets broker manages credentials so that agents never handle raw API keys, tokens, or passwords. It intercepts outbound requests at the sandbox boundary and injects credentials on the agent’s behalf.
How it works
1. An agent makes an outbound HTTP request via the `http_fetch` capability.
2. The broker checks the request URL against the configured endpoint allowlist.
3. If the endpoint is allowed and has a `secret_env_var` configured, the broker reads the credential from the environment variable and injects it into the request headers.
4. If the endpoint is not on the allowlist, the request is denied and the agent receives a deny receipt explaining why.
The agent never sees the credential value. It only knows whether the request succeeded or was denied.
Endpoint allowlist
Each allowed endpoint is defined by:
- Domain: the exact hostname (e.g., `api.anthropic.com`).
- Path prefix: optional path binding (e.g., `/v1/messages`).
- `secret_env_var`: the environment variable holding the credential.
Requests to endpoints not on the allowlist are blocked by default.
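The check-and-inject flow can be sketched in Python. This is illustrative only: the entry fields mirror the list above, but the dictionary shapes and the `x-api-key` header name are assumptions, not the broker's real schema:

```python
import os
from urllib.parse import urlsplit

# Illustrative allowlist entries (domain, path prefix, secret env var).
ALLOWLIST = [
    {"domain": "api.anthropic.com", "path_prefix": "/v1/messages",
     "secret_env_var": "ANTHROPIC_API_KEY"},
]

def broker_check(url):
    """Return injected headers if `url` is allowed, else a deny receipt."""
    parts = urlsplit(url)
    for entry in ALLOWLIST:
        if parts.hostname != entry["domain"]:
            continue
        if not parts.path.startswith(entry["path_prefix"]):
            continue
        secret = os.environ.get(entry["secret_env_var"], "")
        # The agent never sees this value; it is added host-side.
        return {"allowed": True, "headers": {"x-api-key": secret}}
    return {"allowed": False, "reason": f"{parts.hostname} not on allowlist"}
```

The key property is that the credential only exists on the host side of the check; the sandboxed caller receives either injected-and-forwarded success or a structured denial.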
Security protections
The broker includes defenses against common credential-exfiltration attacks:
- DNS rebinding protection: the broker resolves DNS and verifies the IP has not changed between resolution and connection.
- IPv4-mapped IPv6 normalization: prevents bypassing allowlist checks via `::ffff:`-mapped addresses.
- Deny receipts: every blocked request generates a structured deny receipt with the reason, so agents and operators can debug allowlist issues.
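The mapped-address normalization can be illustrated with Python's standard `ipaddress` module (a sketch of the idea, not the broker's code):

```python
import ipaddress

def normalize_ip(addr):
    """Unwrap IPv4-mapped IPv6 addresses before IP-based checks.

    Without this, ::ffff:10.0.0.1 and 10.0.0.1 would compare as
    different addresses and could slip past an IP comparison.
    """
    ip = ipaddress.ip_address(addr)
    if isinstance(ip, ipaddress.IPv6Address) and ip.ipv4_mapped is not None:
        return ip.ipv4_mapped
    return ip

assert normalize_ip("::ffff:10.0.0.1") == normalize_ip("10.0.0.1")
```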
Why not just pass keys to agents?
Giving agents raw credentials creates a data exfiltration risk — a compromised or misbehaving agent could send credentials to an unauthorized endpoint. The broker eliminates this by keeping credentials outside the sandbox and only injecting them into requests that match the allowlist.
Identity & DIDs
Akshi uses Ed25519 key pairs and the DID:key format to give every node and agent a verifiable cryptographic identity. These identities are used for signing sync envelopes, authenticating mesh peers, and issuing capability attestations.
Node identity
When you run `akshi init`, the runtime generates an Ed25519 key pair and stores it at:

```
$AKSHI_HOME_DIR/security/node_key.pem
```
The public key is encoded as a DID:key identifier using the multicodec `ed25519` prefix (`0xed`). This DID is the node’s identity on the mesh and in sync envelopes.
Example: `did:key:z6MkhaXgBZDvotDkL5257faiztiGiC2QtKLGpbnnEGta2doK`
Agent identity
Each agent gets its own DID, deterministically derived from the node key and the agent’s name. This means:
- The same agent on the same node always has the same DID.
- Different nodes running an agent with the same name have different DIDs.
- Agent DIDs do not require separate key storage.
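The exact derivation scheme is internal to the runtime; a hash-based sketch (hypothetical, using SHA-256 rather than the real DID:key construction) demonstrates the deterministic properties listed above:

```python
import hashlib

def derive_agent_id(node_pubkey: bytes, agent_name: str) -> str:
    """Deterministically derive an agent identifier.

    Illustrative only -- the real scheme derives a DID:key from the
    node's Ed25519 key, not a truncated SHA-256 digest.
    """
    digest = hashlib.sha256(node_pubkey + agent_name.encode()).hexdigest()
    return f"agent:{digest[:16]}"

node_a = b"node-a-public-key"
node_b = b"node-b-public-key"

# Same node + same name -> same ID; different node -> different ID.
assert derive_agent_id(node_a, "log-monitor") == derive_agent_id(node_a, "log-monitor")
assert derive_agent_id(node_a, "log-monitor") != derive_agent_id(node_b, "log-monitor")
```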
Capability attestations
The node issues signed capability attestations to its agents. These are local credentials that prove an agent is authorized to use specific host capabilities on this node. Attestations are not shared across the mesh — they are a local trust mechanism.
Where identity is used
| Use case | What gets signed/verified |
|---|---|
| Journal sync | Sync envelopes are signed by the sending node’s key |
| Mesh discovery | .well-known/akshi-node includes the node DID |
| Peer trust | Nodes verify peer DIDs before accepting sync data |
| Agent credentials | Capability attestations bind agent DID to allowed capabilities |
Key management
The node key pair is generated once and persists across restarts. There is no
automatic key rotation today. Protect the `security/` directory — anyone with
the private key can impersonate the node.
Mesh Networking
Akshi nodes can discover each other and sync agent state over a peer-to-peer mesh. The mesh uses three discovery mechanisms and two transport modes, selected automatically based on network topology.
Discovery mechanisms
mDNS (local network)
Nodes on the same LAN broadcast `_akshi._tcp.local` mDNS service records. This
enables zero-configuration discovery — start two Akshi nodes on the same network
and they find each other automatically.
WireGuard tunnels (point-to-point)
For nodes separated by NATs or firewalls, Akshi establishes WireGuard tunnels. Each node’s WireGuard public key is part of its peer configuration. Tunnels provide encrypted, authenticated transport without exposing dashboard ports to the public internet.
Kademlia DHT (wide-area)
For discovery beyond the local network without pre-configured peers, Akshi uses a Kademlia distributed hash table. Nodes publish their DID and endpoint information to the DHT and query it to find peers.
Peer configuration
Each peer entry includes:
| Field | Purpose |
|---|---|
| `node_did` | The peer’s DID:key identity |
| `wireguard_public_key` | WireGuard key for tunnel setup |
| `endpoint` | IP address and port |
| `dashboard_port` | The peer’s dashboard HTTP port |
Peers can be configured explicitly in `runtime.toml` or discovered via mDNS/DHT.
Transport selection
The runtime auto-detects the available transport:
- Peers configured (explicit or discovered) → HTTP transport. Sync envelopes are exchanged via `POST /api/v1/sync/envelopes` on each peer’s dashboard endpoint.
- No peers configured → Filesystem transport. Sync data is written to a shared directory, useful for single-node setups or development.
You do not choose the transport manually. The runtime picks HTTP when it has peers and falls back to filesystem otherwise.
What flows over the mesh
The mesh carries:
- Journal sync envelopes — Automerge CRDT changes for agent state convergence.
- Discovery announcements — node DID, capabilities, available agents.
- A2A messages — agent-to-agent communication across nodes.
The mesh does not carry inference requests, secrets, or raw credentials. Those stay local to each node.
Journal & Sync
Each agent in Akshi has a journal — a persistent, append-only log backed by an Automerge CRDT document. Journals sync across nodes in the mesh, giving agents convergent shared state without a central database.
Per-agent Automerge documents
Every agent gets its own Automerge document stored on the local filesystem. The CRDT data structure means:
- Concurrent edits from different nodes merge automatically.
- No coordination or locking is needed between peers.
- After sync, all nodes holding the same agent’s journal converge to the same state (identical Automerge heads).
Sync loop
The sync process runs in a continuous loop:
- Capture changes — collect new Automerge changes since the last sync.
- Sign envelope — wrap changes in a sync envelope signed with the node’s Ed25519 key.
- Deliver to peers — send the envelope to each known peer via the active transport (HTTP or filesystem).
- Process inbox — receive envelopes from peers, verify signatures, and merge changes into the local document.
HTTP transport
When peers are available, sync envelopes are delivered via:
```
POST /api/v1/sync/envelopes
```
The request body contains the signed envelope. The receiving node verifies the signature, checks peer trust, and applies the changes.
Verification
Every incoming envelope is verified before merging:
- Ed25519 signature check — the envelope must be signed by a known peer’s key.
- Peer trust check — the signing DID must be in the node’s trusted peer list.
- DID validation — the DID format and multicodec prefix must be valid.
Envelopes that fail any check are rejected and logged.
Convergence
After a successful sync round, both nodes hold the same Automerge document heads. This is a strong consistency guarantee within the CRDT model — given the same set of changes, all nodes produce the same state regardless of delivery order.
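Automerge's internals are beyond this book, but a toy grow-only set shows why CRDT merges converge regardless of delivery order (merge is commutative, associative, and idempotent):

```python
def merge(state_a, state_b):
    """Merge two grow-only sets -- a toy stand-in for Automerge merge.

    Because set union is commutative, associative, and idempotent,
    the order in which changes arrive does not affect the result.
    """
    return state_a | state_b

node1 = {"change-1", "change-2"}
node2 = {"change-2", "change-3"}

# Both delivery orders converge to the same state:
assert merge(node1, node2) == merge(node2, node1)
```

Real Automerge documents carry much richer structure (maps, lists, counters), but the same order-independence property is what guarantees identical heads after sync.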
Approval Gates
Approval gates put a human in the loop for high-risk agent actions. When an agent calls a tool that matches an approval policy, execution blocks until a human approves or denies the action.
Approval policies
An `ApprovalPolicy` defines which tool calls require approval. Policies match on tool name patterns:

- Exact match: `send_email` — matches only that tool.
- Glob match: `send_*` — matches any tool starting with `send_`.
- Wildcard: `*` — matches all tool calls (useful for high-security agents).
Policies are configured per agent in `runtime.toml`.
How blocking works
When an agent hits an approval gate:
1. The agent’s thread blocks on a condvar (condition variable) in the `ApprovalStore`.
2. The pending approval appears in the dashboard and API.
3. A human approves or denies via the dashboard UI or API.
4. The condvar is signaled and the agent resumes (or receives a denial error).
Timeout: if no decision is made within the configured timeout (default 5 minutes), the request is automatically denied. This prevents agents from blocking indefinitely.
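The blocking behavior can be sketched with a condition variable. This Python sketch is illustrative only (the real `ApprovalStore` is Rust and its API differs), but it shows both the block-until-decided path and the timeout-means-deny path:

```python
import threading

class ApprovalStore:
    """Minimal sketch of a blocking approval gate with timeout-deny."""

    def __init__(self):
        self.cond = threading.Condition()
        self.decision = None          # None = still pending

    def wait_for_decision(self, timeout):
        with self.cond:
            # Block the agent thread until a human decides or timeout hits.
            self.cond.wait_for(lambda: self.decision is not None, timeout)
            return self.decision or "denied"   # no decision -> auto-deny

    def decide(self, decision):
        with self.cond:
            self.decision = decision
            self.cond.notify_all()

store = ApprovalStore()
approver = threading.Thread(target=lambda: store.decide("approved"))
approver.start()
result = store.wait_for_decision(timeout=5.0)
approver.join()
```

If no `decide()` call arrives within the timeout, `wait_for_decision` returns `"denied"`, mirroring the automatic denial described above.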
Risk scoring
Each pending approval includes a computed risk score (0.0 to 1.0) to help humans prioritize:
- Tool classification: tools are categorized as financial, communications, system, or data operations. Each category has a base risk weight.
- Argument heuristics: the scorer inspects arguments for high-risk patterns (large amounts, sensitive domains, destructive operations).
- Composite score: the final score combines tool classification and argument heuristics.
The risk score is informational — it helps humans triage but does not auto-approve or auto-deny.
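A composite scorer in this spirit might look like the following. The category weights and heuristic bumps are invented for illustration; Akshi's real classifier and weights are not documented in this book:

```python
# Illustrative category weights and heuristics -- not Akshi's numbers.
CATEGORY_WEIGHTS = {"financial": 0.8, "communications": 0.6,
                    "system": 0.5, "data": 0.3}

def risk_score(category, args):
    """Composite score: category base weight plus argument heuristics."""
    score = CATEGORY_WEIGHTS.get(category, 0.3)
    if args.get("amount", 0) > 1000:    # large amounts raise risk
        score += 0.15
    if args.get("destructive"):         # destructive operations raise risk
        score += 0.1
    return min(score, 1.0)              # clamp into [0.0, 1.0]

# A large financial transfer scores near the top of the range:
assert abs(risk_score("financial", {"amount": 5000}) - 0.95) < 1e-9
```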
Batched approvals
When multiple agents have pending approvals, the dashboard groups them by category. You can approve or deny an entire batch at once instead of handling each request individually.
API endpoints
| Endpoint | Method | Purpose |
|---|---|---|
| `/api/v1/approvals` | GET | List pending approvals |
| `/api/v1/approvals/{id}` | POST | Approve or deny a single request |
| `/api/v1/approvals/batch` | POST | Batch approve or deny |
Economics & Spend Policy
Akshi tracks and limits what agents spend on paid APIs and services. Every agent can have a spend policy that caps costs, restricts payment methods, and enables dry-run simulation.
SpendPolicy
Each agent’s spend policy includes:
| Field | Purpose |
|---|---|
| `budget_cap` | Maximum total spend for the agent’s lifetime (or per reset period) |
| `per_action_limit` | Maximum cost of a single action |
| `allowed_rails` | Which payment rails are permitted (e.g., `["stripe", "l402"]`) |
| `simulation_mode` | When `true`, costs are tracked but not actually charged |
Agents without a configured spend policy cannot make paid requests.
SpendLedger
The `SpendLedger` tracks cumulative spend per agent. Before each paid action,
the policy engine checks the ledger against the agent’s policy. If the action
would exceed any limit, it is denied.
Deny reasons
When a spend request is blocked, the agent receives a structured denial:
- `NoBudgetConfigured` — no spend policy exists for this agent.
- `BudgetExceeded` — the action would push total spend over `budget_cap`.
- `PerActionLimitExceeded` — the single action cost exceeds `per_action_limit`.
- `RailNotAllowed` — the payment method is not in `allowed_rails`.
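The checks can be sketched as a single policy-evaluation function. This is illustrative: the field names follow the SpendPolicy table, but the function shape and check order are assumptions:

```python
def check_spend(policy, ledger_total, action_cost, rail):
    """Evaluate a paid action against a spend policy (simplified sketch).

    Returns None if allowed, otherwise a deny reason string matching
    the reasons listed above.
    """
    if policy is None:
        return "NoBudgetConfigured"
    if rail not in policy["allowed_rails"]:
        return "RailNotAllowed"
    if action_cost > policy["per_action_limit"]:
        return "PerActionLimitExceeded"
    if ledger_total + action_cost > policy["budget_cap"]:
        return "BudgetExceeded"
    return None  # allowed

policy = {"budget_cap": 100.0, "per_action_limit": 10.0,
          "allowed_rails": ["stripe", "l402"]}
assert check_spend(None, 0.0, 1.0, "stripe") == "NoBudgetConfigured"
assert check_spend(policy, 95.0, 8.0, "stripe") == "BudgetExceeded"
```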
L402 support
Akshi parses L402 HTTP challenges (`402 Payment Required` responses with a `WWW-Authenticate: L402` header). This allows agents to interact with paid Lightning Network endpoints when L402 is an allowed rail.
Kill switch
Set `AKSHI_SPEND_KILL_SWITCH=1` to immediately halt all paid actions across
every agent. This is a global emergency stop — useful if you suspect runaway
spending. The kill switch takes effect without restarting the runtime.
Security Model
Akshi’s security model is defense in depth: multiple independent layers, each designed to contain failures in the layers above it. No single layer is sufficient on its own.
Security layers
| Layer | What it does | Where it lives |
|---|---|---|
| WASM Sandbox | Memory isolation, no filesystem/network/syscall access | Wasmtime runtime |
| Secrets Broker | Credential isolation, allowlist enforcement | Sandbox boundary |
| Policy Engine | Approval gates, risk scoring, spend limits | Pre-execution checks |
| Identity | Ed25519 signing, DID-based peer trust | Sync and mesh transport |
Execution tiers
| Tier | Isolation mechanism | Status |
|---|---|---|
| Tier 1 | Pure WASM (Wasmtime) | Shipped |
| Tier 2 | Native process + OS-level sandbox (seccomp, landlock) | Planned |
| Tier 3 | Native process + hardware isolation (VMs, enclaves) | Planned |
Tier 1 is the only tier available today. It provides strong isolation for agents compiled to WASM. Tier 2 and 3 will extend support to native binaries that cannot be compiled to WASM.
Capability scan
On startup, the runtime scans the host platform for available sandbox capabilities (Wasmtime version, OS-level sandbox support, hardware features). This information is logged and available via the dashboard, so operators know what isolation mechanisms are active.
Governance risk tiers
Akshi classifies agent actions into risk tiers at the field level:
- Low risk: read-only operations, local inference, journal reads.
- Medium risk: outbound HTTP, A2A messaging, MCP tool calls.
- High risk: financial actions, communications, system modifications.
Risk classification drives approval gate behavior and risk scoring. See Approval Gates for details on how risk scores are computed.
Kill switches
Two emergency stops are available:
- Spend kill switch (`AKSHI_SPEND_KILL_SWITCH=1`): halts all paid actions.
- Supervisor shutdown: `akshi stop` terminates all agents and the runtime.
Both take effect immediately without waiting for in-flight operations to complete.
Formal verification
TLA+ specifications are planned for critical subsystems (broker allowlist evaluation, sync envelope verification, approval gate state machine). These are not yet shipped.
CVE response
Wasmtime is a critical dependency. The project targets a 24-48 hour patch SLA for critical Wasmtime CVEs. The dependency is pinned to a specific version and updated deliberately, not automatically.
Threat model summary
The security model assumes agents are untrusted by default:
- Agents cannot access the host filesystem, network, or other agents’ memory.
- All external access goes through capability-gated host imports.
- Credentials never enter the sandbox.
- High-risk actions require human approval.
- Spend is capped and tracked per agent.
The model does not protect against side-channel attacks on WASM execution or timing attacks on the inference router. These are accepted risks in the current Tier 1 design.
Developing Agents
Akshi agents are WebAssembly components that run inside the sandboxed runtime. You can write agents in any language that compiles to WASM components.
Language guides
- Rust WASM Agent — Native Rust with the akshi-sdk crate
- Python Component — Python via componentize-py
- TypeScript Component — TypeScript via jco
Common topics
- Using the SDK — Database, journal, inference, HTTP, and A2A patterns
- Host Capabilities — Declaring and managing sandbox permissions
- MCP Integration — Connecting agents to MCP tool servers
Rust WASM Agent
Build an agent in Rust using the akshi-sdk crate. This is the most direct path with full type safety and the smallest binary size.
Prerequisites
- Rust toolchain with the `wasm32-wasip1` target
- Akshi CLI installed
rustup target add wasm32-wasip1
Create the project
akshi create agent my-agent --lang rust
cd my-agent
This scaffolds a Cargo project with the SDK dependency and WIT bindings pre-configured.
Project structure
my-agent/
  Cargo.toml
  src/
    main.rs
  wit/
    deps/        # akshi WIT interfaces (auto-managed)
    world.wit    # agent world definition
Write agent logic
Edit src/main.rs:
use akshi_sdk::prelude::*;

struct MyAgent;

impl Agent for MyAgent {
    fn tick(&mut self, ctx: &Context) -> Result<()> {
        let prompt = "Summarize recent activity.";
        let response = ctx.infer(prompt)?;
        ctx.journal_insert("summary", &response)?;
        Ok(())
    }
}

export_agent!(MyAgent);
The tick method runs each time the runtime schedules the agent. Use ctx to
access host capabilities: inference, journal, HTTP fetch, and more.
Build
cargo build --target wasm32-wasip1 --release
The output is at target/wasm32-wasip1/release/my_agent.wasm.
Configure
Add the agent to runtime.toml:
[[agents]]
name = "my-agent"
wasm_path = "target/wasm32-wasip1/release/my_agent.wasm"
tick_interval_secs = 60
fuel_limit = 1_000_000
[agents.capabilities]
inference = true
journal = true
Run
akshi run
Check status with akshi status or open the dashboard at
http://127.0.0.1:3210.
Next steps
- Using the SDK for common patterns
- Host Capabilities for permission details
- Agent Configuration for all config options
Python Component Agent
Build an Akshi agent in Python using WIT bindings and componentize-py.
Prerequisites
- Python 3.11+
- `componentize-py` installed (`pip install componentize-py`)
- Akshi CLI installed
Create the project
akshi create agent my-py-agent --lang python
cd my-py-agent
Project structure
my-py-agent/
  app.py
  wit/
    deps/
    world.wit
Write agent logic
Edit app.py:
from akshi_sdk import Agent, Context

class MyPyAgent(Agent):
    def tick(self, ctx: Context):
        logs = ctx.db_query("SELECT * FROM events WHERE level = 'error'")
        if logs:
            summary = ctx.infer(f"Summarize these errors: {logs}")
            ctx.journal_insert("error-summary", summary)
Build
componentize-py -d wit/world.wit -w agent componentize app -o my_py_agent.wasm
Configure
[[agents]]
name = "my-py-agent"
wasm_path = "my_py_agent.wasm"
tick_interval_secs = 300
fuel_limit = 2_000_000
[agents.capabilities]
inference = true
journal = true
database = true
Run
akshi run
Limitations
- Python components are larger (~5 MB) due to the embedded interpreter.
- Startup time is higher than for Rust agents. The runtime caches the compiled module after the first load.
- Not all Python standard library modules are available in the WASM environment.
Next steps
- Using the SDK for database, journal, and inference patterns
- Host Capabilities for capability declarations
TypeScript Component Agent
Build an Akshi agent in TypeScript using WIT bindings and jco.
Prerequisites
- Node.js 20+
- `jco` installed (`npm install -g @bytecodealliance/jco`)
- `componentize-js` installed (`npm install -g @bytecodealliance/componentize-js`)
- Akshi CLI installed
Create the project
akshi create agent my-ts-agent --lang typescript
cd my-ts-agent
Project structure
my-ts-agent/
  src/
    index.ts
  wit/
    deps/
    world.wit
  package.json
  tsconfig.json
Write agent logic
Edit src/index.ts:
import { Agent, Context } from "akshi-sdk";

export class MyTsAgent implements Agent {
  tick(ctx: Context): void {
    const response = ctx.infer("What are today's top headlines?");
    ctx.journalInsert("headlines", response);
  }
}
Build
Compile TypeScript to JavaScript, then componentize:
npm run build
jco componentize dist/index.js -d wit/world.wit -w agent -o my_ts_agent.wasm
Configure
[[agents]]
name = "my-ts-agent"
wasm_path = "my_ts_agent.wasm"
tick_interval_secs = 3600
fuel_limit = 2_000_000
[agents.capabilities]
inference = true
journal = true
Run
akshi run
Limitations
- TypeScript components include the JavaScript engine (~4 MB).
- Some Node.js APIs (filesystem, network) are not available; use host capabilities instead.
Next steps
- Using the SDK for common SDK patterns
- Host Capabilities for permission details
Using the SDK
Common patterns for interacting with the Akshi runtime from agent code. Examples are in Rust; the same functions are available in Python and TypeScript SDKs with language-appropriate naming.
Inference
Request LLM inference through the router:
let answer = ctx.infer("Explain quantum computing in one paragraph.")?;

// With model hint
let answer = ctx.infer_with_model("claude-sonnet", "Explain quantum computing.")?;
The router selects local or cloud models based on your route profile.
Journal operations
Store and retrieve agent memory:
// Insert an entry
ctx.journal_insert("daily-summary", &summary_text)?;

// Vector similarity search
let results = ctx.journal_search("network anomaly detection", 5)?;
for entry in results {
    println!("{}: {} (score: {})", entry.key, entry.text, entry.score);
}

// Hybrid search (vector + keyword)
let results = ctx.journal_hybrid_search("anomaly", "network", 5)?;
Each journal entry is automatically embedded for vector search and tagged with provenance metadata (agent name, timestamp, DID signature).
Database operations
Use the built-in SQLite database for structured data:
ctx.db_exec("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, msg TEXT)", &[])?;
ctx.db_exec("INSERT INTO events (msg) VALUES (?)", &[&event_message])?;

let rows = ctx.db_query("SELECT msg FROM events ORDER BY id DESC LIMIT 10")?;
HTTP fetch
Make outbound HTTP requests (requires http_fetch capability):
let response = ctx.http_fetch("https://api.example.com/data")?;
let body = response.text()?;
Only domains listed in the agent’s endpoint allowlist are permitted.
A2A messaging
Delegate tasks to other agents:
// Send a task to another agent
let task_id = ctx.a2a_send("researcher", json!({"query": "Rust WASM runtimes"}))?;

// Check task result (non-blocking)
if let Some(result) = ctx.a2a_poll(task_id)? {
    println!("Result: {}", result);
}
MCP tool calls
Call tools exposed by configured MCP servers:
// mcp_call takes (server, tool, arguments)
let result = ctx.mcp_call("web-search", "search", json!({"query": "akshi documentation"}))?;
See MCP Integration for server configuration.
Logging
ctx.log_info("Processing started");
ctx.log_warn("Rate limit approaching");
ctx.log_error("Failed to fetch data");
Logs are visible via akshi logs, the dashboard, and the /api/v1/logs endpoint.
Host Capabilities
Akshi uses a fail-closed capability model. Agents have no access to host resources unless explicitly granted in configuration.
Declaring capabilities
In runtime.toml, each agent entry has a capabilities section:
[[agents]]
name = "researcher"
wasm_path = "researcher.wasm"
[agents.capabilities]
inference = true
journal = true
http_fetch = true
database = true
a2a = true
mcp = true
Capability list
| Capability | Description | Default |
|---|---|---|
| `inference` | Call LLM inference through the router | off |
| `journal` | Read/write journal entries and vector search | off |
| `database` | SQLite database access | off |
| `http_fetch` | Outbound HTTP requests | off |
| `a2a` | Send/receive A2A task messages | off |
| `mcp` | Call MCP tool servers | off |
| `filesystem` | Read files from allowed paths | off |
| `spend` | Use economic spend budget | off |
Endpoint allowlists
When http_fetch is enabled, restrict which domains the agent can reach:
[[agents]]
name = "researcher"
[agents.capabilities]
http_fetch = true
[agents.endpoints]
allowed = ["api.example.com", "cdn.example.com"]
Requests to unlisted domains are blocked and logged.
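Conceptually, the allowlist check reduces to exact host matching. The sketch below is ours, not the runtime's actual implementation; scheme handling and port normalization are simplified.

```rust
// Sketch of allowlist enforcement: extract the host from a URL and
// require an exact match against the allowed list.
fn host_allowed(url: &str, allowed: &[&str]) -> bool {
    let host = url
        .strip_prefix("https://")
        .or_else(|| url.strip_prefix("http://"))
        .and_then(|rest| rest.split('/').next());
    match host {
        Some(h) => allowed.iter().any(|a| *a == h),
        None => false, // fail closed on anything we cannot parse
    }
}
```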
Approval-gated capabilities
Some actions can require human approval even when the capability is enabled:
[agents.capabilities]
http_fetch = true
[agents.approval]
http_fetch = true # Require approval for each HTTP request
See Approval Workflow for details.
Fail-closed behavior
If an agent calls a host function for a capability it does not have, the call returns an error immediately. The agent is not terminated; it can handle the error and continue.
If a capability is gated on approval and the approval is denied, the call
returns an error with reason "approval_denied".
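In agent code this means a gated call can be treated as a recoverable error. The sketch below models the pattern with a hypothetical error enum; the SDK's actual error type may differ.

```rust
// Hypothetical error type modeling fail-closed behavior; the real
// SDK's error enum is not shown in this document.
#[derive(Debug, PartialEq)]
enum HostError {
    CapabilityNotGranted,
    ApprovalDenied,
}

// The agent handles the error and continues instead of crashing.
fn fetch_with_fallback(result: Result<String, HostError>) -> String {
    match result {
        Ok(body) => body,
        Err(HostError::ApprovalDenied) => String::from("skipped: approval denied"),
        Err(HostError::CapabilityNotGranted) => String::from("skipped: http_fetch not granted"),
    }
}
```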
Runtime enforcement
Capabilities are enforced at the WASM host boundary. The sandbox prevents agents from bypassing capability checks through memory manipulation or other means. Capability violations are recorded in the audit log.
MCP Integration
Connect agents to external tool servers using the Model Context Protocol (MCP).
Configure an MCP server
Add an MCP server to your agent’s configuration in runtime.toml:
[[agents]]
name = "researcher"
wasm_path = "researcher.wasm"
[agents.capabilities]
mcp = true
inference = true
[[agents.mcp_servers]]
name = "web-search"
argv = ["npx", "-y", "@anthropic/mcp-server-web-search"]
[[agents.mcp_servers]]
name = "filesystem"
argv = ["npx", "-y", "@anthropic/mcp-server-filesystem", "/data"]
The argv array specifies the command to start the MCP server process. Akshi
manages the server lifecycle and communicates over stdio.
Call MCP tools from agent code
// List available tools
let tools = ctx.mcp_list_tools("web-search")?;

// Call a specific tool
let result = ctx.mcp_call("web-search", "search", json!({
    "query": "Akshi agent runtime"
}))?;
Example: research agent with search
use akshi_sdk::prelude::*;

struct Researcher;

impl Agent for Researcher {
    fn tick(&mut self, ctx: &Context) -> Result<()> {
        // Search the web
        let results = ctx.mcp_call("web-search", "search", json!({
            "query": "latest Rust WASM developments"
        }))?;

        // Summarize with LLM
        let summary = ctx.infer(&format!(
            "Summarize these search results:\n{results}"
        ))?;

        // Store in journal
        ctx.journal_insert("research-summary", &summary)?;
        Ok(())
    }
}

export_agent!(Researcher);
Multiple MCP servers
An agent can connect to multiple MCP servers. Each is identified by its name
field. Tool names are scoped per server, so tools with the same name on
different servers do not conflict.
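One way to picture this scoping is a composite (server, tool) key, so identically named tools stay distinct. This is an illustration, not the runtime's internal data structure.

```rust
use std::collections::HashMap;

// Two servers each expose a tool named "search"; keying by
// (server, tool) keeps them from conflicting.
fn register_tools() -> HashMap<(String, String), String> {
    let mut tools = HashMap::new();
    tools.insert(("web-search".into(), "search".into()), "web results".into());
    tools.insert(("filesystem".into(), "search".into()), "file matches".into());
    tools
}
```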
Server lifecycle
- MCP servers start when the agent first calls `mcp_list_tools` or `mcp_call`.
- Servers are restarted automatically if they crash.
- Servers stop when the agent is unloaded or the runtime shuts down.
- Server stdout/stderr is captured in the runtime logs.
Configuration
Akshi is configured through a single runtime.toml file with sections for the
runtime, agents, inference routing, mesh networking, and the dashboard.
Topics
- Runtime Configuration — Top-level runtime settings
- Agent Configuration — Per-agent entries and options
- Router Configuration — Inference routing and model selection
- Mesh Configuration — Multi-node mesh and sync settings
- Dashboard Configuration — HTTP API, auth, CORS, and rate limits
Runtime Configuration
The runtime.toml file is the central configuration for an Akshi node.
Typical configuration
# Runtime identity
[identity]
name = "my-node"
data_dir = "./akshi-data"
# did = "did:web:node.example.com" # auto-generated if omitted
# Agent definitions
[[agents]]
name = "log-monitor"
wasm_path = "agents/log_monitor.wasm"
tick_interval_secs = 60
fuel_limit = 1_000_000
[agents.capabilities]
inference = true
journal = true
filesystem = true
[agents.endpoints]
allowed = []
# Inference routing
[router]
default_route = "local"
[[router.routes]]
name = "local"
provider = "ollama"
model = "llama3.2"
base_url = "http://127.0.0.1:11434"
# Dashboard and HTTP API
[dashboard]
port = 3210
token = "my-secret-token"
cors_origins = ["http://127.0.0.1:3210"]
rate_limit = 300
rate_limit_burst = 50
Section reference
| Section | Purpose |
|---|---|
| `[identity]` | Node name, data directory, DID identity |
| `[[agents]]` | Agent entries (one per agent) |
| `[router]` | Inference routing configuration |
| `[dashboard]` | HTTP API and dashboard settings |
| `[mesh]` | Multi-node mesh networking (optional) |
| `[broker]` | Secrets broker settings (optional) |
| `[economics]` | Spend policy and budgets (optional) |
Data directory
The data_dir path stores journals, databases, keys, and audit logs. It is
created automatically on first run. Use an absolute path in production.
Environment variable substitution
Values can reference environment variables:
[dashboard]
token = "${AKSHI_TOKEN}"
[[router.routes]]
api_key = "${ANTHROPIC_API_KEY}"
See Environment Variables for the full list of recognized variables.
Validation
Check your configuration before running:
akshi config-check
This validates syntax, required fields, capability consistency, and endpoint reachability.
Agent Configuration
Each agent is defined as an [[agents]] entry in runtime.toml.
Full example
[[agents]]
name = "log-monitor"
wasm_path = "agents/log_monitor.wasm"
tick_interval_secs = 60
fuel_limit = 1_000_000
max_restarts = 5
restart_backoff_secs = 10
[agents.capabilities]
inference = true
journal = true
database = true
http_fetch = false
a2a = true
[agents.endpoints]
allowed = ["api.example.com"]
[agents.approval]
http_fetch = true
[agents.spend]
daily_budget_usd = 1.00
per_inference_max_usd = 0.05
[[agents.mcp_servers]]
name = "search"
argv = ["npx", "-y", "@anthropic/mcp-server-web-search"]
Field reference
| Field | Type | Default | Description |
|---|---|---|---|
| `name` | string | required | Unique agent identifier |
| `wasm_path` | string | required | Path to the WASM component |
| `tick_interval_secs` | integer | 60 | Seconds between tick invocations |
| `fuel_limit` | integer | 1_000_000 | Max WASM fuel per tick (omit for unlimited) |
| `max_restarts` | integer | 10 | Max automatic restarts before giving up |
| `restart_backoff_secs` | integer | 5 | Backoff between restarts |
Capabilities
See Host Capabilities for the full list and fail-closed semantics.
Endpoint allowlist
When http_fetch is enabled, the endpoints.allowed list restricts outbound
domains. An empty list blocks all HTTP requests even with the capability enabled.
Spend policy
The optional [agents.spend] section sets economic guardrails:
- `daily_budget_usd` — Maximum daily spend across all inference calls.
- `per_inference_max_usd` — Maximum cost for a single inference request.
When the budget is exhausted, the agent receives a `budget_exceeded` error.
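The two limits compose as a simple pre-flight check. The sketch below uses our own names; the runtime's internals are not specified here.

```rust
// Pre-flight spend check mirroring the two limits above.
fn check_spend(
    spent_today_usd: f64,
    cost_usd: f64,
    daily_budget_usd: f64,
    per_inference_max_usd: f64,
) -> Result<(), &'static str> {
    if cost_usd > per_inference_max_usd {
        return Err("per_inference_max_exceeded");
    }
    if spent_today_usd + cost_usd > daily_budget_usd {
        return Err("budget_exceeded");
    }
    Ok(())
}
```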
MCP servers
Each [[agents.mcp_servers]] entry defines an MCP tool server. See
MCP Integration for details.
Router Configuration
The inference router selects which model backend handles each agent request.
Basic setup
[router]
default_route = "local"
[[router.routes]]
name = "local"
provider = "ollama"
model = "llama3.2"
base_url = "http://127.0.0.1:11434"
[[router.routes]]
name = "cloud"
provider = "anthropic"
model = "claude-sonnet-4-20250514"
api_key = "${ANTHROPIC_API_KEY}"
Route selection
The router picks a route based on:
- Agent hint — The agent can request a specific model via `infer_with_model`.
- Route profile — Weighted selection based on configured thresholds.
- Default route — Falls back to `default_route` if no match.
Route profiles
Fine-tune routing behavior per agent:
[[agents]]
name = "researcher"
[agents.route_profile]
prefer = "cloud"
fallback = "local"
max_latency_ms = 5000
cost_weight = 0.3
quality_weight = 0.7
| Field | Description |
|---|---|
| `prefer` | Preferred route name |
| `fallback` | Route to use if the preferred route is unavailable |
| `max_latency_ms` | Switch to the fallback if latency exceeds this |
| `cost_weight` | Weight for cost optimization (0.0-1.0) |
| `quality_weight` | Weight for quality optimization (0.0-1.0) |
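One plausible reading of the cost/quality weights is a linear score per route: reward quality, penalize cost, pick the maximum. The router's exact formula is not documented here, so treat this as a sketch.

```rust
struct Route {
    name: &'static str,
    cost: f64,    // relative cost, higher = more expensive
    quality: f64, // relative quality, higher = better
}

// Hypothetical scoring rule: score = quality_weight * quality - cost_weight * cost.
fn pick_route(routes: &[Route], cost_weight: f64, quality_weight: f64) -> &'static str {
    routes
        .iter()
        .map(|r| (r.name, quality_weight * r.quality - cost_weight * r.cost))
        .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
        .map(|(name, _)| name)
        .unwrap_or("default")
}
```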
Provider reference
| Provider | provider value | Required fields |
|---|---|---|
| Ollama | ollama | base_url, model |
| Anthropic | anthropic | api_key, model |
| OpenAI-compatible | openai | base_url, api_key, model |
Testing routes
Verify routing with:
akshi config-check --test-routes
This sends a test prompt to each configured route and reports latency and availability.
Mesh Configuration
Enable multi-node journal synchronization over encrypted WireGuard tunnels.
Enable mesh
[mesh]
enabled = true
listen_addr = "0.0.0.0:4820"
peer_discovery = "static" # or "dht"
[[mesh.peers]]
name = "node-beta"
endpoint = "192.168.1.20:4820"
public_key = "base64-encoded-wireguard-public-key"
[[mesh.peers]]
name = "node-gamma"
endpoint = "192.168.1.30:4820"
public_key = "base64-encoded-wireguard-public-key"
[mesh.sync]
interval_secs = 10
batch_size = 50
max_lag_before_full_sync = 1000
Field reference
| Field | Description |
|---|---|
| `enabled` | Enable mesh networking |
| `listen_addr` | Address and port for incoming peer connections |
| `peer_discovery` | `static` (explicit peers) or `dht` (distributed hash table) |
Peer configuration
Each [[mesh.peers]] entry defines a remote node:
| Field | Description |
|---|---|
| `name` | Human-readable peer name |
| `endpoint` | IP:port of the remote node |
| `public_key` | WireGuard public key for the peer |
Sync settings
| Field | Default | Description |
|---|---|---|
| `interval_secs` | 10 | How often to sync with peers |
| `batch_size` | 50 | Envelopes per sync batch |
| `max_lag_before_full_sync` | 1000 | Sequence lag threshold for full resync |
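The full-resync decision is, roughly, a comparison against the peer's sequence lag (a sketch of the threshold semantics, not the sync engine's actual code):

```rust
// If a peer has fallen more than `max_lag` envelopes behind, incremental
// batching is abandoned in favor of a full resync.
fn sync_mode(local_seq: u64, peer_seq: u64, max_lag: u64) -> &'static str {
    if local_seq.saturating_sub(peer_seq) > max_lag {
        "full"
    } else {
        "incremental"
    }
}
```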
DHT discovery
For dynamic environments, use DHT-based peer discovery:
[mesh]
peer_discovery = "dht"
[mesh.dht]
bootstrap_nodes = ["192.168.1.10:4821"]
Nodes announce themselves and discover peers automatically.
Key management
Generate WireGuard keys with:
akshi mesh keygen
This creates a keypair in the data directory. Share the public key with peers.
Dashboard Configuration
Configure the HTTP API and web dashboard.
Basic setup
[dashboard]
port = 3210
token = "${AKSHI_TOKEN}"
Full options
[dashboard]
port = 3210
bind_address = "127.0.0.1"
token = "${AKSHI_TOKEN}"
auth = true
cors_origins = ["http://127.0.0.1:3210"]
rate_limit = 300
rate_limit_burst = 50
sse_keepalive_secs = 15
Field reference
| Field | Type | Default | Description |
|---|---|---|---|
| `port` | integer | 3210 | HTTP listen port |
| `bind_address` | string | 127.0.0.1 | Bind address |
| `token` | string | auto-generated | API authentication token |
| `auth` | boolean | true | Enable authentication |
| `cors_origins` | string[] | ["http://127.0.0.1:3210"] | Allowed CORS origins |
| `rate_limit` | integer | 300 | Requests per minute |
| `rate_limit_burst` | integer | 50 | Burst request limit |
| `sse_keepalive_secs` | integer | 15 | SSE keepalive ping interval |
Disabling authentication
For local development only:
[dashboard]
auth = false
Never disable auth on a network-accessible interface.
CORS for external dashboards
If you host the dashboard UI separately:
[dashboard]
cors_origins = ["https://dashboard.example.com", "http://localhost:5173"]
Rate limiting
Rate limits apply per source IP. When exceeded, the API returns 429 Too Many Requests with a Retry-After header. Adjust limits based on your client count
and monitoring frequency.
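The rate_limit / rate_limit_burst pair behaves like a token bucket: a steady refill of rate_limit/60 tokens per second, capped at the burst size. The sketch below illustrates the semantics; the server's actual implementation may differ.

```rust
// Token-bucket model of per-IP rate limiting.
struct Bucket {
    tokens: f64,
    burst: f64,          // rate_limit_burst
    refill_per_sec: f64, // rate_limit / 60
}

impl Bucket {
    fn allow(&mut self, elapsed_secs: f64) -> bool {
        // Refill for the time since the last request, capped at burst.
        self.tokens = (self.tokens + elapsed_secs * self.refill_per_sec).min(self.burst);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false // caller would receive 429 Too Many Requests
        }
    }
}
```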
Operations
Day-to-day operational tasks for running and maintaining an Akshi deployment.
Topics
- Monitoring & Dashboard — Real-time status and metrics
- Logs & Traces — Log retrieval, trace exploration, replay
- Approval Workflow — Manage approval gates for agent actions
- Journal & Memory — Agent memory, vector search, provenance
- Hot Reload — Update agents without downtime
- Auditing — Export and verify audit receipts
Monitoring & Dashboard
The Akshi dashboard provides real-time visibility into agent status, logs, and event streams.
Opening the dashboard
After starting the runtime, open:
http://127.0.0.1:3210
If auth is enabled, enter the token from runtime.toml or use the one printed
at startup.
Agent status view
The main view shows all agents with their current state:
- Running — Agent is actively executing
- Idle — Agent is waiting for the next tick
- Stopped — Agent has been manually stopped
- Error — Agent encountered a fatal error
Click an agent to see its detailed status including uptime, restart count, memory usage, and fuel remaining.
Live event stream
The events panel shows real-time agent activity:
- Inference tokens as they stream
- Tool calls with arguments and results
- Agent tick completions with duration
- Errors and warnings
This uses the SSE endpoint at /api/v1/events.
Prometheus metrics
Export metrics for external monitoring:
curl -H "Authorization: Bearer $TOKEN" http://127.0.0.1:3210/api/v1/metrics
Configure Prometheus to scrape this endpoint:
scrape_configs:
  - job_name: akshi
    bearer_token: "your-token"
    metrics_path: /api/v1/metrics
    static_configs:
      - targets: ["127.0.0.1:3210"]
Key metrics to monitor
| Metric | Alert threshold | Description |
|---|---|---|
| `akshi_agents_total` | Drops below expected | Agent crashed and not restarted |
| `akshi_agent_restarts_total` | Rapid increase | Agent crash loop |
| `akshi_approvals_pending` | Grows unbounded | Approvals not being handled |
| `akshi_fuel_consumed_total` | Near fuel limit | Agent may be throttled |
CLI status
Quick status check from the terminal:
akshi status
Logs & Traces
Akshi captures structured logs from all agents and supports trace-level debugging with replay.
Viewing logs
CLI
# Recent logs
akshi logs
# Follow mode
akshi logs --follow
# Filter by agent
akshi logs --agent log-monitor
# Filter by level
akshi logs --level warn
API
Poll for new log entries:
curl -H "Authorization: Bearer $TOKEN" \
"http://127.0.0.1:3210/api/v1/logs?since=0&limit=50"
Use next_seq from the response as the since parameter for subsequent calls.
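The since/next_seq cursor makes polling loops straightforward. This sketch simulates the pattern against an in-memory log rather than the real endpoint:

```rust
// A page of results plus the cursor to pass as `since` next time.
struct Page {
    entries: Vec<String>,
    next_seq: u64,
}

// Stand-in for GET /api/v1/logs?since=...&limit=...
fn fetch_logs(log: &[&str], since: u64, limit: usize) -> Page {
    let entries: Vec<String> = log
        .iter()
        .skip(since as usize)
        .take(limit)
        .map(|s| s.to_string())
        .collect();
    Page { next_seq: since + entries.len() as u64, entries }
}
```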
Trace exploration
Traces capture the full execution sequence of an agent tick: inference calls, tool invocations, journal writes, and timing.
# List recent traces
akshi logs --traces
# View a specific trace
akshi logs --trace-id abc123
Traces are stored in the data directory under traces/.
Replay
Replay a recorded agent tick to debug behavior:
akshi replay --trace-id abc123
Replay re-executes the agent with the same inputs and shows a side-by-side comparison of the original and replayed outputs. This is useful for diagnosing non-deterministic behavior.
Log retention
Logs are retained based on the data directory size. Configure rotation:
[identity]
log_retention_days = 30
log_max_size_mb = 500
Structured format
All logs follow the format:
{
  "seq": 155,
  "ts": "2026-03-17T10:00:01Z",
  "level": "info",
  "agent": "log-monitor",
  "message": "Scan cycle completed"
}
Export logs as JSON for external analysis:
akshi logs --format json > logs.json
Approval Workflow
Approval gates add human-in-the-loop oversight to sensitive agent actions.
Configure approval policy
In runtime.toml, enable approval for specific capabilities:
[[agents]]
name = "researcher"
[agents.capabilities]
http_fetch = true
spend = true
[agents.approval]
http_fetch = true # Require approval for each HTTP request
spend = true # Require approval for spend actions
When the agent invokes a gated capability, the action is paused until approved or denied.
Review pending approvals
Dashboard
The dashboard shows a notification badge when approvals are pending. Click to review each request with its details.
CLI
akshi status --approvals
API
curl -H "Authorization: Bearer $TOKEN" \
http://127.0.0.1:3210/api/v1/approvals
Approve or deny
Single approval
curl -X POST -H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"decision": "approve"}' \
http://127.0.0.1:3210/api/v1/approvals/apr-001
Batch approval
Review grouped approvals and approve in bulk:
# View batched by agent and capability
curl -H "Authorization: Bearer $TOKEN" \
http://127.0.0.1:3210/api/v1/approvals/batched
# Approve all in a batch
curl -X POST -H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"ids": ["apr-001","apr-002","apr-003"], "decision": "approve"}' \
http://127.0.0.1:3210/api/v1/approvals/batch
Timeout behavior
If an approval is not decided within the configured timeout, the agent receives
an approval_timeout error and can retry on the next tick.
[agents.approval]
timeout_secs = 300 # 5 minutes
Audit trail
All approval decisions are recorded in the audit log with the decision, timestamp, and the operator identity (if available).
Journal & Memory
The journal is each agent’s persistent memory store with vector search capabilities.
How it works
Every journal entry is:
- Stored in the agent’s local database.
- Embedded into a vector for similarity search.
- Signed with the agent’s DID key for provenance.
- Replicated to mesh peers (if mesh is enabled).
Insert entries
From agent code:
ctx.journal_insert("daily-report", "System healthy, 0 anomalies detected.")?;
Entries are keyed by a string tag and contain free-form text.
Vector search
Find semantically similar entries:
let results = ctx.journal_search("network issues", 5)?;
for r in results {
    println!("[{}] {} (score: {:.2})", r.key, r.text, r.score);
}
Returns the top N entries ranked by cosine similarity.
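For reference, cosine similarity between two embedding vectors is the normalized dot product. A from-scratch sketch (the real index works on precomputed embeddings):

```rust
// Cosine similarity: dot(a, b) / (|a| * |b|), with 0.0 for zero vectors.
fn cosine(a: &[f64], b: &[f64]) -> f64 {
    let dot: f64 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f64 = a.iter().map(|x| x * x).sum::<f64>().sqrt();
    let nb: f64 = b.iter().map(|x| x * x).sum::<f64>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}
```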
Hybrid search
Combine vector similarity with keyword matching:
let results = ctx.journal_hybrid_search("security", "CVE", 10)?;
The first argument is the semantic query; the second is a keyword filter.
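A hybrid query can be pictured as a keyword filter followed by a vector-score ranking. This is a simplification for illustration, not the index's actual algorithm:

```rust
// Keep entries containing the keyword, then rank by (precomputed)
// vector score, descending, and return the top n texts.
fn hybrid_search<'a>(
    entries: &[(&'a str, f64)], // (text, vector score)
    keyword: &str,
    n: usize,
) -> Vec<&'a str> {
    let mut hits: Vec<&(&str, f64)> = entries
        .iter()
        .filter(|(text, _)| text.contains(keyword))
        .collect();
    hits.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    hits.into_iter().take(n).map(|(text, _)| *text).collect()
}
```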
Provenance
Each journal entry carries:
- Agent DID — Identity of the agent that created the entry
- Timestamp — When the entry was created
- Signature — Ed25519 signature over the content
- Sequence — Monotonic sequence number for ordering
Verify provenance from the CLI:
akshi audit verify-journal --agent researcher --seq 42
API access
Read journal entries via the sync API:
curl -H "Authorization: Bearer $TOKEN" \
"http://127.0.0.1:3210/api/v1/sync/envelopes?agent=researcher&since=0&limit=10"
Storage
Journal data is stored in the agent’s directory under data_dir:
akshi-data/
  journals/
    researcher/
      entries.db
      vectors.idx
The vector index is rebuilt automatically on startup if corrupted.
Hot Reload
Update agent WASM binaries without stopping the runtime.
Reload command
akshi reload
This reloads all agents whose WASM files have changed on disk. Agents whose binaries are unchanged continue running without interruption.
Reload a specific agent
akshi reload --agent log-monitor
What happens during reload
1. The runtime detects the updated WASM binary.
2. The current agent tick (if running) completes.
3. The old WASM module is unloaded.
4. The new WASM module is compiled and instantiated.
5. The agent resumes with its existing journal and database state.
There is a brief pause between steps 3 and 4. No ticks are skipped; the next tick uses the new code.
Zero-downtime updates
For production deployments:
- Build the new WASM binary.
- Copy it to the configured `wasm_path`.
- Run `akshi reload`.
- Verify with `akshi status`.
cargo build --target wasm32-wasip1 --release
cp target/wasm32-wasip1/release/my_agent.wasm agents/
akshi reload --agent my-agent
akshi status
Configuration changes
akshi reload only reloads WASM binaries. To apply configuration changes
(tick interval, capabilities, routes), restart the runtime:
akshi stop && akshi run
File watching (development)
In development, enable automatic reload on file changes:
akshi run --watch
This watches all configured wasm_path files and reloads automatically when
they change.
Auditing
Akshi produces cryptographically signed audit receipts for agent actions.
What is audited
Every capability invocation generates a receipt:
- Inference calls (model, prompt hash, response hash, cost)
- HTTP fetch requests (URL, status code)
- Journal writes (entry key, content hash)
- A2A messages (sender, receiver, task ID)
- Approval decisions (approval ID, decision, operator)
- Spend transactions (amount, budget remaining)
Export receipts
# Export all receipts as JSON
akshi audit export --format json > receipts.json
# Export for a specific agent
akshi audit export --agent researcher --format json
# Export a time range
akshi audit export --since 2026-03-01 --until 2026-03-17
Verify receipts
Verify the integrity of exported receipts:
akshi audit verify receipts.json
This checks:
- Each receipt’s Ed25519 signature against the agent’s DID.
- Sequence continuity (no gaps or duplicates).
- Hash chain integrity (each receipt references the previous).
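The hash-chain check can be illustrated with a toy chain. This sketch uses std's non-cryptographic DefaultHasher purely for demonstration; real receipts use a cryptographic hash plus Ed25519 signatures.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Each receipt stores the hash of its predecessor.
#[derive(Hash)]
struct Receipt {
    seq: u64,
    prev_hash: u64,
    payload: String,
}

fn hash_of(r: &Receipt) -> u64 {
    let mut h = DefaultHasher::new();
    r.hash(&mut h);
    h.finish()
}

// Valid iff sequence numbers are contiguous and each prev_hash matches.
fn verify_chain(receipts: &[Receipt]) -> bool {
    receipts.windows(2).all(|w| {
        w[1].seq == w[0].seq + 1 && w[1].prev_hash == hash_of(&w[0])
    })
}
```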
Audit bundles
Create a self-contained audit bundle for compliance:
akshi audit bundle --output audit-2026-03.tar.gz
The bundle includes receipts, the DID document, and verification metadata. Share the bundle with auditors; they can verify it with:
akshi audit verify-bundle audit-2026-03.tar.gz
Storage
Receipts are stored in data_dir/audit/:
akshi-data/
  audit/
    receipts.db   # SQLite database of all receipts
    chain.log     # Hash chain log
Retention
Configure audit retention:
[identity]
audit_retention_days = 365
Receipts older than the retention period are pruned on startup.
Deployment
Options for deploying Akshi from local development to production clusters.
Topics
- Single Node — Local development with Ollama
- Systemd Service — Run as a Linux system service
- Docker — Container deployment with docker-compose
- Remote Deploy — Push agents to remote nodes via SSH
- Mesh Setup — Multi-node mesh with WireGuard
- Production Checklist — Security hardening and best practices
Single Node
Local development setup with Ollama for inference.
Prerequisites
- Akshi CLI installed
- Ollama installed and running
- A WASM agent binary (see Rust WASM Agent)
Install Ollama
# macOS / Linux
curl -fsSL https://ollama.ai/install.sh | sh
# Pull a model
ollama pull llama3.2
Verify Ollama is running:
curl http://127.0.0.1:11434/api/version
Initialize Akshi
mkdir my-project && cd my-project
akshi init
This creates a runtime.toml with sensible defaults for local development.
Configure
Edit runtime.toml:
[identity]
name = "dev-node"
data_dir = "./akshi-data"
[[agents]]
name = "my-agent"
wasm_path = "agents/my_agent.wasm"
tick_interval_secs = 30
[agents.capabilities]
inference = true
journal = true
[router]
default_route = "local"
[[router.routes]]
name = "local"
provider = "ollama"
model = "llama3.2"
base_url = "http://127.0.0.1:11434"
[dashboard]
port = 3210
auth = false
Run
akshi run
Open http://127.0.0.1:3210 to see the dashboard.
Verify
akshi status
curl http://127.0.0.1:3210/api/v1/health
Next steps
- Add more agents to `runtime.toml`
- Enable auth before exposing the port: set `dashboard.auth = true` and configure a token
- See Docker for containerized deployment
Systemd Service
Run Akshi as a system service on Linux.
Create a service user
sudo useradd --system --home-dir /opt/akshi --create-home akshi
sudo chown -R akshi:akshi /opt/akshi
Install files
sudo cp akshi /usr/local/bin/akshi
sudo cp runtime.toml /opt/akshi/runtime.toml
sudo cp agents/*.wasm /opt/akshi/agents/
Create the unit file
Write /etc/systemd/system/akshi.service:
[Unit]
Description=Akshi Agent Runtime
After=network.target
Wants=network-online.target
[Service]
Type=simple
User=akshi
Group=akshi
WorkingDirectory=/opt/akshi
ExecStart=/usr/local/bin/akshi run --config /opt/akshi/runtime.toml
ExecReload=/usr/local/bin/akshi reload
Restart=on-failure
RestartSec=5
Environment=AKSHI_TOKEN=your-secret-token
Environment=ANTHROPIC_API_KEY=sk-ant-...
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Enable and start
sudo systemctl daemon-reload
sudo systemctl enable akshi
sudo systemctl start akshi
Manage
# Check status
sudo systemctl status akshi
# View logs
sudo journalctl -u akshi -f
# Reload agents (hot reload)
sudo systemctl reload akshi
# Restart
sudo systemctl restart akshi
Environment file
For sensitive variables, use an environment file:
sudo tee /opt/akshi/.env << 'EOF'
AKSHI_TOKEN=your-secret-token
ANTHROPIC_API_KEY=sk-ant-...
EOF
sudo chmod 600 /opt/akshi/.env
sudo chown akshi:akshi /opt/akshi/.env
Add to the unit file:
EnvironmentFile=/opt/akshi/.env
Docker
Run Akshi in a container with docker-compose.
Dockerfile
FROM rust:1.83-slim AS builder
WORKDIR /build
COPY . .
RUN cargo build --release --bin akshi
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*
COPY --from=builder /build/target/release/akshi /usr/local/bin/akshi
RUN useradd --system akshi
USER akshi
WORKDIR /opt/akshi
ENTRYPOINT ["akshi", "run"]
docker-compose.yml
services:
  akshi:
    build: .
    ports:
      - "3210:3210"
    volumes:
      - ./runtime.toml:/opt/akshi/runtime.toml:ro
      - ./agents:/opt/akshi/agents:ro
      - akshi-data:/opt/akshi/akshi-data
    environment:
      - AKSHI_TOKEN=${AKSHI_TOKEN}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
    restart: unless-stopped

  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama-data:/root/.ollama

volumes:
  akshi-data:
  ollama-data:
Run
# Start services
docker compose up -d
# Pull a model into Ollama
docker compose exec ollama ollama pull llama3.2
# Check status
curl http://127.0.0.1:3210/api/v1/health
Configuration notes
- Mount runtime.toml read-only (:ro).
- Use a named volume for akshi-data to persist journals and databases.
- Set dashboard.bind_address = "0.0.0.0" in runtime.toml so the container listens on all interfaces.
- Reference the Ollama container by service name in runtime.toml:
[[router.routes]]
name = "local"
provider = "ollama"
base_url = "http://ollama:11434"
model = "llama3.2"
Updating agents
# Copy new WASM binary
cp new_agent.wasm agents/
# Reload without restart
docker compose exec akshi akshi reload
Remote Deploy
Push agent updates to remote Akshi nodes via SSH.
Prerequisites
- SSH access to the remote node
- Akshi installed on the remote node
- The remote node running with a runtime.toml
Deploy command
akshi deploy --host user@remote-node.example.com
This performs:
- Copies updated WASM binaries to the remote node.
- Runs akshi reload on the remote node.
- Verifies agent status after reload.
Specifying agents
Deploy a specific agent:
akshi deploy --host user@remote-node.example.com --agent log-monitor
Custom paths
If the remote node uses a non-default installation path:
akshi deploy --host user@remote-node.example.com \
--remote-dir /opt/akshi \
--remote-config /opt/akshi/runtime.toml
Deploy with configuration
Push configuration changes along with WASM binaries:
akshi deploy --host user@remote-node.example.com --include-config
This copies runtime.toml and restarts the runtime (since config changes
require a restart).
SSH key authentication
The deploy command uses your SSH agent or ~/.ssh/config. For CI/CD pipelines,
set the key path:
akshi deploy --host user@remote-node.example.com --ssh-key ~/.ssh/deploy_key
Multiple nodes
Deploy to several nodes:
for host in node-a.example.com node-b.example.com; do
akshi deploy --host "akshi@$host"
done
Rollback
If the new agent fails, restore the previous version:
akshi deploy --host user@remote-node.example.com --rollback
This restores the previous WASM binary from the backup created during deploy.
Mesh Setup
Connect multiple Akshi nodes for journal synchronization over encrypted WireGuard tunnels.
Architecture
Node A (192.168.1.10) <--WireGuard--> Node B (192.168.1.20)
          \                                /
           \----WireGuard---- Node C ----/
                       (192.168.1.30)
Each node runs an independent Akshi runtime. The mesh layer replicates journal entries between nodes so agents on any node can access shared memory.
Step 1: Generate keys on each node
# On each node
akshi mesh keygen
This creates a WireGuard keypair in the data directory. Note the public key printed to stdout.
Step 2: Configure Node A
[mesh]
enabled = true
listen_addr = "0.0.0.0:4820"
[[mesh.peers]]
name = "node-b"
endpoint = "192.168.1.20:4820"
public_key = "<node-b-public-key>"
[[mesh.peers]]
name = "node-c"
endpoint = "192.168.1.30:4820"
public_key = "<node-c-public-key>"
[mesh.sync]
interval_secs = 10
batch_size = 50
Repeat on Nodes B and C with the appropriate peer keys and endpoints.
Step 3: Start all nodes
# On each node
akshi run
Step 4: Verify connectivity
akshi mesh status
Expected output:
Peers:
node-b 192.168.1.20:4820 connected lag=0
node-c 192.168.1.30:4820 connected lag=2
Step 5: Verify journal sync
Check convergence for a specific agent:
curl -H "Authorization: Bearer $TOKEN" \
"http://127.0.0.1:3210/api/v1/sync/convergence?agent=researcher"
Firewall rules
Ensure port 4820/UDP is open between all mesh nodes. If using a cloud provider, add the port to the security group.
Troubleshooting
- Peers show “disconnected” — Check firewall rules and endpoint addresses.
- High lag — Increase batch_size or decrease interval_secs.
- Signature errors — Verify public keys match the remote node’s keypair.
Production Checklist
Security hardening and operational readiness for production deployments.
Authentication
- Set a strong dashboard.token (at least 32 characters)
- Use environment variables or a secrets manager for tokens
- Never set dashboard.auth = false in production
[dashboard]
token = "${AKSHI_TOKEN}"
auth = true
Network binding
- Bind to localhost unless external access is needed
- Use a reverse proxy (nginx, caddy) for TLS termination
[dashboard]
bind_address = "127.0.0.1"
CORS
- Restrict CORS origins to known dashboard URLs
- Do not use * as a CORS origin
[dashboard]
cors_origins = ["https://dashboard.example.com"]
Rate limiting
- Configure rate limits appropriate for your client count
- Monitor 429 responses in your alerting system
[dashboard]
rate_limit = 300
rate_limit_burst = 50
Agent fuel limits
- Set fuel limits on all agents to prevent runaway execution
- Set spend budgets on agents with inference access
[[agents]]
fuel_limit = 1_000_000
[agents.spend]
daily_budget_usd = 5.00
per_inference_max_usd = 0.10
Endpoint allowlists
- Restrict http_fetch to necessary domains only
- Review allowlists periodically
[agents.endpoints]
allowed = ["api.example.com"]
Approval gates
- Enable approval for high-risk capabilities in production
- Assign an operator to monitor the approval queue
Secrets broker
- Use the broker for API keys instead of environment variables
- Revoke unused grants with the broker API
Audit
- Enable audit logging (on by default)
- Set retention period appropriate for compliance requirements
- Periodically export and verify audit bundles
Monitoring
- Configure Prometheus scraping for /api/v1/metrics
- Set alerts for agent crash loops, pending approvals, and budget exhaustion
- Monitor disk usage in the data directory
Backups
- Back up the data_dir regularly (journals, databases, keys)
- Test restore from backup periodically
Build a Log Monitor
End-to-end tutorial: create an agent that watches log files, detects anomalies, and reports findings.
What you will build
A log-monitor agent that:
- Reads log files from a configured directory.
- Detects error patterns using LLM inference.
- Stores findings in the journal.
- Reports findings via the API.
Prerequisites
- Akshi CLI installed
- Ollama running with llama3.2 pulled
- Rust toolchain with the wasm32-wasip1 target
Step 1: Create the workspace
mkdir log-monitor-demo && cd log-monitor-demo
akshi init
akshi create agent log-monitor --lang rust
Step 2: Add sample log files
mkdir logs
cat > logs/app.log << 'EOF'
2026-03-17 10:00:01 INFO Application started
2026-03-17 10:00:05 INFO Connected to database
2026-03-17 10:01:12 ERROR Connection timeout to payment service
2026-03-17 10:01:13 ERROR Retry 1/3 failed for payment service
2026-03-17 10:01:14 ERROR Retry 2/3 failed for payment service
2026-03-17 10:01:15 WARN Circuit breaker opened for payment service
2026-03-17 10:02:00 INFO Health check passed
2026-03-17 10:05:00 ERROR Out of memory in worker pool
EOF
Step 3: Write the agent
Edit log-monitor/src/main.rs:
use akshi_sdk::prelude::*;

struct LogMonitor;

impl Agent for LogMonitor {
    fn tick(&mut self, ctx: &Context) -> Result<()> {
        // Read the log file
        let logs = ctx.fs_read("logs/app.log")?;

        // Check the last analysis timestamp
        let _last_check = ctx.db_query(
            "SELECT value FROM state WHERE key = 'last_check'"
        )?;

        // Ask the LLM to analyze
        let prompt = format!(
            "Analyze these application logs. Identify errors and anomalies. \
             Report each issue with severity (info/warning/critical):\n\n{logs}"
        );
        let analysis = ctx.infer(&prompt)?;

        // Store findings
        if analysis.contains("ERROR") || analysis.contains("critical") {
            ctx.journal_insert("log-analysis", &analysis)?;
            // Log a short preview; a direct byte slice could panic on short
            // or multi-byte output, so take characters instead.
            let preview: String = analysis.chars().take(80).collect();
            ctx.log_warn(&format!("Issues detected: {preview}"));
        } else {
            ctx.log_info("Log scan complete, no issues found");
        }

        // Update last check time
        ctx.db_exec(
            "INSERT OR REPLACE INTO state (key, value) VALUES ('last_check', ?)",
            &[&ctx.now_iso()],
        )?;

        Ok(())
    }
}

export_agent!(LogMonitor);
Step 4: Build
cd log-monitor
cargo build --target wasm32-wasip1 --release
cd ..
Step 5: Configure
Edit runtime.toml:
[identity]
name = "log-demo"
data_dir = "./akshi-data"
[[agents]]
name = "log-monitor"
wasm_path = "log-monitor/target/wasm32-wasip1/release/log_monitor.wasm"
tick_interval_secs = 30
fuel_limit = 2_000_000
[agents.capabilities]
inference = true
journal = true
database = true
filesystem = true
[agents.filesystem]
allowed_paths = ["./logs"]
[router]
default_route = "local"
[[router.routes]]
name = "local"
provider = "ollama"
model = "llama3.2"
base_url = "http://127.0.0.1:11434"
[dashboard]
port = 3210
auth = false
Step 6: Run
akshi run
Step 7: Check results
Wait for the first tick (30 seconds), then:
# Check agent status
akshi status
# View logs
akshi logs --agent log-monitor
# Check findings via API
curl http://127.0.0.1:3210/api/v1/findings?since=0
The findings endpoint returns the agent’s analysis of the log file, including detected errors and their severity.
Step 8: Add more log data
Append new entries to test ongoing monitoring:
echo "2026-03-17 10:10:00 ERROR Disk space below 5%" >> logs/app.log
The agent picks up new entries on the next tick and reports additional findings.
Next steps
- Add more log files to the logs/ directory
- Tune tick_interval_secs to match your monitoring frequency
- Enable approval gates for critical findings
- Set up Prometheus alerts for finding severity
Multi-Agent Pipeline
Build a two-agent pipeline where Agent A generates data and Agent B processes it using A2A messaging.
What you will build
- data-collector — Gathers data and sends it as an A2A task.
- data-processor — Receives tasks, processes them with LLM inference, and stores results in the journal.
Prerequisites
- Akshi CLI installed
- Ollama running with llama3.2
- Rust toolchain with the wasm32-wasip1 target
Step 1: Create the workspace
mkdir pipeline-demo && cd pipeline-demo
akshi init
akshi create agent data-collector --lang rust
akshi create agent data-processor --lang rust
Step 2: Write the data collector
Edit data-collector/src/main.rs:
use akshi_sdk::prelude::*;

struct DataCollector;

impl Agent for DataCollector {
    fn tick(&mut self, ctx: &Context) -> Result<()> {
        // Simulate collecting data
        let data = format!(
            "System metrics at {}: CPU=72%, Memory=85%, Disk=60%",
            ctx.now_iso()
        );
        ctx.log_info(&format!("Collected: {data}"));

        // Send to the processor agent via A2A
        let task_id = ctx.a2a_send("data-processor", json!({
            "type": "analyze",
            "data": data
        }))?;
        ctx.log_info(&format!("Sent task {task_id} to data-processor"));

        Ok(())
    }
}

export_agent!(DataCollector);
Step 3: Write the data processor
Edit data-processor/src/main.rs:
use akshi_sdk::prelude::*;

struct DataProcessor;

impl Agent for DataProcessor {
    fn tick(&mut self, ctx: &Context) -> Result<()> {
        // Check for pending A2A tasks
        let tasks = ctx.a2a_receive()?;
        for task in tasks {
            ctx.log_info(&format!("Processing task {}", task.id));
            let input = task.input.get("data").unwrap_or(&json!("")).to_string();

            // Analyze with the LLM
            let analysis = ctx.infer(&format!(
                "Analyze these system metrics and flag any concerns:\n{input}"
            ))?;

            // Store the result
            ctx.journal_insert("analysis", &analysis)?;

            // Complete the task
            ctx.a2a_complete(task.id, json!({"analysis": analysis}))?;
            ctx.log_info(&format!("Task {} completed", task.id));
        }
        Ok(())
    }
}

export_agent!(DataProcessor);
Step 4: Build both agents
cd data-collector && cargo build --target wasm32-wasip1 --release && cd ..
cd data-processor && cargo build --target wasm32-wasip1 --release && cd ..
Step 5: Configure
Edit runtime.toml:
[identity]
name = "pipeline-demo"
data_dir = "./akshi-data"
[[agents]]
name = "data-collector"
wasm_path = "data-collector/target/wasm32-wasip1/release/data_collector.wasm"
tick_interval_secs = 60
[agents.capabilities]
a2a = true
[[agents]]
name = "data-processor"
wasm_path = "data-processor/target/wasm32-wasip1/release/data_processor.wasm"
tick_interval_secs = 10
[agents.capabilities]
a2a = true
inference = true
journal = true
[router]
default_route = "local"
[[router.routes]]
name = "local"
provider = "ollama"
model = "llama3.2"
base_url = "http://127.0.0.1:11434"
[dashboard]
port = 3210
auth = false
Step 6: Run and observe
akshi run
Watch the pipeline in action:
# Follow logs from both agents
akshi logs --follow
# Check A2A task status
curl http://127.0.0.1:3210/api/v1/a2a/tasks
# View the processor's journal entries
curl "http://127.0.0.1:3210/api/v1/sync/envelopes?agent=data-processor&since=0"
How it works
- The collector ticks every 60 seconds and sends an A2A task.
- The processor ticks every 10 seconds, picks up pending tasks, analyzes the data with the LLM, and stores the result.
- Both agents’ logs are visible in the unified event stream.
Next steps
- Add a third agent that reads the processor’s journal and generates reports
- Enable approval gates on the processor
- Deploy to multiple nodes with journal sync
Research Agent with MCP
Build a research agent that uses MCP tool servers to search the web, fetch pages, and store summaries in the journal.
What you will build
A researcher agent that:
- Searches the web via an MCP search tool.
- Fetches relevant pages.
- Summarizes content with LLM inference.
- Stores results in the journal for later retrieval.
Prerequisites
- Akshi CLI installed
- Ollama running with llama3.2 (or an Anthropic API key for better results)
- Rust toolchain with the wasm32-wasip1 target
- Node.js 20+ (for the MCP server)
Step 1: Create the workspace
mkdir research-demo && cd research-demo
akshi init
akshi create agent researcher --lang rust
Step 2: Write the agent
Edit researcher/src/main.rs:
use akshi_sdk::prelude::*;

struct Researcher;

impl Agent for Researcher {
    fn tick(&mut self, ctx: &Context) -> Result<()> {
        // Define the research topic
        let topic = "WebAssembly component model 2026 updates";
        ctx.log_info(&format!("Researching: {topic}"));

        // Step 1: Search the web via MCP
        let search_results = ctx.mcp_call("web-search", "search", json!({
            "query": topic
        }))?;
        ctx.log_info(&format!("Found {} results", search_results.len()));

        // Step 2: Fetch the top result
        let url = search_results[0]["url"].as_str().unwrap_or("");
        let page_content = ctx.http_fetch(url)?;
        let body = page_content.text()?;

        // Step 3: Summarize with the LLM
        let summary = ctx.infer(&format!(
            "Summarize this web page about '{topic}'. \
             Focus on key developments and dates.\n\n{body}"
        ))?;

        // Step 4: Store in the journal
        ctx.journal_insert(&format!("research-{topic}"), &summary)?;
        ctx.log_info("Research complete, summary stored in journal");

        Ok(())
    }
}

export_agent!(Researcher);
Step 3: Build
cd researcher && cargo build --target wasm32-wasip1 --release && cd ..
Step 4: Configure
Edit runtime.toml:
[identity]
name = "research-demo"
data_dir = "./akshi-data"
[[agents]]
name = "researcher"
wasm_path = "researcher/target/wasm32-wasip1/release/researcher.wasm"
tick_interval_secs = 3600
fuel_limit = 5_000_000
[agents.capabilities]
inference = true
journal = true
http_fetch = true
mcp = true
[agents.endpoints]
allowed = ["*"] # Allow all domains for research (restrict in production)
[[agents.mcp_servers]]
name = "web-search"
argv = ["npx", "-y", "@anthropic/mcp-server-web-search"]
[router]
default_route = "local"
[[router.routes]]
name = "local"
provider = "ollama"
model = "llama3.2"
base_url = "http://127.0.0.1:11434"
[dashboard]
port = 3210
auth = false
Step 5: Run
akshi run
Step 6: Verify
# Watch the agent work
akshi logs --follow --agent researcher
# After the first tick, check journal entries
curl "http://127.0.0.1:3210/api/v1/sync/envelopes?agent=researcher&since=0"
# Search the journal for specific topics
# (from another agent or via the SDK)
Step 7: Query past research
The journal supports vector search, so you can find relevant past research:
// From another agent
let results = ctx.journal_search("WASM component updates", 5)?;
Or view findings:
curl http://127.0.0.1:3210/api/v1/findings?since=0
Customizing the research topic
To make the topic dynamic, store topics in the database and rotate through them:
fn tick(&mut self, ctx: &Context) -> Result<()> {
    let topics = ctx.db_query("SELECT topic FROM research_queue LIMIT 1")?;
    if let Some(topic) = topics.first() {
        // ... research logic ...
        ctx.db_exec("DELETE FROM research_queue WHERE topic = ?", &[topic])?;
    }
    Ok(())
}
Next steps
- Add more MCP servers (filesystem, database) for richer research
- Use A2A messaging to feed topics from other agents
- Enable approval gates for HTTP fetches
Two-Node Mesh Sync
Set up two Akshi nodes and verify journal synchronization between them.
What you will build
Two nodes (Alpha and Beta) connected via WireGuard mesh. An agent on Alpha writes journal entries that automatically replicate to Beta.
Prerequisites
- Two machines (or two terminal sessions on localhost with different ports)
- Akshi CLI installed on both
- Ollama on at least one node
Step 1: Initialize both nodes
Node Alpha:
mkdir alpha && cd alpha
akshi init
akshi create agent writer --lang rust
Node Beta:
mkdir beta && cd beta
akshi init
Step 2: Write a simple journal-writing agent
On Alpha, edit writer/src/main.rs:
use akshi_sdk::prelude::*;

struct Writer;

impl Agent for Writer {
    fn tick(&mut self, ctx: &Context) -> Result<()> {
        let entry = format!("Heartbeat from Alpha at {}", ctx.now_iso());
        ctx.journal_insert("heartbeat", &entry)?;
        ctx.log_info(&entry);
        Ok(())
    }
}

export_agent!(Writer);
Build:
cd writer && cargo build --target wasm32-wasip1 --release && cd ..
Step 3: Generate mesh keys
# On Alpha
cd alpha && akshi mesh keygen
# Note the public key, e.g.: "abc123..."
# On Beta
cd beta && akshi mesh keygen
# Note the public key, e.g.: "def456..."
Step 4: Configure Node Alpha
Edit alpha/runtime.toml:
[identity]
name = "alpha"
data_dir = "./akshi-data"
[[agents]]
name = "writer"
wasm_path = "writer/target/wasm32-wasip1/release/writer.wasm"
tick_interval_secs = 15
[agents.capabilities]
journal = true
[mesh]
enabled = true
listen_addr = "0.0.0.0:4820"
[[mesh.peers]]
name = "beta"
endpoint = "192.168.1.20:4821"
public_key = "def456..."
[mesh.sync]
interval_secs = 5
[dashboard]
port = 3210
auth = false
Step 5: Configure Node Beta
Edit beta/runtime.toml:
[identity]
name = "beta"
data_dir = "./akshi-data"
[mesh]
enabled = true
listen_addr = "0.0.0.0:4821"
[[mesh.peers]]
name = "alpha"
endpoint = "192.168.1.10:4820"
public_key = "abc123..."
[mesh.sync]
interval_secs = 5
[dashboard]
port = 3211
auth = false
Step 6: Start both nodes
# Terminal 1 (Alpha)
cd alpha && akshi run
# Terminal 2 (Beta)
cd beta && akshi run
Step 7: Verify mesh connectivity
# On Alpha
akshi mesh status
# Expected: beta connected lag=0
# On Beta
akshi mesh status
# Expected: alpha connected lag=0
Step 8: Verify journal sync
Wait for a few ticks (15 seconds each), then check Beta for replicated entries:
# On Beta — query Alpha's writer journal
curl "http://127.0.0.1:3211/api/v1/sync/envelopes?agent=writer&since=0"
You should see the heartbeat entries written by Alpha’s agent.
Check convergence:
curl "http://127.0.0.1:3210/api/v1/sync/convergence?agent=writer"
Expected:
{
  "agent": "writer",
  "local_seq": 4,
  "peers": [{"peer_id": "beta", "acked_seq": 4, "lag": 0}],
  "converged": true
}
Troubleshooting
- Peers disconnected — Verify IP addresses, ports, and firewall rules.
- Lag increasing — Check that Beta’s sync interval is not too long.
- Signature mismatch — Regenerate keys and update both configs.
Next steps
- Add agents on Beta that read Alpha’s journal entries
- Scale to three or more nodes
- Enable auditing to verify sync integrity
Custom Inference Routing
Configure and tune inference routing to balance cost, quality, and latency between local and cloud models.
What you will build
A routing setup with:
- A local Ollama model for fast, low-cost inference.
- A cloud Anthropic model for high-quality inference.
- A route profile that dynamically selects based on agent needs.
Prerequisites
- Akshi CLI installed
- Ollama running with llama3.2
- An Anthropic API key
Step 1: Initialize
mkdir routing-demo && cd routing-demo
akshi init
akshi create agent summarizer --lang rust
Step 2: Write a test agent
Edit summarizer/src/main.rs:
use akshi_sdk::prelude::*;

struct Summarizer;

impl Agent for Summarizer {
    fn tick(&mut self, ctx: &Context) -> Result<()> {
        // Simple task — will use the local model
        let quick = ctx.infer("What day is it today?")?;
        ctx.log_info(&format!("Quick answer (local): {quick}"));

        // Complex task — request the cloud model explicitly
        let detailed = ctx.infer_with_model(
            "cloud",
            "Write a detailed technical analysis of WebAssembly \
             component model advantages for multi-language agent systems.",
        )?;
        ctx.journal_insert("analysis", &detailed)?;
        ctx.log_info("Detailed analysis stored (cloud model)");

        Ok(())
    }
}

export_agent!(Summarizer);
Build:
cd summarizer && cargo build --target wasm32-wasip1 --release && cd ..
Step 3: Configure routes
Edit runtime.toml:
[identity]
name = "routing-demo"
data_dir = "./akshi-data"
[[agents]]
name = "summarizer"
wasm_path = "summarizer/target/wasm32-wasip1/release/summarizer.wasm"
tick_interval_secs = 120
fuel_limit = 5_000_000
[agents.capabilities]
inference = true
journal = true
[agents.route_profile]
prefer = "local"
fallback = "cloud"
max_latency_ms = 3000
cost_weight = 0.6
quality_weight = 0.4
[agents.spend]
daily_budget_usd = 2.00
per_inference_max_usd = 0.10
[router]
default_route = "local"
[[router.routes]]
name = "local"
provider = "ollama"
model = "llama3.2"
base_url = "http://127.0.0.1:11434"
[[router.routes]]
name = "cloud"
provider = "anthropic"
model = "claude-sonnet-4-20250514"
api_key = "${ANTHROPIC_API_KEY}"
[dashboard]
port = 3210
auth = false
Step 4: Run
export ANTHROPIC_API_KEY="sk-ant-..."
akshi run
Step 5: Observe routing decisions
# Watch logs to see which route is selected
akshi logs --follow --agent summarizer
# Check routing metrics
curl http://127.0.0.1:3210/api/v1/metrics | grep inference
Expected metrics:
akshi_inference_requests_total{route="local"} 1
akshi_inference_requests_total{route="cloud"} 1
Step 6: Test route profiles
Weight tuning
Adjust the balance between cost and quality:
# Cost-optimized: prefer local
[agents.route_profile]
cost_weight = 0.9
quality_weight = 0.1
# Quality-optimized: prefer cloud
[agents.route_profile]
cost_weight = 0.1
quality_weight = 0.9
Latency fallback
If the local model is slow, the router falls back to cloud:
[agents.route_profile]
prefer = "local"
fallback = "cloud"
max_latency_ms = 2000 # Switch to cloud if local takes > 2s
Step 7: Verify routes
akshi config-check --test-routes
Output:
Route "local": ollama/llama3.2 OK latency=450ms
Route "cloud": anthropic/claude-sonnet-4-20250514 OK latency=1200ms
Spend tracking
Monitor inference costs:
akshi spend --agent summarizer
Output:
Agent: summarizer
Today: $0.03 / $2.00
This week: $0.15
Next steps
- Add more models (OpenAI, local GGUF) as additional routes
- Create per-agent route profiles with different cost/quality tradeoffs
- Set up Prometheus alerts for spend thresholds
CLI Reference
The akshi command-line interface controls the runtime, agents, mesh, and
supporting subsystems. The global invocation pattern is:
akshi <command> [subcommand] [options]
Global flags
| Flag | Description |
|---|---|
| -c, --config <path> | Path to the node configuration file (default: ./config.toml) |
| --json | Emit machine-readable JSON output |
| -v, --verbose | Increase log verbosity (repeatable) |
| -h, --help | Print help for any command |
| -V, --version | Print version |
Command categories
| Category | Commands | Description |
|---|---|---|
| Lifecycle | init, run, stop, restart, reload | Initialize, start, stop, and hot-reload the runtime |
| Observability | status, logs | Inspect agent states and stream logs |
| Scaffolding | quickstart, create agent, builder | Generate projects and starter agents |
| Validation | config-check | Validate configuration files |
| Identity | identity | Manage DID keys and agent identities |
| Deployment | deploy | Push agents to remote nodes |
| Mesh | mesh | Manage mesh peers and WireGuard tunnels |
| Registry | registry | Publish and fetch agent packages |
| Economics | spend | Query spend ledger and fuel balances |
| Governance | mutation, audit, capability | Mutation control, audit trails, capability grants |
| Replay | snapshot, replay | Capture and replay journal state |
| Protocols | mcp-serve | Start the MCP server endpoint |
akshi init
Initialize a new Akshi node with default configuration.
Synopsis
akshi init
Description
Creates the default directory structure and configuration files for a new Akshi node in the current working directory. This is typically the first command run when setting up a new node.
The generated config.toml contains sensible defaults and can be customized
before starting the runtime with akshi run.
Options
This command takes no options.
Examples
Initialize a new node in the current directory:
akshi init
Initialize and immediately start the runtime:
akshi init && akshi run
akshi run
Start the Akshi agent runtime.
Synopsis
akshi run [OPTIONS]
Description
Launches the agent runtime, loading all agents defined in the configuration file. The runtime uses a synchronous thread-per-agent execution model. WASM modules are compiled and cached by default to speed up subsequent starts.
Options
| Option | Default | Description |
|---|---|---|
| -c, --config | config.toml | Path to the node configuration file |
| --otlp_endpoint | — | OpenTelemetry collector endpoint for trace export |
| --stream | false | Enable streaming output from agents |
| --no_wasm_cache | false | Disable the WASM compilation cache |
Examples
Start with default configuration:
akshi run
Start with a custom config and telemetry export:
akshi run -c mynode.toml --otlp_endpoint http://localhost:4317
Start with streaming and no WASM cache:
akshi run --stream --no_wasm_cache
akshi stop
Stop a running Akshi node.
Synopsis
akshi stop
Description
Sends a graceful shutdown signal to the running Akshi node process. All agents are stopped and resources are released. If the node is not running, the command exits with an error.
Options
This command takes no options.
Examples
Stop the running node:
akshi stop
akshi restart
Restart a running Akshi node.
Synopsis
akshi restart [OPTIONS]
Description
Performs a graceful stop followed by a start of the Akshi node. Useful after configuration changes that cannot be applied via hot-reload.
Options
| Option | Default | Description |
|---|---|---|
| -c, --config | config.toml | Path to the node configuration file |
Examples
Restart with the default config:
akshi restart
Restart with an updated configuration file:
akshi restart -c updated-config.toml
akshi reload
Hot-reload WASM modules without restarting the node.
Synopsis
akshi reload [OPTIONS]
Description
Triggers a hot-reload of WASM agent modules in the running runtime. By default
all agents are reloaded. Use --agent to target a specific agent. The runtime
does not restart; in-flight messages are drained before the new module is
swapped in.
Options
| Option | Default | Description |
|---|---|---|
| --agent | all | Name of a specific agent to reload |
| -c, --config | config.toml | Path to the node configuration file |
Examples
Reload all agents:
akshi reload
Reload a single agent:
akshi reload --agent my-monitor
akshi status
Show the status of agents on the running node.
Synopsis
akshi status
Description
Queries the running Akshi node and prints a summary table of all loaded agents, including their current state (running, stopped, errored) and resource usage.
Options
This command takes no options.
Examples
Check agent status:
akshi status
akshi logs
View agent logs from the running node.
Synopsis
akshi logs [OPTIONS]
Description
Retrieves log entries from the running Akshi node. Logs can be filtered by
agent name and streamed in real time using Server-Sent Events. The --since
flag accepts a sequence cursor to resume from a previous position.
Options
| Option | Default | Description |
|---|---|---|
| -a, --agent | all | Filter logs to a specific agent |
| -f, --follow | false | Stream logs in real time via SSE |
| --timestamps | false | Show timestamps on each log line |
| --since | — | Sequence cursor to resume from |
Examples
View all logs:
akshi logs
Stream logs for a specific agent with timestamps:
akshi logs -a my-agent -f --timestamps
Resume logs from a cursor:
akshi logs --since 00000042
akshi quickstart
Bootstrap a demo node with sample agents.
Synopsis
akshi quickstart [OPTIONS]
Description
Sets up a ready-to-run demo environment with pre-configured sample agents. This is the fastest way to explore Akshi without writing any configuration. The command initializes the node, installs demo WASM modules, and starts the runtime.
Options
| Option | Default | Description |
|---|---|---|
| --config | config.quickstart.toml | Path to the quickstart configuration file |
Examples
Run the quickstart demo:
akshi quickstart
Run quickstart with a custom config:
akshi quickstart --config my-demo.toml
akshi create agent
Scaffold a new agent project.
Synopsis
akshi create agent <name> [OPTIONS]
Description
Generates a new agent project from a template. The scaffolded project includes
build configuration, a starter WASM module, and a README. Use --template to
choose a starting point and --lang to select the implementation language.
Options
| Option | Default | Description |
|---|---|---|
| --template | blank | Project template: blank, monitor, channel-integration, research, builder |
| --lang | rust | Implementation language: rust, python, typescript |
| --output_dir | ./<name> | Directory to write the scaffolded project into |
Examples
Create a blank Rust agent:
akshi create agent my-agent
Create a monitor agent in TypeScript:
akshi create agent health-check --template monitor --lang typescript
Scaffold into a specific directory:
akshi create agent data-sync --output_dir ./agents/data-sync
akshi config-check
Validate a node configuration file.
Synopsis
akshi config-check [OPTIONS]
Description
Parses and validates the specified configuration file, checking for syntax
errors, unknown keys, and constraint violations. Returns a human-readable
report by default or structured JSON when --json is set.
Options
| Option | Default | Description |
|---|---|---|
-c, --config | config.toml | Path to the configuration file to validate |
--json | false | Output validation results as JSON |
Examples
Validate the default config:
akshi config-check
Validate a specific config and get JSON output:
akshi config-check -c production.toml --json
akshi identity
Manage the node’s decentralized identity.
Synopsis
akshi identity
akshi sign-payload <file>
akshi verify-payload <file>
Description
akshi identity displays the node’s DID:key, which is derived from the node’s
keypair and used for authentication across the mesh.
akshi sign-payload signs an arbitrary file with the node’s private key and
writes the detached signature to stdout.
akshi verify-payload verifies a signed payload against the node’s public key.
Options
akshi identity takes no arguments. sign-payload and verify-payload take only the required file argument.
Examples
Show the node’s DID:key:
akshi identity
Sign a payload file:
akshi sign-payload message.json > message.sig
Verify a signed payload:
akshi verify-payload message.json
akshi deploy
Deploy agents to a remote node via SSH.
Synopsis
akshi deploy <target> [OPTIONS]
akshi update-config <target> [OPTIONS]
Description
akshi deploy packages the local agent configuration and WASM modules, then
deploys them to a remote node over SSH. Import flags allow converting agent
definitions from other frameworks before deploying.
akshi update-config pushes only the configuration file to the remote node
without redeploying agent modules.
Options
| Option | Default | Description |
|---|---|---|
-c, --config | config.toml | Path to the local configuration file |
--from_langgraph | false | Convert a LangGraph project before deploying |
--from_crewai | false | Convert a CrewAI project before deploying |
Examples
Deploy to a remote host:
akshi deploy user@192.168.1.10
Deploy a converted LangGraph project:
akshi deploy user@host --from_langgraph -c langgraph.toml
Push only an updated config:
akshi update-config user@host -c updated.toml
akshi mesh
Mesh networking commands for node discovery and connectivity.
Synopsis
akshi mesh-discover [OPTIONS]
akshi mesh-transport-check [OPTIONS]
akshi mesh-dht <subcommand>
akshi mesh-tunnel <subcommand> [OPTIONS]
Description
The mesh command family manages peer-to-peer networking between Akshi nodes.
- mesh-discover — Scan the local network for peer nodes.
- mesh-transport-check — Verify that mesh transports are healthy.
- mesh-dht — Interact with the distributed hash table (bootstrap, publish, resolve).
- mesh-tunnel — Manage encrypted tunnels between nodes.
Options
| Command | Option | Default | Description |
|---|---|---|---|
mesh-discover | --timeout_ms | 5000 | Discovery timeout in milliseconds |
mesh-transport-check | --require_ready | false | Fail if any transport is not ready |
mesh-tunnel up/down | --dry_run | false | Preview tunnel changes without applying |
Examples
Discover peers with a 10-second timeout:
akshi mesh-discover --timeout_ms 10000
Bootstrap the DHT and publish the local node:
akshi mesh-dht bootstrap && akshi mesh-dht publish
Resolve a peer by DID:
akshi mesh-dht resolve did:key:z6Mkf...
Bring up a tunnel (dry run):
akshi mesh-tunnel up --dry_run
akshi registry
Publish, list, and install agent packages from the registry.
Synopsis
akshi registry-publish
akshi registry-list [OPTIONS]
akshi registry-reseal [OPTIONS]
akshi install <package> [OPTIONS]
akshi rollback <package>
Description
- registry-publish — Publish the current agent package to the registry.
- registry-list — List available packages (local or remote).
- registry-reseal — Re-seal a tampered or migrated package manifest.
- install — Install a package from the registry.
- rollback — Roll back an installed package to the previous version.
Options
| Command | Option | Default | Description |
|---|---|---|---|
registry-list | --remote | false | Query the remote registry instead of local |
registry-list | --api_token_env_var | — | Environment variable holding the API token |
registry-reseal | --acknowledge_tamper_risk | false | Required safety flag to confirm intent |
install | --version | latest | Specific version to install |
install | --remote | false | Install from the remote registry |
Examples
Publish the current package:
akshi registry-publish
List remote packages:
akshi registry-list --remote --api_token_env_var AKSHI_TOKEN
Install a specific version:
akshi install my-agent --version 1.2.0 --remote
Rollback a package:
akshi rollback my-agent
akshi spend
Check and simulate agent spend budgets.
Synopsis
akshi spend-check [OPTIONS]
akshi spend-simulate [OPTIONS]
Description
- spend-check — Display the current spend budget and usage for an agent.
- spend-simulate — Simulate a spend transaction to verify that it would be allowed under the configured budget rails.
Options
| Command | Option | Default | Description |
|---|---|---|---|
spend-check | --agent | all | Agent name to check |
spend-simulate | --agent | — | Agent name (required) |
spend-simulate | --amount | — | Amount to simulate (required) |
spend-simulate | --rail | — | Budget rail to simulate against |
Examples
Check spend status for all agents:
akshi spend-check
Simulate a transaction:
akshi spend-simulate --agent my-agent --amount 50 --rail api-calls
akshi mutation
Propose, inspect, apply, and rollback agent mutations.
Synopsis
akshi mutation-propose [OPTIONS]
akshi mutation-show <file>
akshi mutation-apply <proposal> [OPTIONS]
akshi mutation-rollback [OPTIONS]
Description
The mutation command family enables controlled, auditable changes to agent behavior at runtime.
- mutation-propose — Generate a mutation proposal for an agent.
- mutation-show — Display the contents of a proposal file.
- mutation-apply — Apply a proposal to the running agent.
- mutation-rollback — Revert the last applied mutation.
Options
| Command | Option | Default | Description |
|---|---|---|---|
mutation-propose | --agent | — | Target agent name (required) |
mutation-propose | --strategy | — | Mutation strategy to use |
mutation-apply | --dry_run | false | Preview changes without applying |
mutation-rollback | --agent | — | Target agent name (required) |
Examples
Propose a mutation:
akshi mutation-propose --agent my-agent --strategy optimize
Preview and then apply:
akshi mutation-apply proposal-001.json --dry_run
akshi mutation-apply proposal-001.json
Rollback the last mutation:
akshi mutation-rollback --agent my-agent
akshi audit
Export and verify audit trails and receipts.
Synopsis
akshi export-audit <output> [OPTIONS]
akshi verify-receipt <file>
akshi verify-audit <file>
akshi verify-bundle <manifest> <signature>
Description
- export-audit — Export the audit log to a file.
- verify-receipt — Verify a single execution receipt.
- verify-audit — Verify an exported audit log for integrity.
- verify-bundle — Verify a signed bundle against its manifest.
Options
| Command | Option | Default | Description |
|---|---|---|---|
export-audit | --limit | all | Maximum number of entries to export |
Examples
Export the full audit log:
akshi export-audit audit.json
Export the last 100 entries:
akshi export-audit recent.json --limit 100
Verify a receipt and an audit file:
akshi verify-receipt receipt-0042.json
akshi verify-audit audit.json
Verify a signed bundle:
akshi verify-bundle manifest.json manifest.sig
akshi snapshot
Export and restore node state snapshots.
Synopsis
akshi snapshot-export <output> [OPTIONS]
akshi snapshot-restore <input>
Description
- snapshot-export — Serialize the current node state (agents, config, data) to a portable archive file.
- snapshot-restore — Restore a node from a previously exported snapshot.
The node should be stopped before restoring a snapshot to avoid data conflicts.
Options
| Command | Option | Default | Description |
|---|---|---|---|
snapshot-export | -c, --config | config.toml | Path to the node configuration file |
Examples
Export a snapshot:
akshi snapshot-export backup-2026-03-17.tar
Restore from a snapshot:
akshi stop
akshi snapshot-restore backup-2026-03-17.tar
akshi run
akshi mcp-serve
Serve an agent as a Model Context Protocol (MCP) server.
Synopsis
akshi mcp-serve <agent> [OPTIONS]
Description
Exposes a running agent’s capabilities as an MCP-compatible server, allowing external LLM toolchains to invoke the agent’s tools and read its resources over the standard MCP transport.
Options
| Option | Default | Description |
|---|---|---|
-c, --config | config.toml | Path to the node configuration file |
Examples
Serve an agent over MCP:
akshi mcp-serve my-agent
Serve with a custom config:
akshi mcp-serve my-agent -c production.toml
akshi capability
Scan and report on agent capabilities.
Synopsis
akshi capability-scan
akshi capability-report <output>
Description
- capability-scan — Inspect all loaded agents and print a summary of their declared capabilities (tools, resources, permissions).
- capability-report — Write a detailed capability report to a file for audit or compliance purposes.
Options
capability-scan takes no options. capability-report takes only the required output argument.
Examples
Scan capabilities of all loaded agents:
akshi capability-scan
Generate a capability report:
akshi capability-report capabilities.json
akshi replay
Replay a recorded execution trace.
Synopsis
akshi replay <trace_id> [OPTIONS]
Description
Re-executes a previously recorded agent execution trace, replaying all inputs and verifying that outputs match the original run. Useful for debugging and regression testing.
Options
| Option | Default | Description |
|---|---|---|
--agent | — | Restrict replay to a specific agent in the trace |
Examples
Replay a trace:
akshi replay abc123def456
Replay only a specific agent’s portion of the trace:
akshi replay abc123def456 --agent my-agent
akshi builder
Generate agents from specifications and manage local models.
Synopsis
akshi builder-generate <spec> [OPTIONS]
akshi builder-verify <manifest>
akshi verify-model <model>
akshi install-model <model>
akshi list-models
akshi remove-model <model>
Description
- builder-generate — Generate an agent project from a declarative spec file.
- builder-verify — Verify a generated agent manifest for correctness.
- verify-model — Check integrity of a locally installed model.
- install-model — Download and install a model for local inference.
- list-models — List all locally installed models.
- remove-model — Remove a locally installed model.
Options
| Command | Option | Default | Description |
|---|---|---|---|
builder-generate | --output_dir | ./ | Directory for the generated project |
Examples
Generate an agent from a spec:
akshi builder-generate agent-spec.yaml --output_dir ./generated
Verify the generated manifest:
akshi builder-verify ./generated/manifest.json
Install and list models:
akshi install-model llama-3-8b
akshi list-models
Remove a model:
akshi remove-model llama-3-8b
HTTP API Reference
The Akshi runtime exposes an HTTP API for monitoring, control, and inter-agent communication. The dashboard and SDKs use this API internally.
Base URL
http://127.0.0.1:3210/api/v1/
The port is configured by dashboard.port in runtime.toml.
Authentication
All endpoints require authentication unless the runtime is started in local-only mode. Two methods are supported:
| Method | Header / Cookie |
|---|---|
| Bearer token (recommended) | Authorization: Bearer <token> |
| Dashboard cookie | akshi_dashboard_token=<token> |
See Authentication for details on CSRF and CORS.
CSRF protection
Mutating requests (POST, PUT, PATCH, DELETE) that use cookie
authentication must include the X-Akshi-Csrf header. Bearer-token requests
are exempt.
Rate limiting
Every response includes rate-limit headers:
| Header | Description |
|---|---|
X-RateLimit-Limit | Maximum requests per window |
X-RateLimit-Remaining | Remaining requests in the current window |
X-RateLimit-Reset | Unix timestamp when the window resets |
Error format
All errors return a JSON body:
{"error": "code", "detail": "human-readable message"}
HTTP status codes follow standard semantics: 400 for bad input, 401 for
missing auth, 403 for forbidden, 404 for not found, 429 for rate limited.
Endpoint categories
| Category | Page | Prefix |
|---|---|---|
| Health & Metrics | health-metrics | /api/v1/health, /api/v1/metrics |
| Agent Status | agent-status | /api/v1/agents |
| Logs & Findings | logs-findings | /api/v1/logs, /api/v1/findings |
| Event Streams | event-streams | /api/v1/events (SSE) |
| A2A Tasks | a2a-tasks | /api/v1/a2a |
| Approvals | approvals | /api/v1/approvals |
| Broker Grants | broker-grants | /api/v1/broker |
| Sync & Convergence | sync-convergence | /api/v1/sync |
| Registry | registry | /api/v1/registry |
| MCP Server | mcp-server | /api/v1/mcp |
| Discovery | discovery | /.well-known/ |
Authentication
All API requests must be authenticated unless the runtime is started with
dashboard.auth = false (local testing only).
Authentication methods
Bearer token (recommended)
Pass the token in the Authorization header:
Authorization: Bearer <token>
Generate a token with akshi init or set dashboard.token in runtime.toml.
Bearer-token requests are exempt from CSRF requirements.
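As a sketch, a bearer-authenticated call from Python's standard library might look like this (the token value and node address are placeholders, not generated credentials):

```python
import json
import urllib.request

def authed_request(base_url: str, path: str, token: str) -> urllib.request.Request:
    """Build a GET request carrying the bearer token in the Authorization header."""
    req = urllib.request.Request(base_url + path)
    req.add_header("Authorization", f"Bearer {token}")
    return req

# Sending it requires a running node, e.g.:
#   with urllib.request.urlopen(
#       authed_request("http://127.0.0.1:3210", "/api/v1/health", "my-token")
#   ) as resp:
#       print(json.load(resp))
```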
Dashboard cookie
The dashboard login endpoint sets the akshi_dashboard_token cookie. API
requests using this cookie must also include the CSRF header on mutating
methods (see below).
No authentication (local testing)
Set dashboard.auth = false in runtime.toml. The runtime accepts all
requests without credentials. Never use this in production or on a
network-accessible interface.
CSRF protection
Mutating requests (POST, PUT, PATCH, DELETE) that authenticate via
cookie must include:
X-Akshi-Csrf: <csrf-token>
The CSRF token is returned in the login response and stored in the
akshi_csrf cookie. Bearer-token requests do not require this header.
CORS configuration
The runtime sets CORS headers based on dashboard.cors_origins in
runtime.toml. By default only http://127.0.0.1:3210 is allowed.
[dashboard]
cors_origins = ["https://my-dashboard.example.com"]
Preflight OPTIONS requests are handled automatically.
Rate limiting
Rate limits apply per source IP. Defaults:
| Window | Limit |
|---|---|
| 1 minute | 300 requests |
| Burst | 50 requests |
When exceeded the API returns 429 Too Many Requests with a Retry-After
header. Adjust limits with dashboard.rate_limit and
dashboard.rate_limit_burst in runtime.toml.
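A client that hits the limit should wait before retrying. A minimal sketch of that decision, assuming Retry-After carries a number of seconds:

```python
def retry_delay(status: int, headers: dict) -> float:
    """Return seconds to wait before retrying, or 0 if no wait is needed."""
    if status != 429:
        return 0.0
    # Prefer the explicit Retry-After header when the server sends one.
    retry_after = headers.get("Retry-After")
    if retry_after is not None:
        return float(retry_after)
    # Fall back to a fixed pause when the header is missing.
    return 1.0
```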
Health & Metrics
Liveness and observability endpoints. No authentication required for /health.
GET /api/v1/health
Returns runtime liveness status.
Response 200 OK
{"status": "ok"}
Returns 503 Service Unavailable if the runtime is shutting down.
GET /api/v1/metrics
Returns Prometheus-format metrics. Requires authentication.
Response 200 OK (text/plain; charset=utf-8)
# HELP akshi_build_info Build metadata.
# TYPE akshi_build_info gauge
akshi_build_info{version="0.5.0",commit="a1b2c3d"} 1
# HELP akshi_agents_total Current number of loaded agents.
# TYPE akshi_agents_total gauge
akshi_agents_total 3
# HELP akshi_agent_restarts_total Cumulative agent restart count.
# TYPE akshi_agent_restarts_total counter
akshi_agent_restarts_total{agent="log-monitor"} 0
akshi_agent_restarts_total{agent="researcher"} 1
# HELP akshi_inference_requests_total Inference requests routed.
# TYPE akshi_inference_requests_total counter
akshi_inference_requests_total{route="local"} 42
akshi_inference_requests_total{route="cloud"} 7
# HELP akshi_fuel_consumed_total WASM fuel consumed.
# TYPE akshi_fuel_consumed_total counter
akshi_fuel_consumed_total{agent="log-monitor"} 128000
Key metric families
| Metric | Type | Description |
|---|---|---|
akshi_build_info | gauge | Version and commit metadata |
akshi_agents_total | gauge | Number of loaded agents |
akshi_agent_restarts_total | counter | Per-agent restart count |
akshi_inference_requests_total | counter | Inference requests by route type |
akshi_fuel_consumed_total | counter | WASM fuel consumed per agent |
akshi_sync_envelopes_total | counter | Sync envelopes sent/received |
akshi_approvals_pending | gauge | Pending approval requests |
akshi_http_request_duration_seconds | histogram | API request latency |
Agent Status
Query the current state of all loaded agents.
GET /api/v1/status
Returns an array of agent status objects.
Response 200 OK
[
{
"name": "log-monitor",
"status": "running",
"uptime_ms": 360000,
"last_activity_ms_ago": 1200,
"restart_count": 0,
"memory_profile": {
"heap_bytes": 2097152,
"fuel_remaining": 500000
}
},
{
"name": "researcher",
"status": "idle",
"uptime_ms": 360000,
"last_activity_ms_ago": 45000,
"restart_count": 1,
"memory_profile": {
"heap_bytes": 4194304,
"fuel_remaining": 250000
}
}
]
Agent status fields
| Field | Type | Description |
|---|---|---|
name | string | Agent name from configuration |
status | string | One of running, idle, stopped, error |
uptime_ms | integer | Milliseconds since agent was last started |
last_activity_ms_ago | integer | Milliseconds since the agent last executed |
restart_count | integer | Number of times the agent has been restarted |
memory_profile.heap_bytes | integer | Current WASM linear memory usage |
memory_profile.fuel_remaining | integer | Remaining WASM fuel budget (null if unlimited) |
Filtering
Pass ?agent=NAME to filter to a single agent:
GET /api/v1/status?agent=log-monitor
Logs & Findings
Retrieve runtime logs and agent-reported findings.
GET /api/v1/logs
Returns log entries starting after the given sequence number.
Query parameters
| Param | Type | Default | Description |
|---|---|---|---|
since | integer | 0 | Return entries after this sequence number |
limit | integer | 100 | Maximum entries to return |
Response 200 OK
{
"next_seq": 157,
"logs": [
{
"seq": 155,
"ts": "2026-03-17T10:00:01Z",
"level": "info",
"agent": "log-monitor",
"message": "Scan cycle completed, 3 new entries"
},
{
"seq": 156,
"ts": "2026-03-17T10:00:02Z",
"level": "warn",
"agent": "log-monitor",
"message": "High error rate detected in /var/log/app.log"
}
]
}
Use next_seq as the since value for polling.
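The polling loop above can be sketched as a pure function; `fetch` here is a stand-in for an HTTP GET against /api/v1/logs with the given since value:

```python
def drain_logs(fetch, since: int = 0):
    """Fetch log pages, advancing the cursor, until an empty page is returned."""
    entries = []
    while True:
        page = fetch(since)       # expected shape: {"next_seq": N, "logs": [...]}
        entries.extend(page["logs"])
        if not page["logs"]:
            break
        since = page["next_seq"]  # resume after the last entry seen
    return entries, since
```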
GET /api/v1/findings
Returns agent-reported findings (alerts, anomalies, summaries).
Query parameters
| Param | Type | Default | Description |
|---|---|---|---|
since | integer | 0 | Return findings after this sequence number |
limit | integer | 50 | Maximum findings to return |
Response 200 OK
{
"next_seq": 12,
"findings": [
{
"seq": 11,
"ts": "2026-03-17T10:00:02Z",
"agent": "log-monitor",
"severity": "warning",
"title": "Error rate spike",
"detail": "Error rate exceeded 5% threshold in /var/log/app.log"
}
]
}
POST /api/v1/findings
Submit a finding programmatically (typically used by external integrations).
Request body
{
"agent": "external-scanner",
"severity": "info",
"title": "Scan complete",
"detail": "No issues found in latest scan."
}
Response 201 Created
{"seq": 13}
Event Streams (SSE)
Server-Sent Events endpoints for real-time monitoring. All SSE endpoints use
text/event-stream content type with keep-alive pings every 15 seconds.
GET /api/v1/events
Unified event stream for all agents.
Event types
| Event | Data | Description |
|---|---|---|
token | {"agent":"NAME","text":"..."} | Inference token streamed |
tool_call_start | {"agent":"NAME","tool":"...","args":{}} | Agent started a tool call |
tool_call_end | {"agent":"NAME","tool":"...","result":"..."} | Tool call completed |
agent_complete | {"agent":"NAME","duration_ms":N} | Agent tick finished |
error | {"agent":"NAME","error":"..."} | Agent error |
keepalive | {} | Connection keep-alive ping |
Example stream
event: token
data: {"agent":"researcher","text":"The"}
event: token
data: {"agent":"researcher","text":" results"}
event: tool_call_start
data: {"agent":"researcher","tool":"http_fetch","args":{"url":"https://example.com"}}
event: tool_call_end
data: {"agent":"researcher","tool":"http_fetch","result":"200 OK"}
event: agent_complete
data: {"agent":"researcher","duration_ms":1234}
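A minimal parser for this event/data framing might look as follows; it is a sketch that handles only the single-line data fields shown above, not SSE comments or multi-line data:

```python
import json

def parse_sse(stream: str):
    """Yield (event, parsed_data) pairs from a text/event-stream body."""
    event, data = None, []
    for line in stream.splitlines():
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and event is not None:
            # A blank line terminates one event.
            yield event, json.loads("\n".join(data))
            event, data = None, []
    if event is not None and data:
        yield event, json.loads("\n".join(data))
```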
GET /api/v1/agents/stream
Per-agent event stream.
Query parameters
| Param | Required | Description |
|---|---|---|
agent | yes | Agent name to subscribe to |
Returns the same event types as /events, filtered to the specified agent.
GET /api/v1/agui/events
AG-UI protocol event stream for frontend integration. Follows the AG-UI lifecycle specification.
Event types
| Event | Description |
|---|---|
STATE_SNAPSHOT | Full state of all agents (sent on connect) |
STATE_DELTA | Incremental state change |
RUN_STARTED | Agent execution cycle started |
RUN_FINISHED | Agent execution cycle completed |
TEXT_MESSAGE_START | Beginning of a text message |
TEXT_MESSAGE_CONTENT | Streamed text content chunk |
TEXT_MESSAGE_END | End of a text message |
TOOL_CALL_START | Tool invocation started |
TOOL_CALL_END | Tool invocation completed |
Example
event: STATE_SNAPSHOT
data: {"agents":[{"name":"log-monitor","status":"running"}]}
event: RUN_STARTED
data: {"agent":"log-monitor","run_id":"abc123"}
event: TEXT_MESSAGE_CONTENT
data: {"agent":"log-monitor","run_id":"abc123","text":"Scanning..."}
event: RUN_FINISHED
data: {"agent":"log-monitor","run_id":"abc123","duration_ms":500}
A2A Tasks
Agent-to-Agent task delegation following the A2A protocol. Agents can create tasks for other agents, track progress, and receive results.
GET /api/v1/a2a/tasks
List all tasks.
Query parameters
| Param | Type | Default | Description |
|---|---|---|---|
status | string | all | Filter by pending, running, completed, failed |
agent | string | all | Filter by assigned agent |
Response 200 OK
[
{
"id": "task-001",
"from": "orchestrator",
"to": "researcher",
"status": "completed",
"input": {"query": "latest Rust async runtimes"},
"output": {"summary": "Top 3 runtimes: tokio, async-std, smol"},
"created_at": "2026-03-17T09:00:00Z",
"completed_at": "2026-03-17T09:00:05Z"
}
]
GET /api/v1/a2a/tasks/{id}
Get a single task by ID.
Response 200 OK — same shape as the array element above.
Returns 404 if the task does not exist.
POST /api/v1/a2a/tasks
Create a new task.
Request body
{
"from": "orchestrator",
"to": "researcher",
"input": {"query": "compare WASM runtimes"}
}
Response 201 Created
{
"id": "task-002",
"status": "pending",
"created_at": "2026-03-17T10:00:00Z"
}
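Creation returns immediately with status pending, so callers poll the task until it reaches a terminal state. A sketch with an injected lookup function (`fetch_task` is illustrative, standing in for a GET on the single-task endpoint):

```python
def wait_for_task(fetch_task, task_id: str, max_polls: int = 10):
    """Poll a task until it completes or fails; fetch_task maps an id to a task dict."""
    for _ in range(max_polls):
        task = fetch_task(task_id)
        if task["status"] in ("completed", "failed"):
            return task
        # Real code would sleep between polls here.
    raise TimeoutError(f"task {task_id} did not finish within {max_polls} polls")
```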
GET /api/v1/a2a/events
SSE stream of task lifecycle events.
Event types
| Event | Description |
|---|---|
task_created | New task submitted |
task_started | Agent began processing |
task_progress | Intermediate progress update |
task_completed | Task finished successfully |
task_failed | Task failed with error |
Approvals
Manage approval gates for sensitive agent actions. When an agent requests a capability that requires approval, the request is queued until granted or denied.
GET /api/v1/approvals
List pending approval requests.
Response 200 OK
[
{
"id": "apr-001",
"agent": "researcher",
"capability": "http_fetch",
"detail": "GET https://api.example.com/data",
"requested_at": "2026-03-17T10:00:00Z"
}
]
GET /api/v1/approvals/batched
Group pending approvals by agent and capability for bulk review.
Response 200 OK
{
"groups": [
{
"agent": "researcher",
"capability": "http_fetch",
"count": 3,
"ids": ["apr-001", "apr-002", "apr-003"]
}
]
}
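The same grouping can be reproduced client-side from the flat /approvals list, which clarifies what the batched view computes:

```python
from collections import defaultdict

def batch_approvals(approvals):
    """Group pending approvals by (agent, capability), mirroring /approvals/batched."""
    groups = defaultdict(list)
    for a in approvals:
        groups[(a["agent"], a["capability"])].append(a["id"])
    return [
        {"agent": agent, "capability": cap, "count": len(ids), "ids": ids}
        for (agent, cap), ids in groups.items()
    ]
```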
POST /api/v1/approvals/{id}
Approve or deny a single request.
Request body
{"decision": "approve"}
decision must be "approve" or "deny".
Response 200 OK
{"id": "apr-001", "decision": "approve", "decided_at": "2026-03-17T10:01:00Z"}
Returns 404 if the approval ID does not exist or has already been decided.
POST /api/v1/approvals/batch
Approve or deny multiple requests at once.
Request body
{
"ids": ["apr-001", "apr-002", "apr-003"],
"decision": "approve"
}
Response 200 OK
{
"decided": 3,
"ids": ["apr-001", "apr-002", "apr-003"]
}
Broker Grants
View and manage secrets broker grants. The broker controls which agents can access which secret domains (API keys, credentials, tokens).
GET /api/v1/broker/grants
List all current grants.
Response 200 OK
[
{
"agent": "researcher",
"domain": "anthropic",
"status": "active",
"granted_at": "2026-03-17T08:00:00Z"
},
{
"agent": "researcher",
"domain": "github",
"status": "revoked",
"granted_at": "2026-03-17T08:00:00Z",
"revoked_at": "2026-03-17T09:30:00Z"
}
]
Grant fields
| Field | Type | Description |
|---|---|---|
agent | string | Agent name |
domain | string | Secret domain identifier |
status | string | active or revoked |
granted_at | string | ISO 8601 timestamp of grant creation |
revoked_at | string | ISO 8601 timestamp of revocation (if revoked) |
POST /api/v1/broker/grants/{agent}/{domain}/revoke
Revoke an active grant. The agent will lose access to the secret domain immediately.
Response 200 OK
{"agent": "researcher", "domain": "github", "status": "revoked"}
Returns 404 if no active grant exists for the agent/domain pair.
POST /api/v1/broker/grants/{agent}/{domain}/restore
Restore a previously revoked grant.
Response 200 OK
{"agent": "researcher", "domain": "github", "status": "active"}
Returns 404 if no revoked grant exists for the agent/domain pair.
Sync & Convergence
Endpoints for journal synchronization between mesh nodes.
GET /api/v1/sync/convergence
Check convergence status for an agent’s journal.
Query parameters
| Param | Required | Description |
|---|---|---|
agent | yes | Agent name |
Response 200 OK
{
"agent": "researcher",
"local_seq": 142,
"peers": [
{"peer_id": "node-beta", "acked_seq": 140, "lag": 2},
{"peer_id": "node-gamma", "acked_seq": 142, "lag": 0}
],
"converged": false
}
converged is true when all peers have acknowledged the local sequence.
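The check reduces to comparing each peer's acknowledged sequence with the local one; a sketch of that logic, using the response shape above:

```python
def convergence(local_seq: int, acked: dict):
    """Compute per-peer lag and overall convergence from acked sequence numbers."""
    peers = [
        {"peer_id": peer, "acked_seq": seq, "lag": local_seq - seq}
        for peer, seq in acked.items()
    ]
    return {
        "local_seq": local_seq,
        "peers": peers,
        # Converged only when every peer has caught up to the local sequence.
        "converged": all(seq >= local_seq for seq in acked.values()),
    }
```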
GET /api/v1/sync/envelopes
Retrieve sync envelopes (journal entries packaged for replication).
Query parameters
| Param | Type | Default | Description |
|---|---|---|---|
agent | string | required | Agent name |
since | integer | 0 | Return envelopes after this sequence |
limit | integer | 100 | Maximum envelopes to return |
Response 200 OK
{
"envelopes": [
{
"seq": 141,
"agent": "researcher",
"payload_hash": "sha256:abc123...",
"timestamp": "2026-03-17T10:00:00Z",
"signature": "ed25519:..."
}
]
}
POST /api/v1/sync/envelopes
Push envelopes from a remote peer. Used by the mesh sync protocol.
Request body
{
"peer_id": "node-beta",
"envelopes": [
{
"seq": 143,
"agent": "researcher",
"payload": "base64-encoded-data",
"signature": "ed25519:..."
}
]
}
Response 200 OK
{"accepted": 1, "rejected": 0}
Envelopes with invalid signatures or duplicate sequences are rejected.
Registry
Manage the local agent package registry. Packages are WASM components with metadata that can be published and shared across nodes.
GET /api/v1/registry/packages
List all registered packages.
Response 200 OK
[
{
"id": "log-monitor",
"version": "1.2.0",
"description": "Monitors log files for anomalies",
"wasm_size_bytes": 524288,
"published_at": "2026-03-15T12:00:00Z",
"checksum": "sha256:def456..."
}
]
POST /api/v1/registry/packages
Publish a new package or update an existing one.
Request body (multipart/form-data)
| Field | Type | Description |
|---|---|---|
id | string | Package identifier |
version | string | Semver version string |
description | string | Human-readable description |
wasm | file | The compiled WASM component binary |
Response 201 Created
{
"id": "log-monitor",
"version": "1.3.0",
"checksum": "sha256:abc789...",
"published_at": "2026-03-17T10:00:00Z"
}
Returns 409 Conflict if the exact version already exists.
GET /api/v1/registry/packages/{id}
Get metadata for a specific package.
Response 200 OK
{
"id": "log-monitor",
"version": "1.2.0",
"description": "Monitors log files for anomalies",
"wasm_size_bytes": 524288,
"published_at": "2026-03-15T12:00:00Z",
"checksum": "sha256:def456...",
"versions": ["1.0.0", "1.1.0", "1.2.0"]
}
Returns 404 if the package does not exist.
MCP Server
Expose agents as MCP (Model Context Protocol) tool servers. Each agent with MCP enabled gets its own endpoint that accepts JSON-RPC 2.0 requests.
POST /api/v1/mcp/{agent_name}
Send a JSON-RPC 2.0 MCP request to the named agent.
List tools
Request
{
"jsonrpc": "2.0",
"id": 1,
"method": "tools/list",
"params": {}
}
Response 200 OK
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"tools": [
{
"name": "search_logs",
"description": "Search log files for a pattern",
"inputSchema": {
"type": "object",
"properties": {
"pattern": {"type": "string"},
"path": {"type": "string"}
},
"required": ["pattern"]
}
}
]
}
}
Call a tool
Request
{
"jsonrpc": "2.0",
"id": 2,
"method": "tools/call",
"params": {
"name": "search_logs",
"arguments": {"pattern": "ERROR", "path": "/var/log/app.log"}
}
}
Response 200 OK
{
"jsonrpc": "2.0",
"id": 2,
"result": {
"content": [
{"type": "text", "text": "Found 3 matches in /var/log/app.log"}
]
}
}
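Building these bodies is plain JSON-RPC 2.0; a small helper (illustrative, using a process-local id counter):

```python
import itertools
import json

_ids = itertools.count(1)

def mcp_request(method: str, params: dict) -> str:
    """Serialize a JSON-RPC 2.0 request body for the MCP endpoint."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),   # each request gets a fresh id
        "method": method,
        "params": params,
    })
```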
Error responses
MCP errors use standard JSON-RPC error codes:
| Code | Meaning |
|---|---|
-32600 | Invalid request |
-32601 | Method not found |
-32602 | Invalid params |
-32603 | Internal error |
Returns HTTP 404 if {agent_name} does not exist or does not have MCP enabled.
Discovery (.well-known)
Standard discovery endpoints at the well-known URI path. These do not require authentication and are used by peers, clients, and crawlers.
GET /.well-known/agent-card.json
A2A discovery card describing the node’s agents and capabilities.
Response 200 OK
{
"name": "my-akshi-node",
"url": "https://node.example.com",
"version": "0.5.0",
"capabilities": {
"streaming": true,
"pushNotifications": false
},
"skills": [
{
"id": "log-monitor",
"name": "Log Monitor",
"description": "Monitors log files for anomalies"
},
{
"id": "researcher",
"name": "Research Agent",
"description": "Performs web research and summarization"
}
]
}
GET /.well-known/did.json
DID (Decentralized Identifier) document for the node identity.
Response 200 OK
{
"@context": "https://www.w3.org/ns/did/v1",
"id": "did:web:node.example.com",
"verificationMethod": [
{
"id": "did:web:node.example.com#key-1",
"type": "Ed25519VerificationKey2020",
"publicKeyMultibase": "z6Mkf5rGM..."
}
],
"authentication": ["did:web:node.example.com#key-1"]
}
GET /.well-known/agents.md
Human-readable Markdown summary of the node and its agents. Useful for AI crawlers and documentation generators.
Response 200 OK (text/markdown)
# my-akshi-node
Akshi runtime v0.5.0
## Agents
- **log-monitor** — Monitors log files for anomalies
- **researcher** — Performs web research and summarization
## API
Base URL: https://node.example.com/api/v1/
A2A endpoint: https://node.example.com/api/v1/a2a/tasks
SDK Reference
Akshi provides multiple SDK options depending on your language and execution model.
| SDK | Language | Model | Page |
|---|---|---|---|
| Rust SDK | Rust | Compiled to WASM, runs inside the sandbox | rust-sdk |
| Python SDK | Python | HTTP client against the API | python-sdk |
| TypeScript SDK | TypeScript | HTTP client against the API | typescript-sdk |
| WIT Interface | IDL | Component Model contract | wit-interface |
The Rust SDK (akshi-sdk crate) is the primary interface for agents
running inside the WASM sandbox. It provides direct host-capability calls with
no network overhead.
The Python and TypeScript SDKs are thin clients that wrap the HTTP API. They are suited for external tooling, dashboards, and orchestration scripts.
The WIT interface defines the canonical contract between the sandbox and the host runtime using the WebAssembly Component Model.
Rust SDK
The akshi-sdk crate is the primary interface for agents running inside the
Akshi WASM sandbox. Add it to your agent’s Cargo.toml:
[dependencies]
akshi-sdk = "0.1"
Buffer safety: Many SDK functions return string slices backed by a shared host buffer. The data is only valid until the next SDK call. If you need to store a value, copy it immediately with
.to_string()or.to_vec().
Database
db_query
fn db_query(sql: &str) -> Result<String, String>
Execute a read-only SQL query against the agent’s sandboxed SQLite database. Returns the result set as a JSON string.
Capability: database.read
let rows = akshi_sdk::db_query("SELECT * FROM events LIMIT 10")?;
db_execute
fn db_execute(sql: &str) -> Result<u64, String>
Execute a write SQL statement. Returns the number of affected rows.
Capability: database.write
let n = akshi_sdk::db_execute("INSERT INTO events (msg) VALUES ('hello')")?;
Journal
journal_append
fn journal_append(payload: &str) -> Result<(), String>
Append an entry to the agent’s append-only CRDT journal. Entries sync automatically across mesh peers.
Capability: journal.write
akshi_sdk::journal_append("{\"event\":\"scan_complete\"}")?;
journal_read
fn journal_read(since: u64) -> Result<String, String>
Read journal entries since the given sequence number. Returns a JSON array.
Capability: journal.read
let entries = akshi_sdk::journal_read(0)?;
MCP (Model Context Protocol)
mcp_call_tool
fn mcp_call_tool(server: &str, tool: &str, args: &str) -> Result<String, String>
Invoke a tool on a connected MCP server. args is a JSON object.
Capability: mcp.call
let result = akshi_sdk::mcp_call_tool(
    "filesystem",
    "read_file",
    r#"{"path":"/tmp/data.txt"}"#,
)?;
mcp_list_tools
fn mcp_list_tools(server: &str) -> Result<String, String>
List available tools on the named MCP server. Returns a JSON array.
Capability: mcp.list
HTTP
http_request
fn http_request(method: &str, url: &str, headers: &str, body: &str) -> Result<String, String>
Make an outbound HTTP request. headers is a JSON object of key-value pairs.
Returns a JSON object with status, headers, and body fields.
Capability: http.request
let resp = akshi_sdk::http_request(
    "GET",
    "https://api.example.com/data",
    "{}",
    "",
)?;
Inference
inference_prompt
fn inference_prompt(prompt: &str) -> Result<String, String>
Send a prompt to the inference router. The router selects the model based on the active route profile.
Capability: inference.prompt
let answer = akshi_sdk::inference_prompt("Summarize these findings.")?;
inference_prompt_with_model
fn inference_prompt_with_model(model: &str, prompt: &str) -> Result<String, String>
Send a prompt to a specific model, bypassing the router.
Capability: inference.prompt
Config
config_get
fn config_get(key: &str) -> Result<String, String>
Read a value from the agent’s configuration namespace.
Capability: config.read
let val = akshi_sdk::config_get("scan_interval")?;
config_set
fn config_set(key: &str, value: &str) -> Result<(), String>
Write a value to the agent’s configuration namespace. Triggers a config-change event.
Capability: config.write
A2A (Agent-to-Agent)
a2a_send
fn a2a_send(target: &str, payload: &str) -> Result<String, String>
Send a task to another agent via the A2A protocol. target is the agent name
or DID. Returns the task ID.
Capability: a2a.send
let task_id = akshi_sdk::a2a_send("analyzer", r#"{"action":"scan"}"#)?;
a2a_receive
fn a2a_receive() -> Result<String, String>
Poll for inbound A2A tasks. Returns a JSON array of pending tasks.
Capability: a2a.receive
WebSocket
ws_send
fn ws_send(channel: &str, message: &str) -> Result<(), String>
Send a message to a WebSocket channel.
Capability: websocket.send
ws_receive
fn ws_receive(channel: &str) -> Result<String, String>
Receive the next message from a WebSocket channel. Blocks until a message arrives or the timeout expires.
Capability: websocket.receive
Utility
log
fn log(level: &str, message: &str) -> Result<(), String>
Emit a structured log line. level is one of trace, debug, info,
warn, error.
Capability: none (always allowed)
akshi_sdk::log("info", "Agent started successfully")?;
sleep_ms
fn sleep_ms(millis: u64) -> Result<(), String>
Sleep for the given number of milliseconds. The host may cap the maximum sleep duration.
Capability: none (always allowed)
random_bytes
fn random_bytes(len: u32) -> Result<Vec<u8>, String>
Generate cryptographically secure random bytes.
Capability: none (always allowed)
Python SDK
The Python SDK is an HTTP client that wraps the Akshi HTTP API. Use it for external tooling, dashboards, and orchestration scripts.
Installation
pip install akshi-sdk
Quick start
from akshi_sdk import AkshiClient
client = AkshiClient(base_url="http://127.0.0.1:3210", token="my-token")
if client.health().ok:
    print("Runtime is healthy")
Client methods
| Method | HTTP | Returns | Description |
|---|---|---|---|
| health() | GET /api/v1/health | HealthResponse | Runtime health check |
| status() | GET /api/v1/agents | list[AgentStatus] | All agent statuses |
| logs(since=None) | GET /api/v1/logs | list[LogEntry] | Log entries, optionally filtered by ISO timestamp |
| findings(since=None) | GET /api/v1/findings | list[FindingRecord] | Findings, optionally filtered by ISO timestamp |
| post_findings(records) | POST /api/v1/findings | None | Submit a batch of findings |
FindingRecord
from dataclasses import dataclass
from typing import Optional
@dataclass
class FindingRecord:
    agent: str                # Agent name that produced the finding
    severity: str             # "info" | "low" | "medium" | "high" | "critical"
    title: str                # Short summary
    detail: str               # Full description
    source: Optional[str]     # Origin (e.g., file path or URL)
    timestamp: Optional[str]  # ISO 8601; auto-set if omitted
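Under the hood, post_findings serializes the records to JSON. If the SDK is unavailable, the request body can be assembled with the standard library alone. This is an illustrative sketch: the dataclass is mirrored locally (with defaults added for the optional fields), and the body shape — a bare JSON array of records — is an assumption.

```python
import json
from dataclasses import asdict, dataclass
from typing import Optional

@dataclass
class FindingRecord:  # local mirror of the SDK dataclass, for illustration
    agent: str
    severity: str
    title: str
    detail: str
    source: Optional[str] = None
    timestamp: Optional[str] = None

record = FindingRecord(
    agent="log-monitor",
    severity="high",
    title="Repeated auth failures",
    detail="14 failed logins in 60 seconds from one source IP",
    source="/var/log/auth.log",
)

# Body for POST /api/v1/findings (assumed to be a JSON array of records).
payload = json.dumps([asdict(record)])
```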
Error handling
All methods raise AkshiApiError on non-2xx responses. The exception exposes
status_code, error, and detail attributes matching the API error format.
from akshi_sdk import AkshiApiError
try:
    client.post_findings(records)
except AkshiApiError as e:
    print(f"{e.status_code}: {e.detail}")
TypeScript SDK
The TypeScript SDK is an HTTP client that wraps the Akshi HTTP API. Use it for external tooling, dashboards, and orchestration scripts.
Installation
npm install @akshi/sdk
Quick start
import { AkshiClient } from "@akshi/sdk";
const client = new AkshiClient({
  baseUrl: "http://127.0.0.1:3210",
  token: "my-token",
});
const health = await client.health();
console.log(health.ok ? "Healthy" : "Unhealthy");
Client methods
| Method | HTTP | Returns | Description |
|---|---|---|---|
| health() | GET /api/v1/health | HealthResponse | Runtime health check |
| status() | GET /api/v1/agents | AgentStatus[] | All agent statuses |
| logs(since?) | GET /api/v1/logs | LogEntry[] | Log entries, optionally filtered by ISO timestamp |
| findings(since?) | GET /api/v1/findings | FindingRecord[] | Findings, optionally filtered by ISO timestamp |
| postFindings(records) | POST /api/v1/findings | void | Submit a batch of findings |
FindingRecord
interface FindingRecord {
  agent: string;        // Agent name that produced the finding
  severity: "info" | "low" | "medium" | "high" | "critical";
  title: string;        // Short summary
  detail: string;       // Full description
  source?: string;      // Origin (e.g., file path or URL)
  timestamp?: string;   // ISO 8601; auto-set if omitted
}
Error handling
All methods throw AkshiApiError on non-2xx responses. The error exposes
statusCode, error, and detail properties matching the API error format.
import { AkshiApiError } from "@akshi/sdk";
try {
  await client.postFindings(records);
} catch (e) {
  if (e instanceof AkshiApiError) {
    console.error(`${e.statusCode}: ${e.detail}`);
  }
}
WIT Interface
The WIT (WebAssembly Interface Type) definition is the canonical contract between the WASM sandbox and the Akshi host runtime, expressed using the WebAssembly Component Model.
Status: Component Model execution is in progress. The runtime currently uses a preview adapter. The WIT contract is stable and defines the target interface.
Package
package akshi:runtime@0.1.0;
Interface: host
interface host {
    // Database
    db-query: func(sql: string) -> result<string, string>;
    db-execute: func(sql: string) -> result<u64, string>;

    // Journal
    journal-append: func(payload: string) -> result<_, string>;
    journal-read: func(since: u64) -> result<string, string>;

    // MCP
    mcp-call-tool: func(server: string, tool: string, args: string) -> result<string, string>;
    mcp-list-tools: func(server: string) -> result<string, string>;

    // HTTP
    http-request: func(method: string, url: string, headers: string, body: string) -> result<string, string>;

    // Inference
    inference-prompt: func(prompt: string) -> result<string, string>;
    inference-prompt-with-model: func(model: string, prompt: string) -> result<string, string>;

    // Config
    config-get: func(key: string) -> result<string, string>;
    config-set: func(key: string, value: string) -> result<_, string>;

    // A2A
    a2a-send: func(target: string, payload: string) -> result<string, string>;
    a2a-receive: func() -> result<string, string>;

    // WebSocket
    ws-send: func(channel: string, message: string) -> result<_, string>;
    ws-receive: func(channel: string) -> result<string, string>;

    // Utility
    log: func(level: string, message: string) -> result<_, string>;
    sleep-ms: func(millis: u64) -> result<_, string>;
    random-bytes: func(len: u32) -> result<list<u8>, string>;
}
World
world agent {
    import host;
    export run: func() -> result<_, string>;
}
The run export is the agent entry point. The runtime calls it once per tick
(or once at startup in single-shot mode). All host imports are available for
the agent to call during execution.
Configuration Reference
Akshi’s behavior is controlled by a layered configuration system. This section documents every configurable surface.
| Page | Description |
|---|---|
| runtime.toml | Full field-by-field reference for the main runtime configuration file |
| Agent Entry | Schema for [[agents]] entries including endpoints, spend policy, and sync settings |
| Route Profile | JSON schema for inference router model-selection profiles |
| Environment Variables | All environment variables recognized by the runtime and installer |
The runtime resolves configuration in this order (later wins):
1. Compiled defaults
2. runtime.toml (path set by -c / --config)
3. Environment variables
4. CLI flags
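The precedence rules can be pictured as successive dictionary overlays, where each later layer overwrites matching keys. This is an illustrative sketch of "later wins," not the runtime's actual loader; the key names mirror documented settings.

```python
# "Later wins": each layer overwrites matching keys from the previous one.
defaults  = {"dashboard.port": 3210, "telemetry.log_level": "info"}  # compiled defaults
from_toml = {"telemetry.log_level": "debug"}  # runtime.toml
from_env  = {"dashboard.port": 4000}          # e.g. AKSHI_DASHBOARD_PORT=4000
from_cli  = {}                                # no CLI flags passed

resolved = {**defaults, **from_toml, **from_env, **from_cli}
```

Here the environment variable wins over the compiled default port, while the runtime.toml log level survives because no later layer touches it.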
runtime.toml
Complete field reference for Akshi’s main configuration file. Fields are grouped by TOML section. All fields are optional unless marked required.
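As a starting point, here is a minimal example file assembled only from fields documented on this page and the Agent Entry page; the token and agent values are illustrative.

```toml
[dashboard]
port = 3210
token = "my-token"   # illustrative secret; prefer AKSHI_DASHBOARD_TOKEN

[router]
profile = "default"

[runtime]
data_dir = "./data"

[[agents]]
name = "log-monitor"
wasm = "./agents/log_monitor.wasm"
capabilities = ["database.read", "inference.prompt"]
```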
[dashboard]
| Field | Type | Default | Description |
|---|---|---|---|
| enabled | bool | true | Enable the HTTP dashboard and API |
| port | u16 | 3210 | TCP port for the dashboard |
| bind | string | "127.0.0.1" | Bind address |
| token | string | – | Bearer token for API authentication |
| auth | bool | true | Require authentication |
| cors_origins | string[] | ["http://127.0.0.1:3210"] | Allowed CORS origins |
| rate_limit | u32 | 300 | Requests per minute per IP |
| rate_limit_burst | u32 | 50 | Burst request allowance |
| tls_cert | string | – | Path to TLS certificate (PEM) |
| tls_key | string | – | Path to TLS private key (PEM) |
[router]
| Field | Type | Default | Description |
|---|---|---|---|
| enabled | bool | true | Enable inference routing |
| profile | string | "default" | Name of the route profile to use |
| profile_path | string | – | Path to a custom route profile JSON file |
| threshold | f64 | 0.55 | Classification threshold for model selection |
| fallback_model | string | – | Model to use when the router cannot classify |
| timeout_secs | u64 | 120 | Per-request timeout for inference calls |
| max_retries | u32 | 2 | Retry count on transient inference failures |
| circuit_breaker_threshold | u32 | 5 | Consecutive failures before opening the circuit |
| circuit_breaker_reset_secs | u64 | 60 | Seconds before a half-open retry |
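The two circuit_breaker_* fields describe a standard breaker: open after N consecutive failures, then allow a half-open probe once the cooldown elapses. A minimal sketch of that state machine, illustrative only and not the runtime's implementation:

```python
import time

class CircuitBreaker:
    """Opens after `threshold` consecutive failures; half-open after `reset_secs`."""

    def __init__(self, threshold=5, reset_secs=60):
        self.threshold = threshold
        self.reset_secs = reset_secs
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        if self.opened_at is None:
            return True
        # Half-open: permit a probe request once the cooldown has elapsed.
        return now - self.opened_at >= self.reset_secs

    def record(self, ok, now=None):
        now = time.monotonic() if now is None else now
        if ok:
            self.failures, self.opened_at = 0, None
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = now
```

A successful probe while half-open closes the circuit again; a failed probe re-opens it and restarts the cooldown.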
[telemetry]
| Field | Type | Default | Description |
|---|---|---|---|
| log_level | string | "info" | Minimum log level: trace, debug, info, warn, error |
| log_format | string | "pretty" | Log format: pretty, json, compact |
| log_file | string | – | Write logs to file in addition to stderr |
| otel_endpoint | string | – | OpenTelemetry collector endpoint |
| otel_protocol | string | "grpc" | OTLP protocol: grpc or http |
| metrics_enabled | bool | true | Expose Prometheus metrics on /api/v1/metrics |
[runtime]
| Field | Type | Default | Description |
|---|---|---|---|
| tick_interval_ms | u64 | 1000 | Agent tick interval in milliseconds |
| max_agents | u32 | 64 | Maximum number of concurrent agents |
| fuel_limit | u64 | 1_000_000 | Default WASM fuel budget per tick |
| memory_limit_mb | u32 | 64 | Per-agent WASM memory limit |
| data_dir | string | "./data" | Directory for runtime state (journals, DBs) |
| sandbox_mode | string | "wasm" | Sandbox backend: wasm or native |
| kill_timeout_secs | u64 | 10 | Grace period before force-killing an agent |
[mesh]
| Field | Type | Default | Description |
|---|---|---|---|
| enabled | bool | false | Enable mesh networking |
| node_name | string | hostname | Human-readable node name |
| listen_port | u16 | 7946 | Gossip protocol listen port |
| advertise_addr | string | – | Address advertised to peers |
| bootstrap_peers | string[] | [] | Initial peer addresses to join |
[mesh.dht]
| Field | Type | Default | Description |
|---|---|---|---|
| enabled | bool | true | Enable DHT-based peer discovery |
| replication_factor | u32 | 3 | Number of replicas for DHT entries |
[mesh.wireguard]
| Field | Type | Default | Description |
|---|---|---|---|
| enabled | bool | false | Enable WireGuard tunnel between peers |
| private_key | string | – | WireGuard private key (base64) |
| listen_port | u16 | 51820 | WireGuard UDP listen port |
| allowed_ips | string[] | [] | Allowed IP ranges for the tunnel |
[mesh.sync]
| Field | Type | Default | Description |
|---|---|---|---|
| strategy | string | "crdt" | Sync strategy: crdt or snapshot |
| interval_secs | u64 | 5 | Sync interval between peers |
| max_batch_size | u32 | 1000 | Maximum entries per sync batch |
| conflict_resolution | string | "lww" | Conflict resolution: lww (last-writer-wins) or merge |
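The lww strategy resolves concurrent writes by timestamp. A self-contained sketch of last-writer-wins merging over two replicas' key → (value, timestamp) maps, illustrating the concept rather than Akshi's actual CRDT machinery:

```python
def lww_merge(a, b):
    """Merge replica maps of key -> (value, timestamp); the newer timestamp wins."""
    merged = dict(a)
    for key, (value, ts) in b.items():
        if key not in merged or ts > merged[key][1]:
            merged[key] = (value, ts)
    return merged
```

Note that on a timestamp tie this sketch keeps the first replica's value; real LWW systems break ties deterministically (e.g., by node ID) so all replicas converge to the same state.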
[channels.slack]
| Field | Type | Default | Description |
|---|---|---|---|
| enabled | bool | false | Enable Slack notifications |
| webhook_url | string | – | Slack incoming webhook URL |
| channel | string | – | Override channel name |
| notify_on | string[] | ["critical"] | Severity levels that trigger notifications |
[channels.discord]
| Field | Type | Default | Description |
|---|---|---|---|
| enabled | bool | false | Enable Discord notifications |
| webhook_url | string | – | Discord webhook URL |
| notify_on | string[] | ["critical"] | Severity levels that trigger notifications |
[channels.email]
| Field | Type | Default | Description |
|---|---|---|---|
| enabled | bool | false | Enable email notifications |
| smtp_host | string | – | SMTP server hostname |
| smtp_port | u16 | 587 | SMTP port |
| smtp_user | string | – | SMTP username |
| smtp_pass | string | – | SMTP password (prefer env var or broker secret) |
| from | string | – | Sender address |
| to | string[] | – | Recipient addresses |
| notify_on | string[] | ["critical"] | Severity levels that trigger notifications |
[registry]
| Field | Type | Default | Description |
|---|---|---|---|
| enabled | bool | false | Enable registry client |
| url | string | – | Registry server URL |
| token | string | – | Authentication token for the registry |
| cache_dir | string | "./cache/registry" | Local cache for fetched packages |
[approval]
| Field | Type | Default | Description |
|---|---|---|---|
| enabled | bool | false | Enable approval gates |
| default_policy | string | "deny" | Default policy when no rule matches: allow or deny |
| timeout_secs | u64 | 3600 | Time before a pending approval auto-expires |
| notify_channel | string | – | Channel name (slack/discord) to send approval requests |
| rules | table[] | [] | Approval rule definitions (see Approval Gates) |
Agent Entry
Each agent is defined as an [[agents]] entry in runtime.toml. This page
documents every field in the agent entry schema.
Example
[[agents]]
name = "log-monitor"
wasm = "./agents/log_monitor.wasm"
enabled = true
tick_interval_ms = 5000
fuel_limit = 500_000
capabilities = ["database.read", "database.write", "http.request", "inference.prompt"]

[[agents.endpoints]]
name = "check-logs"
transport = "http"
url = "https://logs.example.com/api/query"

[agents.spend_policy]
max_daily_usd = 1.00
alert_threshold_pct = 80

[agents.sync.journal]
enabled = true
topic = "log-monitor"

[agents.intent_policy]
mutation_allowed = false
require_approval = ["http.request"]
Top-level fields
| Field | Type | Default | Description |
|---|---|---|---|
| name | string | required | Unique agent name (used in logs, A2A, dashboard) |
| wasm | string | required | Path to the compiled .wasm module |
| enabled | bool | true | Whether the agent starts with the runtime |
| tick_interval_ms | u64 | inherits runtime.tick_interval_ms | Per-agent tick override |
| fuel_limit | u64 | inherits runtime.fuel_limit | Per-agent fuel budget override |
| memory_limit_mb | u32 | inherits runtime.memory_limit_mb | Per-agent memory limit override |
| capabilities | string[] | [] | Host capabilities granted to this agent |
| env | table | {} | Environment variables injected into the agent |
| description | string | "" | Human-readable description shown in dashboard |
| tags | string[] | [] | Metadata tags for filtering and grouping |
[[agents.endpoints]]
Defines external service endpoints the agent may call.
| Field | Type | Default | Description |
|---|---|---|---|
| name | string | required | Endpoint identifier used in SDK calls |
| transport | string | "http" | Transport type: http, grpc, ws |
| url | string | required | Endpoint URL |
| headers | table | {} | Static headers added to every request |
| timeout_secs | u64 | 30 | Per-request timeout |
[agents.spend_policy]
Controls inference spend limits for the agent.
| Field | Type | Default | Description |
|---|---|---|---|
| max_daily_usd | f64 | 0.0 (unlimited) | Maximum daily spend in USD |
| max_monthly_usd | f64 | 0.0 (unlimited) | Maximum monthly spend in USD |
| alert_threshold_pct | u32 | 80 | Percentage of budget that triggers an alert |
[agents.sync.journal]
Controls journal synchronization for this agent.
| Field | Type | Default | Description |
|---|---|---|---|
| enabled | bool | false | Enable journal sync for this agent |
| topic | string | agent name | Sync topic; agents sharing a topic share journal state |
[agents.intent_policy]
Per-agent governance rules.
| Field | Type | Default | Description |
|---|---|---|---|
| mutation_allowed | bool | true | Whether the agent can perform mutating operations |
| require_approval | string[] | [] | Capabilities that require human approval before execution |
| max_actions_per_tick | u32 | 0 (unlimited) | Rate-limit actions per tick |
Route Profile
A route profile is a JSON file that controls how the inference router selects between models. The router computes a weighted score from prompt features and compares it against a threshold.
Schema versions
Two schema versions are supported. Version 2 adds additional prompt-analysis features.
v1
{
  "schema_version": 1,
  "bias": 0.0,
  "weights": {
    "token_count": 0.3,
    "question_mark": 0.2,
    "code_marker": 0.25,
    "tool_hint": 0.25
  }
}
v2
{
  "schema_version": 2,
  "bias": 0.0,
  "weights": {
    "token_count": 0.15,
    "question_mark": 0.10,
    "code_marker": 0.15,
    "tool_hint": 0.15,
    "prompt_length_bucket": 0.15,
    "reasoning_chain_depth": 0.15,
    "natural_language_ratio": 0.15
  }
}
Fields
| Field | Type | Description |
|---|---|---|
| schema_version | int | 1 or 2 |
| bias | float | Constant added to the score before threshold comparison |
| weights.token_count | float | Weight for normalized token count |
| weights.question_mark | float | Weight for presence of question marks |
| weights.code_marker | float | Weight for code-related markers (backticks, keywords) |
| weights.tool_hint | float | Weight for tool-use indicators |
| weights.prompt_length_bucket | float | (v2) Weight for prompt length category |
| weights.reasoning_chain_depth | float | (v2) Weight for detected chain-of-thought depth |
| weights.natural_language_ratio | float | (v2) Weight for ratio of natural language to structured tokens |
The default classification threshold is 0.55. Scores above the threshold
route to the higher-capability model; scores below route to the faster model.
Override the threshold with router.threshold in runtime.toml.
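The decision reduces to a weighted sum plus a bias. A sketch of the scoring step, assuming feature values are pre-normalized to [0, 1]; the example feature values and the "capable"/"fast" labels are illustrative — only the formula (bias + sum of weight × feature, compared to the threshold) follows the text above.

```python
def route(features, profile, threshold=0.55):
    """Score = bias + sum(weight * feature); above the threshold -> bigger model."""
    score = profile["bias"] + sum(
        weight * features.get(name, 0.0)
        for name, weight in profile["weights"].items()
    )
    return "capable" if score > threshold else "fast"

# The v1 profile from the schema example above.
v1_profile = {
    "schema_version": 1,
    "bias": 0.0,
    "weights": {"token_count": 0.3, "question_mark": 0.2,
                "code_marker": 0.25, "tool_hint": 0.25},
}

route({"token_count": 0.9, "code_marker": 1.0, "tool_hint": 1.0}, v1_profile)
```

With these weights the example scores 0.27 + 0.25 + 0.25 = 0.77, which clears the default 0.55 threshold and routes to the higher-capability tier.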
Environment Variables
Environment variables recognized by the Akshi runtime and installer. Variables
take precedence over runtime.toml values where applicable.
Runtime variables
| Variable | Default | Description |
|---|---|---|
| AKSHI_HOME_DIR | ~/.akshi | Root directory for Akshi state, keys, and data |
| AKSHI_POLICY_RULEPACK_FILE | – | Path to a custom policy rule-pack file |
| AKSHI_MUTATION_KILL_SWITCH | false | Set to true to globally disable all mutating operations |
| AKSHI_GOVERNANCE_APPROVAL_EVIDENCE | – | Path to pre-approved evidence file (CI use) |
| AKSHI_GOVERNANCE_POLICY_FILE | – | Path to an external governance policy file |
| AKSHI_LOG_LEVEL | info | Override telemetry.log_level |
| AKSHI_LOG_FORMAT | pretty | Override telemetry.log_format |
| AKSHI_DASHBOARD_TOKEN | – | Override dashboard.token |
| AKSHI_DASHBOARD_PORT | 3210 | Override dashboard.port |
| AKSHI_DATA_DIR | ./data | Override runtime.data_dir |
| AKSHI_FUEL_LIMIT | 1000000 | Override runtime.fuel_limit |
Installer variables
These variables configure the behavior of the curl | sh installer script.
| Variable | Default | Description |
|---|---|---|
| AKSHI_REPO | AkshiSystems/akshi | GitHub repository for release downloads |
| AKSHI_INSTALL_DIR | ~/.akshi/bin | Directory to install the akshi binary |
| AKSHI_VERSION | latest | Specific version to install (e.g., 0.2.1) |
| AKSHI_ALLOW_DOWNGRADE | false | Allow installing an older version over a newer one |
| AKSHI_DOWNLOAD_RETRIES | 3 | Number of download retry attempts |
| AKSHI_DOWNLOAD_TIMEOUT | 60 | Download timeout in seconds |
| AKSHI_NO_MODIFY_PATH | false | Skip adding install directory to PATH |
| AKSHI_CHECKSUM_VERIFY | true | Verify SHA-256 checksums after download |
Glossary
Key terms used throughout the Akshi documentation.
| Term | Definition |
|---|---|
| A2A | Agent-to-Agent protocol. Enables agents to send tasks and messages to each other, both within a node and across the mesh. |
| AG-UI | Agent-User Interface protocol. Provides real-time streaming of agent state to human-facing dashboards via SSE. |
| Agent Card | A JSON metadata document (/.well-known/agent.json) that describes an agent’s capabilities, endpoints, and authentication requirements. Used for discovery. |
| Automerge | A CRDT library used by the journal sync layer to merge concurrent edits from multiple mesh peers without conflicts. |
| Circuit Breaker | A fault-tolerance pattern in the inference router that stops sending requests to a failing model provider after consecutive errors, retrying after a cooldown period. |
| CRDT | Conflict-free Replicated Data Type. A data structure that can be merged across distributed nodes without coordination. Akshi journals use CRDTs for mesh sync. |
| DID | Decentralized Identifier. A self-sovereign identity standard used by Akshi agents and nodes for authentication and message signing. |
| Fuel | A unit of WASM execution budget. Each tick grants an agent a fuel allowance; when fuel is exhausted the tick halts. Controls runaway computation. |
| Host Capability | A permission granted to a sandboxed agent (e.g., database.read, http.request). The runtime enforces capabilities at the host-call boundary. |
| Journal | An append-only log maintained per agent. Stores events, findings, and state changes. Journals sync across mesh peers via CRDTs. |
| MCP | Model Context Protocol. An open standard for connecting AI agents to external tool servers. Akshi agents call MCP tools through the host capability layer. |
| Mesh | A peer-to-peer network of Akshi nodes that share journal state and route A2A messages. Built on gossip-based membership with optional WireGuard encryption. |
| Route Profile | A JSON configuration that controls how the inference router scores prompts and selects between model tiers. |
| Sandbox | The WASM isolation boundary that confines agent code. Agents cannot access the filesystem, network, or host memory directly; they must use host capabilities. |
| SSE | Server-Sent Events. A unidirectional streaming protocol used by the dashboard API to push real-time agent events to clients. |
| Supervisor | The runtime component that manages agent lifecycles: loading WASM modules, scheduling ticks, enforcing fuel limits, and restarting failed agents. |
| WASI | WebAssembly System Interface. A set of standardized APIs for WASM modules to interact with system resources. Akshi uses WASI Preview 2. |
| WIT | WebAssembly Interface Type. A schema language for defining typed interfaces between WASM components and their hosts. Akshi’s SDK contract is defined in WIT. |