# Single Node
Local development setup with Ollama for inference.
## Prerequisites
- Akshi CLI installed
- Ollama installed and running
- A WASM agent binary (see Rust WASM Agent)
## Install Ollama

```shell
# macOS / Linux
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model
ollama pull llama3.2
```
Verify Ollama is running:

```shell
curl http://127.0.0.1:11434/api/version
```
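If you script your setup, it helps to wait until Ollama answers before starting the node. A minimal sketch using the version endpoint shown above (the `wait_for_ollama` helper name and retry defaults are my own, not part of Ollama or Akshi):

```shell
# Poll the Ollama version endpoint until it responds, or give up.
# Usage: wait_for_ollama [base_url] [max_tries]
wait_for_ollama() {
  url="${1:-http://127.0.0.1:11434}"
  tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    # -f makes curl fail on HTTP errors; -sS is quiet but still reports errors
    if curl -fsS "$url/api/version" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "Ollama did not respond at $url" >&2
  return 1
}
```

Call it as `wait_for_ollama` (defaults shown above) before `akshi run` in a startup script.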
## Initialize Akshi

```shell
mkdir my-project && cd my-project
akshi init
```

This creates a `runtime.toml` with sensible defaults for local development.
## Configure

Edit `runtime.toml`:

```toml
[identity]
name = "dev-node"
data_dir = "./akshi-data"

[[agents]]
name = "my-agent"
wasm_path = "agents/my_agent.wasm"
tick_interval_secs = 30

[agents.capabilities]
inference = true
journal = true

[router]
default_route = "local"

[[router.routes]]
name = "local"
provider = "ollama"
model = "llama3.2"
base_url = "http://127.0.0.1:11434"

[dashboard]
port = 3210
auth = false
```
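Because `[[agents]]` is a TOML array of tables, registering another agent is just another block with its own capabilities table. A hypothetical second agent, purely for illustration (the name, path, and interval are placeholders, not defaults):

```toml
[[agents]]
name = "second-agent"
wasm_path = "agents/second_agent.wasm"
tick_interval_secs = 60

[agents.capabilities]
inference = true
journal = false
```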
## Run

```shell
akshi run
```

Open http://127.0.0.1:3210 to see the dashboard.
## Verify

```shell
akshi status
curl http://127.0.0.1:3210/api/v1/health
```
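For scripts or CI, the health endpoint can be turned into a pass/fail check. A sketch (the `akshi_healthy` helper name is my own; it assumes only that the `/api/v1/health` endpoint above returns HTTP 200 when the node is healthy):

```shell
# Return 0 if the dashboard health endpoint answers HTTP 200, nonzero otherwise.
# Usage: akshi_healthy [base_url]
akshi_healthy() {
  base="${1:-http://127.0.0.1:3210}"
  # -o /dev/null discards the body; -w prints only the HTTP status code
  code=$(curl -s -o /dev/null -w '%{http_code}' "$base/api/v1/health")
  [ "$code" = "200" ]
}
```

For example, `akshi_healthy || echo "node not ready"` in a deploy script.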
## Next steps

- Add more agents to `runtime.toml`
- Enable auth before exposing the port: set `dashboard.auth = true` and configure a token
- See Docker for containerized deployment