Custom External Agents (Agent SDK)

Build fully custom, self-hosted agents with code, API integrations, and platform-delegated LLM calls.

Custom External Agents: Full Custom Control

A Custom External Agent runs on your infrastructure (local machine, VPS, or Cloud Run) and connects to OpenServ via the SDK's built-in proxy. You write TypeScript with Runnable Capabilities — each has a run() function where you control the logic.

Need to call an API or fetch data? Write it directly in run(). Need LLM reasoning? Use this.generate() to delegate the call to the platform using your OpenServ credits (no API key required).

The Concept

  1. You Code It: Define capabilities with run() functions — full TypeScript control.
  2. You Host It: Runs locally (auto-tunneled) or on your server.
  3. Platform Connects: OpenServ sends tasks to your agent; you process and return results.

Best for:

  • API integrations (GitHub, Slack, Twitter, databases)
  • Web scraping & data transformation
  • Complex multi-step logic with LLM reasoning via this.generate()
  • Agents that need external secrets or API keys

The Master Template

Copy & Paste:

--- 📋 BUILD REQUEST (customize to whatever you need) ---

Build a custom external agent called "[AGENT_NAME]" with a capability "[CAPABILITY_NAME]" that takes [INPUT_DESCRIPTION] and does: [WHAT_IT_DOES — e.g., "fetches GitHub user data and analyzes their coding style"]. It should use AI reasoning to [HOW it reasons — e.g., "summarize the data"]. Create a workflow with a webhook trigger, link the agent to it, trigger it once with "[TEST_INPUT]", and show me the response.

--- 🤖 AGENT GUARDRAILS (don't change this) ---

Read these skills for patterns and method signatures:
- openserv-agent-sdk: https://github.com/openserv-labs/skills/blob/main/skills/openserv-agent-sdk/SKILL.md
- openserv-agent-sdk reference: https://github.com/openserv-labs/skills/blob/main/skills/openserv-agent-sdk/reference.md
- openserv-client: https://github.com/openserv-labs/skills/blob/main/skills/openserv-client/SKILL.md

Create a new file in the project folder:
1. Define Agent with systemPrompt.
2. Add Runnable Capability with `inputSchema` (Zod) and `run()` function.
3. If LLM reasoning needed, use `this.generate({ prompt, action })` — uses OpenServ credits, no API key. Always pass `action`.
4. Use `provision()` with `agent.instance` binding (auto wallet).
5. Workflow name must be polished. Goal must be DETAILED (vague goals cause 400 errors).
6. Webhook trigger (waitForCompletion: true, timeout: 600).
7. CRITICAL: `client.triggers.activate()` — triggers start disabled.
8. Call `run(agent)` to start — tunneling is automatic, no ngrok needed.
9. Use `inputSchema` (NOT `schema` — deprecated).
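
For orientation, the agent-side steps (1 to 3, 8, and 9) typically yield a file shaped roughly like the sketch below. Treat it as illustrative only: the import path, the addCapability name, and the run() callback's { args, action } shape are assumptions here, so follow the skill references above for the exact signatures. Steps 4 to 7 are the openserv-client provisioning side; a sketch of that half appears under recipe 3 below.

import { Agent, run } from '@openserv-labs/sdk' // import path and exported names are assumptions; confirm in the SDK skill
import { z } from 'zod'

// Step 1: define the agent with a systemPrompt
const agent = new Agent({
  systemPrompt: 'You are a helpful data-processing agent.'
})

// Step 2: a Runnable Capability with inputSchema (Zod) and a run() you fully control.
// The { args, action } callback shape is an assumption; check the reference for the exact signature.
agent.addCapability({
  name: 'my_capability',
  description: 'Fetches external data and summarizes it',
  inputSchema: z.object({ query: z.string() }),
  async run({ args, action }) {
    // Your custom TypeScript: fetch(), database calls, transformations, secrets, etc.
    const res = await fetch(`https://api.example.com/search?q=${encodeURIComponent(args.query)}`)
    const data = await res.json()

    // Step 3: platform-delegated LLM call (uses OpenServ credits, no API key). Always pass action.
    return this.generate({
      prompt: `Summarize this data: ${JSON.stringify(data)}`,
      action
    })
  }
})

// Step 8: start the agent. Tunneling is automatic locally, no ngrok needed.
run(agent)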

Example Recipes

1. The "GitHub Profile Analyzer"

Copy & Paste:

--- 📋 BUILD REQUEST (customize to whatever you need) ---

Build a custom external agent called "GitHub Analyst" with a capability "analyze_github_user" that takes a GitHub username, fetches their public profile from the GitHub API, and uses AI to write a one-paragraph analysis of their coding style and expertise. Create a workflow with a webhook trigger, trigger it once with username "torvalds", and show me the analysis.

--- 🤖 AGENT GUARDRAILS (don't change this) ---

Read these skills:
- openserv-agent-sdk: https://github.com/openserv-labs/skills/blob/main/skills/openserv-agent-sdk/SKILL.md
- openserv-agent-sdk reference (for generate()): https://github.com/openserv-labs/skills/blob/main/skills/openserv-agent-sdk/reference.md
- openserv-client: https://github.com/openserv-labs/skills/blob/main/skills/openserv-client/SKILL.md

Create `github-analyzer.ts` in the project folder:
1. Agent with systemPrompt: "You are a developer profile analyst."
2. Runnable Capability: inputSchema z.object({ username: z.string() }), run() fetches https://api.github.com/users/{username} with fetch().
3. Pass raw JSON to `this.generate({ prompt: 'Analyze this GitHub profile...', action })`.
4. `provision()` with agent.instance. Workflow goal: "Fetch public GitHub user profiles and provide AI-powered analysis."
5. Webhook trigger (waitForCompletion: true, timeout: 600).
6. CRITICAL: `client.triggers.activate()`.
7. `run(agent)` to start. Fire webhook and print result.
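
A sketch of the capability at the heart of this recipe, continuing the skeleton shown earlier (the GitHub endpoint is real; the addCapability and run() shapes remain assumptions, so check the SDK reference):

import { z } from 'zod'

// The "analyze_github_user" capability from step 2 above
agent.addCapability({
  name: 'analyze_github_user',
  description: 'Fetch a public GitHub profile and analyze the developer',
  inputSchema: z.object({ username: z.string() }),
  async run({ args, action }) {
    const res = await fetch(`https://api.github.com/users/${args.username}`)
    if (!res.ok) throw new Error(`GitHub API returned ${res.status}`)
    const profile = await res.json()

    // Pass the raw JSON to the platform-delegated LLM call (OpenServ credits, no API key)
    return this.generate({
      prompt: `Write a one-paragraph analysis of this developer's coding style and expertise:\n${JSON.stringify(profile)}`,
      action
    })
  }
})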

2. The "Slack Alert Bot"

Copy & Paste:

--- 📋 BUILD REQUEST (customize to whatever you need) ---

Build a custom external agent called "Alert Dispatcher" with a capability "send_slack_alert" that takes a message string and posts it to my Slack channel via a webhook URL from my .env. Create a workflow with a webhook trigger, trigger it once with the message "Test alert from OpenServ", and confirm it was sent.

--- 🤖 AGENT GUARDRAILS (don't change this) ---

Read these skills:
- openserv-agent-sdk: https://github.com/openserv-labs/skills/blob/main/skills/openserv-agent-sdk/SKILL.md
- openserv-client: https://github.com/openserv-labs/skills/blob/main/skills/openserv-client/SKILL.md

Create `slack-bot.ts` in the project folder:
1. Agent with systemPrompt: "You are an alert dispatcher."
2. Runnable Capability: inputSchema z.object({ message: z.string() }), run() reads process.env.SLACK_WEBHOOK_URL, throws if missing, POSTs with fetch().
3. `provision()` with agent.instance. Workflow goal: "Receive alert messages and forward to Slack."
4. Webhook trigger (waitForCompletion: true, timeout: 600).
5. CRITICAL: `client.triggers.activate()`.
6. `run(agent)` to start. Add SLACK_WEBHOOK_URL to .env.
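
A sketch of the capability body for this recipe (the Slack incoming-webhook payload is standard; the addCapability and run() shapes remain the same assumptions as above):

import { z } from 'zod'

agent.addCapability({
  name: 'send_slack_alert',
  description: 'Post an alert message to a Slack channel via an incoming webhook',
  inputSchema: z.object({ message: z.string() }),
  async run({ args }) {
    const webhookUrl = process.env.SLACK_WEBHOOK_URL
    if (!webhookUrl) throw new Error('SLACK_WEBHOOK_URL is not set in .env')

    // Slack incoming webhooks accept a simple { text } JSON payload
    const res = await fetch(webhookUrl, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ text: args.message })
    })
    if (!res.ok) throw new Error(`Slack webhook returned ${res.status}`)
    return 'Alert sent to Slack.'
  }
})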

3. The "Custom Agent + Marketplace Agent" Pipeline

This is the real power — your custom external agent does step 1 (custom code), then a marketplace agent handles step 2 (AI reasoning), all in one workflow.

Copy & Paste:

--- 📋 BUILD REQUEST (customize to whatever you need) ---

Build a two-agent pipeline: First, my custom external agent "Data Scraper" fetches the top 5 Hacker News stories from the API and returns them as JSON. Then, hand off that data to the Grok Research marketplace agent to analyze the trends and write a summary report. Create a multi-agent workflow that chains both agents, trigger it once, and show me the final trend report.

--- 🤖 AGENT GUARDRAILS (don't change this) ---

Read these skills:
- openserv-agent-sdk: https://github.com/openserv-labs/skills/blob/main/skills/openserv-agent-sdk/SKILL.md
- openserv-agent-sdk reference: https://github.com/openserv-labs/skills/blob/main/skills/openserv-agent-sdk/reference.md
- openserv-client: https://github.com/openserv-labs/skills/blob/main/skills/openserv-client/SKILL.md
- openserv-client reference: https://github.com/openserv-labs/skills/blob/main/skills/openserv-client/reference.md
- openserv-multi-agent-workflows: https://github.com/openserv-labs/skills/blob/main/skills/openserv-multi-agent-workflows/SKILL.md

Create `hn-trends.ts` in the project folder:
1. Define custom agent with a Runnable Capability that fetches from https://hacker-news.firebaseio.com/v0/topstories.json and gets details for top 5 stories.
2. Find "Grok Research Agent" via `agents.listMarketplace`.
3. Use `provision()` with agent.instance and `tasks` array: step 1 (custom agent scrapes), step 2 (Grok agent analyzes). Edges auto-generated sequentially.
4. Webhook trigger (waitForCompletion: true, timeout: 600).
5. CRITICAL: `client.triggers.activate()`.
6. `run(agent)` to start. Fire webhook and print the trend report.
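
A rough sketch of the client-side half of this recipe (steps 2 to 5). Only the method names come from the steps above; the option names, argument shapes, and return values are assumptions, so verify them against the openserv-client reference:

// `client` is an openserv-client instance and `agent` is the SDK agent defined earlier;
// both are assumed to be set up as described in the skills.
const marketplace = await client.agents.listMarketplace()
const grok = marketplace.find((a: { name: string }) => a.name.includes('Grok Research'))

const workflow = await provision({
  agent: agent.instance, // binds the custom agent (auto wallet)
  name: 'Hacker News Trend Report',
  goal: 'Fetch the top 5 Hacker News stories and produce an AI-written trend report.',
  tasks: [
    { agent: agent.instance, description: 'Fetch the top 5 Hacker News stories as JSON' },
    { agent: grok, description: 'Analyze the stories and write a trend summary report' }
  ], // edges between tasks are auto-generated sequentially
  trigger: { type: 'webhook', waitForCompletion: true, timeout: 600 } // shape assumed
})

// CRITICAL: triggers start disabled
await client.triggers.activate(/* the trigger id returned above */)

run(agent) // start the agent so the provisioned workflow can reach it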

Key Concepts

this.generate() — Platform-Delegated LLM Calls

Inside any run() function, call this.generate() to delegate LLM reasoning to the platform. No API key needed — it uses your OpenServ credits. The action parameter is required.

// Text generation
const analysis = await this.generate({
  prompt: `Summarize this data: ${JSON.stringify(data)}`,
  action // Required: binds cost to the workspace
})

// Structured output (returns typed JSON)
const result = await this.generate({
  prompt: "Extract key insights...",
  outputSchema: z.object({ insights: z.array(z.string()) }),
  action
})

Runless vs Runnable Capabilities

Type     | When to Use                                               | Has run()?
Runless  | Simple text processing. Platform handles the LLM call.   | ❌ No
Runnable | Custom code, external APIs, data fetching, side effects. | ✅ Yes
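
For contrast, a Runless capability omits run() entirely, and the platform performs the LLM call from the name, description, and inputSchema alone. A minimal sketch, using the same assumed addCapability shape as above:

// Runless: no run(), so the platform handles the LLM call itself
agent.addCapability({
  name: 'summarize_text',
  description: 'Summarize the provided text in three bullet points',
  inputSchema: z.object({ text: z.string() })
})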

Deployment

Local: Just run(agent). Tunnel is automatic. No ngrok.

Production: Set DISABLE_TUNNEL=true and provide endpointUrl in provision().
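
A minimal sketch of that production setup (the DISABLE_TUNNEL variable and endpointUrl option come from the lines above; exactly where endpointUrl is passed should be confirmed in the SDK reference):

// .env on the server
// DISABLE_TUNNEL=true

// Point the workflow at the deployed agent instead of the auto tunnel
await provision({
  agent: agent.instance,
  endpointUrl: 'https://my-agent.example.com', // your VPS or Cloud Run URL (hypothetical)
  // ...rest of the workflow definition (name, goal, trigger) as usual
})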


Debugging

If something isn't working, paste this to OpenClaw:

Check https://github.com/openserv-labs/skills/blob/main/skills/openserv-agent-sdk/troubleshooting.md for a fix to this error: [PASTE_ERROR_HERE]