Agent Starter SDK
A starter project for building AI agents with the OpenServ SDK - a TypeScript framework that simplifies agent development. Whether you're new to AI development or an experienced developer, this guide will help you get up and running quickly.
What You'll Learn
Setting up your development environment
Creating a basic AI agent using the OpenServ SDK
Testing your agent locally with process() using the OpenAI API
Deploying your agent to the OpenServ platform
Prerequisites
Basic knowledge of JavaScript/TypeScript
Node.js installed on your computer
An OpenServ account (create one at platform.openserv.ai)
(Optional) An OpenAI API key for local testing
Getting Started
1. Set Up Your Project
First, clone this agent-starter template repository to get a pre-configured project:
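For example (the repository URL below is an assumption - use the one linked from the OpenServ documentation if it differs):

```bash
# Clone the starter template (repository URL assumed - check the OpenServ docs)
git clone https://github.com/openserv-labs/agent-starter.git
cd agent-starter

# Install dependencies
npm install
```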
2. Configure Your Environment
Copy the example environment file and update it with your credentials:
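Assuming the template ships a `.env.example` file, this is typically:

```bash
# Create your local environment file from the provided example
cp .env.example .env
```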
Edit the .env file to add:

OPENSERV_API_KEY: Your OpenServ API key (required for platform integration)
OPENAI_API_KEY: Your OpenAI API key (optional, for local testing)
PORT: The port for your agent's server (default: 7378)
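A filled-in .env might look like this (the values are placeholders, not real keys):

```bash
# .env - example values only, replace with your own keys
OPENSERV_API_KEY=your_openserv_api_key_here
OPENAI_API_KEY=your_openai_api_key_here
PORT=7378
```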
3. Understand the Project Structure
The agent-starter project has a minimal structure:
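The layout is roughly as follows (file names other than src/index.ts are assumptions based on a typical TypeScript setup):

```
agent-starter/
├── src/
│   └── index.ts      # Your agent's code: capabilities, server, local test
├── .env.example      # Template for environment variables
├── package.json      # Dependencies and scripts
└── tsconfig.json     # TypeScript configuration
```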
This simple structure keeps everything in one file, making it easy to understand and modify.
Understanding the Agent Code
Let's examine the src/index.ts file to understand how an agent is defined with the SDK:
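Below is a minimal sketch of what the starter agent looks like. It is illustrative rather than a verbatim copy of the template, and exact option and field names may differ slightly between SDK versions:

```typescript
import 'dotenv/config'
import { Agent } from '@openserv-labs/sdk'
import { z } from 'zod'

// Create the agent with a system prompt that guides its behavior
const agent = new Agent({
  systemPrompt: 'You are a helpful assistant that can add numbers together.'
})

// Register a capability the platform (or the local LLM) can invoke
agent.addCapability({
  name: 'sum',
  description: 'Adds two numbers together and returns the result',
  schema: z.object({
    a: z.number(),
    b: z.number()
  }),
  async run({ args }) {
    return `${args.a} + ${args.b} = ${args.a + args.b}`
  }
})

// Start the HTTP server that the OpenServ platform talks to
agent.start()

// Optional: test the agent locally with process() using your OpenAI API key
async function main() {
  const response = await agent.process({
    messages: [{ role: 'user', content: 'add 13 and 29' }]
  })
  // Log the full response; its exact shape depends on the SDK version
  console.log(JSON.stringify(response, null, 2))
}

main().catch(console.error)
```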
Key Components of the Agent
Agent Creation:
This creates a new agent with a system prompt that guides its behavior.
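For illustration (the prompt text here is just an example):

```typescript
const agent = new Agent({
  systemPrompt: 'You are a helpful assistant that can add numbers together.'
})
```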
Adding Capabilities:
This defines a capability named sum (shown in the excerpt below) that:
Provides a description for the platform to understand when to use it
Uses a Zod schema for type safety and validation
Implements the logic in the run function
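A sketch of the capability definition (exact field names may vary slightly between SDK versions):

```typescript
agent.addCapability({
  name: 'sum',
  description: 'Adds two numbers together and returns the result',
  schema: z.object({
    a: z.number(),
    b: z.number()
  }),
  async run({ args }) {
    // args is validated against the Zod schema before this runs
    return `${args.a} + ${args.b} = ${args.a + args.b}`
  }
})
```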
Starting the Server:
This launches an HTTP server that handles requests from the OpenServ platform.
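In the starter this is a single call:

```typescript
// Starts the HTTP server on PORT (default 7378) so the platform can reach the agent
agent.start()
```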
Local Testing with process():
This demonstrates how to test your agent locally without deploying it to the platform.
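For example (the message content is illustrative, and the exact response shape depends on the SDK version):

```typescript
async function main() {
  const response = await agent.process({
    messages: [{ role: 'user', content: 'add 13 and 29' }]
  })
  console.log(JSON.stringify(response, null, 2))
}

main().catch(console.error)
```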
Testing Locally with process()
The process() method is an SDK feature that allows you to test your agent locally before deploying it to the OpenServ platform. This is especially useful during development to verify your agent works as expected.
How process() Works
When you call process():
The SDK sends the user message to a Large Language Model (LLM), using your OpenAI API key
The AI model determines if your agent's capabilities should be used
If needed, it invokes your capabilities with the appropriate arguments
It returns the response to you for testing
Testing Complex Inputs and Edge Cases
You can extend the local testing in main() to try different inputs:
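For example (the messages below are illustrative):

```typescript
async function main() {
  // A straightforward request that should trigger the sum capability
  const simple = await agent.process({
    messages: [{ role: 'user', content: 'add 13 and 29' }]
  })

  // An edge case: very large numbers
  const large = await agent.process({
    messages: [{ role: 'user', content: 'what is 987654321 plus 123456789?' }]
  })

  // A message that should NOT trigger the capability
  const unrelated = await agent.process({
    messages: [{ role: 'user', content: 'tell me a joke' }]
  })

  console.log(JSON.stringify({ simple, large, unrelated }, null, 2))
}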
Exposing Your Local Server with Tunneling
During development, OpenServ needs to reach your agent running on your computer. Since your development machine typically doesn't have a public internet address, we'll use a tunneling tool.
What is Tunneling?
Tunneling creates a temporary secure pathway from the internet to your local development environment, allowing OpenServ to send requests to your agent while you're developing it. Think of it as creating a secure "tunnel" from OpenServ to your local machine.
Tunneling Options
Choose a tunneling tool:
ngrok (recommended for beginners)
Easy setup with graphical and command-line interfaces
Generous free tier with 1 concurrent connection
Web interface to inspect requests
localtunnel (open source option)
Completely free and open source
Simple command-line interface
No account required
Quick Setup with ngrok
Open your terminal and run:
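Assuming ngrok is installed and authenticated, point it at your agent's port (7378 by default):

```bash
# Expose the local agent server to the internet
ngrok http 7378
```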
Look for a line like Forwarding https://abc123.ngrok-free.app -> http://localhost:7378
Copy the https URL (e.g., https://abc123.ngrok-free.app) - you'll need this for the next steps
Integration with the OpenServ Platform
The agent.start() function in your code starts the HTTP server that communicates with the OpenServ platform. When the platform sends a request to your agent:
The server receives the request
The SDK parses the request and determines which capability to use
It executes the capability's run function
It formats and returns the response to the platform
Testing on the Platform
To test your agent on the OpenServ platform:
Start your local server:
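Typically with one of the following commands (script names assumed from a standard package.json setup):

```bash
# Development mode with automatic reloads (script name assumed)
npm run dev

# Or run the compiled build (script name assumed)
npm start
```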
Expose your server with a tunneling tool as described in the previous section
Register your agent on the OpenServ platform:
Go to Developer → Add Agent
Enter your agent name and capabilities
Set the Agent Endpoint to your tunneling tool URL
Create a Secret Key and update your .env file
Create a project on the platform:
Projects → Create New Project
Add your agent to the project
Interact with your agent through the platform
Advanced Capabilities
As you get more comfortable with the SDK, you can leverage more advanced methods and features such as file operations, task management, and user interaction via chat and messaging. Check the methods in the API Reference.
Production Deployment
When your agent is all set for production, it’s time to get it out there! Just deploy it to a hosting service so that it can be available 24/7 for users to enjoy.
Build your project:
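For a standard TypeScript setup (script name assumed):

```bash
# Compile TypeScript to JavaScript (output location depends on tsconfig.json)
npm run build
```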
Deploy to a hosting service such as one of the following (from simplest to most advanced):
Serverless (Beginner-friendly)
Vercel - Free tier available, easy deployment from GitHub
Netlify Functions - Similar to Vercel with a generous free tier
AWS Lambda - More complex but very scalable
Container-based (More control)
Render - Easy Docker deployment with free tier
Railway - Developer-friendly platform
Fly.io - Global deployment with generous free tier
Open source self-hosted (Maximum freedom)
Update your agent endpoint on the OpenServ platform with your production endpoint URL
Submit for review through the Developer dashboard
Happy building! We're excited to see what you will create with the OpenServ SDK.