Running Agents
Learn the different ways to execute agents: manual testing, scheduled runs, event-driven triggers, chat interfaces, and API calls.
Execution Methods
Agents can run in several ways:
- Manual Execution - You trigger them directly
- Scheduled - Run automatically on a schedule
- Document Events - Triggered by document changes
- Chat Interface - User interactions via chat
- API Calls - Programmatic execution
- Multi-Run Orchestration - Complex tasks broken into sequential steps
Manual Execution
When to Use
Manual execution is best for:
- Testing agents
- One-off tasks
- Debugging
- Development
- Ad-hoc requests
Methods
Agent Chat Interface
Access: Desk → Huf → Agent Chat
Steps:
- Select agent from dropdown
- Type your message
- Send
- View response
Best for: Interactive testing and debugging
Agent DocType Run Button
Access: Desk → Huf → Agent → [Your Agent]
Steps:
- Open agent
- Click “Run” button
- Enter prompt
- Execute
- View result
Best for: Quick testing from agent form
Agent Run DocType
Access: Desk → Huf → Agent Run → New
Steps:
- Create new Agent Run
- Select agent
- Enter prompt
- Save (executes automatically)
- View results
Best for: Programmatic testing and logging
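If you prefer to script this step, an Agent Run can also be created from a bench console. The sketch below is illustrative only: the field names agent and prompt are assumptions and should be checked against the Agent Run DocType on your site.

```python
import frappe

# Sketch: create an Agent Run from a bench console for logged, repeatable tests.
# The field names "agent" and "prompt" are assumptions; verify them on the Agent Run DocType.
run = frappe.get_doc({
    "doctype": "Agent Run",
    "agent": "Customer Support Agent",
    "prompt": "Summarize open support tickets for today",
})
run.insert()        # per the steps above, the run executes automatically on save
frappe.db.commit()

print(run.name)     # open this Agent Run record to review output and token usage
```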
Testing Workflow
Recommended process:
- Start with Agent Chat for quick tests
- Use Agent Run for detailed logging
- Review results in Agent Run logs
- Iterate based on findings
Scheduled Execution
When to Use
Scheduled runs are best for:
- Regular reports
- Periodic checks
- Daily summaries
- Hourly monitoring
- Weekly tasks
Configuration
Set up via Agent Trigger:
- Navigate to Desk → Huf → Agent Trigger
- Click New
- Select your agent
- Choose “Schedule” trigger type
- Set schedule frequency
- Configure prompt template
- Enable and save
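Triggers can also be created from code, for example in a setup script. The sketch below is purely illustrative: every field name in it (agent, trigger_type, schedule_frequency, prompt_template, enabled) is an assumed name, so verify them against the actual Agent Trigger DocType before relying on it.

```python
import frappe

# Hypothetical sketch of creating a scheduled Agent Trigger from code.
# All field names below are assumptions -- check the Agent Trigger DocType on your site.
trigger = frappe.get_doc({
    "doctype": "Agent Trigger",
    "agent": "Report Generator",
    "trigger_type": "Schedule",
    "schedule_frequency": "Daily",
    "prompt_template": "Generate daily sales summary for {date}",
    "enabled": 1,
})
trigger.insert()
frappe.db.commit()
```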
Schedule Options
Frequencies:
- Hourly: Every hour
- Daily: Once per day at specified time
- Weekly: Specific day(s) of week
- Custom Cron: Advanced scheduling
Examples:
```text
Daily at 9 AM:         "0 9 * * *"
Every 6 hours:         "0 */6 * * *"
Monday-Friday at 2 PM: "0 14 * * 1-5"
First day of month:    "0 0 1 * *"
```

Prompt Templates
Use templates to pass context:
```text
Generate daily sales summary for {date}:
- Total revenue
- Order count
- Top 5 customers
- Trends vs yesterday
Send report to sales@company.com
```

Dynamic data:
- Use {field_name} placeholders to access document fields
- Include date/time variables
- Reference related data
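As a rough illustration of how placeholders resolve, a template is just a string with named fields that the trigger fills in at run time. The snippet below only sketches that idea and is not the app's actual rendering code.

```python
from frappe.utils import today

# Illustrative only: show how a {date} placeholder in a prompt template could be filled.
template = "Generate daily sales summary for {date}"
prompt = template.format(date=today())  # e.g. "Generate daily sales summary for 2024-06-01"
print(prompt)
```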
Best Practices
Do:
- Run during off-peak hours for heavy tasks
- Stagger multiple agents to avoid overlap
- Use reasonable frequencies
- Monitor costs
- Test schedules before production (see the validation sketch after this list)
Don’t:
- Schedule too frequently (e.g., every minute)
- Run multiple heavy agents simultaneously
- Forget timezone considerations
- Ignore failed runs
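One practical way to test a schedule before production is to validate the cron expression and preview its next run times. A sketch using the croniter package (normally available in a Frappe bench, since the framework's scheduler uses it), run from a bench console:

```python
from datetime import datetime
from croniter import croniter

expr = "0 14 * * 1-5"              # Monday-Friday at 2 PM

# Validate the expression before saving it on the trigger
print(croniter.is_valid(expr))     # True

# Preview the next three run times to confirm the schedule does what you expect
it = croniter(expr, datetime.now())
for _ in range(3):
    print(it.get_next(datetime))
```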
Document Event Triggers
When to Use
Document events are best for:
- Validation on submission
- Auto-assignment on creation
- Notifications on updates
- Data enrichment on insert
- Workflow automation
Configuration
Set up via Agent Trigger:
- Navigate to Desk → Huf → Agent Trigger
- Click New
- Select your agent
- Choose “Doc Event” trigger type
- Select DocType
- Choose event (After Insert, On Submit, etc.)
- Add conditions if needed
- Configure prompt template
- Enable and save
Available Events
| Event | When It Fires | Common Use Cases |
|---|---|---|
| After Insert | Document created | Auto-assignment, enrichment |
| On Update | Document modified | Notifications, validation |
| On Submit | Document submitted | Validation, approval workflows |
| On Cancel | Document cancelled | Cleanup, notifications |
| Before Save | Before save | Validation, data correction |
Conditional Execution
Add filters to limit triggers:
```json
{
  "status": "Open",
  "priority": "High"
}
```

Only triggers when both conditions match.
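Conceptually, the filter is an AND over field/value pairs checked against the triggering document. The sketch below only illustrates that evaluation; it is not the app's actual condition engine.

```python
# Illustrative sketch of how an AND-style condition filter can be evaluated against a document.
def conditions_match(doc: dict, conditions: dict) -> bool:
    """Return True only when every field in the filter equals the document's value."""
    return all(doc.get(field) == value for field, value in conditions.items())

conditions = {"status": "Open", "priority": "High"}

print(conditions_match({"status": "Open", "priority": "High"}, conditions))  # True
print(conditions_match({"status": "Open", "priority": "Low"}, conditions))   # False
```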
Prompt Templates
Access document fields:
```text
A new {doctype} has been created: {name}
Customer: {customer}
Total: {grand_total}
Please process this document:
1. Validate the data
2. Assign to appropriate team
3. Send notifications
```

Best Practices
Do:
- Use specific events (not every event)
- Add conditions to limit executions
- Handle errors gracefully
- Test with draft documents first
Don’t:
- Trigger on every update (can cause loops)
- Use Before Save unless necessary
- Forget permission checks
- Create infinite trigger loops
Chat Interface
When to Use
Chat is best for:
- User-facing interactions
- Customer support
- Interactive assistants
- Real-time help
- Conversational workflows
Configuration
Enable chat:
- Open agent in Desk
- Check “Allow Chat”
- Save
Access points:
- Desk: Agent Chat DocType
- Frontend: /huf interface (expanding)
User Experience
For end users:
- Access chat interface
- Select or discover agent
- Type message
- Receive response
- Continue conversation
Features:
- Conversation history
- Context awareness
- Multi-turn interactions
- Tool usage transparency
Permissions
Access control:
- Users see only agents they have permission for
- Respects Frappe role-based permissions
- Can restrict by user, role, or custom rules
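Since chat respects standard Frappe permissions, the usual permission APIs can be used to verify access. A sketch of checking read access to an agent with frappe.has_permission (the Agent DocType name follows this guide; adjust if your setup differs):

```python
import frappe

# Sketch: check whether the logged-in user can read a specific Agent document.
agent_name = "Customer Support Agent"
if frappe.has_permission("Agent", ptype="read", doc=agent_name):
    print("User can chat with this agent")
else:
    print("User cannot see this agent")
```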
Best Practices
Do:
- Enable only for user-facing agents
- Set clear expectations in instructions
- Handle errors gracefully
- Provide escalation paths
- Monitor user satisfaction
Don’t:
- Enable for internal automation agents
- Forget to set permissions
- Ignore user feedback
- Leave errors unhandled
API Execution
When to Use
API is best for:
- External integrations
- Webhook handlers
- Custom frontends
- Third-party services
- Programmatic access
Configuration
Endpoint: run_agent_sync (whitelisted method)
Authentication: Standard Frappe API authentication
Request Format:
```python
import requests

# api_key and api_secret are standard Frappe API credentials for the calling user
response = requests.post(
    "https://yoursite.com/api/method/huf.ai.agent_integration.run_agent_sync",
    headers={
        "Authorization": f"token {api_key}:{api_secret}",
        "Content-Type": "application/json"
    },
    json={
        "agent_name": "Customer Support Agent",
        "prompt": "What is the status of order SO-001?",
        "session_id": "api:external_system:12345"
    }
)
```

Parameters
Required:
- agent_name: Name of agent to run
- prompt: Message to send to agent
Optional:
- session_id: Conversation session identifier
- context: Additional context data
- stream: Enable streaming responses
Response Format
```json
{
  "response": "Agent's response text",
  "run_id": "RUN-2024-001",
  "tokens_used": 150,
  "cost": 0.0023,
  "tool_calls": [...]
}
```

Session Management
For conversations:
- Use a consistent session_id
- Format: {channel}:{user_id}:{identifier}
- Example: api:external_system:user_12345
Benefits:
- Maintains conversation context
- Tracks user interactions
- Enables multi-turn conversations
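A short sketch of a multi-turn exchange that reuses one session_id across calls (endpoint and parameters as documented above; the credentials are placeholders):

```python
import requests

URL = "https://yoursite.com/api/method/huf.ai.agent_integration.run_agent_sync"
HEADERS = {
    "Authorization": "token <api_key>:<api_secret>",   # placeholder credentials
    "Content-Type": "application/json",
}
SESSION_ID = "api:external_system:user_12345"          # same id for every turn

def ask(prompt: str) -> dict:
    """Send one turn of the conversation, reusing the shared session id."""
    resp = requests.post(URL, headers=HEADERS, json={
        "agent_name": "Customer Support Agent",
        "prompt": prompt,
        "session_id": SESSION_ID,
    }, timeout=60)
    resp.raise_for_status()
    return resp.json()

first = ask("What is the status of order SO-001?")
follow_up = ask("And when will it ship?")   # the agent still has the first turn as context
```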
Error Handling
Handle errors:
```python
import requests

# url, headers and payload are the same values used in the request example above
try:
    response = requests.post(url, headers=headers, json=payload, timeout=60)
    if response.status_code == 200:
        data = response.json()
        # Process response
    else:
        # Handle API errors (bad request, missing permissions, unknown agent, ...)
        error = response.json()
except requests.RequestException as e:
    # Handle network failures and timeouts
    print(f"Request failed: {e}")
```

Best Practices
Do:
- Use proper authentication
- Handle errors gracefully
- Set timeouts
- Log API calls
- Rate limit if needed (see the backoff sketch after this list)
Don’t:
- Expose API keys
- Ignore errors
- Make blocking calls
- Forget rate limits
- Skip authentication
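For the rate limiting mentioned above, a generic client-side retry with exponential backoff works well. This sketch is not part of the Huf API; it simply wraps the documented endpoint call:

```python
import time
import requests

def call_agent_with_backoff(url: str, headers: dict, payload: dict, retries: int = 3) -> dict:
    """Generic client-side backoff: retry on 429/5xx responses, doubling the wait each time."""
    delay = 2
    for _ in range(retries):
        response = requests.post(url, headers=headers, json=payload, timeout=60)
        if response.status_code == 200:
            return response.json()
        if response.status_code == 429 or response.status_code >= 500:
            time.sleep(delay)
            delay *= 2                   # back off before the next attempt
            continue
        response.raise_for_status()      # other errors are not retryable
    raise RuntimeError("Agent call failed after retries")
```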
Multi-Run Orchestration
When to Use
Orchestration is best for:
- Complex tasks requiring multiple steps
- Long-running processes
- Tasks that need planning before execution
- Workflows with sequential dependencies
- Research and analysis tasks
How It Works
When you enable Multi Run on an agent:
- Trigger: API call creates an orchestration document
- Planning: Agent breaks the task into numbered steps
- Scheduling: Steps execute one-by-one (every minute via cron)
- Context: Scratchpad maintains information between steps
- Completion: Marked complete when all steps finish
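In rough pseudocode, the behavior described above amounts to the loop below. This is a conceptual sketch only, not the actual huf.ai.orchestration implementation; run_step stands in for a real agent execution.

```python
from dataclasses import dataclass

# Conceptual sketch of the documented multi-run flow; NOT the real huf.ai code.
@dataclass
class Orchestration:
    plan: list[str]            # numbered steps produced in the planning phase
    scratchpad: str = ""       # accumulated context shared between steps
    current_step: int = 0
    status: str = "Running"

def run_step(step: str, context: str) -> str:
    # Placeholder for a single agent execution; the real system calls the model here.
    return f"[output of: {step}]\n"

def process_one_step(o: Orchestration) -> None:
    """One scheduler tick: execute the next step, save its output, advance the pointer."""
    o.scratchpad += run_step(o.plan[o.current_step], o.scratchpad)
    o.current_step += 1
    if o.current_step >= len(o.plan):
        o.status = "Completed"   # marked complete when all steps finish

o = Orchestration(plan=["Identify key topics", "Create outline", "Write draft"])
while o.status != "Completed":
    process_one_step(o)          # in production, the cron scheduler drives this every minute
print(o.status)
```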
Configuration
Enable Multi-Run:
- Open agent in Desk
- Check Enable Multi Run
- Save
That’s it! The next time you call this agent, it will:
- Create an Agent Orchestration document
- Generate a plan
- Execute steps via scheduler
Execution Flow
```text
User Request → Create Orchestration → Planning Phase
        ↓
  Generate Steps
        ↓
  Save to agent_orchestration_plan
        ↓
  Scheduler picks up (every 1 min)
        ↓
  Execute Step 1 → Save output
        ↓
  Execute Step 2 → Save output
        ↓
       ...
        ↓
  All steps done → Completed
```

Monitoring Orchestrations
View orchestrations:
Navigate to Desk → Huf → Agent Orchestration
See:
- Status (Planned, Running, Completed, Failed)
- Current step number
- Plan steps and their status
- Scratchpad with accumulated context
- Error logs if any
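The same details can be pulled programmatically from a bench console with frappe.get_all. The field names (status, current_step) follow the SQL examples in the Manual Testing section below:

```python
import frappe

# Sketch: list the five most recent orchestrations with their status and progress.
rows = frappe.get_all(
    "Agent Orchestration",
    fields=["name", "status", "current_step"],
    order_by="creation desc",
    limit=5,
)
for row in rows:
    print(row.name, row.status, row.current_step)
```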
Manual Testing
Run scheduler immediately (don’t wait for cron):

```bash
bench --site <site_name> execute huf.ai.orchestration.scheduler.process_orchestrations
```

Check orchestration status:

```bash
bench --site <site_name> mariadb -e "SELECT name, status, current_step FROM \`tabAgent Orchestration\` ORDER BY creation DESC LIMIT 5;"
```

View step progress:

```bash
bench --site <site_name> mariadb -e "SELECT step_index, status, LEFT(output_ref, 80) FROM \`tabAgent Orchestration Plan\` WHERE parent='<orch_name>' ORDER BY step_index;"
```

Example Use Case
Content Research Agent:
Prompt: “Research and write a blog post about AI in healthcare”
Generated Plan:
- Identify key topics in AI healthcare
- Gather recent developments and statistics
- Find relevant case studies
- Create outline
- Write introduction
- Write main sections
- Write conclusion
- Review and refine
Each step executes with context from previous steps, building up the final output incrementally.
Best Practices
Do:
- Use for genuinely complex tasks (5+ logical steps)
- Write clear, comprehensive prompts
- Monitor orchestration progress
- Review step outputs for quality
Don’t:
- Enable for simple single-step tasks
- Expect real-time responses (steps execute async)
- Forget that each step costs tokens
- Use for user-facing chat interactions
Limitations (Phase-1)
Current implementation:
- Steps execute sequentially (no parallelism)
- One step per minute via scheduler
- No manual step intervention
- Basic error handling (a failed step stops the orchestration)
- No retry mechanism
Troubleshooting
Orchestration stuck:
- Check scheduler is running
- Run scheduler manually
- Review Error Log in Agent Orchestration
Steps failing:
- Check agent has valid provider/model
- Review step outputs for context issues
- Check Error Log field
Output truncated:
- output_ref is Long Text (unlimited)
- If issues persist, check the scratchpad for the full context
Choosing Execution Method
Decision Matrix
| Method | Use When | Pros | Cons |
|---|---|---|---|
| Manual | Testing, one-offs | Full control, immediate | Requires manual action |
| Scheduled | Regular tasks | Automated, reliable | Fixed timing |
| Doc Event | Document workflows | Reactive, contextual | Tied to DocTypes |
| Chat | User interactions | User-friendly, accessible | Requires user action |
| API | Integrations | Flexible, programmatic | Requires development |
| Orchestration | Complex multi-step tasks | Handles complexity, maintains context | Async, slower execution |
Combining Methods
You can use multiple methods:
- Chat for user interactions
- Scheduled for reports
- Doc events for automation
- API for integrations
Example:
- Customer Support Agent: Chat + API
- Report Generator: Scheduled
- Invoice Validator: Doc Event
- Lead Qualifier: Doc Event + API
Performance Considerations
Response Times
Factors affecting speed:
- Model selection (faster models = faster responses)
- Tool complexity (more tools = more time)
- Conversation history (longer = slower)
- Network latency (for API calls)
Optimization
Improve performance:
- Use faster models when possible
- Minimize tool calls
- Limit conversation history
- Cache common results
- Optimize tool queries
Cost Management
Monitor costs:
- Track token usage
- Review Agent Run logs
- Optimize prompts
- Use cheaper models when appropriate
- Limit conversation history
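For a quick cost overview, the Agent Run log can be aggregated from a bench console. In the sketch below, the field names agent, cost, and tokens_used are assumptions based on the API response format above; verify them against the Agent Run DocType.

```python
import frappe
from frappe.utils import add_days, today

# Sketch: total cost and token usage per agent over the last 7 days.
# Field names "agent", "cost" and "tokens_used" are assumptions -- check the Agent Run DocType.
since = add_days(today(), -7)
runs = frappe.get_all(
    "Agent Run",
    filters={"creation": [">=", since]},
    fields=["agent", "cost", "tokens_used"],
)

totals = {}
for run in runs:
    agent_totals = totals.setdefault(run.agent, {"cost": 0.0, "tokens": 0})
    agent_totals["cost"] += run.cost or 0
    agent_totals["tokens"] += run.tokens_used or 0

for agent, t in totals.items():
    print(f"{agent}: {t['tokens']} tokens, ~{t['cost']:.4f} spent")
```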
Troubleshooting
Agent not running:
- Check agent is Active
- Verify trigger is Enabled
- Check permissions
- Review error logs
- Test manually first
Scheduled runs not firing:
- Verify schedule configuration
- Check cron expression
- Ensure agent is Active
- Review scheduler logs
- Test with manual trigger
Doc events not triggering:
- Verify DocType selection
- Check event type
- Review conditions
- Test with manual execution
- Check permissions
API calls failing:
- Verify authentication
- Check endpoint URL
- Review request format
- Check agent name
- Review error messages
What’s Next?
- Monitoring - Track agent performance
- Managing Agents - Lifecycle management
- Creating Agents - Build new agents
- Triggers - Detailed trigger guide
Questions? Check GitHub discussions.