One Function to Rule Them All
A single universal tool that any AI agent can call with natural language to execute enterprise tasks. It eliminates complex tool management and creates intelligent workflows that learn and adapt.
# Instead of managing 50+ complex tools
import onefunction

onefunction({
    "task": "get customer data from slack and send follow-up email",
    "data": {"customer_id": "123"}
})
# Or use as an LLM wrapper
import os
import openai

client = openai.OpenAI(
    api_key=os.environ.get("ONEFUNCTION_API_KEY"),
    base_url="https://api.onefunction.xyz/v1",
)
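Requests then flow through the standard OpenAI SDK while OneFunction sits in the middle as the optimization layer. A minimal usage sketch under that setup; the model identifier and prompt below are examples, not parameters documented by OneFunction:

# Send an ordinary chat completion through the OneFunction endpoint.
# The model name is only an example; use whichever model your account routes to.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize yesterday's support tickets"}],
)
print(response.choices[0].message.content)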
Everything you need to build intelligent workflows
OneFunction provides the complete toolkit for AI agent orchestration, from natural language processing to enterprise-grade security.
See Natural Language → Function Calls
Watch how OneFunction automatically analyzes natural-language messages and routes them to the right functions with zero configuration.
import onefunction
# A natural-language message automatically calls the right functions
result = onefunction({
    "task": "get customer data from slack and send follow-up email",
    "data": {"customer_id": "123"}
})
# OneFunction automatically:
# 1. Analyzes intent: "get customer data" + "send email"
# 2. Routes to: slack_api.get_user() → email_service.send()
# 3. Handles: authentication, data flow, error recovery
# 4. Returns: structured results with execution trace
"get customer data from slack and send follow-up email"
Intelligent Function Routing
OneFunction automatically detects and routes to the right functions from your favorite tools and services.
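The routing layer itself is not shown in the public examples, but the underlying idea can be sketched: describe each tool, score the incoming task against those descriptions, and dispatch to the best matches. The snippet below is a conceptual illustration only; the tool registry, scoring logic, and route() helper are hypothetical, and a production router (per the RAG-powered routing described below) would compare embeddings rather than keywords.

# Conceptual illustration of intent-to-tool routing (not OneFunction internals).
# A real router would use embeddings / RAG; keyword overlap keeps this sketch self-contained.
TOOL_REGISTRY = {
    "slack_api.get_user": "get customer user data profile from slack",
    "email_service.send": "send follow-up email message to a customer",
    "crm.update_record": "update customer record in the crm",
}

def route(task: str, top_k: int = 2) -> list[str]:
    """Score each registered tool against the task and return the best matches."""
    task_words = set(task.lower().split())
    scores = {
        name: len(task_words & set(desc.split()))
        for name, desc in TOOL_REGISTRY.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(route("get customer data from slack and send follow-up email"))
# -> ['slack_api.get_user', 'email_service.send']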
From Setup to Success in Minutes
OneFunction's "plug-and-play" approach means you can start optimizing your AI agents immediately with zero configuration complexity.
1. One-Line Setup
Install OneFunction and point your LLM client to our endpoint. Auto-scan detects your tasks and optimizations.
2. Intelligent Routing
OneFunction analyzes your task intent and selects the optimal tools and sequences using RAG-powered routing.
3. Execution & Learning
Tasks execute with built-in governance while OneFunction learns patterns to create personalized playbooks.
4. Continuous Improvement
Each execution improves future performance through semantic caching and workflow synthesis.
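Semantic caching is described only at a high level, but the core idea is simple: key cached results by the meaning of the task rather than its exact text, and reuse a result when a new request is close enough. The sketch below is a self-contained illustration under that assumption; the SemanticCache class, similarity measure, and threshold are hypothetical, and a production system would compare embeddings rather than character sequences.

# Minimal semantic-cache sketch (illustrative; not OneFunction's implementation).
from difflib import SequenceMatcher

class SemanticCache:
    def __init__(self, threshold: float = 0.85):
        self.threshold = threshold
        self.entries: list[tuple[str, dict]] = []  # (task text, cached result)

    def lookup(self, task: str) -> dict | None:
        """Return a cached result whose task is similar enough, else None."""
        for cached_task, result in self.entries:
            if SequenceMatcher(None, task.lower(), cached_task.lower()).ratio() >= self.threshold:
                return result
        return None

    def store(self, task: str, result: dict) -> None:
        self.entries.append((task, result))

cache = SemanticCache()
cache.store("get customer data from slack and send follow-up email", {"status": "sent"})
# A re-phrased request can hit the cache instead of triggering fresh LLM calls.
print(cache.lookup("get customer data from slack and send a follow-up email"))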
Intelligent Network Orchestration
OneFunction creates an intelligent network of tools and workflows that adapt and improve with each interaction.
- Enterprise-grade reliability with comprehensive monitoring and failover systems.
- Semantic caching and workflow optimization reduce LLM calls and operational costs.
- Skip complex tool management and focus on building amazing AI experiences.
Simple, Transparent Pricing
Start free and scale as you grow. No hidden fees, no complex pricing tiers. Pay only for what you use.
- Up to 1,000 function calls/month
- Basic tool routing
- Community support
- Standard integrations
- Basic analytics
- Up to 50,000 function calls/month
- Advanced routing & caching
- Priority support
- Custom integrations
- Advanced analytics
- Workflow synthesis
- Local tool support
- Unlimited function calls
- Custom deployment options
- Dedicated support team
- SLA guarantees
- Advanced security features
- Custom integrations
- Training & onboarding
Frequently Asked Questions
What counts as a function call?
A function call is any request to OneFunction, whether it's a simple task execution or a complex workflow. We count the initial request, not the individual tool calls within.
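In other words, reusing the earlier example, one request is billed once regardless of how many tools it fans out to internally:

import onefunction

# Billed as 1 function call, even though it fans out internally to
# slack_api.get_user() and email_service.send().
onefunction({
    "task": "get customer data from slack and send follow-up email",
    "data": {"customer_id": "123"}
})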
Can I use my own LLM models?
Yes! OneFunction works as a wrapper around any LLM provider. You can use OpenAI, Anthropic, or even self-hosted models while benefiting from our optimization layer.
How does the learning system work?
OneFunction uses tracing and semantic caching to learn from your workflows. It creates personalized playbooks that improve over time, reducing costs and latency.
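The playbook format is not documented publicly, but the concept can be sketched: record which tools a traced workflow actually used, then replay that sequence for similar future tasks instead of re-planning with an LLM. Everything below (the Playbook class, its fields, and the execute() helper) is a hypothetical illustration, not OneFunction's internal representation.

# Conceptual playbook sketch (illustrative only; not OneFunction's internals).
from dataclasses import dataclass, field

@dataclass
class Playbook:
    task_pattern: str                                # e.g. "customer follow-up"
    steps: list[str] = field(default_factory=list)   # ordered tool calls seen in traces
    runs: int = 0                                    # how often the playbook has been reused

def execute(playbook: Playbook) -> None:
    """Replay the recorded steps instead of re-planning the workflow with an LLM."""
    for step in playbook.steps:
        print(f"executing {step}")                   # in practice, dispatch to the real tool
    playbook.runs += 1

playbook = Playbook("customer follow-up", ["slack_api.get_user", "email_service.send"])
execute(playbook)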