OmniPalm Documentation
Welcome to the OmniPalm documentation. OmniPalm is Meta's AI agent that connects to large language models (LLMs) and autonomously executes real-world tasks like sending emails, managing files, controlling browsers, and running terminal commands.
OmniPalm is built by Meta AI to provide powerful, enterprise-ready AI automation capabilities. Follow us on X for the latest updates.
Key Features
- Multi-LLM Support — Connect to Claude, GPT-4, Gemini, or local models
- Meta Ecosystem — Facebook, Instagram, WhatsApp, Messenger, Threads, and more
- Autonomous Execution — Chain tasks, run workflows, make decisions
- Local & Private — Runs on your machine, data stays local
- Extensible — Build custom plugins and integrations
Installation
OmniPalm can be installed via pip, Docker, or from source. Choose the method that best fits your needs.
pip (Recommended)
The simplest way to install OmniPalm is via pip:
# Install OmniPalm
pip install omnipalm
# Verify installation
omnipalm --version
Docker
For isolated deployments, use Docker:
# Pull the official image
docker pull meta/omnipalm:latest
# Run with persistent configuration
docker run -d \
  --name omnipalm \
  -v ~/.omnipalm:/root/.omnipalm \
  -e OMNIPALM_LLM_API_KEY=YOUR_KEY \
  meta/omnipalm:latest
From Source
For development or the latest features:
# Download the OmniPalm source
curl -fsSL https://omnipalm.meta.com/install.sh | sh
cd omnipalm
# Install in development mode
pip install -e ".[dev]"
# Run tests
pytest tests/
System Requirements
| Requirement | Minimum | Recommended |
|---|---|---|
| Python | 3.10 | 3.11+ |
| RAM | 4 GB | 8 GB+ |
| Disk Space | 500 MB | 2 GB+ |
| OS | Linux, macOS, Windows 10+ | |
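The Python minimum in the table above can be verified before installing. A small preflight sketch (the `python_ok` helper is ours for illustration, not part of the OmniPalm CLI):

```python
import sys

# Minimum supported interpreter, mirroring the requirements table (3.10).
MIN_PYTHON = (3, 10)

def python_ok(version_info=sys.version_info) -> bool:
    """Return True when the interpreter meets the minimum requirement."""
    return tuple(version_info[:2]) >= MIN_PYTHON

print(python_ok((3, 11, 4)))  # True
print(python_ok((3, 9, 18)))  # False
```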
Quick Start
Get OmniPalm running in under 5 minutes:
1. Initialize Configuration
# Create configuration directory and files
omnipalm init
# This creates ~/.omnipalm/ with:
# - config.yaml (main configuration)
# - history.db (interaction history)
# - plugins/ (custom plugins directory)
2. Configure Your LLM Provider
# Set your LLM provider (claude, openai, gemini, ollama)
omnipalm config set llm.provider claude
# Set your API key
omnipalm config set llm.api_key sk-ant-api03-xxxxx
# Optionally set a specific model
omnipalm config set llm.model claude-3-opus-20240229
3. Start OmniPalm
# Start interactive chat mode
omnipalm chat
# Or run a single command
omnipalm run "List all files in my Downloads folder"
# Start the background daemon
omnipalm start
4. Try Example Commands
> What's the weather like today?
✓ Searching for weather information...
Currently 72°F (22°C) in San Francisco, partly cloudy.
> Send an email to alice@example.com saying the report is ready
✓ Email sent to alice@example.com
Subject: Report Ready
Body: Hi Alice, the report you requested is now ready...
> Create a new Jira ticket for the bug we discussed
✓ Created ticket PALM-142
Title: Fix authentication timeout on slow connections
Configuration
OmniPalm stores configuration in ~/.omnipalm/config.yaml. You can edit this file directly or use the CLI.
Configuration File Structure
llm:
  provider: claude
  model: claude-3-opus-20240229
  api_key: ${OMNIPALM_LLM_API_KEY}
  temperature: 0.7
  max_tokens: 4096

agent:
  name: OmniPalm
  max_iterations: 25
  timeout: 300
  verbose: false
  require_confirmation: true

integrations:
  email:
    enabled: true
    provider: gmail
  calendar:
    enabled: true
    provider: google
  jira:
    enabled: false
  slack:
    enabled: false

security:
  sandbox_mode: false
  allowed_paths:
    - ~/
  log_all_actions: true
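The api_key entry above uses ${VAR} placeholder syntax. A minimal sketch of how such expansion can work (an illustration of the pattern, not OmniPalm's actual config loader; unset variables are left untouched here so validation can flag them later):

```python
import os
import re

# Matches ${VAR_NAME} placeholders in config values.
_VAR_PATTERN = re.compile(r"\$\{([A-Z0-9_]+)\}")

def expand_env_vars(value: str) -> str:
    """Replace ${VAR} placeholders with values from the environment.

    Unset variables are left as-is so the error surfaces during validation.
    """
    def _sub(match):
        return os.environ.get(match.group(1), match.group(0))
    return _VAR_PATTERN.sub(_sub, value)

os.environ["OMNIPALM_LLM_API_KEY"] = "sk-demo"
print(expand_env_vars("${OMNIPALM_LLM_API_KEY}"))  # sk-demo
print(expand_env_vars("${UNSET_VAR_EXAMPLE}"))     # ${UNSET_VAR_EXAMPLE}
```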
Environment Variables
Sensitive values can be set via environment variables:
| Variable | Description |
|---|---|
| OMNIPALM_LLM_API_KEY | API key for your LLM provider |
| OMNIPALM_CONFIG_DIR | Custom config directory (default: ~/.omnipalm) |
| OMNIPALM_LOG_LEVEL | Logging verbosity (DEBUG, INFO, WARNING, ERROR) |
| OMNIPALM_JIRA_TOKEN | Jira API token |
| OMNIPALM_SLACK_TOKEN | Slack bot token |
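As an example of how an override like OMNIPALM_CONFIG_DIR typically takes effect, here is a small resolution sketch (illustrative only, not OmniPalm's internals):

```python
import os

# Default used when OMNIPALM_CONFIG_DIR is not set.
DEFAULT_CONFIG_DIR = "~/.omnipalm"

def config_dir(env=os.environ) -> str:
    """Resolve the config directory, honoring OMNIPALM_CONFIG_DIR if set."""
    return env.get("OMNIPALM_CONFIG_DIR", DEFAULT_CONFIG_DIR)

print(config_dir({}))                                        # ~/.omnipalm
print(config_dir({"OMNIPALM_CONFIG_DIR": "/etc/omnipalm"}))  # /etc/omnipalm
```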
LLM Providers
OmniPalm supports multiple LLM providers. Each provider has different capabilities, pricing, and requirements.
Claude (Anthropic)
Claude is the recommended provider for complex reasoning tasks.
omnipalm config set llm.provider claude
omnipalm config set llm.model claude-3-opus-20240229
omnipalm config set llm.api_key YOUR_ANTHROPIC_API_KEY
OpenAI (GPT-4)
omnipalm config set llm.provider openai
omnipalm config set llm.model gpt-4-turbo
omnipalm config set llm.api_key YOUR_OPENAI_API_KEY
Gemini (Google)
omnipalm config set llm.provider gemini
omnipalm config set llm.model gemini-pro
omnipalm config set llm.api_key YOUR_GOOGLE_API_KEY
Local Models (Ollama/LM Studio)
Run models locally for privacy and offline use:
# For Ollama
omnipalm config set llm.provider ollama
omnipalm config set llm.base_url http://localhost:11434
omnipalm config set llm.model llama2:70b
# For LM Studio
omnipalm config set llm.provider lmstudio
omnipalm config set llm.base_url http://localhost:1234/v1
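Local providers such as LM Studio expose an OpenAI-compatible HTTP API at the configured base_url. Assuming the standard /chat/completions request shape (this sketches the generic protocol, not an OmniPalm API), a request can be built like so:

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("http://localhost:1234/v1", "llama2:70b", "Hello")
print(req.full_url)      # http://localhost:1234/v1/chat/completions
print(req.get_method())  # POST
```

Sending it is then a matter of passing `req` to `urllib.request.urlopen` while the local server is running.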
Meta Integrations
OmniPalm provides deep integration with Meta's family of apps and services. Each integration allows you to automate tasks, manage content, and interact with users across the Meta ecosystem.
Facebook
Automate your Facebook presence with full Page and content management capabilities.
integrations:
  facebook:
    enabled: true
    page_id: "your-page-id"
    access_token: ${FACEBOOK_ACCESS_TOKEN}
Capabilities: Post updates, schedule content, manage comments, view insights, create events, manage ads.
> Post to Facebook: "Excited to announce our new product launch!"
✓ Posted to Facebook Page
Post ID: 123456789
Reach: Pending
Instagram
Manage your Instagram content and engage with your audience.
integrations:
  instagram:
    enabled: true
    business_account_id: "your-account-id"
    access_token: ${INSTAGRAM_ACCESS_TOKEN}
Capabilities: Post photos and reels, manage stories, respond to comments and DMs, view insights, schedule content.
> Schedule an Instagram post for tomorrow at 9am with image product.jpg
✓ Post scheduled for 2026-02-06 09:00
Media: product.jpg
Caption: Ready for review
WhatsApp
Connect with customers through the WhatsApp Business API.
integrations:
  whatsapp:
    enabled: true
    phone_number_id: "your-phone-id"
    business_account_id: "your-business-id"
    access_token: ${WHATSAPP_ACCESS_TOKEN}
Capabilities: Send messages, manage templates, handle incoming messages, send media, broadcast updates.
> Send WhatsApp message to +1234567890: "Your order has shipped!"
✓ Message sent via WhatsApp
To: +1234567890
Status: Delivered
Messenger
Build conversational experiences with Messenger Platform.
integrations:
  messenger:
    enabled: true
    page_id: "your-page-id"
    access_token: ${MESSENGER_ACCESS_TOKEN}
Capabilities: Send and receive messages, create chatbot flows, send rich media, handle customer inquiries.
Threads
Engage with the Threads community through automated posting.
integrations:
  threads:
    enabled: true
    user_id: "your-user-id"
    access_token: ${THREADS_ACCESS_TOKEN}
Capabilities: Post text updates, reply to threads, quote posts, view engagement metrics.
> Post to Threads: "Just shipped a major update to OmniPalm! 🚀"
✓ Posted to Threads
Post ID: th_987654321
Replies enabled: Yes
Workplace
Streamline internal communication with Workplace integration.
integrations:
  workplace:
    enabled: true
    community_id: "your-community-id"
    access_token: ${WORKPLACE_ACCESS_TOKEN}
Capabilities: Post to groups, send messages, manage events, search directory, automate announcements.
Meta AI (Llama)
Access Meta's Llama models for AI-powered features.
integrations:
  meta_ai:
    enabled: true
    model: "llama-3.1-405b"
    api_key: ${META_AI_API_KEY}
Capabilities: Text generation, content analysis, code assistance, translation, summarization.
Meta Quest
Control and interact with Meta Quest VR devices and experiences.
integrations:
  quest:
    enabled: true
    device_id: "your-device-id"
    developer_token: ${QUEST_DEVELOPER_TOKEN}
Capabilities: Launch apps, manage library, stream content, control settings, integrate with Horizon Worlds.
CLI Reference
Commands
- omnipalm init: Initialize OmniPalm configuration. Creates the config directory and default configuration files.
- omnipalm chat: Start an interactive chat session with OmniPalm.
- omnipalm run "<command>": Execute a single command and exit.
- omnipalm config get|set <key> [value]: View or modify configuration values.
- omnipalm start: Start the OmniPalm daemon in the background.
- omnipalm stop: Stop the running OmniPalm daemon.
- omnipalm status: Show the current status of OmniPalm and active integrations.
- omnipalm history: View interaction history.
API Reference
Python SDK
Use OmniPalm programmatically in your Python applications:
from omnipalm import OmniPalm

# Initialize the agent
agent = OmniPalm(
    llm_provider="claude",
    api_key="your-api-key"
)

# Run a task
result = agent.run("Send an email to alice@example.com")
print(result.success)  # True
print(result.output)   # "Email sent successfully"

# Chat session
with agent.chat() as session:
    response = session.send("What files are in my Downloads?")
    print(response.text)
    response = session.send("Delete files older than 30 days")
    print(response.text)
REST API
When running the daemon, OmniPalm exposes a REST API:
Execute a command and return the result.
| Parameter | Type | Required | Description |
|---|---|---|---|
| command | string | Yes | The command to execute |
| context | object | No | Additional context for the command |
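A client for this endpoint can be sketched with the standard library. The daemon's port and endpoint path below are placeholders (they are not documented in this section); only the request body matches the parameter table above:

```python
import json
import urllib.request

# Placeholder URL: the daemon's actual host, port, and path may differ.
DAEMON_URL = "http://localhost:8080/run"

def build_run_request(command, context=None) -> urllib.request.Request:
    """Build a POST request for the daemon's run endpoint (not sent here)."""
    body = {"command": command}
    if context:
        body["context"] = context
    return urllib.request.Request(
        DAEMON_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_run_request("List files in ~/Downloads", {"cwd": "~"})
print(req.get_method())                 # POST
print(json.loads(req.data)["command"])  # List files in ~/Downloads
```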
Plugin Development
OmniPalm's functionality can be extended with custom plugins.
Creating a Plugin
from omnipalm.plugins import Plugin, tool

class MyCustomPlugin(Plugin):
    """A custom plugin for OmniPalm."""

    name = "my_custom_plugin"
    description = "Adds custom functionality"
    version = "1.0.0"

    @tool
    def my_custom_tool(self, input_text: str) -> str:
        """
        Description of what this tool does.

        Args:
            input_text: The input to process

        Returns:
            The processed result
        """
        # Your implementation here
        return f"Processed: {input_text}"

    @tool
    def another_tool(self, filename: str) -> dict:
        """Another tool with a different return type."""
        return {"filename": filename, "status": "processed"}
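To build intuition for what a decorator like @tool does, here is a self-contained, hypothetical sketch of the pattern (not OmniPalm's actual implementation): the decorator marks methods, and the plugin base class discovers them by that mark.

```python
import inspect

def tool(func):
    """Mark a method so the plugin loader can discover it (sketch only)."""
    func._is_tool = True
    return func

class Plugin:
    def tools(self):
        """Collect all bound methods marked with @tool."""
        return {
            name: member
            for name, member in inspect.getmembers(self, inspect.ismethod)
            if getattr(member, "_is_tool", False)
        }

class EchoPlugin(Plugin):
    @tool
    def echo(self, text: str) -> str:
        return f"Processed: {text}"

plugin = EchoPlugin()
print(sorted(plugin.tools()))  # ['echo']
print(plugin.echo("hello"))    # Processed: hello
```

The agent can then advertise each discovered tool (name plus docstring) to the LLM and dispatch calls back to the bound method.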
Installing a Plugin
# Install from file
omnipalm plugin install ./my_plugin.py
# Install from registry
omnipalm plugin install weather-plugin
# List installed plugins
omnipalm plugin list
Troubleshooting
LLM connection errors
Check the following:
- Verify your API key is correct: omnipalm config get llm.api_key
- Ensure you have network connectivity
- Check whether the provider has an outage on their status page
- For local models, ensure Ollama/LM Studio is running
Email authentication failures
For Gmail:
- Run omnipalm auth gmail to re-authenticate
- Ensure "Less secure app access" or App Passwords are configured
- Check whether 2FA requires an app-specific password
Tasks timing out
Increase the timeout setting:
omnipalm config set agent.timeout 600
For complex tasks, consider breaking them into smaller steps.
Frequently Asked Questions
Is OmniPalm free to use?
OmniPalm offers a free tier for individual developers. Enterprise plans are available for teams and organizations. You may also incur costs from your LLM provider (Anthropic, OpenAI, etc.) based on their pricing. Using local models via Ollama is completely free.
How is OmniPalm different from other AI agents?
OmniPalm is built by Meta AI with a focus on enterprise-ready features, multi-LLM support, an extensive plugin system, and seamless integration with Meta's ecosystem of AI tools including Llama models.
What data leaves my machine?
OmniPalm sends prompts to your configured LLM provider. If you use local models (Ollama/LM Studio), no data leaves your machine. Your command history and configuration are always stored locally. See our Security Guide for details.
Can I run OmniPalm on a server?
Yes, OmniPalm can run headlessly on servers. Use the daemon mode (omnipalm start) and interact via the REST API. Docker deployment is recommended for production use.