Ajarbot

A lightweight, cost-effective AI agent framework for building proactive bots with Claude and other LLMs. Features intelligent memory management, multi-platform messaging support, and efficient monitoring with the Pulse & Brain architecture.

Features

  • Flexible Claude Integration: Use Pro subscription OR pay-per-token API via Agent SDK (no server needed)
  • Cost-Optimized AI: Default Haiku 4.5 model (12x cheaper), auto-caching on Sonnet (90% savings), dynamic model switching
  • Smart Memory System: SQLite-based memory with automatic context retrieval and hybrid vector search
  • Multi-Platform Adapters: Run on Slack, Telegram, and more simultaneously
  • 15 Integrated Tools: File ops, shell commands, Gmail, Google Calendar, Contacts
  • Pulse & Brain Monitoring: 92% cost savings with intelligent conditional monitoring (recommended)
  • Task Scheduling: Cron-like scheduled tasks with flexible cadences
  • Multi-LLM Support: Claude (Anthropic) primary, GLM (z.ai) optional

Quick Start

Option 1: Agent SDK Mode (Pro subscription)

# Clone and install
git clone https://vulcan.apophisnetworking.net/jramos/ajarbot.git
cd ajarbot
pip install -r requirements.txt

# Authenticate with Claude CLI (one-time setup)
claude auth login

# Configure adapters
cp .env.example .env
cp config/adapters.example.yaml config/adapters.local.yaml
# Edit config/adapters.local.yaml with your Slack/Telegram tokens

# Run
run.bat              # Windows
python ajarbot.py    # Linux/Mac

Option 2: API Mode (Pay-per-token)

# Clone and install
git clone https://vulcan.apophisnetworking.net/jramos/ajarbot.git
cd ajarbot
pip install -r requirements.txt

# Configure
cp .env.example .env
# Edit .env and add:
#   AJARBOT_LLM_MODE=api
#   ANTHROPIC_API_KEY=sk-ant-...

# Run
run.bat              # Windows
python ajarbot.py    # Linux/Mac

See CLAUDE_CODE_SETUP.md for detailed setup and mode comparison.

Model Switching Commands

Send these to your bot:

  • /haiku - Fast, cheap (default)
  • /sonnet - Smart, caching enabled (auto 90% cost savings)
  • /status - Check current model and settings
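Bot-side handling of these commands can be sketched as a simple dispatcher. This is a hypothetical helper for illustration only; the framework's actual command handling and model identifiers are not shown here.

```python
# Hypothetical dispatcher for the /haiku, /sonnet, /status commands.
# Model identifier strings are assumptions, not the framework's real names.
MODELS = {
    "/haiku": "claude-haiku",
    "/sonnet": "claude-sonnet",
}

def handle_command(text: str, state: dict) -> str:
    """Map a slash command onto the bot's model state."""
    if text in MODELS:
        state["model"] = MODELS[text]
        return f"Switched to {state['model']}"
    if text == "/status":
        return f"Current model: {state['model']}"
    return "Unknown command"
```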

Core Concepts

Agent

The central component that handles LLM interactions with automatic context loading:

  • Loads personality from SOUL.md
  • Retrieves user preferences from users/{username}.md
  • Searches relevant memory chunks
  • Maintains conversation history

from agent import Agent

agent = Agent(provider="claude")
response = agent.chat("Tell me about Python", username="alice")

Memory System

SQLite-based memory with full-text search:

# Write to memory
agent.memory.write_memory("Completed task X", daily=True)

# Update user preferences
agent.memory.update_user("alice", "## Preference\n- Likes Python")

# Search memory
results = agent.memory.search("python")
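Under the hood the store is SQLite with FTS5 full-text indexing (see memory_system.py). The pattern can be sketched standalone, without the framework; the schema below is illustrative and may differ from the real one:

```python
import sqlite3

# Standalone sketch of an FTS5-backed memory table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE memory USING fts5(content)")
conn.execute("INSERT INTO memory VALUES ('Completed Python API task')")
conn.execute("INSERT INTO memory VALUES ('Reviewed deployment checklist')")

# MATCH runs a full-text query against the index (case-insensitive for ASCII)
rows = conn.execute(
    "SELECT content FROM memory WHERE memory MATCH ?", ("python",)
).fetchall()
```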

Task Management

Built-in task tracking:

# Add task
task_id = agent.memory.add_task(
    "Implement API endpoint",
    "Details: REST API for user auth"
)

# Update status
agent.memory.update_task(task_id, status="in_progress")

# Get tasks
pending = agent.memory.get_tasks(status="pending")

Pulse & Brain Architecture

The most cost-effective way to run proactive monitoring:

from agent import Agent
from pulse_brain import PulseBrain

agent = Agent(provider="claude", enable_heartbeat=False)

# Pulse runs pure Python checks (zero cost)
# Brain only invoked when needed (92% cost savings)
pb = PulseBrain(agent, pulse_interval=60)
pb.start()

Cost comparison:

  • Traditional polling: ~$0.48/day
  • Pulse & Brain: ~$0.04/day
  • Savings: 92%
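The savings figure is just arithmetic on the two daily costs:

```python
traditional = 0.48  # $/day, traditional polling
pulse_brain = 0.04  # $/day, Pulse & Brain
savings_pct = round((traditional - pulse_brain) / traditional * 100)  # 92
```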

Multi-Platform Adapters

Run your bot on multiple messaging platforms simultaneously:

from adapters.runtime import AdapterRuntime
from adapters.slack.adapter import SlackAdapter
from adapters.telegram.adapter import TelegramAdapter

# Instantiate adapters with credentials from config/adapters.local.yaml
# (see Adapter Configuration below)
slack_adapter = SlackAdapter(...)
telegram_adapter = TelegramAdapter(...)

runtime = AdapterRuntime(agent)
runtime.add_adapter(slack_adapter)
runtime.add_adapter(telegram_adapter)

await runtime.start()

Task Scheduling

Cron-like scheduled tasks:

from scheduled_tasks import TaskScheduler, ScheduledTask

scheduler = TaskScheduler(agent)

task = ScheduledTask(
    "morning-brief",
    "What are today's priorities?",
    schedule="08:00",
    username="alice"
)

scheduler.add_task(task)
scheduler.start()

Usage Examples

Basic Chat with Memory

from agent import Agent

agent = Agent(provider="claude")

# First conversation
agent.chat("I'm working on a Python API", username="bob")

# Later conversation - agent remembers
response = agent.chat("How's the API coming?", username="bob")
# Agent retrieves context about Bob's Python API work

Model Switching

agent = Agent(provider="claude")

# Use Claude for complex reasoning
response = agent.chat("Explain quantum computing")

# Switch to GLM for faster responses
agent.switch_model("glm")
response = agent.chat("What's 2+2?")

Custom Pulse Checks

from pulse_brain import PulseBrain, PulseCheck, BrainTask, CheckType

def check_disk_space():
    import shutil
    usage = shutil.disk_usage("/")
    percent = (usage.used / usage.total) * 100
    return {
        "status": "error" if percent > 90 else "ok",
        "percent": percent
    }

pulse_check = PulseCheck("disk", check_disk_space, interval_seconds=300)

brain_task = BrainTask(
    name="disk-advisor",
    check_type=CheckType.CONDITIONAL,
    prompt_template="Disk is {percent:.1f}% full. Suggest cleanup.",
    condition_func=lambda data: data.get("percent", 0) > 90
)

pb = PulseBrain(agent)
pb.add_pulse_check(pulse_check)
pb.add_brain_task(brain_task)
pb.start()
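The CONDITIONAL gating above boils down to: run the cheap check every interval, and only render an LLM prompt when the condition fires. Stripped of the framework, that logic is (a hypothetical sketch; BrainTask handles this internally):

```python
def maybe_invoke_brain(check_result: dict, condition, prompt_template: str):
    """Return a rendered prompt only when the condition fires; None otherwise."""
    if not condition(check_result):
        return None  # zero-cost path: no LLM call is made
    return prompt_template.format(**check_result)
```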

Skills from Messaging Platforms

from adapters.skill_integration import SkillInvoker

skill_invoker = SkillInvoker()

def skill_preprocessor(message):
    if message.text.startswith("/"):
        parts = message.text.split(maxsplit=1)
        skill_name = parts[0][1:]
        args = parts[1] if len(parts) > 1 else ""

        if skill_name in skill_invoker.list_available_skills():
            skill_info = skill_invoker.get_skill_info(skill_name)
            message.text = skill_info["body"].replace("$ARGUMENTS", args)

    return message

runtime.add_preprocessor(skill_preprocessor)

Then from Slack/Telegram:

@bot /code-review adapters/slack/adapter.py
@bot /deploy --env prod --version v1.2.3

Architecture

┌──────────────────────────────────────────────────────┐
│                    Ajarbot Core                      │
│                                                      │
│  ┌────────────┐  ┌────────────┐  ┌──────────────┐  │
│  │   Agent    │  │   Memory   │  │ LLM Interface│  │
│  │            │──│   System   │──│(Claude/GLM)  │  │
│  └─────┬──────┘  └────────────┘  └──────────────┘  │
│        │                                            │
│        │         ┌────────────────┐                 │
│        └─────────│  Pulse & Brain │                 │
│                  │   Monitoring   │                 │
│                  └────────────────┘                 │
└──────────────────────┬───────────────────────────────┘
                       │
         ┌─────────────┴─────────────┐
         │                           │
    ┌────▼─────┐              ┌──────▼──────┐
    │  Slack   │              │  Telegram   │
    │ Adapter  │              │   Adapter   │
    └──────────┘              └─────────────┘

Key Components

  1. agent.py - Main agent class with automatic context loading
  2. memory_system.py - SQLite-based memory with FTS5 search
  3. llm_interface.py - Unified interface for Claude and GLM
  4. pulse_brain.py - Cost-effective monitoring system
  5. scheduled_tasks.py - Cron-like task scheduler
  6. adapters/ - Multi-platform messaging support
    • base.py - Abstract adapter interface
    • runtime.py - Message routing and processing
    • slack/, telegram/ - Platform implementations
  7. config/ - Configuration management

Documentation

Comprehensive documentation is available in the docs/ directory:

  • Getting Started
  • Core Systems
  • Platform Integration
  • Advanced Topics

Project Structure

ajarbot/
├── agent.py                      # Main agent class
├── memory_system.py              # Memory management
├── llm_interface.py              # LLM provider interface
├── pulse_brain.py                # Pulse & Brain monitoring
├── scheduled_tasks.py            # Task scheduler
├── heartbeat.py                  # Legacy heartbeat system
├── hooks.py                      # Event hooks
├── bot_runner.py                 # Multi-platform bot runner
├── adapters/                     # Platform adapters
│   ├── base.py                   # Base adapter interface
│   ├── runtime.py                # Adapter runtime
│   ├── skill_integration.py      # Skills system
│   ├── slack/                    # Slack adapter
│   └── telegram/                 # Telegram adapter
├── config/                       # Configuration files
│   ├── config_loader.py
│   └── adapters.yaml
├── docs/                         # Documentation
├── memory_workspace/             # Memory storage
└── examples/                     # Example scripts
    ├── example_usage.py
    ├── example_bot_with_pulse_brain.py
    ├── example_bot_with_scheduler.py
    └── example_bot_with_skills.py

Configuration

Environment Variables

# LLM Mode (optional - defaults to agent-sdk)
export AJARBOT_LLM_MODE="agent-sdk"  # Use Pro subscription
# OR
export AJARBOT_LLM_MODE="api"        # Use pay-per-token API

# Required for API mode only
export ANTHROPIC_API_KEY="sk-ant-..."

# Optional: Alternative LLM
export GLM_API_KEY="..."

# Adapter credentials (stored in config/adapters.local.yaml)
export AJARBOT_SLACK_BOT_TOKEN="xoxb-..."
export AJARBOT_SLACK_APP_TOKEN="xapp-..."
export AJARBOT_TELEGRAM_BOT_TOKEN="123456:ABC..."

Adapter Configuration

Generate configuration template:

python bot_runner.py --init

Edit config/adapters.local.yaml:

adapters:
  slack:
    enabled: true
    credentials:
      bot_token: "xoxb-..."
      app_token: "xapp-..."

  telegram:
    enabled: true
    credentials:
      bot_token: "123456:ABC..."

Testing

Run tests to verify installation:

# Test skills system
python test_skills.py

# Test task scheduler
python test_scheduler.py

Contributing

Contributions are welcome! Please:

  1. Follow PEP 8 style guidelines
  2. Add tests for new features
  3. Update documentation
  4. Keep code concise and maintainable

License

MIT License - See LICENSE file for details

