Refactor: Clean up obsolete files and organize codebase structure

This commit removes deprecated modules and reorganizes code into logical directories:

Deleted files (superseded by newer systems):
- claude_code_server.py (replaced by agent-sdk direct integration)
- heartbeat.py (superseded by scheduled_tasks.py)
- pulse_brain.py (unused in production)
- config/pulse_brain_config.py (obsolete config)

Created directory structure:
- examples/ (7 example files: example_*.py, demo_*.py)
- tests/ (5 test files: test_*.py)

Updated imports:
- agent.py: Removed heartbeat module and all enable_heartbeat logic
- bot_runner.py: Removed heartbeat parameter from Agent initialization
- llm_interface.py: Updated deprecated claude_code_server message

Preserved essential files:
- hooks.py (for future use)
- adapters/skill_integration.py (for future use)
- All Google integration tools (Gmail, Calendar, Contacts)
- GLM provider code (backward compatibility)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-15 09:57:39 -07:00
parent f018800d94
commit a8665d8c72
26 changed files with 1068 additions and 1067 deletions

CLAUDE_CODE_SETUP.md (new file, 256 lines)

@@ -0,0 +1,256 @@
# Claude Agent SDK Setup
Use your **Claude Pro subscription** OR **API key** with ajarbot - no separate server needed.
## What is the Agent SDK?
The Claude Agent SDK lets you use Claude directly from Python using either:
- **Your Pro subscription** (usage included in your plan, subject to Pro limits)
- **Your API key** (pay-per-token)
The SDK automatically handles authentication and runs Claude in-process - no FastAPI server required.
## Quick Start
### 1. Install Dependencies
```bash
pip install -r requirements.txt
```
This installs `claude-agent-sdk` along with all other dependencies.
### 2. Choose Your Mode
Set `AJARBOT_LLM_MODE` in your `.env` file (or leave it unset for default):
```bash
# Use Claude Pro subscription (default - recommended for personal use)
AJARBOT_LLM_MODE=agent-sdk
# OR use pay-per-token API
AJARBOT_LLM_MODE=api
ANTHROPIC_API_KEY=sk-ant-...
```
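The launcher resolves this variable with a simple precedence: use the value if set, otherwise default to `agent-sdk`, and fail fast in `api` mode when no key is present. A minimal sketch of that resolution (the function name is illustrative; the real logic lives in `ajarbot.py`'s pre-flight checks):

```python
# Illustrative sketch of the mode resolution described above; the real
# checks live in ajarbot.py's PreflightCheck class.
def resolve_llm_mode(env: dict) -> str:
    mode = env.get("AJARBOT_LLM_MODE", "agent-sdk")  # agent-sdk is the default
    if mode not in ("agent-sdk", "api"):
        raise ValueError(f"Unknown AJARBOT_LLM_MODE: {mode}")
    if mode == "api" and not env.get("ANTHROPIC_API_KEY"):
        raise ValueError("ANTHROPIC_API_KEY is required in api mode")
    return mode

print(resolve_llm_mode({}))  # → agent-sdk
print(resolve_llm_mode({"AJARBOT_LLM_MODE": "api", "ANTHROPIC_API_KEY": "sk-ant-..."}))  # → api
```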
### 3. Authenticate (Agent SDK mode only)
If using `agent-sdk` mode, authenticate with Claude CLI:
```bash
# Install Claude CLI (if not already installed)
# Download from: https://claude.ai/download
# Login with your Claude account
claude auth login
```
This opens a browser window to authenticate with your claude.ai account.
### 4. Run Your Bot
**Windows:**
```bash
run.bat
```
**Linux/Mac:**
```bash
python ajarbot.py
```
That's it! No separate server to manage.
## Architecture Comparison
### Old Setup (Deprecated)
```
Telegram/Slack → ajarbot → FastAPI Server (localhost:8000) → Claude Code SDK → Claude
```
### New Setup (Current)
```
Telegram/Slack → ajarbot → Claude Agent SDK → Claude (Pro OR API)
```
The new setup eliminates the FastAPI server, reducing complexity and removing an extra process.
## Mode Details
### Agent SDK Mode (Default)
**Pros:**
- Uses Pro subscription (no per-token cost, subject to Pro limits)
- No API key needed
- Higher context window (200K tokens)
- Simple authentication via Claude CLI
**Cons:**
- Requires Node.js and Claude CLI installed
- Subject to Pro subscription rate limits
- Not suitable for multi-user production
**Setup:**
```bash
# .env file
AJARBOT_LLM_MODE=agent-sdk
# Authenticate once
claude auth login
```
### API Mode
**Pros:**
- No CLI authentication needed
- Predictable pay-per-token pricing
- Works in any environment (no Node.js required)
- Better for production/multi-user scenarios
**Cons:**
- Costs money per API call
- Requires managing API keys
**Setup:**
```bash
# .env file
AJARBOT_LLM_MODE=api
ANTHROPIC_API_KEY=sk-ant-...
```
## Cost Comparison
| Mode | Cost Model | Best For |
|------|-----------|----------|
| **Agent SDK (Pro)** | $20/month flat rate | Heavy personal usage |
| **API (pay-per-token)** | ~$0.25-$3 per 1M tokens | Light usage, production |
With the default Haiku model, API mode costs approximately:
- ~$0.04/day (about $1.20/month) for moderate personal use (1,000 messages/month)
- Proportionally less for lighter usage
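These figures follow from straightforward token math; the per-message token count and blended price below are assumptions for illustration, not measured values:

```python
# Back-of-envelope check of the estimate above. Token count per message
# and blended $/1M token price are assumed, not measured.
msgs_per_month = 1000
tokens_per_msg = 1200        # assumed blended input + output tokens
price_per_mtok = 1.00        # assumed blended $/1M tokens on Haiku
monthly_cost = msgs_per_month * tokens_per_msg / 1_000_000 * price_per_mtok
print(f"${monthly_cost:.2f}/month, ${monthly_cost / 30:.3f}/day")  # → $1.20/month, $0.040/day
```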
## Pre-Flight Checks
The `ajarbot.py` launcher runs automatic checks before starting:
**Agent SDK mode checks:**
- Python 3.10+
- Node.js available
- Claude CLI authenticated
- Config file exists
**API mode checks:**
- Python 3.10+
- `.env` file exists
- `ANTHROPIC_API_KEY` is set
- Config file exists
Run health check manually:
```bash
python ajarbot.py --health
```
## Troubleshooting
### "Node.js not found"
Agent SDK mode requires Node.js. Either:
1. Install Node.js from https://nodejs.org
2. Switch to API mode (set `AJARBOT_LLM_MODE=api`)
### "Claude CLI not authenticated"
```bash
# Check authentication status
claude auth status
# Re-authenticate
claude auth logout
claude auth login
```
### "Agent SDK not available"
```bash
pip install claude-agent-sdk
```
If installation fails, switch to API mode.
### Rate Limits (Agent SDK mode)
If you hit Pro subscription limits:
- Wait for limit refresh (usually 24 hours)
- Switch to API mode temporarily:
```bash
# In .env
AJARBOT_LLM_MODE=api
ANTHROPIC_API_KEY=sk-ant-...
```
### "ANTHROPIC_API_KEY not set" (API mode)
Create a `.env` file in the project root:
```bash
cp .env.example .env
# Edit .env and add your API key
```
Get your API key from: https://console.anthropic.com/settings/keys
## Migration from Old Setup
If you previously used the FastAPI server (`claude_code_server.py`):
1. **Remove old environment variables:**
```bash
# Delete these from .env
USE_CLAUDE_CODE_SERVER=true
CLAUDE_CODE_SERVER_URL=http://localhost:8000
```
2. **Set new mode:**
```bash
# Add to .env
AJARBOT_LLM_MODE=agent-sdk # or "api"
```
3. **Stop the old server** (no longer needed):
- Any running `claude_code_server.py` process can be stopped; it is no longer used
4. **Run with new launcher:**
```bash
run.bat # Windows
python ajarbot.py # Linux/Mac
```
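Steps 1-2 can also be scripted. A hypothetical helper (not part of ajarbot) that rewrites `.env` accordingly:

```python
# Hypothetical migration helper (not shipped with ajarbot): drop the
# deprecated server variables and make sure AJARBOT_LLM_MODE is set.
from pathlib import Path

DEPRECATED = ("USE_CLAUDE_CODE_SERVER", "CLAUDE_CODE_SERVER_URL")

def migrate_env(path: Path, mode: str = "agent-sdk") -> None:
    lines = path.read_text().splitlines() if path.exists() else []
    # Keep every line whose variable name is not deprecated
    kept = [ln for ln in lines
            if ln.split("=", 1)[0].strip() not in DEPRECATED]
    if not any(ln.startswith("AJARBOT_LLM_MODE=") for ln in kept):
        kept.append(f"AJARBOT_LLM_MODE={mode}")
    path.write_text("\n".join(kept) + "\n")
```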
See [MIGRATION.md](MIGRATION.md) for detailed migration guide.
## Features
All ajarbot features work in both modes:
- 15 tools (file ops, system commands, Gmail, Calendar, Contacts)
- Multi-platform adapters (Slack, Telegram)
- Memory system with hybrid search
- Task scheduling
- Google integration (Gmail, Calendar, Contacts)
- Usage tracking (API mode only)
## Security
**Agent SDK mode:**
- Uses your Claude.ai authentication
- No API keys to manage
- Credentials stored by Claude CLI (secure)
- Runs entirely on localhost
**API mode:**
- API key in `.env` file (gitignored)
- Environment variable isolation
- No data leaves your machine except to Claude's API
Both modes are suitable for personal bots. API mode is recommended for production/multi-user scenarios.
## Sources
- [Claude Agent SDK GitHub](https://github.com/anthropics/anthropic-sdk-python)
- [Claude CLI Download](https://claude.ai/download)
- [Anthropic API Documentation](https://docs.anthropic.com/)


@@ -15,12 +15,13 @@ A lightweight, cost-effective AI agent framework for building proactive bots wit
## Features
- **Flexible Claude Integration**: Use Pro subscription OR pay-per-token API via Agent SDK (no server needed)
- **Cost-Optimized AI**: Default Haiku 4.5 model (12x cheaper), auto-caching on Sonnet (90% savings), dynamic model switching
-- **Smart Memory System**: SQLite-based memory with automatic context retrieval and FTS search
+- **Smart Memory System**: SQLite-based memory with automatic context retrieval and hybrid vector search
- **Multi-Platform Adapters**: Run on Slack, Telegram, and more simultaneously
- **15 Integrated Tools**: File ops, shell commands, Gmail, Google Calendar, Contacts
- **Pulse & Brain Monitoring**: 92% cost savings with intelligent conditional monitoring (recommended)
- **Task Scheduling**: Cron-like scheduled tasks with flexible cadences
- **Tool Use System**: File operations, command execution, and autonomous task completion
- **Multi-LLM Support**: Claude (Anthropic) primary, GLM (z.ai) optional
## Quick Start


@@ -3,7 +3,6 @@
import threading
from typing import List, Optional
-from heartbeat import Heartbeat
from hooks import HooksSystem
from llm_interface import LLMInterface
from memory_system import MemorySystem
@@ -11,7 +10,7 @@ from self_healing import SelfHealingSystem
from tools import TOOL_DEFINITIONS, execute_tool
# Maximum number of recent messages to include in LLM context
-MAX_CONTEXT_MESSAGES = 3  # Reduced from 5 to save tokens
+MAX_CONTEXT_MESSAGES = 10  # Increased for better context retention
# Maximum characters of agent response to store in memory
MEMORY_RESPONSE_PREVIEW_LENGTH = 200
# Maximum conversation history entries before pruning
@@ -19,13 +18,12 @@ MAX_CONVERSATION_HISTORY = 50
class Agent:
"""AI Agent with memory, LLM, heartbeat, and hooks."""
"""AI Agent with memory, LLM, and hooks."""
def __init__(
self,
provider: str = "claude",
workspace_dir: str = "./memory_workspace",
-enable_heartbeat: bool = False,
) -> None:
self.memory = MemorySystem(workspace_dir)
self.llm = LLMInterface(provider)
@@ -37,12 +35,6 @@ class Agent:
self.memory.sync()
self.hooks.trigger("agent", "startup", {"workspace_dir": workspace_dir})
-self.heartbeat: Optional[Heartbeat] = None
-if enable_heartbeat:
-self.heartbeat = Heartbeat(self.memory, self.llm)
-self.heartbeat.on_alert = self._on_heartbeat_alert
-self.heartbeat.start()
def _get_context_messages(self, max_messages: int) -> List[dict]:
"""Get recent messages without breaking tool_use/tool_result pairs.
@@ -91,10 +83,6 @@ class Agent:
return result
-def _on_heartbeat_alert(self, message: str) -> None:
-"""Handle heartbeat alerts."""
-print(f"\nHeartbeat Alert:\n{message}\n")
def _prune_conversation_history(self) -> None:
"""Prune conversation history to prevent unbounded growth.
@@ -172,7 +160,7 @@ class Agent:
self._prune_conversation_history()
# Tool execution loop
-max_iterations = 5  # Reduced from 10 to save costs
+max_iterations = 15  # Increased for complex multi-step operations
# Enable caching for Sonnet to save 90% on repeated system prompts
use_caching = "sonnet" in self.llm.model.lower()
@@ -282,13 +270,9 @@ class Agent:
def switch_model(self, provider: str) -> None:
"""Switch LLM provider."""
self.llm = LLMInterface(provider)
-if self.heartbeat:
-self.heartbeat.llm = self.llm
def shutdown(self) -> None:
"""Cleanup and stop background services."""
-if self.heartbeat:
-self.heartbeat.stop()
self.memory.close()
self.hooks.trigger("agent", "shutdown", {})

ajarbot.py (new file, 205 lines)

@@ -0,0 +1,205 @@
"""
Unified launcher for ajarbot with pre-flight checks.
This launcher:
1. Performs environment checks (Node.js, Claude CLI auth)
2. Sets sensible defaults (agent-sdk mode)
3. Delegates to bot_runner.main() for actual execution
Usage:
python ajarbot.py # Run with default config
python ajarbot.py --config custom.yaml # Use custom config file
python ajarbot.py --init # Generate config template
python ajarbot.py --setup-google # Set up Google OAuth
python ajarbot.py --health # Run health check
Environment variables:
AJARBOT_LLM_MODE # LLM mode: "agent-sdk" or "api" (default: agent-sdk)
AJARBOT_SLACK_BOT_TOKEN # Slack bot token (xoxb-...)
AJARBOT_SLACK_APP_TOKEN # Slack app token (xapp-...)
AJARBOT_TELEGRAM_BOT_TOKEN # Telegram bot token
ANTHROPIC_API_KEY # Claude API key (only needed for api mode)
"""
import os
import sys
import subprocess
from pathlib import Path
class PreflightCheck:
"""Performs environment checks before launching the bot."""
def __init__(self):
self.warnings = []
self.errors = []
def check_nodejs(self) -> bool:
"""Check if Node.js is available (required for agent-sdk mode)."""
try:
result = subprocess.run(
["node", "--version"],
capture_output=True,
text=True,
timeout=5
)
if result.returncode == 0:
version = result.stdout.strip()
print(f"✓ Node.js found: {version}")
return True
else:
self.warnings.append("Node.js not found (required for agent-sdk mode)")
return False
except FileNotFoundError:
self.warnings.append("Node.js not found (required for agent-sdk mode)")
return False
except Exception as e:
self.warnings.append(f"Error checking Node.js: {e}")
return False
def check_claude_cli_auth(self) -> bool:
"""Check if Claude CLI is authenticated."""
try:
result = subprocess.run(
["claude", "auth", "status"],
capture_output=True,
text=True,
timeout=5
)
if result.returncode == 0 and "Authenticated" in result.stdout:
print("✓ Claude CLI authenticated")
return True
else:
self.warnings.append("Claude CLI not authenticated (run: claude auth login)")
return False
except FileNotFoundError:
self.warnings.append("Claude CLI not found (install from: https://claude.ai/download)")
return False
except Exception as e:
self.warnings.append(f"Error checking Claude CLI: {e}")
return False
def check_python_version(self) -> bool:
"""Check if Python version is compatible."""
version_info = sys.version_info
if version_info >= (3, 10):
print(f"✓ Python {version_info.major}.{version_info.minor}.{version_info.micro}")
return True
else:
self.errors.append(
f"Python 3.10+ required (found {version_info.major}.{version_info.minor}.{version_info.micro})"
)
return False
def check_env_file(self) -> bool:
"""Check if .env file exists (for API key storage)."""
env_path = Path(".env")
if env_path.exists():
print(f"✓ .env file found")
return True
else:
self.warnings.append(".env file not found (create one if using API mode)")
return False
def check_config_file(self) -> bool:
"""Check if adapter config exists."""
config_path = Path("config/adapters.local.yaml")
if config_path.exists():
print(f"✓ Config file found: {config_path}")
return True
else:
self.warnings.append(
"config/adapters.local.yaml not found (run: python ajarbot.py --init)"
)
return False
def set_default_llm_mode(self):
"""Set default LLM mode to agent-sdk if not specified."""
if "AJARBOT_LLM_MODE" not in os.environ:
os.environ["AJARBOT_LLM_MODE"] = "agent-sdk"
print(" Using LLM mode: agent-sdk (default)")
else:
mode = os.environ["AJARBOT_LLM_MODE"]
print(f" Using LLM mode: {mode} (from environment)")
def run_all_checks(self) -> bool:
"""Run all pre-flight checks. Returns True if safe to proceed."""
print("=" * 60)
print("Ajarbot Pre-Flight Checks")
print("=" * 60)
print()
# Critical checks
self.check_python_version()
# LLM mode dependent checks
llm_mode = os.environ.get("AJARBOT_LLM_MODE", "agent-sdk")
if llm_mode == "agent-sdk":
print("\n[Agent SDK Mode Checks]")
self.check_nodejs()
self.check_claude_cli_auth()
elif llm_mode == "api":
print("\n[API Mode Checks]")
has_env = self.check_env_file()
if has_env:
if not os.environ.get("ANTHROPIC_API_KEY"):
self.errors.append("ANTHROPIC_API_KEY not set in .env file (required for API mode)")
else:
self.errors.append(".env file with ANTHROPIC_API_KEY required for API mode")
# Common checks
print("\n[Configuration Checks]")
self.check_config_file()
# Display results
print()
print("=" * 60)
if self.errors:
print("ERRORS (must fix before running):")
for error in self.errors:
print(f"{error}")
print()
return False
if self.warnings:
print("WARNINGS (optional, but recommended):")
for warning in self.warnings:
print(f"{warning}")
print()
print("Pre-flight checks complete!")
print("=" * 60)
print()
return True
def main():
"""Main entry point with pre-flight checks."""
# Set default LLM mode before checks
checker = PreflightCheck()
checker.set_default_llm_mode()
# Special commands that bypass pre-flight checks
bypass_commands = ["--init", "--help", "-h"]
if any(arg in sys.argv for arg in bypass_commands):
# Import and run bot_runner directly
from bot_runner import main as bot_main
bot_main()
return
# Run pre-flight checks for normal operation
if not checker.run_all_checks():
print("\nPre-flight checks failed. Please fix the errors above.")
sys.exit(1)
# All checks passed - delegate to bot_runner
print("Launching ajarbot...\n")
from bot_runner import main as bot_main
bot_main()
if __name__ == "__main__":
main()


@@ -77,7 +77,6 @@ class BotRunner:
self.agent = Agent(
provider="claude",
workspace_dir="./memory_workspace",
-enable_heartbeat=False,
)
print("[Setup] Agent initialized")


@@ -1,307 +0,0 @@
"""
Custom Pulse & Brain configuration.
Define your own pulse checks (zero cost) and brain tasks (uses tokens).
"""
import subprocess
from typing import Any, Dict, List
import requests
from pulse_brain import BrainTask, CheckType, PulseCheck
# === PULSE CHECKS (Pure Python, Zero Cost) ===
def check_server_uptime() -> Dict[str, Any]:
"""Check if server is responsive (pure Python, no agent)."""
try:
response = requests.get(
"http://localhost:8000/health", timeout=5
)
status = "ok" if response.status_code == 200 else "error"
return {
"status": status,
"message": f"Server responded: {response.status_code}",
}
except Exception as e:
return {
"status": "error",
"message": f"Server unreachable: {e}",
}
def check_docker_containers() -> Dict[str, Any]:
"""Check Docker container status (pure Python)."""
try:
result = subprocess.run(
["docker", "ps", "--format", "{{.Status}}"],
capture_output=True,
text=True,
timeout=5,
)
if result.returncode != 0:
return {
"status": "error",
"message": "Docker check failed",
}
unhealthy = sum(
1
for line in result.stdout.split("\n")
if "unhealthy" in line.lower()
)
if unhealthy > 0:
message = f"{unhealthy} unhealthy container(s)"
else:
message = "All containers healthy"
return {
"status": "error" if unhealthy > 0 else "ok",
"unhealthy_count": unhealthy,
"message": message,
}
except Exception as e:
return {"status": "error", "message": str(e)}
def check_plex_server() -> Dict[str, Any]:
"""Check if Plex is running (pure Python)."""
try:
response = requests.get(
"http://localhost:32400/identity", timeout=5
)
is_ok = response.status_code == 200
return {
"status": "ok" if is_ok else "warn",
"message": (
"Plex server is running"
if is_ok
else "Plex unreachable"
),
}
except Exception as e:
return {
"status": "warn",
"message": f"Plex check failed: {e}",
}
def check_unifi_controller() -> Dict[str, Any]:
"""Check UniFi controller (pure Python)."""
try:
requests.get(
"https://localhost:8443", verify=False, timeout=5
)
return {
"status": "ok",
"message": "UniFi controller responding",
}
except Exception as e:
return {
"status": "error",
"message": f"UniFi unreachable: {e}",
}
def check_gpu_temperature() -> Dict[str, Any]:
"""Check GPU temperature (pure Python, requires nvidia-smi)."""
try:
result = subprocess.run(
[
"nvidia-smi",
"--query-gpu=temperature.gpu",
"--format=csv,noheader",
],
capture_output=True,
text=True,
timeout=5,
)
if result.returncode != 0:
return {"status": "ok", "message": "GPU check skipped"}
temp = int(result.stdout.strip())
if temp > 85:
status = "error"
elif temp > 75:
status = "warn"
else:
status = "ok"
return {
"status": status,
"temperature": temp,
"message": f"GPU temperature: {temp}C",
}
except Exception:
return {"status": "ok", "message": "GPU check skipped"}
def check_star_citizen_patch() -> Dict[str, Any]:
"""Check for Star Citizen patches (pure Python, placeholder)."""
return {
"status": "ok",
"new_patch": False,
"message": "No new Star Citizen patches",
}
# === CUSTOM PULSE CHECKS ===
CUSTOM_PULSE_CHECKS: List[PulseCheck] = [
PulseCheck(
"server-uptime", check_server_uptime,
interval_seconds=60,
),
PulseCheck(
"docker-health", check_docker_containers,
interval_seconds=120,
),
PulseCheck(
"plex-status", check_plex_server,
interval_seconds=300,
),
PulseCheck(
"unifi-controller", check_unifi_controller,
interval_seconds=300,
),
PulseCheck(
"gpu-temp", check_gpu_temperature,
interval_seconds=60,
),
PulseCheck(
"star-citizen", check_star_citizen_patch,
interval_seconds=3600,
),
]
# === BRAIN TASKS (Agent/SDK, Uses Tokens) ===
CUSTOM_BRAIN_TASKS: List[BrainTask] = [
BrainTask(
name="server-medic",
check_type=CheckType.CONDITIONAL,
prompt_template=(
"Server is down!\n\n"
"Status: $message\n\n"
"Please analyze:\n"
"1. What could cause this?\n"
"2. What should I check first?\n"
"3. Should I restart services?\n\n"
"Be concise and actionable."
),
condition_func=lambda data: data.get("status") == "error",
send_to_platform="slack",
send_to_channel="C_ALERTS",
),
BrainTask(
name="docker-diagnostician",
check_type=CheckType.CONDITIONAL,
prompt_template=(
"Docker containers unhealthy!\n\n"
"Unhealthy count: $unhealthy_count\n\n"
"Please diagnose:\n"
"1. What might cause container health issues?\n"
"2. Should I restart them?\n"
"3. What logs should I check?"
),
condition_func=lambda data: (
data.get("unhealthy_count", 0) > 0
),
send_to_platform="telegram",
send_to_channel="123456789",
),
BrainTask(
name="gpu-thermal-advisor",
check_type=CheckType.CONDITIONAL,
prompt_template=(
"GPU temperature is high!\n\n"
"Current: $temperatureC\n\n"
"Please advise:\n"
"1. Is this dangerous?\n"
"2. What can I do to cool it down?\n"
"3. Should I stop current workloads?"
),
condition_func=lambda data: (
data.get("temperature", 0) > 80
),
),
BrainTask(
name="homelab-briefing",
check_type=CheckType.SCHEDULED,
schedule_time="08:00",
prompt_template=(
"Good morning! Homelab status report:\n\n"
"Server: $server_message\n"
"Docker: $docker_message\n"
"Plex: $plex_message\n"
"UniFi: $unifi_message\n"
"Star Citizen: $star_citizen_message\n\n"
"Overnight summary:\n"
"1. Any services restart?\n"
"2. Notable events?\n"
"3. Action items for today?\n\n"
"Keep it brief and friendly."
),
send_to_platform="slack",
send_to_channel="C_HOMELAB",
),
BrainTask(
name="homelab-evening-report",
check_type=CheckType.SCHEDULED,
schedule_time="22:00",
prompt_template=(
"Evening homelab report:\n\n"
"Today's status:\n"
"- Server uptime: $server_message\n"
"- Docker health: $docker_message\n"
"- GPU temp: $gpu_message\n\n"
"Summary:\n"
"1. Any issues today?\n"
"2. Services that needed attention?\n"
"3. Overnight monitoring notes?"
),
send_to_platform="telegram",
send_to_channel="123456789",
),
BrainTask(
name="patch-notifier",
check_type=CheckType.CONDITIONAL,
prompt_template=(
"New Star Citizen patch detected!\n\n"
"Please:\n"
"1. Summarize patch notes (if available)\n"
"2. Note any breaking changes\n"
"3. Recommend if I should update now or wait"
),
condition_func=lambda data: data.get("new_patch", False),
send_to_platform="discord",
send_to_channel="GAMING_CHANNEL",
),
]
def apply_custom_config(pulse_brain: Any) -> None:
"""Apply custom configuration to PulseBrain instance."""
existing_pulse_names = {c.name for c in pulse_brain.pulse_checks}
for check in CUSTOM_PULSE_CHECKS:
if check.name not in existing_pulse_names:
pulse_brain.pulse_checks.append(check)
existing_brain_names = {t.name for t in pulse_brain.brain_tasks}
for task in CUSTOM_BRAIN_TASKS:
if task.name not in existing_brain_names:
pulse_brain.brain_tasks.append(task)
print(
f"Applied custom config: "
f"{len(CUSTOM_PULSE_CHECKS)} pulse checks, "
f"{len(CUSTOM_BRAIN_TASKS)} brain tasks"
)


@@ -5,7 +5,16 @@ tasks:
# Morning briefing - sent to Slack/Telegram
- name: morning-weather
prompt: |
-Current weather report for my location. Just the weather - keep it brief.
+Check the user profile (Jordan.md) for the location (Centennial, CO). Use the get_weather tool with OpenWeatherMap API to fetch the current weather. Format the report as:
+🌤️ **Weather Report for Centennial, CO**
+- Current: [current]°F
+- High: [high]°F
+- Low: [low]°F
+- Conditions: [conditions]
+- Recommendation: [brief clothing/activity suggestion]
+Keep it brief and friendly!
schedule: "daily 06:00"
enabled: true
send_to_platform: "telegram"


@@ -1,192 +0,0 @@
"""Simple Heartbeat System - Periodic agent awareness checks."""
import threading
import time
from datetime import datetime
from typing import Callable, Optional
from llm_interface import LLMInterface
from memory_system import MemorySystem
# Default heartbeat checklist template
_HEARTBEAT_TEMPLATE = """\
# Heartbeat Checklist
Run these checks every heartbeat cycle:
## Memory Checks
- Review pending tasks (status = pending)
- Check if any tasks have been pending > 24 hours
## System Checks
- Verify memory system is synced
- Log heartbeat ran successfully
## Notes
- Return HEARTBEAT_OK if nothing needs attention
- Only alert if something requires user action
"""
# Maximum number of pending tasks to include in context
MAX_PENDING_TASKS_IN_CONTEXT = 5
# Maximum characters of soul content to include in context
SOUL_PREVIEW_LENGTH = 200
class Heartbeat:
"""Periodic background checks with LLM awareness."""
def __init__(
self,
memory: MemorySystem,
llm: LLMInterface,
interval_minutes: int = 30,
active_hours: tuple = (8, 22),
) -> None:
self.memory = memory
self.llm = llm
self.interval = interval_minutes * 60
self.active_hours = active_hours
self.running = False
self.thread: Optional[threading.Thread] = None
self.on_alert: Optional[Callable[[str], None]] = None
self.heartbeat_file = memory.workspace_dir / "HEARTBEAT.md"
if not self.heartbeat_file.exists():
self.heartbeat_file.write_text(
_HEARTBEAT_TEMPLATE, encoding="utf-8"
)
def start(self) -> None:
"""Start heartbeat in background thread."""
if self.running:
return
self.running = True
self.thread = threading.Thread(
target=self._heartbeat_loop, daemon=True
)
self.thread.start()
print(f"Heartbeat started (every {self.interval // 60}min)")
def stop(self) -> None:
"""Stop heartbeat."""
self.running = False
if self.thread:
self.thread.join()
print("Heartbeat stopped")
def _is_active_hours(self) -> bool:
"""Check if current time is within active hours."""
current_hour = datetime.now().hour
start, end = self.active_hours
return start <= current_hour < end
def _heartbeat_loop(self) -> None:
"""Main heartbeat loop."""
while self.running:
try:
if self._is_active_hours():
self._run_heartbeat()
else:
start, end = self.active_hours
print(
f"Heartbeat skipped "
f"(outside active hours {start}-{end})"
)
except Exception as e:
print(f"Heartbeat error: {e}")
time.sleep(self.interval)
def _build_context(self) -> str:
"""Build system context for heartbeat check."""
soul = self.memory.get_soul()
pending_tasks = self.memory.get_tasks(status="pending")
context_parts = [
"# HEARTBEAT CHECK",
f"Current time: {datetime.now().isoformat()}",
f"\nSOUL:\n{soul[:SOUL_PREVIEW_LENGTH]}...",
f"\nPending tasks: {len(pending_tasks)}",
]
if pending_tasks:
context_parts.append("\nPending Tasks:")
for task in pending_tasks[:MAX_PENDING_TASKS_IN_CONTEXT]:
context_parts.append(f"- [{task['id']}] {task['title']}")
return "\n".join(context_parts)
def _run_heartbeat(self) -> None:
"""Execute one heartbeat cycle."""
timestamp = datetime.now().strftime("%H:%M:%S")
print(f"Heartbeat running ({timestamp})")
checklist = self.heartbeat_file.read_text(encoding="utf-8")
system = self._build_context()
messages = [{
"role": "user",
"content": (
f"{checklist}\n\n"
"Process this checklist. If nothing needs attention, "
"respond with EXACTLY 'HEARTBEAT_OK'. If something "
"needs attention, describe it briefly."
),
}]
response = self.llm.chat(messages, system=system, max_tokens=500)
if response.strip() != "HEARTBEAT_OK":
print(f"Heartbeat alert: {response[:100]}...")
if self.on_alert:
self.on_alert(response)
self.memory.write_memory(
f"## Heartbeat Alert\n{response}", daily=True
)
else:
print("Heartbeat OK")
def check_now(self) -> str:
"""Run heartbeat check immediately (for testing)."""
print("Running immediate heartbeat check...")
checklist = self.heartbeat_file.read_text(encoding="utf-8")
pending_tasks = self.memory.get_tasks(status="pending")
soul = self.memory.get_soul()
system = (
f"Time: {datetime.now().isoformat()}\n"
f"SOUL: {soul[:SOUL_PREVIEW_LENGTH]}...\n"
f"Pending tasks: {len(pending_tasks)}"
)
messages = [{
"role": "user",
"content": (
f"{checklist}\n\n"
"Process this checklist. "
"Return HEARTBEAT_OK if nothing needs attention."
),
}]
return self.llm.chat(messages, system=system, max_tokens=500)
if __name__ == "__main__":
memory = MemorySystem()
llm = LLMInterface(provider="claude")
heartbeat = Heartbeat(
memory, llm, interval_minutes=30, active_hours=(8, 22)
)
def on_alert(message: str) -> None:
print(f"\nALERT: {message}\n")
heartbeat.on_alert = on_alert
result = heartbeat.check_now()
print(f"\nResult: {result}")


@@ -1,20 +1,41 @@
"""LLM Interface - Claude API, GLM, and other models."""
"""LLM Interface - Claude API, GLM, and other models.
Supports three modes for Claude:
1. Agent SDK (uses Pro subscription) - DEFAULT - Set USE_AGENT_SDK=true (default)
2. Direct API (pay-per-token) - Set USE_DIRECT_API=true
3. Legacy: Local Claude Code server - Set USE_CLAUDE_CODE_SERVER=true (deprecated)
"""
import os
from typing import Any, Dict, List, Optional
import requests
from anthropic import Anthropic
from anthropic.types import Message
from anthropic.types import Message, ContentBlock, TextBlock, ToolUseBlock, Usage
from usage_tracker import UsageTracker
# Try to import Agent SDK (optional dependency)
try:
from claude_agent_sdk import AgentSDK
import anyio
AGENT_SDK_AVAILABLE = True
except ImportError:
AGENT_SDK_AVAILABLE = False
# API key environment variable names by provider
_API_KEY_ENV_VARS = {
"claude": "ANTHROPIC_API_KEY",
"glm": "GLM_API_KEY",
}
# Mode selection (priority order: USE_DIRECT_API > USE_CLAUDE_CODE_SERVER > default to Agent SDK)
_USE_DIRECT_API = os.getenv("USE_DIRECT_API", "false").lower() == "true"
_CLAUDE_CODE_SERVER_URL = os.getenv("CLAUDE_CODE_SERVER_URL", "http://localhost:8000")
_USE_CLAUDE_CODE_SERVER = os.getenv("USE_CLAUDE_CODE_SERVER", "false").lower() == "true"
# Agent SDK is the default if available and no other mode is explicitly enabled
_USE_AGENT_SDK = os.getenv("USE_AGENT_SDK", "true").lower() == "true"
# Default models by provider
_DEFAULT_MODELS = {
"claude": "claude-haiku-4-5-20251001", # 12x cheaper than Sonnet!
@@ -39,12 +60,46 @@ class LLMInterface:
)
self.model = _DEFAULT_MODELS.get(provider, "")
self.client: Optional[Anthropic] = None
self.agent_sdk: Optional[Any] = None
-# Usage tracking
-self.tracker = UsageTracker() if track_usage else None
# Determine mode (priority: direct API > legacy server > agent SDK)
if provider == "claude":
if _USE_DIRECT_API:
self.mode = "direct_api"
elif _USE_CLAUDE_CODE_SERVER:
self.mode = "legacy_server"
elif _USE_AGENT_SDK and AGENT_SDK_AVAILABLE:
self.mode = "agent_sdk"
else:
# Fallback to direct API if Agent SDK not available
self.mode = "direct_api"
if _USE_AGENT_SDK and not AGENT_SDK_AVAILABLE:
print("[LLM] Warning: Agent SDK not available, falling back to Direct API")
print("[LLM] Install with: pip install claude-agent-sdk")
else:
self.mode = "direct_api" # Non-Claude providers use direct API
+# Usage tracking (disabled when using Agent SDK or legacy server)
+self.tracker = UsageTracker() if (track_usage and self.mode == "direct_api") else None
# Initialize based on mode
if provider == "claude":
if self.mode == "agent_sdk":
print(f"[LLM] Using Claude Agent SDK (Pro subscription)")
self.agent_sdk = AgentSDK()
elif self.mode == "direct_api":
print(f"[LLM] Using Direct API (pay-per-token)")
self.client = Anthropic(api_key=self.api_key)
elif self.mode == "legacy_server":
print(f"[LLM] Using Claude Code server at {_CLAUDE_CODE_SERVER_URL} (Pro subscription)")
# Verify server is running
try:
response = requests.get(f"{_CLAUDE_CODE_SERVER_URL}/", timeout=2)
response.raise_for_status()
print(f"[LLM] Claude Code server is running: {response.json()}")
except Exception as e:
print(f"[LLM] Warning: Could not connect to Claude Code server: {e}")
print(f"[LLM] Note: Claude Code server mode is deprecated. Using Agent SDK instead.")
def chat(
self,
@@ -58,6 +113,41 @@ class LLMInterface:
Exception: If the API call fails or returns an unexpected response.
"""
if self.provider == "claude":
# Agent SDK mode (Pro subscription)
if self.mode == "agent_sdk":
try:
# Use anyio to bridge async SDK to sync interface
response = anyio.from_thread.run(
self._agent_sdk_chat,
messages,
system,
max_tokens
)
return response
except Exception as e:
raise Exception(f"Agent SDK error: {e}")
# Legacy Claude Code server (Pro subscription)
elif self.mode == "legacy_server":
try:
payload = {
"messages": [{"role": m["role"], "content": m["content"]} for m in messages],
"system": system,
"max_tokens": max_tokens
}
response = requests.post(
f"{_CLAUDE_CODE_SERVER_URL}/v1/chat",
json=payload,
timeout=120
)
response.raise_for_status()
data = response.json()
return data.get("content", "")
except Exception as e:
raise Exception(f"Claude Code server error: {e}") from e
# Direct API (pay-per-token)
elif self.mode == "direct_api":
response = self.client.messages.create(
model=self.model,
max_tokens=max_tokens,
@@ -101,6 +191,102 @@ class LLMInterface:
raise ValueError(f"Unsupported provider: {self.provider}")
async def _agent_sdk_chat(
self,
messages: List[Dict],
system: Optional[str],
max_tokens: int
) -> str:
"""Internal async method for Agent SDK chat (called via anyio bridge)."""
response = await self.agent_sdk.chat(
messages=messages,
system=system,
max_tokens=max_tokens,
model=self.model
)
# Extract text from response
if isinstance(response, dict):
return response.get("content", "")
return str(response)
async def _agent_sdk_chat_with_tools(
self,
messages: List[Dict],
tools: List[Dict[str, Any]],
system: Optional[str],
max_tokens: int
) -> Message:
"""Internal async method for Agent SDK chat with tools (called via anyio bridge)."""
response = await self.agent_sdk.chat(
messages=messages,
tools=tools,
system=system,
max_tokens=max_tokens,
model=self.model
)
# Convert Agent SDK response to anthropic.types.Message format
return self._convert_sdk_response_to_message(response)
def _convert_sdk_response_to_message(self, sdk_response: Dict[str, Any]) -> Message:
"""Convert Agent SDK response to anthropic.types.Message format.
This ensures compatibility with agent.py's existing tool loop.
"""
# Extract content blocks
content_blocks = []
raw_content = sdk_response.get("content", [])
if isinstance(raw_content, str):
# Simple text response
content_blocks = [TextBlock(type="text", text=raw_content)]
elif isinstance(raw_content, list):
# List of content blocks
for block in raw_content:
if isinstance(block, dict):
if block.get("type") == "text":
content_blocks.append(TextBlock(
type="text",
text=block.get("text", "")
))
elif block.get("type") == "tool_use":
content_blocks.append(ToolUseBlock(
type="tool_use",
id=block.get("id", ""),
name=block.get("name", ""),
input=block.get("input", {})
))
# Extract usage information
usage_data = sdk_response.get("usage", {})
usage = Usage(
input_tokens=usage_data.get("input_tokens", 0),
output_tokens=usage_data.get("output_tokens", 0)
)
# Create Message object
# Note: We create a minimal Message-compatible object
# The Message class from anthropic.types is read-only, so we create a mock
# Capture self.model before defining inner class
model_name = sdk_response.get("model", self.model)
class MessageLike:
def __init__(self, content, stop_reason, usage, model):
self.content = content
self.stop_reason = stop_reason
self.usage = usage
self.id = sdk_response.get("id", "sdk_message")
self.model = model
self.role = "assistant"
self.type = "message"
return MessageLike(
content=content_blocks,
stop_reason=sdk_response.get("stop_reason", "end_turn"),
usage=usage,
model=model_name
)
def chat_with_tools(
self,
messages: List[Dict],
@@ -117,6 +303,55 @@ class LLMInterface:
if self.provider != "claude":
raise ValueError("Tool use only supported for Claude provider")
# Agent SDK mode (Pro subscription)
if self.mode == "agent_sdk":
try:
# Use anyio to bridge async SDK to sync interface
response = anyio.from_thread.run(
self._agent_sdk_chat_with_tools,
messages,
tools,
system,
max_tokens
)
return response
except Exception as e:
raise Exception(f"Agent SDK error: {e}") from e
# Legacy Claude Code server (Pro subscription)
elif self.mode == "legacy_server":
try:
payload = {
"messages": messages,
"tools": tools,
"system": system,
"max_tokens": max_tokens
}
response = requests.post(
f"{_CLAUDE_CODE_SERVER_URL}/v1/chat/tools",
json=payload,
timeout=120
)
response.raise_for_status()
# Convert response to Message-like object
data = response.json()
# Create a mock Message object with the response
class MockMessage:
def __init__(self, data):
self.content = data.get("content", [])
self.stop_reason = data.get("stop_reason", "end_turn")
from types import SimpleNamespace  # local import keeps the legacy path self-contained
usage_data = data.get("usage", {})
self.usage = SimpleNamespace(
    input_tokens=usage_data.get("input_tokens", 0),
    output_tokens=usage_data.get("output_tokens", 0),
)
return MockMessage(data)
except Exception as e:
raise Exception(f"Claude Code server error: {e}") from e
# Direct API (pay-per-token)
elif self.mode == "direct_api":
# Enable caching only for Sonnet models (not worth it for Haiku)
enable_caching = use_cache and "sonnet" in self.model.lower()

pulse_brain.py
@@ -1,487 +0,0 @@
"""
Pulse & Brain Architecture for Ajarbot.
PULSE (Pure Python):
- Runs every N seconds
- Zero API token cost
- Checks: server health, disk space, log files, task queue
- Only wakes the BRAIN when needed
BRAIN (Agent/SDK):
- Only invoked when:
1. Pulse detects an issue (error logs, low disk space, etc.)
2. Scheduled time for content generation (morning briefing)
3. Manual trigger requested
This is much more efficient than running Agent in a loop.
"""
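The division of labor the docstring describes can be sketched independently of this module: a cheap check runs every tick, and the expensive call happens only when a condition trips (all names below are illustrative, not part of the deleted module):

```python
from typing import Any, Callable, Dict

def run_pulse(check: Callable[[], Dict[str, Any]],
              brain: Callable[[Dict[str, Any]], None],
              ticks: int) -> int:
    """Run the cheap check each tick; invoke the costly brain only on errors."""
    invocations = 0
    for _ in range(ticks):
        result = check()
        if result.get("status") == "error":
            brain(result)  # the only point where tokens would be spent
            invocations += 1
    return invocations

# Toy check that reports an error on the third tick only
state = {"tick": 0}
def flaky_check() -> Dict[str, Any]:
    state["tick"] += 1
    return {"status": "error" if state["tick"] == 3 else "ok"}

print(run_pulse(flaky_check, lambda data: None, ticks=5))  # 1
```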
import asyncio
import shutil
import string
import threading
import time
import traceback
from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from typing import Any, Callable, Dict, List, Optional
from agent import Agent
# How many seconds between brain invocations to avoid duplicates
_BRAIN_COOLDOWN_SECONDS = 3600
class CheckType(Enum):
"""Type of check to perform."""
PURE_PYTHON = "pure_python"
CONDITIONAL = "conditional"
SCHEDULED = "scheduled"
@dataclass
class PulseCheck:
"""A check performed by the Pulse (pure Python)."""
name: str
check_func: Callable[[], Dict[str, Any]]
interval_seconds: int = 60
last_run: Optional[datetime] = None
@dataclass
class BrainTask:
"""A task that requires the Brain (Agent/SDK)."""
name: str
check_type: CheckType
prompt_template: str
# For CONDITIONAL: condition to check
condition_func: Optional[Callable[[Dict[str, Any]], bool]] = None
# For SCHEDULED: when to run
schedule_time: Optional[str] = None # "08:00", "18:00", etc.
last_brain_run: Optional[datetime] = None
# Output options
send_to_platform: Optional[str] = None
send_to_channel: Optional[str] = None
_STATUS_ICONS = {"ok": "+", "warn": "!", "error": "x"}
class PulseBrain:
"""
Hybrid monitoring system with zero-cost pulse and smart brain.
The Pulse runs continuously checking system health (zero tokens).
The Brain only activates when needed (uses tokens wisely).
"""
def __init__(
self,
agent: Agent,
pulse_interval: int = 60,
enable_defaults: bool = True,
) -> None:
"""
Initialize Pulse & Brain system.
Args:
agent: The Agent instance to use for brain tasks.
pulse_interval: How often pulse loop runs (seconds).
enable_defaults: Load example checks. Set False to start clean.
"""
self.agent = agent
self.pulse_interval = pulse_interval
self.pulse_checks: List[PulseCheck] = []
self.brain_tasks: List[BrainTask] = []
self.running = False
self.thread: Optional[threading.Thread] = None
self.adapters: Dict[str, Any] = {}
# State tracking (protected by lock)
self._lock = threading.Lock()
self.pulse_data: Dict[str, Any] = {}
self.brain_invocations = 0
if enable_defaults:
self._setup_default_checks()
print("[PulseBrain] Loaded default example checks")
print(
" To start clean: "
"PulseBrain(agent, enable_defaults=False)"
)
def _setup_default_checks(self) -> None:
"""Set up default pulse checks and brain tasks."""
def check_disk_space() -> Dict[str, Any]:
"""Check disk space (pure Python, no agent)."""
try:
usage = shutil.disk_usage(".")
percent_used = (usage.used / usage.total) * 100
gb_free = usage.free / (1024 ** 3)
if percent_used > 90:
status = "error"
elif percent_used > 80:
status = "warn"
else:
status = "ok"
return {
"status": status,
"percent_used": percent_used,
"gb_free": gb_free,
"message": (
f"Disk: {percent_used:.1f}% used, "
f"{gb_free:.1f} GB free"
),
}
except Exception as e:
return {"status": "error", "message": str(e)}
def check_memory_tasks() -> Dict[str, Any]:
"""Check for stale tasks (pure Python)."""
try:
pending = self.agent.memory.get_tasks(status="pending")
stale_count = len(pending)
status = "warn" if stale_count > 5 else "ok"
return {
"status": status,
"pending_count": len(pending),
"stale_count": stale_count,
"message": (
f"{len(pending)} pending tasks, "
f"{stale_count} stale"
),
}
except Exception as e:
return {"status": "error", "message": str(e)}
def check_log_errors() -> Dict[str, Any]:
"""Check recent logs for errors (pure Python)."""
return {
"status": "ok",
"errors_found": 0,
"message": "No errors in recent logs",
}
self.pulse_checks.extend([
PulseCheck(
"disk-space", check_disk_space,
interval_seconds=300,
),
PulseCheck(
"memory-tasks", check_memory_tasks,
interval_seconds=600,
),
PulseCheck(
"log-errors", check_log_errors,
interval_seconds=60,
),
])
self.brain_tasks.extend([
BrainTask(
name="disk-space-advisor",
check_type=CheckType.CONDITIONAL,
prompt_template=(
"Disk space is running low:\n"
"- Used: {percent_used:.1f}%\n"
"- Free: {gb_free:.1f} GB\n\n"
"Please analyze:\n"
"1. Is this critical?\n"
"2. What files/directories should I check?\n"
"3. Should I set up automated cleanup?\n\n"
"Be concise and actionable."
),
condition_func=lambda data: (
data.get("status") == "error"
),
),
BrainTask(
name="error-analyst",
check_type=CheckType.CONDITIONAL,
prompt_template=(
"Errors detected in logs:\n"
"{message}\n\n"
"Please analyze:\n"
"1. What does this error mean?\n"
"2. How critical is it?\n"
"3. What should I do to fix it?"
),
condition_func=lambda data: (
data.get("errors_found", 0) > 0
),
),
BrainTask(
name="morning-briefing",
check_type=CheckType.SCHEDULED,
schedule_time="08:00",
prompt_template=(
"Good morning! Please provide a brief summary:\n\n"
"1. System health "
"(disk: {disk_space_message}, "
"tasks: {tasks_message})\n"
"2. Any pending tasks that need attention\n"
"3. Priorities for today\n"
"4. A motivational message\n\n"
"Keep it brief and actionable."
),
),
BrainTask(
name="evening-summary",
check_type=CheckType.SCHEDULED,
schedule_time="18:00",
prompt_template=(
"Good evening! Daily wrap-up:\n\n"
"1. What was accomplished today\n"
"2. Tasks still pending: {pending_count}\n"
"3. Any issues detected (disk, errors, etc.)\n"
"4. Preview for tomorrow\n\n"
"Keep it concise."
),
),
])
def add_adapter(self, platform: str, adapter: Any) -> None:
"""Register an adapter for sending messages."""
self.adapters[platform] = adapter
def start(self) -> None:
"""Start the Pulse & Brain system."""
if self.running:
return
self.running = True
self.thread = threading.Thread(
target=self._pulse_loop, daemon=True
)
self.thread.start()
print("=" * 60)
print("PULSE & BRAIN Started")
print("=" * 60)
print(f"\nPulse interval: {self.pulse_interval}s")
print(f"Pulse checks: {len(self.pulse_checks)}")
print(f"Brain tasks: {len(self.brain_tasks)}\n")
for check in self.pulse_checks:
print(
f" [Pulse] {check.name} "
f"(every {check.interval_seconds}s)"
)
for task in self.brain_tasks:
if task.check_type == CheckType.SCHEDULED:
print(
f" [Brain] {task.name} "
f"(scheduled {task.schedule_time})"
)
else:
print(f" [Brain] {task.name} (conditional)")
print("\n" + "=" * 60 + "\n")
def stop(self) -> None:
"""Stop the Pulse & Brain system."""
self.running = False
if self.thread:
self.thread.join()
print(
f"\nPULSE & BRAIN Stopped "
f"(Brain invoked {self.brain_invocations} times)"
)
def _pulse_loop(self) -> None:
"""Main pulse loop (runs continuously, zero cost)."""
while self.running:
try:
now = datetime.now()
for check in self.pulse_checks:
should_run = (
check.last_run is None
or (now - check.last_run).total_seconds()
>= check.interval_seconds
)
if not should_run:
continue
result = check.check_func()
check.last_run = now
# Thread-safe update of pulse_data
with self._lock:
self.pulse_data[check.name] = result
icon = _STATUS_ICONS.get(
result.get("status"), "?"
)
print(
f"[{icon}] {check.name}: "
f"{result.get('message', 'OK')}"
)
self._check_brain_tasks(now)
except Exception as e:
print(f"Pulse error: {e}")
traceback.print_exc()
time.sleep(self.pulse_interval)
def _check_brain_tasks(self, now: datetime) -> None:
"""Check if any brain tasks need to be invoked."""
for task in self.brain_tasks:
should_invoke = False
prompt_data: Dict[str, Any] = {}
if (
task.check_type == CheckType.CONDITIONAL
and task.condition_func
):
for check_name, check_data in self.pulse_data.items():
if task.condition_func(check_data):
should_invoke = True
prompt_data = check_data
print(
f"Condition met for brain task: "
f"{task.name}"
)
break
elif (
task.check_type == CheckType.SCHEDULED
and task.schedule_time
):
target_time = datetime.strptime(
task.schedule_time, "%H:%M"
).time()
current_time = now.time()
time_match = (
current_time.hour == target_time.hour
and current_time.minute == target_time.minute
)
already_ran_recently = (
task.last_brain_run
and (now - task.last_brain_run).total_seconds()
< _BRAIN_COOLDOWN_SECONDS
)
if time_match and not already_ran_recently:
should_invoke = True
prompt_data = self._gather_scheduled_data()
print(
f"Scheduled time for brain task: {task.name}"
)
if should_invoke:
self._invoke_brain(task, prompt_data)
task.last_brain_run = now
def _gather_scheduled_data(self) -> Dict[str, Any]:
"""Gather data from all pulse checks for scheduled brain tasks."""
disk_data = self.pulse_data.get("disk-space", {})
task_data = self.pulse_data.get("memory-tasks", {})
return {
"disk_space_message": disk_data.get(
"message", "Unknown"
),
"tasks_message": task_data.get("message", "Unknown"),
"pending_count": task_data.get("pending_count", 0),
**disk_data,
}
def _invoke_brain(
self, task: BrainTask, data: Dict[str, Any]
) -> None:
"""Invoke the Brain (Agent/SDK) for a task."""
print(f"Invoking brain: {task.name}")
# Thread-safe increment
with self._lock:
self.brain_invocations += 1
try:
# Use safe_substitute to prevent format string injection
# Convert all data values to strings first
safe_data = {k: str(v) for k, v in data.items()}
template = string.Template(task.prompt_template)
prompt = template.safe_substitute(safe_data)
response = self.agent.chat(
user_message=prompt, username="pulse-brain"
)
print(f"Brain response ({len(response)} chars)")
print(f" Preview: {response[:100]}...")
if task.send_to_platform and task.send_to_channel:
asyncio.run(self._send_to_platform(task, response))
except Exception as e:
print(f"Brain error: {e}")
traceback.print_exc()
async def _send_to_platform(
self, task: BrainTask, response: str
) -> None:
"""Send brain output to messaging platform."""
adapter = self.adapters.get(task.send_to_platform)
if not adapter:
return
from adapters.base import OutboundMessage
message = OutboundMessage(
platform=task.send_to_platform,
channel_id=task.send_to_channel,
text=f"**{task.name}**\n\n{response}",
)
result = await adapter.send_message(message)
if result.get("success"):
print(f"Sent to {task.send_to_platform}")
def get_status(self) -> Dict[str, Any]:
"""Get current status of Pulse & Brain."""
# Thread-safe read of shared state
with self._lock:
return {
"running": self.running,
"pulse_interval": self.pulse_interval,
"brain_invocations": self.brain_invocations,
"pulse_checks": len(self.pulse_checks),
"brain_tasks": len(self.brain_tasks),
"latest_pulse_data": dict(self.pulse_data), # Copy
}
if __name__ == "__main__":
agent = Agent(
provider="claude",
workspace_dir="./memory_workspace",
enable_heartbeat=False,
)
pb = PulseBrain(agent, pulse_interval=10)
pb.start()
try:
print("Running... Press Ctrl+C to stop\n")
while True:
time.sleep(1)
except KeyboardInterrupt:
pb.stop()

pyproject.toml Normal file

@@ -0,0 +1,137 @@
[build-system]
requires = ["setuptools>=68.0", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "ajarbot"
version = "0.2.0"
description = "Multi-platform AI agent powered by Claude with memory, tools, and scheduling"
readme = "README.md"
requires-python = ">=3.10"
license = {text = "MIT"}
authors = [
{name = "Ajarbot Team"}
]
keywords = ["ai", "agent", "claude", "slack", "telegram", "chatbot", "assistant"]
classifiers = [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Operating System :: OS Independent",
"Topic :: Communications :: Chat",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
]
# Core dependencies (always installed)
dependencies = [
"watchdog>=3.0.0",
"anthropic>=0.40.0",
"requests>=2.31.0",
"fastembed>=0.7.0",
"usearch>=2.23.0",
"numpy>=2.0.0",
"pyyaml>=6.0.1",
"python-dotenv>=1.0.0",
]
[project.optional-dependencies]
# Slack adapter dependencies
slack = [
"slack-bolt>=1.18.0",
"slack-sdk>=3.23.0",
]
# Telegram adapter dependencies
telegram = [
"python-telegram-bot>=20.7",
]
# Google integration (Gmail + Calendar)
google = [
"google-auth>=2.23.0",
"google-auth-oauthlib>=1.1.0",
"google-auth-httplib2>=0.1.1",
"google-api-python-client>=2.108.0",
]
# Agent SDK mode (uses Claude Pro subscription)
agent-sdk = [
"claude-agent-sdk>=0.1.0",
"fastapi>=0.109.0",
"uvicorn>=0.27.0",
]
# All optional dependencies
all = [
"slack-bolt>=1.18.0",
"slack-sdk>=3.23.0",
"python-telegram-bot>=20.7",
"google-auth>=2.23.0",
"google-auth-oauthlib>=1.1.0",
"google-auth-httplib2>=0.1.1",
"google-api-python-client>=2.108.0",
"claude-agent-sdk>=0.1.0",
"fastapi>=0.109.0",
"uvicorn>=0.27.0",
]
# Development dependencies
dev = [
"pytest>=7.4.0",
"black>=23.0.0",
"ruff>=0.1.0",
"mypy>=1.5.0",
]
[project.urls]
Homepage = "https://github.com/yourusername/ajarbot"
Documentation = "https://github.com/yourusername/ajarbot#readme"
Repository = "https://github.com/yourusername/ajarbot"
Issues = "https://github.com/yourusername/ajarbot/issues"
[project.scripts]
# Main entry point - runs ajarbot.py
ajarbot = "ajarbot:main"
[tool.setuptools]
# Explicit package list (setuptools auto-discovery is not used here)
packages = ["config", "adapters", "adapters.slack", "adapters.telegram", "google_tools"]
# Top-level module backing the `ajarbot` console script
py-modules = ["ajarbot"]
[tool.setuptools.package-data]
# Include YAML config templates
config = ["*.yaml"]
[tool.black]
line-length = 88
target-version = ["py310", "py311", "py312"]
[tool.ruff]
line-length = 88
target-version = "py310"

[tool.ruff.lint]
select = [
    "E",   # pycodestyle errors
    "W",   # pycodestyle warnings
    "F",   # pyflakes
    "I",   # isort
    "B",   # flake8-bugbear
    "C4",  # flake8-comprehensions
]
ignore = [
    "E501",  # line too long (handled by black)
    "B008",  # do not perform function calls in argument defaults
]
[tool.mypy]
python_version = "3.10"
warn_return_any = true
warn_unused_configs = true
disallow_untyped_defs = false
disallow_incomplete_defs = false
check_untyped_defs = true
no_implicit_optional = true
warn_redundant_casts = true
warn_unused_ignores = true

requirements.txt

@@ -23,3 +23,8 @@ google-auth>=2.23.0
google-auth-oauthlib>=1.1.0
google-auth-httplib2>=0.1.1
google-api-python-client>=2.108.0
# Claude Agent SDK (uses Pro subscription instead of API tokens)
claude-agent-sdk>=0.1.0
anyio>=4.0.0
python-dotenv>=1.0.0

run.bat Normal file

@@ -0,0 +1,79 @@
@echo off
REM ========================================
REM Ajarbot - Windows One-Command Launcher
REM ========================================
REM
REM This script:
REM 1. Creates/activates virtual environment
REM 2. Installs dependencies if needed
REM 3. Runs ajarbot.py
REM
REM Usage:
REM run.bat Run the bot
REM run.bat --init Generate config template
REM run.bat --health Health check
REM
echo ========================================
echo Ajarbot Windows Launcher
echo ========================================
echo.
REM Check if virtual environment exists
if not exist "venv\Scripts\activate.bat" (
echo [Setup] Creating virtual environment...
python -m venv venv
if errorlevel 1 (
echo ERROR: Failed to create virtual environment
echo Please ensure Python 3.10+ is installed and in PATH
pause
exit /b 1
)
echo [Setup] Virtual environment created
)
REM Activate virtual environment
echo [Setup] Activating virtual environment...
call venv\Scripts\activate.bat
if errorlevel 1 (
echo ERROR: Failed to activate virtual environment
pause
exit /b 1
)
REM Check if dependencies are installed (check for a key package)
python -c "import anthropic" 2>nul
if errorlevel 1 (
echo.
echo [Setup] Installing dependencies...
echo This may take a few minutes on first run...
python -m pip install --upgrade pip
pip install -r requirements.txt
if errorlevel 1 (
echo ERROR: Failed to install dependencies
pause
exit /b 1
)
echo [Setup] Dependencies installed
echo.
)
REM Run ajarbot with all arguments passed through
echo [Launch] Starting ajarbot...
echo.
python ajarbot.py %*
REM Check exit code
if errorlevel 1 (
echo.
echo ========================================
echo Ajarbot exited with an error
echo ========================================
pause
exit /b 1
)
echo.
echo ========================================
echo Ajarbot stopped cleanly
echo ========================================


@@ -100,6 +100,21 @@ TOOL_DEFINITIONS = [
"required": ["command"],
},
},
{
"name": "get_weather",
"description": "Get current weather for a location using OpenWeatherMap API. Returns temperature, conditions, and brief summary.",
"input_schema": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "City name or 'City, Country' (e.g., 'Phoenix, US' or 'London, GB'). Defaults to Phoenix, AZ if not specified.",
"default": "Phoenix, US",
}
},
"required": [],
},
},
# Gmail tools
{
"name": "send_email",
@@ -345,6 +360,9 @@ def execute_tool(tool_name: str, tool_input: Dict[str, Any], healing_system: Any
command = tool_input["command"]
working_dir = tool_input.get("working_dir", ".")
return _run_command(command, working_dir)
elif tool_name == "get_weather":
location = tool_input.get("location", "Phoenix, US")
return _get_weather(location)
# Gmail tools
elif tool_name == "send_email":
return _send_email(
@@ -530,6 +548,65 @@ def _run_command(command: str, working_dir: str) -> str:
return f"Error running command: {str(e)}"
def _get_weather(location: str = "Phoenix, US") -> str:
"""Get current weather for a location using OpenWeatherMap API.
Args:
location: City name or 'City, Country' (e.g., 'Phoenix, US')
Returns:
Weather summary string
"""
import requests
api_key = os.getenv("OPENWEATHERMAP_API_KEY")
if not api_key:
return "Error: OPENWEATHERMAP_API_KEY not found in environment variables. Please add it to your .env file."
try:
# OpenWeatherMap current-weather endpoint (HTTPS)
base_url = "https://api.openweathermap.org/data/2.5/weather"
params = {
"q": location,
"appid": api_key,
"units": "imperial" # Fahrenheit
}
response = requests.get(base_url, params=params, timeout=10)
response.raise_for_status()
data = response.json()
# Extract weather data
temp = data["main"]["temp"]
feels_like = data["main"]["feels_like"]
description = data["weather"][0]["description"].capitalize()
humidity = data["main"]["humidity"]
wind_speed = data["wind"]["speed"]
city = data["name"]
# Format weather summary
summary = f"**{city} Weather:**\n"
summary += f"🌡️ {temp}°F (feels like {feels_like}°F)\n"
summary += f"☁️ {description}\n"
summary += f"💧 Humidity: {humidity}%\n"
summary += f"💨 Wind: {wind_speed} mph"
return summary
except requests.exceptions.HTTPError as e:
if e.response.status_code == 401:
return "Error: Invalid OpenWeatherMap API key. Please check your OPENWEATHERMAP_API_KEY in .env file."
elif e.response.status_code == 404:
return f"Error: Location '{location}' not found. Try format: 'City, Country' (e.g., 'Phoenix, US')"
else:
return f"Error: OpenWeatherMap API error: {e}"
except requests.exceptions.Timeout:
return "Error: Weather API request timed out. Please try again."
except Exception as e:
return f"Error getting weather: {str(e)}"
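The handler above reads a handful of fields from the OpenWeatherMap JSON; a minimal sketch of that extraction against an illustrative payload (values invented, not a real API reply):

```python
# Illustrative payload, abbreviated to the fields the handler reads
sample = {
    "name": "Phoenix",
    "main": {"temp": 75.2, "feels_like": 73.9, "humidity": 20},
    "weather": [{"description": "clear sky"}],
    "wind": {"speed": 5.8},
}

temp = sample["main"]["temp"]
description = sample["weather"][0]["description"].capitalize()
line = f"{sample['name']}: {temp}°F, {description}"
print(line)  # Phoenix: 75.2°F, Clear sky
```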
# Google Tools Handlers