🔬 #1 Nanobot (37k stars)
4,000 lines of pure research
Best for: Research and experimentation
Ultra-minimal Python implementation built to be read and hacked on. For when you need to understand the code, not just use it.
Python active
Health: 95/100
Pros
- + 37k GitHub stars — strong community validation
- + Excellent project health score
Cons
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
🦀 Zero overhead, zero compromise
Best for: Performance-critical environments
Rust-based AI agent runtime built for speed and efficiency. Sub-10ms startup in a 3.4MB binary.
Rust active
MCP
Health: 95/100
Pros
- + 29k GitHub stars — strong community validation
- + MCP (Model Context Protocol) support
- + Lightweight: 3.4MB binary
- + Fast startup: <10ms
Cons
- - Requires LLM API key (ongoing token costs)
⭐ #3 AstrBot (28k stars)
Agentic IM chatbot infrastructure
Best for: Chat platform integrations
Chatbot infrastructure integrating LLMs, plugins, and AI features across multiple messaging platforms
Python active
MCP
Health: 85/100
Pros
- + 28k GitHub stars — strong community validation
- + MCP (Model Context Protocol) support
- + Excellent project health score
Cons
- - Requires LLM API key (ongoing token costs)
Best for: IoT and embedded devices
Runs on $10 RISC-V boards with less than 10MB of RAM. One-second boot time. The tiniest claw in the sea.
Go active
Health: 95/100
Pros
- + 26k GitHub stars — strong community validation
- + Fast startup: 1s
- + Low memory: <10MB RAM
- + Runs on embedded/IoT hardware
Cons
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
🔐 #5 NanoClaw (26k stars)
Security-first, radically minimal
Best for: Security-conscious deployments
Five files, one process, OS-level container isolation. NanoClaw strips the agent down to its secure essentials.
TypeScript active
Health: 80/100
Pros
- + 26k GitHub stars — strong community validation
- + Excellent project health score
Cons
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
🖥️ #6 AionUi (20k stars)
24/7 cowork app for coding agents
Best for: Task automation and productivity
Free, local, open-source cowork app and OpenClaw-style interface for Gemini CLI, Claude Code, Codex, OpenCode, and more
TypeScript active
MCP
Health: 95/100
Pros
- + 20k GitHub stars — strong community validation
- + MCP (Model Context Protocol) support
- + Excellent project health score
Cons
- - Requires LLM API key (ongoing token costs)
🟢 NVIDIA's secure OpenClaw deployment stack
Best for: Security-conscious deployments
Open-source stack from NVIDIA that wraps OpenClaw in the OpenShell sandbox runtime with policy-based privacy and security guardrails. One-command install, containerized execution, inference routed through NVIDIA cloud.
JavaScript active
Health: 80/100
Pros
- + 17k GitHub stars — strong community validation
- + Excellent project health score
Cons
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
⚙️ The Agent Operating System
Best for: General-purpose AI agent use cases
Production-grade agent OS built in Rust. 137K LOC, 1,767+ tests, single 32MB binary. Autonomous agents that work for you 24/7.
Rust active
MCP
Health: 100/100
Pros
- + 16k GitHub stars — strong community validation
- + MCP (Model Context Protocol) support
- + Lightweight: 32MB binary
- + Excellent project health score
Cons
- - Requires LLM API key (ongoing token costs)
🤖 #9 LangBot (16k stars)
Production-grade agentic IM bots
Best for: Chat platform integrations
Multi-platform agent for Discord, Telegram, Slack, WeChat, LINE, QQ, and more with plugin system and knowledge base
Python active
Health: 95/100
Pros
- + 16k GitHub stars — strong community validation
- + Excellent project health score
Cons
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
🧠 The agent that grows with you
Best for: Memory-first agent workflows
Persistent personal AI agent by Nous Research with multi-level growing memory, self-authored skills, and multi-platform messaging. Learns your projects and builds reusable knowledge over time.
Python active
MCP
Health: 75/100
Pros
- + 15k GitHub stars — strong community validation
- + MCP (Model Context Protocol) support
- + Excellent project health score
Cons
- - Requires LLM API key (ongoing token costs)
🧠 #11 memU (13k stars)
Memory-first proactive agent
Best for: Memory-first agent workflows
Long-term memory infrastructure for 24/7 AI agents. Proactive, cost-efficient, runs locally.
Python active
MCP
Health: 95/100
Pros
- + 13k GitHub stars — strong community validation
- + MCP (Model Context Protocol) support
- + Excellent project health score
Cons
- - Requires LLM API key (ongoing token costs)
🛡️ Rust-hardened privacy fortress
Best for: Security-conscious deployments
Rust implementation focused on privacy and security. When your agent needs armor plating.
Rust active
Health: 85/100
Pros
- + 11k GitHub stars — strong community validation
- + Excellent project health score
Cons
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
☁️ OpenClaw in Cloudflare Workers
Best for: Performance-critical environments
Experimental derivative that runs OpenClaw (formerly Moltbot/Clawdbot) in Cloudflare Sandbox containers.
TypeScript experimental
MCP
Health: 75/100
Pros
- + 9.8k GitHub stars — growing community
- + MCP (Model Context Protocol) support
- + Serverless deployment support
- + Excellent project health score
Cons
- - Requires LLM API key (ongoing token costs)
- - Experimental status — may have breaking changes
🧠 #14 MemOS (7.9k stars)
AI memory OS for agent systems
Best for: Memory-first agent workflows
Persistent skill memory for cross-task reuse and evolution — built for OpenClaw and similar agent frameworks
Python active
Health: 100/100
Pros
- + 7.9k GitHub stars — growing community
- + Excellent project health score
Cons
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
💼 #15 ClawWork (7.7k stars)
OpenClaw as your AI coworker
Best for: Task automation and productivity
Task-focused agent framework from the HKUDS team (creators of Nanobot)
Python active
Health: 70/100
Pros
- + 7.7k GitHub stars — growing community
- + Excellent project health score
Cons
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
🦞 #16 NullClaw (6.9k stars)
Fastest, smallest, and fully autonomous AI assistant infrastructure written in Zig
Best for: Performance-critical environments
The smallest fully autonomous AI assistant infrastructure — a static Zig binary that fits on any $5 board, boots in milliseconds, and requires nothing but libc. 678 KB binary, <2 ms startup, 22+ providers, 17 channels, fully pluggable architecture.
Zig active
Health: 95/100
Pros
- + 6.9k GitHub stars — growing community
- + Lightweight: 678KB binary
- + Fast startup: <2ms
- + Runs on embedded/IoT hardware
Cons
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
🦾 #17 MimicLaw (4.9k stars)
Bare metal AI on a $5 chip
Best for: IoT and embedded devices
Pure C running directly on an ESP32-S3 chip. No operating system needed. The most primal claw.
C experimental
Health: 75/100
Pros
- + 4.9k GitHub stars — growing community
- + Runs on embedded/IoT hardware
- + Excellent project health score
Cons
- - Requires LLM API key (ongoing token costs)
- - Experimental status — may have breaking changes
- - No MCP support yet
🔍 #18 Moltis (2.4k stars)
Zero unsafe, full audit trail
Best for: Security-conscious deployments
Zero unsafe Rust with built-in voice I/O and MCP support. Every action logged, every permission earned.
Rust active
MCP
Health: 80/100
Pros
- + 2.4k GitHub stars — growing community
- + MCP (Model Context Protocol) support
- + Excellent project health score
Cons
- - Requires LLM API key (ongoing token costs)
💝 #19 Clawra (2.2k stars)
OpenClaw as your companion
Best for: Personalized AI companions
OpenClaw reimagined as a persistent AI companion with personality, memory, and relationship context.
TypeScript active
Health: 55/100
Pros
- + 2.2k GitHub stars — growing community
Cons
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
- - Less frequent updates recently
🔌 #20 ZClaw (2.0k stars)
Personal AI assistant on ESP32 — 888 KiB firmware
Best for: IoT and embedded devices
C-based personal AI assistant that runs on ESP32 microcontrollers. Supports scheduled tasks, GPIO control, persistent memory, custom tools, and Telegram integration. Strict 888 KiB all-in firmware budget including ESP-IDF runtime, Wi-Fi, TLS, and certs. The smallest claw-family project running on actual hardware.
C active
Health: 90/100
Pros
- + 2.0k GitHub stars — growing community
- + Lightweight: 888 KiB binary
- + Runs on embedded/IoT hardware
- + Excellent project health score
Cons
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
🚀 #21 Spacebot (2.0k stars)
Concurrent multi-process AI agent for communities and teams
Best for: General-purpose AI agent use cases
Rust-based AI agent built for multi-user environments — Discord servers, Slack workspaces, Telegram groups. Splits the monolith into five specialized processes (Channel, Branch, Worker, Compactor, Cortex) that run concurrently so the agent is never blocked. Typed memory graph (SQLite + LanceDB), cron jobs, MCP support, headless browser, OpenCode integration, and OpenClaw-compatible skills. Single binary, no server dependencies. FSL-1.1-ALv2 license, converting to Apache 2.0 after two years.
Rust active
Health: 65/100
Pros
- + 2.0k GitHub stars — growing community
Cons
- - Requires LLM API key (ongoing token costs)
🔒 Security-hardened OpenClaw by Composio
Best for: Security-conscious deployments
24/7 AI assistant on WhatsApp, Telegram, Signal, iMessage with full tool access, persistent memory, and 500+ app integrations
TypeScript active
MCP
Health: 40/100
Pros
- + 1.3k GitHub stars — growing community
- + MCP (Model Context Protocol) support
- + 500+ built-in integrations
Cons
- - Requires LLM API key (ongoing token costs)
- - Less frequent updates recently
🎨 Beautiful Web UI alternative to OpenClaw with sandboxed runtime
Best for: Chat platform integrations
A more beautiful and easier-to-use alternative to OpenClaw featuring a polished Web UI, built-in IM support, and a sandboxed runtime for improved safety. Powered by a Claude Code-based agent under the hood.
TypeScript active
Health: 95/100
Pros
- + 1.2k GitHub stars — growing community
- + Excellent project health score
Cons
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
🤖 #24 Picobot (1.2k stars)
AI agent that runs anywhere — even on a $5 VPS
Best for: Performance-critical environments
Single ~9MB Go binary with persistent memory, tool calling, skills, and Telegram/Discord integration. Zero dependencies, boots in milliseconds, idles at 10MB RAM.
Go active
Health: 85/100
Pros
- + 1.2k GitHub stars — growing community
- + Excellent project health score
Cons
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
💰 #25 RT-Claw (984 stars)
Making AI assistants cheap again
Best for: Performance-critical environments
A C-based agent runtime built for minimal cost and maximum efficiency. Systems-language approach targeting the low-cost, local-first segment of the ecosystem.
C active
Health: 75/100
Pros
- + Runs on embedded/IoT hardware
- + Excellent project health score
Cons
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
💰 Autonomous agent that finds work, does it, and gets paid
Best for: General-purpose AI agent use cases
Self-improving agent that connects to the Moltlaunch onchain marketplace, evaluates tasks, quotes prices, executes work via LLM, collects ratings, and improves from feedback. Fork it for any platform.
TypeScript active
Health: 50/100
Cons
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
🔗 OpenClaw built into Claude Code
Best for: Performance-critical environments
A lightweight, open-source OpenClaw alternative built into Claude Code. Uses Claude's subscription without separate API keys.
TypeScript active
Health: 50/100
Cons
- - Requires a Claude subscription (usage drawn from plan limits)
- - No MCP support yet
🦀 Rust agent inspired by NanoClaw
Best for: Security-conscious deployments
An agentic AI assistant that lives in your chats, inspired by NanoClaw and incorporating some of its design ideas
Rust active
Health: 95/100
Pros
- + Excellent project health score
- + Security-focused architecture
- + Minimal footprint
Cons
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
🌐 Browser-native personal AI assistant — zero infrastructure, the browser is the server
Best for: Performance-critical environments
A browser-only reimagination of NanoClaw. Runs entirely in a browser tab as a PWA with no backend. Uses IndexedDB for storage, OPFS for files, Web Workers for the agent loop, and a WebVM (v86 Alpine Linux in WASM) for shell commands. Supports Telegram as an optional channel. Paste an Anthropic API key and go.
TypeScript active
Health: 55/100
Pros
- + Runs in the browser — no server needed
- + Minimal footprint
Cons
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
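The "browser is the server" claim above boils down to one constraint: all agent state must round-trip through browser storage. A minimal sketch of that persistence discipline — the names (`AgentTurn`, `snapshot`) are invented for illustration, not taken from the project:

```typescript
// Hypothetical shape of one agent-loop turn; in the real app a record
// like this would be put() into an IndexedDB object store.
interface AgentTurn {
  role: "user" | "assistant" | "tool";
  content: string;
  at: number; // epoch millis
}

// Pure snapshot/restore pair: everything round-trips through JSON,
// which is exactly the constraint IndexedDB-style storage imposes.
function snapshot(turns: AgentTurn[]): string {
  return JSON.stringify({ v: 1, turns });
}

function restore(raw: string): AgentTurn[] {
  const parsed = JSON.parse(raw) as { v: number; turns: AgentTurn[] };
  if (parsed.v !== 1) throw new Error(`unsupported snapshot version ${parsed.v}`);
  return parsed.turns;
}

const turns: AgentTurn[] = [
  { role: "user", content: "list my files", at: 1700000000000 },
  { role: "assistant", content: "ls: README.md", at: 1700000001000 },
];
const restored = restore(snapshot(turns));
console.log(restored.length, restored[1].role); // → 2 assistant
```

The version field matters in this architecture: with no backend, there is no migration server, so old snapshots in a user's browser must be readable (or cleanly rejected) by new code.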
🦀 Autonomous, self-improving Rust agent
Best for: Security-conscious deployments
Single Rust binary with multi-channel support, local TTS/STT via whisper.cpp, SQLite memory, and zero network listeners. Keys zeroized from RAM on drop. Inspired by OpenClaw.
Rust active
Health: 95/100
Pros
- + Excellent project health score
- + Security-focused architecture
Cons
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
⚡ Ultra-lightweight Rust agent with container isolation
Best for: Security-conscious deployments
One 6MB Rust binary with 32 tools, 9 channels, 9 providers, and container sandboxing. Studies OpenClaw, NanoClaw, and PicoClaw — keeps the integrations, drops the bloat.
Rust active
Health: 75/100
Pros
- + Excellent project health score
- + Security-focused architecture
- + Minimal footprint
Cons
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
⚡ Faster and better, written in Go
Best for: Performance-critical environments
A Go-based OpenClaw alternative focused on speed and simplicity. Built from scratch with performance as the primary design constraint.
Go active
Health: 75/100
Pros
- + Excellent project health score
- + Minimal footprint
Cons
- - Smaller community — fewer resources and plugins
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
🛡️ Privacy-first autonomous agent platform
Best for: Security-conscious deployments
Privacy-first personal AI assistant platform with autonomous agents, tool orchestration, and multi-provider support.
TypeScript active
Health: 80/100
Pros
- + Excellent project health score
Cons
- - Smaller community — fewer resources and plugins
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
📱 #34 Kai (291 stars)
OpenClaw alternative in your pocket
Best for: Performance-critical environments
A mobile-first OpenClaw alternative built with Kotlin Multiplatform and Jetpack Compose. Runs on Android and iOS with any OpenAI-compatible endpoint. The first mobile-native entry in the OpenClaw ecosystem.
Kotlin active
Health: 70/100
Pros
- + Excellent project health score
- + Minimal footprint
Cons
- - Smaller community — fewer resources and plugins
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
🧠 Persistent-memory agent across 5 messaging channels
Best for: Chat platform integrations
Personal AI assistant that remembers everything across Telegram, Slack, Discord, WhatsApp, and Signal. Unified memory, heartbeat check-ins, voice transcription, and scheduling — powered by the Letta SDK.
TypeScript active
Health: 60/100
Cons
- - Smaller community — fewer resources and plugins
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
💖 Your self-improving AI companion
Best for: Personalized AI companions
Self-improving AI companion with a personality system called Heartware. The friendliest claw in the sea.
TypeScript active
Health: 50/100
Cons
- - Smaller community — fewer resources and plugins
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
🐙 #37 Gitclaw (179 stars)
Your agent lives inside a git repo
Best for: General-purpose AI agent use cases
A universal git-native AI agent framework where identity, rules, memory, tools, and skills are all version-controlled files. Fork an agent, branch a personality, git log its memory. Supports CLI, SDK, and remote repo modes.
TypeScript active
Health: 65/100
Cons
- - Smaller community — fewer resources and plugins
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
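The git-native idea is easiest to see with memory: if memory is an append-only text file, git history is the audit log. An illustrative sketch — this format is invented, not Gitclaw's actual on-disk schema:

```typescript
// Illustrative memory entry; Gitclaw's real file layout may differ.
interface MemoryEntry {
  date: string; // ISO date, e.g. "2025-01-15"
  note: string;
}

// Append-only markdown log: because each entry is one line in a plain
// text file, `git log -p memory.md` doubles as the agent's memory history.
function appendEntry(file: string, entry: MemoryEntry): string {
  return file + `- ${entry.date}: ${entry.note}\n`;
}

function parseEntries(file: string): MemoryEntry[] {
  return file
    .split("\n")
    .filter((l) => l.startsWith("- "))
    .map((l) => {
      const m = l.match(/^- (\d{4}-\d{2}-\d{2}): (.*)$/);
      if (!m) throw new Error(`malformed entry: ${l}`);
      return { date: m[1], note: m[2] };
    });
}

let file = "# Memory\n";
file = appendEntry(file, { date: "2025-01-15", note: "prefers tabs" });
file = appendEntry(file, { date: "2025-01-16", note: "deploys on Fridays" });
console.log(parseEntries(file).length); // → 2
```

Plain-text, line-oriented formats are what make "branch a personality" workable: git can merge two agents' memory files the same way it merges source code.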
🏮 Lightweight OpenClaw for China's app ecosystem
Best for: General-purpose AI agent use cases
A lightweight AI assistant framework focused on Chinese platforms. Supports QQ, Feishu, DingTalk, and WeCom channels with native Chinese LLM providers (DeepSeek, Doubao, Qwen, Kimi, Zhipu). 3% of OpenClaw's codebase with core functionality intact.
TypeScript active
Health: 45/100
Cons
- - Smaller community — fewer resources and plugins
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
🏢 Security-first operating system for personal AI agents
Best for: Security-conscious deployments
Multi-channel (WhatsApp, Telegram, Discord, Slack, iMessage), multi-provider (Claude, GPT, Gemini, Ollama) personal AI agent OS. Fully self-hosted with a security-first architecture. TypeScript, MIT licensed.
TypeScript active
Health: 90/100
Pros
- + Excellent project health score
- + Security-focused architecture
Cons
- - Smaller community — fewer resources and plugins
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
🗿 #40 Golem (163 stars)
Your AI agent, single binary, zero dependencies
Best for: Performance-critical environments
Terminal-first personal AI agent built in pure Go. Single binary deployment with TUI, multi-channel bot support, tool calling, long-term memory, skill packs, cron scheduling, and gateway API. No Python, Node, or Docker required.
Go active
Health: 70/100
Pros
- + Excellent project health score
- + Minimal footprint
Cons
- - Smaller community — fewer resources and plugins
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
⚡ Ultra-lightweight OpenClaw alternative with full plugin compatibility
Best for: Performance-critical environments
OpenClaw-inspired personal AI assistant at ~1/20 the codebase. 12+ AI providers, 10+ message channels (Discord, Telegram, Slack, WhatsApp, Feishu, DingTalk, and more), cron and heartbeat scheduling, browser UI configuration. Published on npm, runs locally.
TypeScript active
Health: 95/100
Pros
- + Excellent project health score
- + Minimal footprint
Cons
- - Smaller community — fewer resources and plugins
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
🏃 #42 GoGogot (122 stars)
Lightweight self-hosted AI agent in Go
Best for: Performance-critical environments
A lightweight self-hosted personal AI agent written in Go. Deploys as a single ~15 MB binary that runs shell commands, edits files, browses the web, manages persistent memory, and schedules tasks. Runs on a $4/month VPS.
Go active
Health: 70/100
Pros
- + Excellent project health score
- + Minimal footprint
Cons
- - Smaller community — fewer resources and plugins
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
🧯 Zero-LLM OpenClaw alternative
Best for: Performance-critical environments
Rule-based and local-first assistant positioned as an OpenClaw alternative with no required LLM API spend.
Python active
No LLM needed
Health: 85/100
Pros
- + No LLM API key needed — zero token costs
- + Excellent project health score
Cons
- - Smaller community — fewer resources and plugins
- - No MCP support yet
🌸 AI co-pilot for total computer automation
Best for: Task automation and productivity
Personal AI agent framework for autonomous computer control. Features browser automation, shell execution, sub-agent spawning, voice-to-action via Whisper, and omni-channel support including WeChat, Feishu, DingTalk, Telegram, WhatsApp, and Discord. RAG knowledge base with SQLite/LanceDB memory.
JavaScript active
Health: 40/100
Cons
- - Smaller community — fewer resources and plugins
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
🛡️ Security-first AI agent framework for production multi-agent systems
Best for: Security-conscious deployments
Production-grade multi-agent platform built on the assumption that agents can be compromised. Every agent runs in its own Docker container with blast-radius containment through isolation, credential separation, permissions, and cost controls. Credentials live in a vault/proxy layer so agents never directly access raw keys. Includes a built-in stealth browser for human-like web interaction without fragile external browser setups. Designed for teams deploying agents in production, especially in security-sensitive or enterprise environments.
Python active
MCP
Health: 50/100
Pros
- + MCP (Model Context Protocol) support
- + Security-focused architecture
Cons
- - Smaller community — fewer resources and plugins
- - Requires LLM API key (ongoing token costs)
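The vault/proxy pattern described above can be sketched in a few lines: agents carry opaque handles, and only the proxy layer exchanges a handle for the raw secret. Names here are illustrative, not this project's API:

```typescript
// Illustrative credential proxy: agents hold opaque handles, never raw keys,
// so a compromised agent (or a leaked prompt) cannot exfiltrate the secret.
class CredentialVault {
  private secrets = new Map<string, string>();

  store(name: string, secret: string): string {
    const handle = `cred://${name}`;
    this.secrets.set(handle, secret);
    return handle; // this is all the agent ever sees
  }

  // Only the proxy layer, making requests on the agent's behalf,
  // exchanges a handle for the underlying secret.
  resolveForRequest(handle: string): string {
    const secret = this.secrets.get(handle);
    if (secret === undefined) throw new Error(`unknown handle ${handle}`);
    return secret;
  }
}

const vault = new CredentialVault();
const handle = vault.store("github", "ghp_example123");
console.log(handle); // → cred://github
```

This is the "blast-radius containment" idea in miniature: the agent's context can safely contain `cred://github`, because leaking the handle does not leak the key.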
🍓 #46 ClawGo (75 stars)
Headless OpenClaw client in Go
Best for: IoT and embedded devices
Minimal headless node client for Raspberry Pi and Linux — connects to the gateway bridge for voice and chat
Go active
Health: 45/100
Pros
- + Runs on embedded/IoT hardware
- + Minimal footprint
Cons
- - Smaller community — fewer resources and plugins
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
Best for: General-purpose AI agent use cases
Cloud-native personal AI agent platform built on Convex. Multi-agent system with shared soul documents, agent-to-agent communication, chat UI, skills marketplace, MCP support, browser automation via Stagehand, and AI-powered analytics. Designed for deployment without self-hosting infrastructure.
TypeScript active
MCP
Health: 50/100
Pros
- + MCP (Model Context Protocol) support
Cons
- - Smaller community — fewer resources and plugins
- - Requires LLM API key (ongoing token costs)
- - Less frequent updates recently
🆓 Minimal OpenClaw-like Python agent
Best for: Performance-critical environments
Python implementation of the OpenClaw concept with multi-agent profiles and lightweight CLI operation.
Python active
Health: 50/100
Cons
- - Smaller community — fewer resources and plugins
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
🛡️ Hardened Rust shell for autonomous agents
Best for: Security-conscious deployments
Security-first Rust alternative to OpenClaw/Clawdbot with signed WASM plugins and strict local-first defaults.
Rust experimental
Health: 55/100
Pros
- + Security-focused architecture
Cons
- - Smaller community — fewer resources and plugins
- - Requires LLM API key (ongoing token costs)
- - Experimental status — may have breaking changes
🔧 Programmatic AI agents for Node.js
Best for: General-purpose AI agent use cases
Build autonomous AI agents in Node.js/TypeScript with a code-first API. Scope-gated tool access, 30+ built-in integrations (Gmail, Slack, GitHub, Notion, Stripe), Zod-typed outputs, cron scheduling, and long-term memory — all from your codebase.
TypeScript active
Health: 70/100
Pros
- + Excellent project health score
Cons
- - Smaller community — fewer resources and plugins
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
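Scope-gated tool access is the notable design choice here: each tool declares the permissions it needs, and the runtime rejects calls outside the grant. A minimal sketch of the pattern, with invented names rather than the project's real interface:

```typescript
type Scope = "email:send" | "repo:read" | "repo:write";

interface Tool<I, O> {
  name: string;
  requires: Scope[]; // permissions this tool needs to run
  run: (input: I) => O;
}

// Wrap a tool so it can only execute when every required scope was granted.
function gate<I, O>(tool: Tool<I, O>, granted: Set<Scope>) {
  return (input: I): O => {
    const missing = tool.requires.filter((s) => !granted.has(s));
    if (missing.length > 0) {
      throw new Error(`${tool.name}: missing scopes ${missing.join(", ")}`);
    }
    return tool.run(input);
  };
}

const readRepo: Tool<string, string> = {
  name: "readRepo",
  requires: ["repo:read"],
  run: (path) => `contents of ${path}`,
};

const gated = gate(readRepo, new Set<Scope>(["repo:read"]));
console.log(gated("README.md")); // → contents of README.md
```

The payoff of doing this in code rather than config: the grant set is an ordinary value, so an agent spawned for one task can be handed a narrower `Set<Scope>` than its parent.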
🐾 AI assistant that lives in your Telegram
Best for: Chat platform integrations
Personal AI assistant powered by the z.ai engine that runs inside Telegram. Goes beyond chat with 100+ built-in tools — browse the web, send emails, run scripts, manage files, track stocks, and more. Supports vision, voice, memory, and real browser automation.
Go active
Health: 70/100
Pros
- + 100+ built-in integrations
- + Excellent project health score
Cons
- - Smaller community — fewer resources and plugins
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
🧠 OpenClaw-style multi-channel agent platform
Best for: Chat platform integrations
Self-hostable Python platform marketed as a lightweight OpenClaw alternative with multi-channel routing and task orchestration.
Python active
Health: 75/100
Pros
- + Excellent project health score
Cons
- - Smaller community — fewer resources and plugins
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet
🐚 An OpenClaw inspired personal assistant in 100% Bash 🦞
Best for: Performance-critical environments
Personal AI assistant built entirely in bash (compatible with bash 3.2+). Uses only standard Unix utilities (curl, jq, base64, etc.) with no Node.js or other runtimes. Modular architecture using named pipes.
Bash active
Health: 15/100
Pros
- + Runs on embedded/IoT hardware
- + Minimal footprint
Cons
- - Smaller community — fewer resources and plugins
- - Requires LLM API key (ongoing token costs)
- - No MCP support yet