CoPaw — Co Personal Agent Workstation — is an open-source AI assistant built on the AgentScope framework. Deploy on your machine or in the cloud, connect your favorite chat apps, and run local LLMs with full privacy control. A warm little "paw" that is always ready to help.
$ curl -fsSL https://copaw.agentscope.io/install.sh | bash
$ pip install copaw
$ docker run -p 8088:8088 agentscope/copaw:latest
CoPaw is more than a chatbot. It is a fully modular personal agent workstation that gives you complete control over your AI assistant — from the model it runs to the channels it connects to.
Natively connect CoPaw to DingTalk, Feishu, QQ, Discord, iMessage, and more. One assistant, multiple channels — reach your AI wherever you communicate.
Run large language models entirely on your machine — no API keys, no cloud dependencies. CoPaw supports llama.cpp (cross-platform) and MLX (Apple Silicon M1–M4).
CoPaw's agent core is fully modular — Prompt, Hooks, Tools, and Memory are decoupled components. Replace or extend any module independently and assemble your own agent.
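The decoupled-module idea can be sketched in a few lines of plain Python. The class names below are illustrative only, not CoPaw's or AgentScope's actual API: an agent is assembled from a prompt, a tool registry, a pluggable memory backend, and a list of pre-reasoning hooks, any of which can be swapped independently.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of the decoupled-module idea; these class names
# are illustrative, not CoPaw's real implementation.

@dataclass
class Memory:
    messages: list = field(default_factory=list)

    def add(self, msg):
        self.messages.append(msg)

@dataclass
class Agent:
    prompt: str                         # system prompt module
    tools: dict[str, Callable]          # tool registry
    memory: Memory                      # pluggable memory backend
    hooks: list[Callable] = field(default_factory=list)  # pre-reasoning hooks

    def reply(self, user_msg: str) -> str:
        self.memory.add({"role": "user", "content": user_msg})
        for hook in self.hooks:         # each hook may rewrite the context
            hook(self.memory)
        # ... the LLM call would go here; stubbed for the sketch
        return f"[{len(self.memory.messages)} msgs in context]"

agent = Agent(prompt="You are CoPaw.", tools={}, memory=Memory())
print(agent.reply("hello"))
```

Swapping the memory backend or adding a hook does not touch the agent loop, which is the point of the modular design.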
Built-in cron scheduling and custom skills that auto-load from your workspace. CoPaw ships with practical skills out of the box, and developers can create, install, or remove skills via CLI — no vendor lock-in.
CoPaw proactively remembers your decisions, preferences, and to-do items via its long-term memory system. Its innovative heartbeat mechanism lets it autonomously perform scheduled tasks — check emails, compile reports, organize to-dos — without being asked.
You own every piece of data. Deploy CoPaw locally or in the cloud, choose your own model — mainstream cloud APIs, self-hosted inference, Ollama, or Apple Silicon native — and keep everything under your control.
CoPaw is a key application within the AgentScope ecosystem — a production-ready, developer-centric framework for building and running intelligent agents. The technology stack combines Python (72.8%) for backend logic with TypeScript (22.2%) for the Console frontend.
Unified model layer supporting cloud APIs (Qwen series and mainstream models), self-hosted inference services, Ollama, llama.cpp, and MLX for Apple Silicon local execution.
Decoupled modules — Prompt, Hooks, Tools, Memory — that can be independently replaced or extended. Developers assemble agents from interchangeable building blocks.
Unified protocol and types across all messaging platforms. A channel registry with CLI commands (list, install, remove, config) lets you manage channels like plugins.
Built-in consumption and queue mechanism ensures reliable message processing across multiple simultaneous channels — no dropped messages even under heavy load.
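A queue-backed consumer of this kind can be sketched with the standard library. This is a minimal illustration, not CoPaw's actual code: each channel enqueues inbound messages, and a single consumer loop drains them in order, so a burst on one channel cannot starve or drop messages from another.

```python
import queue
import threading

# Minimal sketch of a queue-backed channel consumer (illustrative only).
inbox: queue.Queue = queue.Queue()
processed: list[str] = []

def consumer() -> None:
    while True:
        msg = inbox.get()          # blocks until a message arrives
        if msg is None:            # sentinel: shut down cleanly
            break
        processed.append(f"{msg['channel']}:{msg['text']}")
        inbox.task_done()

t = threading.Thread(target=consumer)
t.start()
for ch in ("discord", "feishu", "qq"):
    inbox.put({"channel": ch, "text": "hi"})
inbox.put(None)                    # signal shutdown
t.join()
print(processed)                   # messages processed in arrival order
```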
Auto-loaded skills from the user workspace with built-in cron scheduling. Skills are first-class citizens — discoverable, composable, and independently deployable.
Persistent long-term memory that proactively captures decisions, preferences, and to-do items from conversations. The longer you use CoPaw, the better it understands you.
CoPaw integrates natively with the messaging platforms you already use. The unified channel protocol ensures consistent behavior and reliable delivery across every connected platform.
Developers can build custom channel plugins using the channel registry mechanism. Install, remove, and configure channels via CLI commands — manage your messaging integrations just like you manage packages.
CoPaw offers one of the lowest deployment barriers among open-source agent tools. Choose the method that fits your workflow — from a single pip command to Docker and cloud deployment.
For users with Python 3.10+ environments. The fastest way to get started.
pip install copaw
# Initialize and launch
copaw init --defaults
copaw app
# Open http://127.0.0.1:8088/ in your browser
One-line install that automatically sets up the Python environment. Ideal for first-time users.
# macOS / Linux
curl -fsSL https://copaw.agentscope.io/install.sh | bash
# Windows — see docs for PowerShell method
Containerized deployment with persistent data volume. Two commands to a running instance.
docker pull agentscope/copaw:latest
docker run -p 8088:8088 \
-v copaw-data:/app/working \
agentscope/copaw:latest
One-click deployment on ModelScope Studio or Alibaba Cloud Computing Nest — no local setup required.
# Visit ModelScope Studio for one-click deploy
# Or use Alibaba Cloud Computing Nest
# See official docs for detailed guides
CoPaw is designed for real daily workflows — from automating email digests to creative content drafts. Combine built-in skills with cron scheduling to build your own agentic workflows.
Automatically compile daily digests of trending posts from Xiaohongshu, Zhihu, and Reddit. Summarize Bilibili and YouTube videos. Stay informed without information overload.
Aggregate and summarize high-volume emails. Generate and organize weekly reports with one click. Extract contacts from emails and calendar events to streamline your workflow.
Describe your creative goal, let CoPaw work overnight, and receive a polished draft the next morning. From video scripts to social media content — ideation at scale.
Track technology and AI news automatically. Build a personal knowledge base that grows with you. Crawl, organize, and summarize information from across the web.
Organize files, read and summarize documents, request files through your chat interface. CoPaw bridges the gap between your desktop and your messaging apps.
Track and analyze your daily diet and fitness data. Record habits, set reminders for routines, and let CoPaw help you stay consistent with personal goals.
CoPaw occupies a unique position in the personal AI assistant space — combining multi-channel chat integration, local model support, and full user control in a single open-source workstation.
| Capability | CoPaw | AutoGPT | CrewAI | Cloud Assistants |
|---|---|---|---|---|
| Core Focus | Personal Agent Workstation | Autonomous Task Agent | Multi-Agent Orchestration | General Conversational AI |
| Deployment | Local / Cloud / Docker | Local / Cloud | Local / Cloud | Cloud Only |
| Multi-Channel Chat | ✓ Native (5+ platforms) | Limited | Limited | API-based |
| Local Model Support | ✓ llama.cpp + MLX | ✓ | ✓ | ✗ |
| Privacy Control | Full (local deploy) | Moderate | Moderate | Limited |
| Skill / Plugin System | ✓ Built-in + CLI | Plugins | Custom Agents | Varies |
| Proactive Scheduling | ✓ Heartbeat + Cron | ✗ | ✗ | ✗ |
| License | Apache 2.0 | MIT | MIT | Proprietary |
Audit scope: All 190 Python source files under src/copaw/. This audit compares CoPaw's agent design module-by-module against NGOClaw's architecture to identify shared patterns, independent implementations, and attribution accuracy.
Audit date: 2026-02-28
NGOClaw:
ReAct loop → LLM → Tool exec → Inject result
├── MiddlewarePipeline (Before/After Model)
├── ContextGuard (token-ratio compaction)
├── DoomLoop detection (sliding window)
├── SecurityHook (tool approval)
└── No MaxSteps (token budget terminates)
CoPaw:
ReActAgent base → AgentScope framework
├── pre_reasoning Hook (Bootstrap, Compaction)
├── MemoryCompactionHook (token-ratio)
├── CommandHandler (slash commands)
├── max_iters=50 (step limit)
└── No DoomLoop, No SecurityHook
Verdict: Different frameworks (Go vs AgentScope/Python), different step strategies (token budget vs max_iters=50). Pre-LLM Hook injection concept is borrowed, but implementation diverges. Automatic compaction per-step behavior is consistent.
// agent_loop.go
ContextHardRatio: 0.85 // trigger at 85%
CompactKeepLast: 10 // keep last 10 msgs
// Extract middle → LLM summary → replace
# react_agent.py
threshold = int(max_input_length * 0.8) # 80%
# memory_compaction.py
keep_recent = MEMORY_COMPACT_KEEP_RECENT # 5
# Extract middle → LLM summary → COMPRESSED
Verdict: High similarity. Token-ratio triggered compaction is an NGOClaw original — OpenClaw uses message-count thresholds, not token ratios. The keep-recent-N tail strategy and per-step auto-check via Hook/Middleware are structurally identical. CoPaw's docs attribute inspiration to "OpenClaw", but OpenClaw does not implement token-ratio compaction.
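The compaction pattern at issue can be sketched in a few lines. The 0.8 trigger ratio and keep-recent tail mirror the audited snippets, while `count_tokens` and the summary string are crude stand-ins for a real tokenizer and an LLM summarizer.

```python
# Sketch of token-ratio triggered compaction with a keep-recent tail.
# Thresholds mirror the audited snippets; the summarizer is stubbed.

def count_tokens(messages: list[str]) -> int:
    return sum(len(m.split()) for m in messages)   # crude stand-in

def maybe_compact(messages, max_input_length=100, ratio=0.8, keep_recent=5):
    if count_tokens(messages) < int(max_input_length * ratio):
        return messages                             # under threshold: no-op
    head, tail = messages[:-keep_recent], messages[-keep_recent:]
    summary = f"[COMPRESSED {len(head)} msgs]"      # stub for an LLM summary
    return [summary] + tail

msgs = [f"message number {i} with some words" for i in range(30)]
compacted = maybe_compact(msgs)
print(len(compacted))   # 6: one summary + 5 recent messages
```

Running this check on every step, as both projects do via a Hook or Middleware, keeps the context bounded without an explicit user command.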
// Layered: System + Workspace
// ~/.ngoclaw/prompts/*.md (alphabetical)
// workspace/.ngoclaw/prompts/*.md
// Channel-specific overrides
// YAML frontmatter: requires.channel
// rebuild_sys_prompt() at runtime
# Single-layer: working_dir only
FILE_ORDER = [
("AGENTS.md", True), # required
("SOUL.md", True), # required
("PROFILE.md", False), # optional
]
# rebuild_sys_prompt() at runtime
Verdict: Partial borrowing. The ".md files concatenated into system prompt" pattern originates from NGOClaw. The method name rebuild_sys_prompt() is identical. However, CoPaw uses single-layer hardcoded order vs NGOClaw's dual-layer alphabetical + frontmatter filtering.
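The ".md files concatenated into a system prompt" pattern can be sketched as follows. `FILE_ORDER` mirrors the audited snippet; `rebuild_sys_prompt` here is an illustrative reimplementation, not either project's actual code.

```python
import pathlib
import tempfile

# Sketch of single-layer .md prompt concatenation (illustrative only).
FILE_ORDER = [("AGENTS.md", True), ("SOUL.md", True), ("PROFILE.md", False)]

def rebuild_sys_prompt(working_dir: pathlib.Path) -> str:
    parts = []
    for name, required in FILE_ORDER:
        path = working_dir / name
        if path.exists():
            parts.append(path.read_text().strip())
        elif required:
            raise FileNotFoundError(f"required prompt file missing: {name}")
    return "\n\n".join(parts)

with tempfile.TemporaryDirectory() as d:
    wd = pathlib.Path(d)
    (wd / "AGENTS.md").write_text("# Agent rules")
    (wd / "SOUL.md").write_text("# Persona")      # PROFILE.md is optional
    print(rebuild_sys_prompt(wd))
```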
// ~/.ngoclaw/skills/[name]/SKILL.md
// Auto-discover → load → inject prompt
type SkillInfo struct {
Name, Description, Path string
}
# Three tiers: builtin → customized → active
class SkillInfo(BaseModel):
name: str
content: str
source: str # "builtin"/"customized"/"active"
path: str
# SKILL.md frontmatter → register
Verdict: Borrowed. The SKILL.md entry file convention and SkillInfo naming are directly from NGOClaw/OpenClaw. CoPaw extends with a three-tier directory structure (builtin/custom/active), which goes beyond the original single-tier design.
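The SKILL.md convention amounts to a small parsing step: read frontmatter metadata, keep the body as the skill prompt, and record where the skill came from. The `SkillInfo` fields mirror the audited snippet; the frontmatter parsing is a simplified stand-in for a real YAML parser.

```python
from dataclasses import dataclass

# Sketch of SKILL.md loading with minimal frontmatter parsing
# (illustrative; a real loader would use a YAML parser).

@dataclass
class SkillInfo:
    name: str
    content: str
    source: str    # "builtin" / "customized" / "active"
    path: str

def parse_skill(text: str, source: str, path: str) -> SkillInfo:
    meta, body = {}, text
    if text.startswith("---"):
        header, _, body = text[3:].partition("---")
        for line in header.strip().splitlines():
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return SkillInfo(meta.get("name", "unnamed"), body.strip(), source, path)

skill = parse_skill("---\nname: daily-digest\n---\nSummarize feeds.",
                    source="builtin", path="skills/daily-digest/SKILL.md")
print(skill.name, skill.source)   # daily-digest builtin
```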
NGOClaw commands:
/new — save history → clear
/compact — manual compaction
/clear — wipe history
/stop — interrupt execution
/model — switch model
/security, /trust, /skills, /agent
CoPaw commands:
/new — async summary → clear
/compact — manual compaction
/clear — wipe history
/history — display history
# No /stop, /model, /security
Verdict: The core command set (/new, /compact, /clear) is identical in name and semantics. NGOClaw-exclusive commands (/stop, /model, /security) are absent from CoPaw.
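The shared command set reduces to a dispatch table over the first token of the message. Handler bodies below are stubs and the function names are hypothetical; the sketch shows only the /new vs /clear distinction (save-then-clear vs wipe) noted above.

```python
# Sketch of slash-command dispatch for the shared core set (illustrative).
history: list[str] = ["old message"]
archive: list[list[str]] = []

def cmd_new() -> str:
    archive.append(history.copy())   # save history, then clear
    history.clear()
    return "started new session"

def cmd_compact() -> str:
    return "compaction triggered"    # would invoke memory compaction here

def cmd_clear() -> str:
    history.clear()                  # wipe without saving
    return "history cleared"

COMMANDS = {"/new": cmd_new, "/compact": cmd_compact, "/clear": cmd_clear}

def dispatch(text: str) -> str:
    handler = COMMANDS.get(text.split()[0])
    return handler() if handler else "unknown command"

print(dispatch("/new"))   # started new session
```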
NGOClaw tools:
bash_exec, read_file, write_file, edit_file
search (grep+glob), web_search, web_fetch
browser, send_photo, send_document
python_exec, gemini_agent (sub-agent)
CoPaw tools:
execute_shell_command, read_file, write_file
edit_file, grep_search, glob_search
browser_use, desktop_screenshot
send_file_to_user, memory_search
Verdict: edit_file (patch-style editing) and grep/glob search splitting show similarity, but these patterns are common across Aider, Cline, and Claude Code — not NGOClaw originals. NGOClaw's Agent-as-Tool and sub-agent patterns are absent from CoPaw.
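Patch-style editing, as popularized by Aider, Cline, and Claude Code, amounts to an exact-match single replacement that fails loudly when the anchor text is missing or ambiguous. The sketch below illustrates the general pattern only; it is not any of these projects' actual tool.

```python
# Sketch of a patch-style edit tool: replace exactly one occurrence of
# old text, refusing to apply a missing or ambiguous patch (illustrative).

def edit_text(content: str, old: str, new: str) -> str:
    count = content.count(old)
    if count == 0:
        raise ValueError("old text not found; patch does not apply")
    if count > 1:
        raise ValueError("old text is ambiguous; provide more context")
    return content.replace(old, new, 1)

source = "def greet():\n    print('hi')\n"
patched = edit_text(source, "print('hi')", "print('hello')")
print(patched)
```

Rejecting ambiguous matches is what makes this safer than a blind string replace: the model must quote enough context to pin down a unique edit site.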
NGOClaw channels:
Adapter direct-connect
TG + CLI + Web
StagedReply (status → deliver)
DraftStream (streaming edits)
InlineKeyboard (security approval)
CoPaw channels:
BaseChannel ABC + ChannelManager
Queue + consumer loop
DingTalk, Feishu, QQ, Discord, iMessage
ConfigWatcher (mtime poll → hot-reload)
MessageRenderer (unified markdown/text)
Verdict: Independent architecture. NGOClaw uses direct adapter connections; CoPaw uses a queue-based channel manager. Different platform targets. NGOClaw's StagedReply, DraftStream, and InlineKeyboard approval are completely absent from CoPaw.
NGOClaw memory:
Session-memory → ~/.ngoclaw/memory/
YYYY-MM-DD.md file-based
conversation.Repository (SQLite)
// No semantic search (planned: sqlite-vec)
CoPaw memory:
MemoryManager (ReMeFs-based)
CoPawInMemoryMemory
AgentMdManager (agent.md)
memory_search tool (semantic search)
Async summary tasks (background)
Verdict: Different implementations. NGOClaw uses file + SQLite storage; CoPaw uses ReMeFs + in-memory. CoPaw has semantic search (more advanced). Source code includes attribution: "inspired by OpenClaw memory architecture".
OpenClaw execution model:
User msg → Gateway (WebSocket)
→ runEmbeddedPiAgent() → external Pi SDK
→ stream events (lifecycle/tools/messages)
❌ No in-process ReAct loop
❌ Tools run in external sandbox
NGOClaw & CoPaw execution model:
User msg → Adapter/Channel → Agent.Run/reply()
→ while(true) { Hook → LLM → Tool → inject }
→ Middleware/Hook intercepts every step
✅ In-process ReAct loop
✅ Tools execute via Toolkit interface
| Design Decision | OpenClaw | NGOClaw | CoPaw | Closer To |
|---|---|---|---|---|
| Loop location | External process (Pi SDK) | Embedded Go loop | Embedded Python loop | NGOClaw |
| Tool execution | Sandbox isolation | In-process ToolExecutor | In-process Toolkit | NGOClaw |
| Hook / Middleware | None (stream events) | Before/After Model | pre_reasoning Hook | NGOClaw |
| Compaction trigger | Auto backend (no command) | /compact + auto | /compact + auto | NGOClaw |
| Skill loading | Workspace SKILL.md | SkillManager glob | ensure_skills_initialized() | NGOClaw |
| System prompt | system-prompt.ts template | .md file concatenation | .md file concatenation | NGOClaw |
| Message format | Transcript JSONL | []LLMMessage array | [Msg] list | NGOClaw |
Result: all seven design decisions in the table align with NGOClaw; none align with OpenClaw. OpenClaw is a platform-style architecture where the Gateway dispatches to an external Agent SDK. NGOClaw and CoPaw are both monolithic agents — running LLM calls, tool execution, and context management in a single process loop.
| # | Design Pattern | Severity | Evidence |
|---|---|---|---|
| 1 | Token-ratio compaction + keep_recent | High | OpenClaw lacks this; parameter structure matches NGOClaw |
| 2 | SKILL.md entry + SkillInfo naming | High | Naming is identical |
| 3 | Pre-LLM Hook injection architecture | Medium | Concept matches, but uses AgentScope framework |
| 4 | /compact + /new command semantics | Medium | Command names + behavior are identical |
| 5 | .md prompt loading + rebuild_sys_prompt() | Medium | Pattern + method name are identical |
| 6 | edit_file patch-style tool | Low | Common in Aider / Cline / Claude Code |
| 7 | grep + glob search splitting | Low | Industry standard pattern |
| 8 | Config hot-reload | Low | Common engineering practice |
NGOClaw-exclusive designs absent from CoPaw:
Telegram phased card rendering (status → deliver)
Streaming text segmented edit pushes
In-Telegram tool execution approval flow
Sliding window repetition detection + forced reflection
Per-model behavior strategies (reasoning_format, repair_tool_pairing)
Declarative sub-agent YAML → tool registration
domain / infrastructure / application / interfaces
Remote agent execution over gRPC
No step limits — natural termination by token exhaustion
CoPaw's borrowing from NGOClaw falls into three tiers, matching the severity levels in the table above: high-similarity designs (token-ratio compaction, the SKILL.md convention, the rebuild_sys_prompt() method naming), concept-level borrowing (Hook architecture, command semantics), and industry-standard patterns.

Core finding: CoPaw attributes inspiration to "OpenClaw", but comparison reveals that several key designs — notably token-ratio compaction and Hook architecture — are NGOClaw originals that do not exist in OpenClaw. The actual scope of borrowing is broader than what is attributed. CoPaw's execution model is a Python rewrite of NGOClaw's architecture, not OpenClaw's.
This open-source release is just the beginning. The CoPaw development team is actively exploring the next generation of personal AI assistant capabilities.
Lightweight local models handle private and sensitive data while powerful cloud models tackle planning, coding, and complex reasoning — balancing security, performance, and capability.
Voice and video call capabilities with your CoPaw personal assistant. Expect richer, more natural ways to interact beyond text.
Growing the skill marketplace, broadening channel support, and deepening the AgentScope framework integration for ever more capable personal agents.
CoPaw stands for Co Personal Agent Workstation. It is an open-source personal AI assistant built on the AgentScope framework, developed by the AgentScope AI team. CoPaw supports multi-channel chat applications, local LLM execution, and a modular agent architecture — designed to give you full control over your AI assistant.
The recommended method is pip install copaw (requires Python 3.10+). You can also use the one-line installer script for macOS/Linux, deploy via Docker (docker pull agentscope/copaw:latest), or use one-click cloud deployment on ModelScope Studio. After installation, run copaw init --defaults && copaw app to launch the console at http://127.0.0.1:8088/.
Yes. CoPaw supports running LLMs entirely on your local machine via llama.cpp (cross-platform: macOS, Linux, Windows) and MLX (optimized for Apple Silicon M1/M2/M3/M4). No API keys or cloud services required. Use copaw models download to manage local models.
CoPaw natively supports DingTalk, Feishu (Lark), QQ, Discord, and iMessage. Developers can also build custom channel plugins using the built-in channel registry and manage them via CLI commands (list, install, remove, config).
Yes. CoPaw is released under the Apache License 2.0 and is free to use, modify, and distribute. The source code is available on GitHub. Contributions from the community are welcomed.
CoPaw is built upon the AgentScope framework — a production-ready, developer-centric framework for building and running intelligent agents with built-in support for tools and model integration. CoPaw serves as a reference implementation and key application within the AgentScope ecosystem, leveraging its abstractions and capabilities to deliver personal AI assistant functionalities.
CoPaw is built and maintained by the AgentScope AI team and actively welcomes community contributions. The project is part of the broader AgentScope ecosystem, which has been cited in academic publications including "AgentScope 1.0: A Developer-Centric Framework for Building Agentic Applications" on arXiv, demonstrating both academic rigor and production readiness.
Whether you want to submit a pull request, report an issue, or build a custom skill or channel plugin, the CoPaw community is open and growing. Documentation is available in both English and Chinese to support developers worldwide.
Deploy your personal AI assistant in minutes. Open-source, extensible, and privacy-first.