Open Source · Apache 2.0

CoPaw - Co Personal Agent Workstation

CoPaw — Co Personal Agent Workstation — is an open-source AI assistant built on the AgentScope framework. Deploy on your machine or in the cloud, connect your favorite chat apps, and run local LLMs with full privacy control. A warm little "paw" that is always ready to help.

$ curl -fsSL https://copaw.agentscope.io/install.sh | bash
$ pip install copaw
$ docker run -p 8088:8088 agentscope/copaw:latest

What Makes CoPaw Powerful

CoPaw is more than a chatbot. It is a fully modular personal agent workstation that gives you complete control over your AI assistant — from the model it runs to the channels it connects to.

Multi-Channel Chat Integration

Natively connect CoPaw to DingTalk, Feishu, QQ, Discord, iMessage, and more. One assistant, multiple channels — reach your AI wherever you communicate.

Local LLM Execution

Run large language models entirely on your machine — no API keys, no cloud dependencies. CoPaw supports llama.cpp (cross-platform) and MLX (Apple Silicon M1–M4).

Modular Agent Architecture

CoPaw's agent core is fully modular — Prompt, Hooks, Tools, and Memory are decoupled components. Replace or extend any module independently and assemble your own agent.

Extensible Skill System

Built-in cron scheduling and custom skills that auto-load from your workspace. CoPaw ships with practical skills out of the box, and developers can create, install, or remove skills via CLI — no vendor lock-in.

Long-Term Memory & Heartbeat

CoPaw proactively remembers your decisions, preferences, and to-do items via its long-term memory system. Its innovative heartbeat mechanism lets it autonomously perform scheduled tasks — check emails, compile reports, organize to-dos — without being asked.
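
A heartbeat loop of this kind can be sketched in a few lines of Python. Everything below is illustrative — the `HeartbeatTask` name and `tick` function are hypothetical, not CoPaw's actual API:

```python
import time
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class HeartbeatTask:
    """A task the agent runs proactively on a fixed interval (hypothetical API)."""
    name: str
    interval_s: float
    action: Callable[[], str]
    last_run: float = field(default=0.0)

def tick(tasks: list[HeartbeatTask], now: float) -> list[str]:
    """One heartbeat: run every task whose interval has elapsed since its last run."""
    results = []
    for task in tasks:
        if now - task.last_run >= task.interval_s:
            results.append(f"{task.name}: {task.action()}")
            task.last_run = now
    return results
```

On each heartbeat the agent checks which scheduled tasks are due and runs them without any user prompt — the same shape as "check emails every hour" or "compile a report each morning."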

Full User Control & Privacy

You own every piece of data. Deploy CoPaw locally or in the cloud, choose your own model — mainstream cloud APIs, self-hosted inference, Ollama, or Apple Silicon native — and keep everything under your control.

Built on the AgentScope Framework

CoPaw is a key application within the AgentScope ecosystem — a production-ready, developer-centric framework for building and running intelligent agents. The technology stack combines Python (72.8%) for backend logic with TypeScript (22.2%) for the Console frontend.

Model Management

Unified model layer supporting cloud APIs (Qwen series and mainstream models), self-hosted inference services, Ollama, llama.cpp, and MLX for Apple Silicon local execution.
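
A unified model layer usually means one common interface that every backend adapts to. The sketch below uses made-up names (`ChatModel`, `EchoModel`, `register`) to illustrate the idea; it is not CoPaw's real model registry:

```python
from typing import Protocol

class ChatModel(Protocol):
    """Common interface every backend adapter implements (illustrative)."""
    def chat(self, prompt: str) -> str: ...

class EchoModel:
    """Stand-in backend; a real adapter would wrap Ollama, llama.cpp, MLX, or a cloud API."""
    def __init__(self, name: str) -> None:
        self.name = name
    def chat(self, prompt: str) -> str:
        return f"{self.name}: {prompt}"

REGISTRY: dict[str, ChatModel] = {}

def register(name: str, model: ChatModel) -> None:
    REGISTRY[name] = model

def chat(model_name: str, prompt: str) -> str:
    """Route a request to whichever backend is registered under this name."""
    return REGISTRY[model_name].chat(prompt)
```

Because callers only see `chat(model_name, prompt)`, swapping a cloud API for a local llama.cpp model is a registry change, not a code change.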

Agent Core

Decoupled modules — Prompt, Hooks, Tools, Memory — that can be independently replaced or extended. Developers assemble agents from interchangeable building blocks.
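
Assembling an agent from interchangeable parts can be illustrated with plain protocols and dataclasses. The names below are hypothetical and the LLM call is stubbed out; this is a sketch of the pattern, not CoPaw's agent core:

```python
from dataclasses import dataclass
from typing import Callable, Protocol

class Memory(Protocol):
    def recall(self) -> list[str]: ...
    def store(self, msg: str) -> None: ...

class ListMemory:
    """Minimal in-memory message store standing in for a Memory module."""
    def __init__(self) -> None:
        self._msgs: list[str] = []
    def recall(self) -> list[str]:
        return list(self._msgs)
    def store(self, msg: str) -> None:
        self._msgs.append(msg)

@dataclass
class Agent:
    """Agent assembled from interchangeable building blocks (illustrative)."""
    prompt: str
    memory: Memory
    tools: dict[str, Callable[[str], str]]

    def reply(self, user_msg: str) -> str:
        self.memory.store(user_msg)
        # A real agent would call an LLM here; we report context size instead.
        return f"[{self.prompt}] seen {len(self.memory.recall())} message(s)"
```

Replacing `ListMemory` with a persistent store, or adding a tool to the dict, changes one component without touching the others.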

Channel Layer

Unified protocol and types across all messaging platforms. A channel registry with CLI commands (list, install, remove, config) lets you manage channels like plugins.

Message Queue

Built-in consumption and queue mechanism ensures reliable message processing across multiple simultaneous channels — no dropped messages even under heavy load.
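
The shape of such a consumer loop can be sketched with the standard-library `queue` module. The `consume` function is illustrative, not CoPaw's actual implementation:

```python
import queue

def consume(q: queue.Queue, handler, n: int) -> list[str]:
    """Drain n (channel, message) items from the queue, marking each one done
    so the producer side can track completion and nothing is silently dropped."""
    processed = []
    for _ in range(n):
        channel, msg = q.get()
        processed.append(handler(channel, msg))
        q.task_done()
    return processed
```

Producers from every connected channel push into one queue; a single consumer loop then processes messages in arrival order regardless of which platform they came from.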

Skill Engine

Auto-loaded skills from the user workspace with built-in cron scheduling. Skills are first-class citizens — discoverable, composable, and independently deployable.

Memory System

Persistent long-term memory that proactively captures decisions, preferences, and to-do items from conversations. The longer you use CoPaw, the better it understands you.
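
The retrieval side of such a memory can be sketched with a toy relevance function. Real semantic search uses embeddings; the word-overlap scoring below only illustrates the interface shape and is not CoPaw's implementation:

```python
def memory_search(memories: list[str], query: str, top_k: int = 2) -> list[str]:
    """Toy relevance search: rank stored memories by word overlap with the query,
    return the top matches, and drop memories with no overlap at all."""
    q = set(query.lower().split())
    scored = sorted(
        memories,
        key=lambda m: len(q & set(m.lower().split())),
        reverse=True,
    )
    return [m for m in scored[:top_k] if q & set(m.lower().split())]
```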

One CoPaw Assistant, Connect as You Need

CoPaw integrates natively with the messaging platforms you already use. The unified channel protocol ensures consistent behavior and reliable delivery across every connected platform.

DingTalk · Feishu (Lark) · QQ · Discord · iMessage · Custom Channels

Developers can build custom channel plugins using the channel registry mechanism. Install, remove, and configure channels via CLI commands — manage your messaging integrations just like you manage packages.

Install & Deploy CoPaw

CoPaw offers one of the lowest deployment barriers among open-source agent tools. Choose the method that fits your workflow — from a single pip command to Docker and cloud deployment.

For users with Python 3.10+ environments. The fastest way to get started.

pip install copaw

# Initialize and launch
copaw init --defaults
copaw app

# Open http://127.0.0.1:8088/ in your browser

One-line install that automatically sets up the Python environment. Ideal for first-time users.

# macOS / Linux
curl -fsSL https://copaw.agentscope.io/install.sh | bash

# Windows — see docs for PowerShell method

Containerized deployment with persistent data volume. Two commands to a running instance.

docker pull agentscope/copaw:latest
docker run -p 8088:8088 \
  -v copaw-data:/app/working \
  agentscope/copaw:latest

One-click deployment on ModelScope Studio or Alibaba Cloud Computing Nest — no local setup required.

# Visit ModelScope Studio for one-click deploy
# Or use Alibaba Cloud Computing Nest
# See official docs for detailed guides

Your Digital Life Teammate

CoPaw is designed for real daily workflows — from automating email digests to creative content drafts. Combine built-in skills with cron scheduling to build your own agentic workflows.

Social & News Aggregation

Automatically compile daily digests of trending posts from Xiaohongshu, Zhihu, and Reddit. Summarize Bilibili and YouTube videos. Stay informed without information overload.

Work Productivity

Aggregate and summarize high-volume emails. Generate and organize weekly reports with one click. Extract contacts from emails and calendar events to streamline your workflow.

Creative Workflows

Describe your creative goal, let CoPaw work overnight, and receive a polished draft the next morning. From video scripts to social media content — ideation at scale.

Research & Knowledge

Track technology and AI news automatically. Build a personal knowledge base that grows with you. Crawl, organize, and summarize information from across the web.

Desktop Assistant

Organize files, read and summarize documents, request files through your chat interface. CoPaw bridges the gap between your desktop and your messaging apps.

Health & Lifestyle

Track and analyze your daily diet and fitness data. Record habits, set reminders for routines, and let CoPaw help you stay consistent with personal goals.

How CoPaw Compares

CoPaw occupies a unique position in the personal AI assistant space — combining multi-channel chat integration, local model support, and full user control in a single open-source workstation.

Capability | CoPaw | AutoGPT | CrewAI | Cloud Assistants
Core Focus | Personal Agent Workstation | Autonomous Task Agent | Multi-Agent Orchestration | General Conversational AI
Deployment | Local / Cloud / Docker | Local / Cloud | Local / Cloud | Cloud Only
Multi-Channel Chat | Native (5+ platforms) | Limited | Limited | API-based
Local Model Support | llama.cpp + MLX
Privacy Control | Full (local deploy) | Moderate | Moderate | Limited
Skill / Plugin System | Built-in + CLI | Plugins | Custom Agents | Varies
Proactive Scheduling | Heartbeat + Cron
License | Apache 2.0 | MIT | MIT | Proprietary

CoPaw vs NGOClaw — Independent Architecture Audit

Audit scope: All 190 Python source files under src/copaw/. This audit compares CoPaw's agent design module-by-module against NGOClaw's architecture to identify shared patterns, independent implementations, and attribution accuracy.

Audit date: 2026-02-28

Agent Core Loop

NGOClaw (agent_loop.go)
ReAct loop → LLM → Tool exec → Inject result
├── MiddlewarePipeline (Before/After Model)
├── ContextGuard (token-ratio compaction)
├── DoomLoop detection (sliding window)
├── SecurityHook (tool approval)
└── No MaxSteps (token budget terminates)
CoPaw (react_agent.py)
ReActAgent base → AgentScope framework
├── pre_reasoning Hook (Bootstrap, Compaction)
├── MemoryCompactionHook (token-ratio)
├── CommandHandler (slash commands)
├── max_iters=50 (step limit)
└── No DoomLoop, No SecurityHook

Verdict: Different frameworks (Go vs AgentScope/Python) and different step strategies (token budget vs max_iters=50). The pre-LLM Hook injection concept is borrowed, but the implementation diverges. Per-step automatic compaction behavior is consistent across both.

Context Compaction Mechanism (Highest Similarity)

NGOClaw
// agent_loop.go
ContextHardRatio: 0.85  // trigger at 85%
CompactKeepLast:  10    // keep last 10 msgs
// Extract middle → LLM summary → replace
CoPaw
# react_agent.py
threshold = int(max_input_length * 0.8)  # 80%
# memory_compaction.py
keep_recent = MEMORY_COMPACT_KEEP_RECENT  # 5
# Extract middle → LLM summary → COMPRESSED

Verdict: High similarity. Token-ratio triggered compaction is an NGOClaw original — OpenClaw uses message-count thresholds, not token ratios. The keep-recent-N tail strategy and per-step auto-check via Hook/Middleware are structurally identical. CoPaw's docs attribute inspiration to "OpenClaw", but OpenClaw does not implement token-ratio compaction.
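
The token-ratio trigger and keep-recent-N tail described above can be sketched as follows. Token counting and summarization are stubbed (a real agent estimates tokens properly and asks the LLM for the summary), and the names are illustrative rather than CoPaw's actual API:

```python
def compact(messages: list[str], max_tokens: int,
            ratio: float = 0.8, keep_recent: int = 5) -> list[str]:
    """Token-ratio compaction: when the estimated token count exceeds
    ratio * max_tokens, replace the middle of the history with a summary
    placeholder while keeping the head and the last keep_recent messages."""
    est = sum(len(m.split()) for m in messages)  # crude word-count stand-in for tokens
    if est <= int(max_tokens * ratio):
        return messages  # under threshold: nothing to do
    head, tail = messages[:1], messages[-keep_recent:]
    summary = f"[COMPRESSED {len(messages) - 1 - keep_recent} messages]"
    return head + [summary] + tail
```

Running this check on every step (via a Hook or Middleware, as both projects do) keeps the context under budget without any manual /compact command.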

Prompt Loading System (Partial)

NGOClaw (prompt_engine.go)
// Layered: System + Workspace
// ~/.ngoclaw/prompts/*.md (alphabetical)
// workspace/.ngoclaw/prompts/*.md
// Channel-specific overrides
// YAML frontmatter: requires.channel
// rebuild_sys_prompt() at runtime
CoPaw (prompt.py)
# Single-layer: working_dir only
FILE_ORDER = [
    ("AGENTS.md", True),   # required
    ("SOUL.md", True),     # required
    ("PROFILE.md", False), # optional
]
# rebuild_sys_prompt() at runtime

Verdict: Partial borrowing. The ".md files concatenated into system prompt" pattern originates from NGOClaw. The method name rebuild_sys_prompt() is identical. However, CoPaw uses single-layer hardcoded order vs NGOClaw's dual-layer alphabetical + frontmatter filtering.
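
The single-layer concatenation pattern can be sketched directly from the FILE_ORDER table shown above. The builder function name here is illustrative; only the file names and required flags come from the snippet:

```python
from pathlib import Path

FILE_ORDER = [
    ("AGENTS.md", True),    # required
    ("SOUL.md", True),      # required
    ("PROFILE.md", False),  # optional
]

def build_sys_prompt(working_dir: Path) -> str:
    """Concatenate .md files from one directory, in a fixed order,
    into a single system prompt; missing required files are an error."""
    parts = []
    for name, required in FILE_ORDER:
        path = working_dir / name
        if path.exists():
            parts.append(path.read_text())
        elif required:
            raise FileNotFoundError(name)
    return "\n\n".join(parts)
```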

Skills System (High)

NGOClaw (skill_manager.go)
// ~/.ngoclaw/skills/[name]/SKILL.md
// Auto-discover → load → inject prompt
type SkillInfo struct {
    Name, Description, Path string
}
CoPaw (skills_manager.py)
# Three tiers: builtin → customized → active
class SkillInfo(BaseModel):
    name: str
    content: str
    source: str   # "builtin"/"customized"/"active"
    path: str
# SKILL.md frontmatter → register

Verdict: Borrowed. The SKILL.md entry file convention and SkillInfo naming are directly from NGOClaw/OpenClaw. CoPaw extends with a three-tier directory structure (builtin/custom/active), which goes beyond the original single-tier design.
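
The three-tier structure implies an override order when the same skill name appears in more than one tier. The sketch below assumes active overrides customized, which overrides builtin; that ordering is an assumption for illustration, and the dataclass stands in for the Pydantic model shown above:

```python
from dataclasses import dataclass

@dataclass
class SkillInfo:
    """Mirrors the skill record above (plain dataclass instead of Pydantic)."""
    name: str
    content: str
    source: str  # "builtin" / "customized" / "active"
    path: str

def resolve_skills(tiers: dict[str, list[SkillInfo]]) -> dict[str, SkillInfo]:
    """Merge skill tiers; later tiers win: builtin -> customized -> active."""
    resolved: dict[str, SkillInfo] = {}
    for tier in ("builtin", "customized", "active"):
        for skill in tiers.get(tier, []):
            resolved[skill.name] = skill
    return resolved
```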

Slash Command System (Medium)

NGOClaw
/new     — save history → clear
/compact — manual compaction
/clear   — wipe history
/stop    — interrupt execution
/model   — switch model
/security, /trust, /skills, /agent
CoPaw (command_handler.py)
/new     — async summary → clear
/compact — manual compaction
/clear   — wipe history
/history — display history
# No /stop, /model, /security

Verdict: The core command set (/new, /compact, /clear) is identical in name and semantics. NGOClaw-exclusive commands (/stop, /model, /security) are absent from CoPaw.
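
A slash-command dispatcher of this kind is a small registry keyed by command name. The decorator-based sketch below is illustrative, not either project's actual handler:

```python
from typing import Callable

COMMANDS: dict[str, Callable[[list[str]], str]] = {}

def command(name: str):
    """Register a slash-command handler under its name (illustrative dispatcher)."""
    def wrap(fn):
        COMMANDS[name] = fn
        return fn
    return wrap

@command("/clear")
def clear(history: list[str]) -> str:
    """Wipe the conversation history, matching the /clear semantics above."""
    history.clear()
    return "history cleared"

def dispatch(text: str, history: list[str]) -> str:
    """Route leading slash-commands to their handler; pass everything else through."""
    name = text.split()[0]
    if name in COMMANDS:
        return COMMANDS[name](history)
    return "not a command"
```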

Tool Design

NGOClaw
bash_exec, read_file, write_file, edit_file
search (grep+glob), web_search, web_fetch
browser, send_photo, send_document
python_exec, gemini_agent (sub-agent)
CoPaw
execute_shell_command, read_file, write_file
edit_file, grep_search, glob_search
browser_use, desktop_screenshot
send_file_to_user, memory_search

Verdict: edit_file (patch-style editing) and grep/glob search splitting show similarity, but these patterns are common across Aider, Cline, and Claude Code — not NGOClaw originals. NGOClaw's Agent-as-Tool and sub-agent patterns are absent from CoPaw.

Channel / Messaging Architecture

NGOClaw
Adapter direct-connect
TG + CLI + Web
StagedReply (status → deliver)
DraftStream (streaming edits)
InlineKeyboard (security approval)
CoPaw
BaseChannel ABC + ChannelManager
Queue + consumer loop
DingTalk, Feishu, QQ, Discord, iMessage
ConfigWatcher (mtime poll → hot-reload)
MessageRenderer (unified markdown/text)

Verdict: Independent architecture. NGOClaw uses direct adapter connections; CoPaw uses a queue-based channel manager. Different platform targets. NGOClaw's StagedReply, DraftStream, and InlineKeyboard approval are completely absent from CoPaw.

Memory System

NGOClaw
Session-memory → ~/.ngoclaw/memory/
  YYYY-MM-DD.md file-based
conversation.Repository (SQLite)
// No semantic search (planned: sqlite-vec)
CoPaw
MemoryManager (ReMeFs-based)
CoPawInMemoryMemory
AgentMdManager (agent.md)
memory_search tool (semantic search)
Async summary tasks (background)

Verdict: Different implementations. NGOClaw uses file + SQLite storage; CoPaw uses ReMeFs + in-memory. CoPaw has semantic search (more advanced). Source code includes attribution: "inspired by OpenClaw memory architecture".

Execution Model Tracing (Critical)

OpenClaw (platform-style)
User msg → Gateway (WebSocket)
  → runEmbeddedPiAgent() → external Pi SDK
  → stream events (lifecycle/tools/messages)
  ❌ No in-process ReAct loop
  ❌ Tools run in external sandbox
NGOClaw + CoPaw (monolithic)
User msg → Adapter/Channel → Agent.Run/reply()
  → while(true) { Hook → LLM → Tool → inject }
  → Middleware/Hook intercepts every step
  ✅ In-process ReAct loop
  ✅ Tools execute via Toolkit interface
Design Decision | OpenClaw | NGOClaw | CoPaw | Closer To
Loop location | External process (Pi SDK) | Embedded Go loop | Embedded Python loop | NGOClaw
Tool execution | Sandbox isolation | In-process ToolExecutor | In-process Toolkit | NGOClaw
Hook / Middleware | None (stream events) | Before/After Model | pre_reasoning Hook | NGOClaw
Compaction trigger | Auto backend (no command) | /compact + auto | /compact + auto | NGOClaw
Skill loading | Workspace SKILL.md | SkillManager glob | ensure_skills_initialized() | NGOClaw
System prompt | system-prompt.ts template | .md file concatenation | .md file concatenation | NGOClaw
Message format | Transcript JSONL | []LLMMessage array | [Msg] list | NGOClaw

Result: 7/8 design decisions align with NGOClaw. 0/8 align with OpenClaw. OpenClaw is a platform-style architecture where the Gateway dispatches to an external Agent SDK. NGOClaw and CoPaw are both monolithic agents — running LLM calls, tool execution, and context management in a single process loop.

Borrowing Assessment by Severity

# | Design Pattern | Severity | Evidence
1 | Token-ratio compaction + keep_recent | High | OpenClaw lacks this; parameter structure matches NGOClaw
2 | SKILL.md entry + SkillInfo naming | High | Naming is identical
3 | Pre-LLM Hook injection architecture | Medium | Concept matches, but uses AgentScope framework
4 | /compact + /new command semantics | Medium | Command names + behavior are identical
5 | .md prompt loading + rebuild_sys_prompt() | Medium | Pattern + method name are identical
6 | edit_file patch-style tool | Low | Common in Aider / Cline / Claude Code
7 | grep + glob search splitting | Low | Industry standard pattern
8 | Config hot-reload | Low | Common engineering practice

NGOClaw Originals — Not Borrowed by CoPaw

StagedReply

Telegram phased card rendering (status → deliver)

DraftStream

Streaming text segmented edit pushes

SecurityHook + InlineKeyboard

In-Telegram tool execution approval flow

DoomLoop Detection

Sliding window repetition detection + forced reflection

ModelPolicy

Per-model behavior strategies (reasoning_format, repair_tool_pairing)

Agent-as-Tool

Declarative sub-agent YAML → tool registration

DDD Domain Layering

domain / infrastructure / application / interfaces

gRPC Agent Protocol

Remote agent execution over gRPC

Token Budget

No step limits — natural termination by token exhaustion

Final Conclusion

CoPaw's borrowing from NGOClaw falls into three tiers:

  1. Attributed borrowing (labeled "OpenClaw"): Skills, Memory, and basic Prompt patterns
  2. Unattributed borrowing: Token-ratio compaction, Pre-LLM Hook architecture, rebuild_sys_prompt() method naming
  3. Independent implementation: Channel architecture, AgentScope framework integration, ConfigWatcher, Heartbeat mechanism

Core finding: CoPaw attributes inspiration to "OpenClaw", but comparison reveals that several key designs — notably token-ratio compaction and Hook architecture — are NGOClaw originals that do not exist in OpenClaw. The actual scope of borrowing is broader than what is attributed. CoPaw's execution model is a Python rewrite of NGOClaw's architecture, not OpenClaw's.

CoPaw Roadmap

This open-source release is just the beginning. The CoPaw development team is actively exploring the next generation of personal AI assistant capabilities.

Large-Small Model Collaboration

Lightweight local models handle private and sensitive data while powerful cloud models tackle planning, coding, and complex reasoning — balancing security, performance, and capability.

Multimodal Interaction

Voice and video call capabilities with your CoPaw personal assistant. Expect richer, more natural ways to interact beyond text.

Expanded Ecosystem

Growing the skill marketplace, broadening channel support, and deepening the AgentScope framework integration for ever more capable personal agents.

Frequently Asked Questions

What is CoPaw?

CoPaw stands for Co Personal Agent Workstation. It is an open-source personal AI assistant built on the AgentScope framework, developed by the AgentScope AI team. CoPaw supports multi-channel chat applications, local LLM execution, and a modular agent architecture — designed to give you full control over your AI assistant.

How do I install CoPaw?

The recommended method is pip install copaw (requires Python 3.10+). You can also use the one-line installer script for macOS/Linux, deploy via Docker (docker pull agentscope/copaw:latest), or use one-click cloud deployment on ModelScope Studio. After installation, run copaw init --defaults && copaw app to launch the console at http://127.0.0.1:8088/.

Does CoPaw support local models?

Yes. CoPaw supports running LLMs entirely on your local machine via llama.cpp (cross-platform: macOS, Linux, Windows) and MLX (optimized for Apple Silicon M1/M2/M3/M4). No API keys or cloud services required. Use copaw models download to manage local models.

Which chat platforms does CoPaw support?

CoPaw natively supports DingTalk, Feishu (Lark), QQ, Discord, and iMessage. Developers can also build custom channel plugins using the built-in channel registry and manage them via CLI commands (list, install, remove, config).

Is CoPaw free and open-source?

Yes. CoPaw is released under the Apache License 2.0 and is free to use, modify, and distribute. The source code is available on GitHub. Contributions from the community are welcomed.

What is the relationship between CoPaw and AgentScope?

CoPaw is built upon the AgentScope framework — a production-ready, developer-centric framework for building and running intelligent agents with built-in support for tools and model integration. CoPaw serves as a reference implementation and key application within the AgentScope ecosystem, leveraging its abstractions and capabilities to deliver personal AI assistant functionalities.

Open Source & Growing

634+ GitHub Stars · 54+ Forks · Apache 2.0 License · Python + TS Tech Stack

CoPaw is built and maintained by the AgentScope AI team and actively welcomes community contributions. The project is part of the broader AgentScope ecosystem, which has been cited in academic publications including "AgentScope 1.0: A Developer-Centric Framework for Building Agentic Applications" on arXiv, demonstrating both academic rigor and production readiness.

Whether you want to submit a pull request, report an issue, or build a custom skill or channel plugin, the CoPaw community is open and growing. Documentation is available in both English and Chinese to support developers worldwide.

Start Building with CoPaw Today

Deploy your personal AI assistant in minutes. Open-source, extensible, and privacy-first.