A Spring Boot AI assistant that wires ReAct agents, MCP protocol, and seven chat platforms into one stack.
Why This Matters Right Now
The multi-agent gold rush has been almost exclusively a Python story. LangChain, CrewAI, AutoGen — they all assume you live in a pip-install world. Meanwhile, a huge share of enterprise backends run on the JVM. Spring Boot shops have been left watching the agentic AI wave roll past them, duct-taping Python microservices to their JVM monoliths just to get a ReAct loop running.
MateClaw is a direct answer to that gap. It's a full-stack AI assistant framework built on Java 17+ and Vue 3, powered by Spring AI Alibaba — and it ships with multi-agent orchestration, MCP protocol support, multi-layer memory, and adapters for seven messaging platforms out of the box. At 13 stars it's barely on anyone's radar. That's exactly when you want to be paying attention.
What It Actually Does
Let's be concrete, because "AI assistant" means nothing in 2025 without specifics.
MateClaw runs two agent execution modes. The first is ReAct — the classic Thought → Action → Observation loop that lets an agent reason through tool use iteratively. The second is Plan-and-Execute, which decomposes a complex user request into ordered sub-steps before execution begins. You can create multiple independent agents, each with its own persona, toolset, and memory scope. This isn't a single chatbot with a system prompt — it's a configurable agent fleet.
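To make the first of those two modes concrete, here is a minimal sketch of a ReAct-style Thought → Action → Observation loop in plain Java. Everything in it is illustrative — the class name, the stubbed tools, and the scripted "reasoner" are not MateClaw's API, just the shape of the control flow:

```java
import java.util.*;
import java.util.function.Function;

// Minimal sketch of a ReAct (Thought -> Action -> Observation) loop.
// Tool names and the "reasoner" contract are illustrative, not MateClaw's API.
public class ReActSketch {
    // A tool maps an input string to an observation string. The two entries
    // stand in for the built-in date/time and web search tools.
    static final Map<String, Function<String, String>> TOOLS = Map.of(
        "datetime", arg -> "2025-01-01T00:00:00Z",
        "web_search", arg -> "stub result for: " + arg
    );

    /**
     * The reasoner plays the role of the model: given all observations so far,
     * it returns the next {tool, input} pair, or null when it has enough to answer.
     */
    public static List<String> run(Function<List<String>, String[]> reasoner) {
        List<String> trace = new ArrayList<>();
        List<String> observations = new ArrayList<>();
        while (true) {
            String[] next = reasoner.apply(observations);
            if (next == null) break; // model decided it can answer now
            String tool = next[0], input = next[1];
            trace.add("Thought: call " + tool);
            trace.add("Action: " + tool + "(" + input + ")");
            String obs = TOOLS.getOrDefault(tool, a -> "unknown tool").apply(input);
            observations.add(obs);
            trace.add("Observation: " + obs);
        }
        trace.add("Final Answer: finished after " + observations.size() + " tool call(s)");
        return trace;
    }
}
```

The key difference from Plan-and-Execute is visible in the loop: each action is chosen *after* seeing the previous observation, whereas Plan-and-Execute would fix the ordered sub-steps up front and then run them.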
The tool system is layered. Built-in tools cover web search and date/time. Beyond that, MateClaw implements the MCP (Model Context Protocol), with GitHub and Filesystem MCP servers pre-configured — you enable them and they're live. Additional skill packages can be installed from ClawHub, which appears to be a first-party marketplace for extending agent capabilities. Custom MCP sources are also supported.
Memory is handled in four layers: a short-term context window with auto-compression, event-driven post-conversation memory extraction, workspace files (PROFILE.md, MEMORY.md, and daily notes that agents can read and write), and scheduled memory consolidation. The file-based memory approach is pragmatic — it's inspectable, version-controllable, and doesn't require a vector database to get started.
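The workspace-file layer is simple enough to sketch with nothing but the JDK. The helper class below is hypothetical — only the file names (PROFILE.md-style markdown, MEMORY.md, dated daily notes) come from the README; the method names and layout are assumptions:

```java
import java.io.IOException;
import java.nio.file.*;
import java.time.LocalDate;

// Sketch of a file-based memory layer: agents read and append markdown files
// in a workspace directory. File names mirror the ones the README mentions;
// the helper methods themselves are illustrative, not MateClaw's API.
public class WorkspaceMemory {
    private final Path root;

    public WorkspaceMemory(Path root) throws IOException {
        this.root = Files.createDirectories(root);
    }

    /** Append a consolidated fact to long-term memory (MEMORY.md). */
    public void remember(String fact) throws IOException {
        Files.writeString(root.resolve("MEMORY.md"), "- " + fact + "\n",
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    /** Append an entry to today's daily note, e.g. 2025-01-01.md. */
    public void logToday(String entry) throws IOException {
        Files.writeString(root.resolve(LocalDate.now() + ".md"), entry + "\n",
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    /** Read long-term memory back; empty string if nothing is stored yet. */
    public String recall() throws IOException {
        Path mem = root.resolve("MEMORY.md");
        return Files.exists(mem) ? Files.readString(mem) : "";
    }
}
```

Because the store is just markdown on disk, `git diff` on the workspace directory shows you exactly what an agent learned — the inspectability claim above in practice.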
Channel support is genuinely broad: web console, DingTalk, Feishu, WeChat Work, Telegram, Discord, and QQ. The multi-channel adapter architecture means one configured agent can respond across all of them simultaneously.
Model support covers 20+ providers: OpenAI, Anthropic, Google Gemini, DeepSeek, DashScope, Kimi, MiniMax, Zhipu AI, Ollama, LM Studio, OpenRouter, and more. Provider configuration is handled through the web UI rather than raw config files, which lowers the operational friction significantly.
Technical Deep-Dive
The backend is mateclaw-server, a Spring Boot 3.5 application targeting Java 17+. It uses Maven (mvnw is included) and defaults to H2 for local development — the H2 console is exposed at http://localhost:18088/h2-console, which means zero database setup friction for first-run evaluation. SpringDoc OpenAPI is integrated for API documentation.
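That zero-setup H2 arrangement corresponds to a handful of standard Spring Boot properties. The fragment below is a hypothetical sketch using stock property names — only the port and the `/h2-console` path come from the source; the datasource URL is illustrative and MateClaw's shipped configuration may differ:

```properties
# Hypothetical application.properties sketch (standard Spring Boot property
# names; MateClaw's actual defaults may differ).
server.port=18088
# Exposes the console at http://localhost:18088/h2-console
spring.h2.console.enabled=true
# Illustrative file-backed H2 datasource
spring.datasource.url=jdbc:h2:file:./data/mateclaw
```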
The frontend lives in mateclaw-ui, built with Vue 3 and pnpm. There's also an Electron wrapper for desktop distribution with auto-update support — so this isn't just a dev tool, it's aiming at end-user deployability.
The Spring AI Alibaba foundation is worth noting. This is Alibaba's Spring-native AI framework, meaning the agent orchestration primitives are built on top of Spring's dependency injection and configuration model. If you're a Spring developer, the mental model for extending agents with new tools or memory backends will feel familiar rather than foreign.
The topics list in the repository — react-agent, plan-and-execute, mcp-protocol, tool-calling, skills, multi-agent — maps almost one-to-one with the architectural layers described in the README. This isn't vaporware labeling; the feature surface appears genuinely implemented.
Starting the backend is straightforward