FINCLAW · YOUR PERSONAL INVESTMENT AI
A personal investment AI
that lives in your IM and on your desktop.
FinClaw isn't a tool: it's a researcher who's always on. In your IM it chats about markets, pushes briefings, and flags anomalies. On your desktop it maintains theses, runs backtests, and attributes returns. Underneath sits a Bridgewater-style five-layer research engine spanning the US, A-share, and HK markets. Every trade carries a thesis, evidence, and a postmortem.
One assistant, two postures
Mobile is the researcher at your ear: briefings, conversation, anomaly alerts; light decisions in pocket time. Desktop is the war room: the full thesis lifecycle, gate approvals, backtests, attribution. Both share one memory and one evidence ledger.
The researcher in your pocket
- Scheduled daily briefings (macro-causal engine + portfolio sweep)
- Conversational queries: quotes, thesis status, gate pre-checks
- Trigger alerts: price crossings, valuation drift, news shocks
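A price-crossing trigger of this kind can be sketched as a stateful threshold check. A minimal sketch: `AlertRule`, its field names, and the example levels are illustrative, not FinClaw's actual API.

```python
from dataclasses import dataclass

@dataclass
class AlertRule:
    """Hypothetical trigger rule: fire when the price crosses a level."""
    symbol: str
    level: float
    direction: str  # "above" or "below"

def crossed(rule: AlertRule, prev_price: float, price: float) -> bool:
    """True only on the tick where the level is actually crossed."""
    if rule.direction == "above":
        return prev_price < rule.level <= price
    return prev_price > rule.level >= price

rule = AlertRule("AAPL", 200.0, "above")
print(crossed(rule, 198.5, 201.2))  # crossing upward -> True
print(crossed(rule, 201.2, 203.0))  # already above   -> False
```

Firing only on the crossing tick, rather than whenever the price sits beyond the level, is what keeps an alert from repeating on every sweep.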
The investment cockpit
- End-to-end thesis lifecycle: draft · gate · tick · close · postmortem
- Parallel backtests: structural alpha (deep research + a valuation anchor) and statistical alpha (multi-factor + self-evolving)
- Outcomes ledger: cross-method comparison, fed back into the knowledge layer
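The lifecycle stages above can be read as a small state machine. A sketch under stated assumptions: the stage names come from the list, but the allowed transitions are guesses, not FinClaw's documented behavior.

```python
# Hypothetical transition table for the thesis lifecycle.
ALLOWED = {
    "draft": {"gate"},
    "gate": {"tick", "draft"},  # the gate can approve, or send back to draft
    "tick": {"tick", "close"},  # periodic sweeps repeat until close
    "close": {"postmortem"},
    "postmortem": set(),        # terminal: results feed the outcomes ledger
}

def advance(state: str, to: str) -> str:
    """Move to the next stage, rejecting transitions the table forbids."""
    if to not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {to}")
    return to

s = "draft"
for step in ("gate", "tick", "close", "postmortem"):
    s = advance(s, step)
print(s)  # postmortem
```

Encoding the lifecycle as an explicit table is one way to guarantee that, say, a position can never be closed without first passing the gate.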
FinClaw, in three layers
Top is delivery: IM, desktop, reports. Middle is the brain: a long-running agent, flows, device authorization, and a model gateway. Bottom is the research engine: a Bridgewater-style five-layer pipeline running from the data foundation through the knowledge base and the two methods to the outcomes ledger.
WeChat · Feishu · Telegram
Chat, push, commands, and approvals in one entry point.
Research cockpit
Thesis · watchlist · gate · outcomes, end-to-end.
Daily HTML report
Generated by the macro-causal engine; shareable and archivable.
Long-running primary agent
Session pool plus a security sandbox; context maintained across channels.
Scheduled & triggered flows
Briefings, sweeps, alerts, and approval workflows.
Device pairing & scopes
Tiered tokens: operator / read / approvals.
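The tiered tokens might reduce to a simple scope lookup. A sketch, assuming a nesting hierarchy (operator ⊇ approvals ⊇ read) that the source does not spell out; the scope names themselves come from the text.

```python
# Assumed scope sets per token tier; only the tier names are from the text.
SCOPES = {
    "operator": {"read", "approvals", "operate"},  # full control
    "approvals": {"read", "approvals"},            # can read and approve
    "read": {"read"},                              # read-only
}

def allowed(token_tier: str, action: str) -> bool:
    """Check whether a token of this tier may perform the action."""
    return action in SCOPES.get(token_tier, set())

print(allowed("read", "approvals"))      # False: read-only token can't approve
print(allowed("operator", "approvals"))  # True
```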
Model gateway
Routes between Opus 4.7 and Sonnet 4.6, downshifting per task.
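Per-task downshifting can be as simple as a routing table: heavy research goes to the larger model, routine chores fall to the smaller one. A hedged sketch; the task categories and the fallback choice are assumptions, and only the two model names come from the text.

```python
# Illustrative task-to-model routing table.
ROUTES = {
    "deep_research": "opus-4.7",
    "thesis_draft": "opus-4.7",
    "daily_briefing": "sonnet-4.6",
    "quote_lookup": "sonnet-4.6",
}

def pick_model(task: str) -> str:
    # Unknown tasks downshift to the cheaper model by default.
    return ROUTES.get(task, "sonnet-4.6")

print(pick_model("deep_research"))  # opus-4.7
print(pick_model("ping"))           # sonnet-4.6
```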
Data foundation
10 source types × 3 markets behind one unified provider interface.
Knowledge base
Four tiers: doctrine · raw · wiki · feedback.
Structural α
Thesis-driven, 1–10 holdings, deep research.
Statistical α
Multi-factor plus ShinkaEvolve self-evolution.
Outcomes ledger
Cross-method comparison, fed back into knowledge.
Why AI-augmented, not AI-replaced
What frontier models do best is not predicting price moves; it is steering the researcher's attention to the right places and turning every judgment into an auditable artifact.
Evidence-first
Every BUY / SELL / HOLD is bound to a thesis and committed to the ledger only after gate approval.
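The evidence-first rule can be sketched as a commit precondition: no thesis, no gate approval, no ledger entry. A minimal sketch; the field names (`thesis_id`, `gate_approved`) are illustrative, not FinClaw's schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Order:
    """Hypothetical order record carrying its evidence links."""
    symbol: str
    side: str                  # "BUY" / "SELL" / "HOLD"
    thesis_id: Optional[str]   # the thesis this order is bound to
    gate_approved: bool        # has the gate signed off?

def commit(order: Order, ledger: list) -> bool:
    """Append to the ledger only when the evidence requirements hold."""
    if order.thesis_id is None or not order.gate_approved:
        return False
    ledger.append(order)
    return True

ledger: list = []
print(commit(Order("AAPL", "BUY", None, True), ledger))     # False: no thesis
print(commit(Order("AAPL", "BUY", "T-001", True), ledger))  # True
print(len(ledger))                                          # 1
```

Making the ledger append-only behind this check is what turns each trade into an auditable artifact: anything in the ledger provably passed the gate with a thesis attached.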
Replayable
All raw data and reasoning traces are kept locally; backtests and attribution can replay from any point in time.
Two methods, one truth
Structural and statistical methods run independently and meet head-to-head in the outcomes ledger; whichever is right, is right.
Stack & runtime
Sovereign compute, self-contained loop. The full stack runs on your own machine, model inference is mediated by a self-hosted gateway, and raw data and decision records never reach a third party.