1. Role Execution Loop Sequence Diagram
```mermaid
sequenceDiagram
    participant Env as Environment
    participant Role as Role
    participant Memory as Memory
    participant LLM as LLM Provider
    participant Action as Action
    Env->>Role: run()
    Role->>Role: _observe()
    Role->>Role: msg_buffer.pop_all()
    Role->>Role: filter messages of interest
    Role->>Memory: add_batch(news)
    alt has new messages
        Role->>Role: _think()
        alt single action
            Role->>Role: _set_state(0)
        else multiple actions - in order
            Role->>Role: _set_state(state+1)
        else multiple actions - LLM selects
            Role->>LLM: aask(state_prompt)
            LLM-->>Role: next_state
            Role->>Role: _set_state(next_state)
        end
        Role->>Role: _act()
        Role->>Action: run(history)
        Action->>LLM: aask(prompt)
        LLM-->>Action: response
        Action-->>Role: AIMessage
        Role->>Memory: add(message)
        Role->>Env: publish_message(response)
    else no new messages
        Role-->>Env: None (idle)
    end
```
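The observe-think-act loop above can be sketched in a few lines of Python. This is an illustrative simplification, not MetaGPT's actual implementation: the method names mirror the diagram, but the LLM call, Action, and Message types are replaced with plain dicts and canned values.

```python
# Minimal sketch of the Role run loop: drain the buffer, keep watched
# messages, then think and act only when there is news.
from collections import deque

class Role:
    def __init__(self, watch):
        self.watch = set(watch)    # action names this role reacts to
        self.msg_buffer = deque()  # incoming messages from the Environment
        self.memory = []           # simplified Memory.add_batch target
        self.state = -1

    def put_message(self, msg):
        self.msg_buffer.append(msg)

    def _pop_all(self):
        msgs = list(self.msg_buffer)
        self.msg_buffer.clear()
        return msgs

    def _observe(self):
        # keep only messages caused by actions we watch
        news = [m for m in self._pop_all() if m["cause_by"] in self.watch]
        self.memory.extend(news)   # Memory.add_batch(news)
        return news

    def _think(self):
        # single-action branch from the diagram: always _set_state(0)
        self.state = 0

    def _act(self):
        # stand-in for Action.run(history) followed by an LLM call
        return {"cause_by": "WriteCode", "content": "response"}

    def run(self):
        if not self._observe():
            return None            # no news: stay idle
        self._think()
        return self._act()

role = Role(watch={"WritePRD"})
role.put_message({"cause_by": "WritePRD", "content": "draft the PRD"})
result = role.run()
```

A second `run()` call with an empty buffer returns `None`, matching the idle branch of the diagram.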
2. ActionNode Fill Sequence Diagram
```mermaid
sequenceDiagram
    participant Action as Action
    participant Node as ActionNode
    participant LLM as LLM Provider
    participant Parser as OutputParser
    participant Model as PydanticModel
    Action->>Node: fill(req, llm, schema, mode)
    Node->>Node: set_llm(llm)
    Node->>Node: set_context(req)
    alt schema == "raw"
        Node->>LLM: aask(raw_prompt)
        LLM-->>Node: content
        Node->>Node: self.content = content
    else structured output
        Node->>Node: compile(context, schema, mode)
        Node->>Node: compile_instruction()
        Node->>Node: compile_example()
        Node->>Node: get_mapping(mode)
        Node->>Node: _aask_v1(prompt, class_name, mapping)
        Node->>LLM: aask(prompt)
        LLM-->>Node: raw_content
        alt schema == "json"
            Node->>Parser: llm_output_postprocess(content, schema)
        else schema == "markdown"
            Node->>Parser: parse_data_with_mapping(content, mapping)
        end
        Parser-->>Node: parsed_data
        Node->>Model: create_model_class(class_name, mapping)
        Model-->>Node: output_class
        Node->>Node: output_class(**parsed_data)
        Node->>Node: self.instruct_content = instance
    end
    Node-->>Action: self
```
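The structured-output path can be sketched as: compile a prompt from a field mapping, parse the LLM's JSON reply, and instantiate a dynamically created output class. The sketch below is illustrative only: the LLM is a canned function, and a plain dynamic class with type checks stands in for the pydantic model that `create_model_class` builds in MetaGPT.

```python
# Sketch of ActionNode.fill's structured branch with stubbed collaborators.
import json

def fake_llm_aask(prompt):
    # stand-in for LLM.aask: returns JSON matching the requested keys
    return json.dumps({"title": "Demo", "sections": ["intro", "usage"]})

def create_model_class(class_name, mapping):
    # plain dynamic class with isinstance checks instead of pydantic
    def __init__(self, **kwargs):
        for key, typ in mapping.items():
            value = kwargs[key]
            if not isinstance(value, typ):
                raise TypeError(f"{key} must be {typ.__name__}")
            setattr(self, key, value)
    return type(class_name, (), {"__init__": __init__})

def fill(context, mapping, class_name="Output"):
    prompt = f"{context}\nReturn JSON with keys: {sorted(mapping)}"  # compile()
    raw = fake_llm_aask(prompt)               # LLM.aask(prompt)
    parsed = json.loads(raw)                  # llm_output_postprocess
    output_class = create_model_class(class_name, mapping)
    return output_class(**parsed)             # becomes self.instruct_content

instruct = fill("Write a doc outline", {"title": str, "sections": list})
```

The resulting instance plays the role of `instruct_content`: downstream code can access `instruct.title` and `instruct.sections` as typed attributes rather than re-parsing raw text.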
3. Environment Message Routing Sequence Diagram
```mermaid
sequenceDiagram
    participant Role1 as Sender Role
    participant Env as Environment
    participant Role2 as Receiver Role 1
    participant Role3 as Receiver Role 2
    participant History as History
    Role1->>Env: publish_message(message)
    Env->>Env: check message.send_to
    loop over all roles
        Env->>Env: is_send_to(message, role.addresses)
        alt address matches
            Env->>Role2: put_message(message)
            Role2->>Role2: msg_buffer.push(message)
        else address does not match
            Env->>Role3: (skipped)
        end
    end
    Env->>History: add(message)
    Env-->>Role1: True (published)
    Note over Env: if no recipient is found, a warning is logged
```
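The routing rule above amounts to a set intersection between `message.send_to` and each role's address set. A minimal sketch, with roles reduced to `(addresses, inbox)` pairs and the broadcast marker assumed to be a simple sentinel string:

```python
# Sketch of Environment.publish_message: deliver to every role whose
# addresses intersect message.send_to, then record the message in history.
MESSAGE_ROUTE_TO_ALL = "<all>"  # illustrative sentinel, not MetaGPT's literal

def is_send_to(message, addresses):
    return MESSAGE_ROUTE_TO_ALL in message["send_to"] or bool(
        message["send_to"] & addresses
    )

class Environment:
    def __init__(self):
        self.roles = {}      # name -> (address set, inbox list)
        self.history = []

    def add_role(self, name, addresses):
        self.roles[name] = (set(addresses), [])

    def publish_message(self, message):
        found = False
        for addresses, inbox in self.roles.values():
            if is_send_to(message, addresses):
                inbox.append(message)   # Role.put_message -> msg_buffer.push
                found = True
        if not found:
            print("warning: no recipient found")  # logged, not raised
        self.history.append(message)
        return True

env = Environment()
env.add_role("Architect", {"Architect", "WritePRD"})
env.add_role("Engineer", {"Engineer"})
env.publish_message({"send_to": {"Architect"}, "content": "prd"})
```

Because roles subscribe by address rather than by sender, adding a new role never requires changing the publishers.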
4. Team Run Sequence Diagram
```mermaid
sequenceDiagram
    participant CLI as CLI
    participant Team as Team
    participant Env as Environment
    participant Role1 as ProductManager
    participant Role2 as Architect
    participant Role3 as Engineer
    participant CostMgr as CostManager
    CLI->>Team: run(n_round, idea)
    Team->>Env: publish_message(Message(idea))
    loop n_round rounds
        Team->>CostMgr: _check_balance()
        alt budget available
            Team->>Env: run()
            par roles run concurrently
                Env->>Role1: run()
                Role1->>Role1: observe->think->act
                Role1->>Env: publish_message(prd)
            and
                Env->>Role2: run()
                Role2->>Role2: observe->think->act
                Role2->>Env: publish_message(design)
            and
                Env->>Role3: run()
                Role3->>Role3: observe->think->act
                Role3->>Env: publish_message(code)
            end
            Team->>Env: is_idle?
            alt all roles idle
                Note over Team: break - end early
            end
        else budget exceeded
            Team->>Team: raise NoMoneyException
        end
    end
    Team->>Env: archive(auto_archive)
    Team-->>CLI: env.history
```
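The outer control flow (budget check each round, early exit when idle) can be sketched as below. `NoMoneyException` matches the name in the diagram, but `FakeEnv` and the flat per-round cost are illustrative stand-ins for the real Environment and token-based CostManager.

```python
# Sketch of Team.run's outer loop: check budget, run one round, stop early
# once all roles are idle.
class NoMoneyException(Exception):
    """Raised when accumulated cost reaches the budget (illustrative)."""

class FakeEnv:
    # stand-in Environment: records history, reports idle after N busy rounds
    def __init__(self, busy_rounds):
        self.history = []
        self.busy_rounds = busy_rounds
        self.rounds_run = 0

    def publish_message(self, message):
        self.history.append(message)

    def run(self):
        self.rounds_run += 1
        self.history.append(f"round-{self.rounds_run}")

    def is_idle(self):
        return self.rounds_run >= self.busy_rounds

class Team:
    def __init__(self, env, budget, cost_per_round):
        self.env = env
        self.budget = budget
        self.total_cost = 0.0
        self.cost_per_round = cost_per_round  # stand-in for real token costs

    def _check_balance(self):
        if self.total_cost >= self.budget:
            raise NoMoneyException(f"spent {self.total_cost}")

    def run(self, n_round, idea):
        self.env.publish_message(idea)
        for _ in range(n_round):
            self._check_balance()
            self.env.run()
            self.total_cost += self.cost_per_round  # CostManager update
            if self.env.is_idle():
                break  # all roles idle: end early
        return self.env.history

team = Team(FakeEnv(busy_rounds=2), budget=10.0, cost_per_round=1.0)
history = team.run(n_round=5, idea="make a 2048 game")
```

Here the loop ends after two rounds even though five were requested, because the environment reports idle; with a budget of 1.0 it would instead raise `NoMoneyException` on the second round.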
5. MGXEnv Multimodal Message Handling Sequence Diagram
```mermaid
sequenceDiagram
    participant User as User
    participant MGXEnv as MGXEnv
    participant TL as TeamLeader
    participant Role as Target Role
    participant ImageProc as Image Processing
    User->>MGXEnv: publish_message(message, user_defined_recipient)
    MGXEnv->>ImageProc: attach_images(message)
    ImageProc->>ImageProc: extract_and_encode_images()
    ImageProc-->>MGXEnv: message with images
    MGXEnv->>TL: get_role(TEAMLEADER_NAME)
    alt direct user chat
        MGXEnv->>MGXEnv: direct_chat_roles.add(role_name)
        MGXEnv->>MGXEnv: _publish_message(message)
        MGXEnv->>Role: put_message(message)
    else published by team leader
        alt message.send_to == {"no one"}
            MGXEnv-->>MGXEnv: skip dummy message
        else
            MGXEnv->>MGXEnv: _publish_message(message)
        end
    else regular message
        MGXEnv->>MGXEnv: message.send_to.add(tl.name)
        MGXEnv->>MGXEnv: _publish_message(message)
        MGXEnv->>TL: put_message(message)
    end
    MGXEnv->>MGXEnv: move_message_info_to_content(message)
    MGXEnv->>MGXEnv: history.add(message)
```
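The three `alt` branches reduce to a single routing decision: who actually receives the message. A hedged sketch of just that decision, where the team leader name "Mike" and the `route` helper are assumptions for illustration rather than MetaGPT's exact API:

```python
# Sketch of MGXEnv's routing decision: direct chat bypasses the team
# leader, team-leader messages honor send_to (skipping dummy messages),
# and everything else is also delivered to the team leader.
TEAMLEADER_NAME = "Mike"  # assumed default name, for illustration

def route(message, user_defined_recipient=None, publisher=None):
    """Return the set of recipient names, mirroring the alt branches."""
    if user_defined_recipient:                  # user chats with a role directly
        return {user_defined_recipient}
    if publisher == TEAMLEADER_NAME:            # team leader publishes
        if message["send_to"] == {"no one"}:    # dummy message: deliver to nobody
            return set()
        return set(message["send_to"])
    # regular message: always routed through the team leader as well
    return set(message["send_to"]) | {TEAMLEADER_NAME}
```

The invariant this encodes: unless a user explicitly opens a direct chat, the team leader sees every message and can coordinate the other roles.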
6. Memory Management Sequence Diagram
```mermaid
sequenceDiagram
    participant Role as Role
    participant Memory as Memory
    participant Index as Index
    participant Storage as Storage
    Role->>Memory: add(message)
    Memory->>Memory: check whether the message already exists
    alt message not present
        Memory->>Storage: storage.append(message)
        alt message.cause_by is set
            Memory->>Index: index[cause_by].append(message)
        end
    end
    Role->>Memory: get_by_actions(watch_actions)
    Memory->>Index: query index
    loop over watched actions
        Index->>Index: collect matching messages
    end
    Index-->>Memory: matching message set
    Memory-->>Role: filtered message list
    Role->>Memory: find_news(observed, k)
    Memory->>Storage: get(k) - fetch the latest k messages
    Memory->>Memory: diff to find new messages
    Memory-->>Role: list of new messages
```
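The diagram describes a list-backed store plus a `cause_by` index. A minimal sketch with messages as plain dicts (MetaGPT uses Message objects and a defaultdict index; the structure is the same):

```python
# Simplified Memory: deduplicating storage list plus a cause_by index,
# with find_news diffing observed messages against what is already stored.
from collections import defaultdict

class Memory:
    def __init__(self):
        self.storage = []
        self.index = defaultdict(list)   # cause_by -> [message]

    def add(self, message):
        if message in self.storage:      # duplicate check from the diagram
            return
        self.storage.append(message)
        if message.get("cause_by"):
            self.index[message["cause_by"]].append(message)

    def get_by_actions(self, actions):
        # union of index entries for every watched action
        return [m for action in actions for m in self.index[action]]

    def find_news(self, observed, k=0):
        # compare against the latest k stored messages (all, when k == 0)
        already_seen = self.storage[-k:] if k else self.storage
        return [m for m in observed if m not in already_seen]

mem = Memory()
m1 = {"cause_by": "WritePRD", "content": "prd"}
mem.add(m1)
mem.add(m1)  # duplicate is ignored
```

The index makes `get_by_actions` a dictionary lookup instead of a scan over all of storage, which matters once the history grows.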
7. LLM Provider Call Sequence Diagram
```mermaid
sequenceDiagram
    participant Action as Action
    participant LLM as BaseLLM
    participant Provider as OpenAI/Anthropic
    participant CostMgr as CostManager
    participant Retry as Retry Mechanism
    Action->>LLM: aask(prompt, system_msgs)
    LLM->>LLM: format_msg(messages)
    LLM->>LLM: compress_messages(messages)
    LLM->>Retry: acompletion_text(messages)
    loop up to 3 retries
        Retry->>Provider: _achat_completion(messages)
        Provider-->>Retry: response
        alt call succeeded
            Note over Retry: break out of the loop
        else connection error
            Retry->>Retry: wait_random_exponential()
        end
    end
    Retry-->>LLM: response
    LLM->>LLM: get_choice_text(response)
    LLM->>CostMgr: _update_costs(usage, model)
    CostMgr->>CostMgr: update token costs
    LLM-->>Action: content
```
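The retry loop can be sketched as below. MetaGPT uses the tenacity library's `wait_random_exponential` for jittered backoff; this sketch substitutes a deterministic delay so the behavior is reproducible, and a deliberately flaky provider stands in for `_achat_completion`.

```python
# Sketch of the retry wrapper around the provider call: up to 3 attempts,
# backing off only on connection errors, re-raising after the last failure.
import time

def acompletion_text(call, max_retries=3, base_delay=0.01):
    last_err = None
    for attempt in range(max_retries):
        try:
            return call()                      # Provider._achat_completion
        except ConnectionError as err:         # only transient errors retry
            last_err = err
            # deterministic stand-in for tenacity's wait_random_exponential
            time.sleep(base_delay * 2 ** attempt)
    raise last_err

attempts = {"n": 0}

def flaky_provider():
    # fails twice with a transient error, then succeeds
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient network error")
    return "content"

result = acompletion_text(flaky_provider)
```

Only connection-type errors are retried; anything else (for example an authentication failure) propagates immediately, which is exactly the split the diagram's `alt` branches show.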
8. Configuration System Initialization Sequence Diagram
```mermaid
sequenceDiagram
    participant CLI as CLI
    participant Config as Config2
    participant Context as Context
    participant LLMConfig as LLM Config
    participant Provider as LLM Provider
    CLI->>Config: update_via_cli(params)
    Config->>Config: parse command-line arguments
    Config->>Config: merge config files
    CLI->>Context: Context(config=config)
    Context->>LLMConfig: config.llm
    Context->>Provider: create_llm_instance(llm_config)
    alt OpenAI
        Provider->>Provider: OpenAILLM(config)
    else Anthropic
        Provider->>Provider: AnthropicLLM(config)
    else other
        Provider->>Provider: matching provider(config)
    end
    Provider-->>Context: llm_instance
    Context->>Context: set cost_manager
    Context-->>CLI: initialized context
```
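The provider dispatch in `create_llm_instance` is a registry lookup keyed on the configured API type. In this sketch the class names mirror MetaGPT's providers, but the classes are empty stubs and the registry keys are assumptions for illustration:

```python
# Sketch of create_llm_instance: map the config's api_type to a provider
# class and construct it, failing loudly on unknown types.
class OpenAILLM:
    def __init__(self, config):
        self.config = config

class AnthropicLLM:
    def __init__(self, config):
        self.config = config

PROVIDER_REGISTRY = {
    "openai": OpenAILLM,
    "anthropic": AnthropicLLM,
    # further providers register here
}

def create_llm_instance(llm_config):
    api_type = llm_config["api_type"]
    try:
        return PROVIDER_REGISTRY[api_type](llm_config)
    except KeyError:
        raise ValueError(f"unsupported api_type: {api_type}") from None

llm = create_llm_instance({"api_type": "openai", "model": "gpt-4o"})
```

Registry dispatch keeps the `alt` chain in the diagram out of the code: adding a provider means one new registry entry, not another `elif` branch.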
These sequence diagrams detail the interactions between MetaGPT's core modules, clarifying how the system runs and how data flows through it.

Created: September 10, 2025

Originally published on the tommie blog