1. System Startup Sequence Diagrams

1.1 Server Initialization Flow

sequenceDiagram
    participant Main as main.go
    participant CLI as Cobra CLI
    participant Config as Configuration
    participant Server as ghmcp.Server
    participant Toolset as ToolsetGroup
    participant GitHub as GitHub Clients
    
    Note over Main, GitHub: Server startup and initialization flow
    
    Main->>CLI: Execute rootCmd
    CLI->>CLI: Parse command line args
    CLI->>Config: Load environment variables
    Config->>Config: Validate token and settings
    
    Note over CLI: Configuration parsing phase
    CLI->>CLI: viper.GetString("personal_access_token")
    CLI->>CLI: viper.UnmarshalKey("toolsets")
    CLI->>CLI: Parse additional flags
    
    CLI->>Server: ghmcp.RunStdioServer(config)
    Server->>Server: Create context with signals
    
    Note over Server: Core server creation
    Server->>Server: NewMCPServer(MCPServerConfig)
    Server->>Server: parseAPIHost(cfg.Host)
    Server->>GitHub: Create REST/GraphQL/Raw clients
    GitHub-->>Server: Configured clients
    
    Note over Server, Toolset: Toolset registration phase
    Server->>Toolset: DefaultToolsetGroup(readOnly, clients...)
    
    loop For each toolset
        Toolset->>Toolset: NewToolset(name, description)
        Toolset->>Toolset: AddReadTools() / AddWriteTools()
        Toolset->>Toolset: AddResourceTemplates()
        Toolset->>Toolset: AddPrompts()
    end
    
    Server->>Toolset: tsg.EnableToolsets(enabledToolsets)
    
    alt Dynamic toolsets enabled
        Server->>Toolset: InitDynamicToolset(server, tsg, translator)
        Toolset->>Server: Register dynamic tools
    end
    
    Server->>Toolset: tsg.RegisterAll(server)
    
    loop For each active toolset
        Toolset->>Server: RegisterTools()
        Toolset->>Server: RegisterResourcesTemplates() 
        Toolset->>Server: RegisterPrompts()
    end
    
    Note over Server: Start stdio listener
    Server->>Server: server.NewStdioServer(ghServer)
    Server->>Server: stdioServer.Listen(ctx, stdin, stdout)
    
    Server-->>Main: Server ready
    
    Note over Main: Server running
    Main->>Main: Wait for shutdown signal
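
The configuration-parsing and signal-handling steps in the diagram map onto a small amount of Go. The sketch below assumes viper keys named personal_access_token, host, toolsets, and read-only, and an illustrative stdioServerConfig struct (not the repository's exact types); ghmcp.RunStdioServer would be invoked where the placeholder comment sits:

package main

import (
    "context"
    "os"
    "os/signal"
    "syscall"

    "github.com/spf13/viper"
)

// stdioServerConfig mirrors the values the CLI layer hands to ghmcp.RunStdioServer.
// The struct and field names are illustrative, not the repository's exact types.
type stdioServerConfig struct {
    Token           string
    Host            string
    EnabledToolsets []string
    ReadOnly        bool
}

func loadConfig() stdioServerConfig {
    return stdioServerConfig{
        Token:           viper.GetString("personal_access_token"),
        Host:            viper.GetString("host"),
        EnabledToolsets: viper.GetStringSlice("toolsets"),
        ReadOnly:        viper.GetBool("read-only"),
    }
}

func main() {
    cfg := loadConfig()
    _ = cfg // handed to ghmcp.RunStdioServer(cfg) in the real server

    // A context cancelled on SIGINT/SIGTERM drives graceful shutdown,
    // matching the "Create context with signals" step in the diagram.
    ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
    defer stop()

    // ghmcp.RunStdioServer(cfg) would block here, serving stdin/stdout
    // until ctx is cancelled by a shutdown signal.
    <-ctx.Done()
}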

1.2 Client Connection Establishment Flow

sequenceDiagram
    participant Client as AI Client
    participant MCP as MCP Protocol
    participant Server as GitHub MCP Server
    participant Auth as Authentication
    participant Toolset as ToolsetGroup
    
    Note over Client, Toolset: MCP connection establishment
    
    Client->>MCP: Connect via stdio/http
    MCP->>Server: Establish connection
    
    Note over Client, Server: Initialization handshake
    Client->>MCP: InitializeRequest
    Note over MCP: {
    Note over MCP:   "method": "initialize",
    Note over MCP:   "params": {
    Note over MCP:     "protocolVersion": "2024-11-05",
    Note over MCP:     "clientInfo": {
    Note over MCP:       "name": "Claude",
    Note over MCP:       "version": "3.5"
    Note over MCP:     }
    Note over MCP:   }
    Note over MCP: }
    
    MCP->>Server: Process InitializeRequest
    Server->>Auth: Validate GitHub token
    Auth->>Auth: Check token permissions
    Auth-->>Server: Authentication result
    
    Server->>Server: Update User-Agent with client info
    Note over Server: User-Agent: github-mcp-server/1.0.0 (Claude/3.5)
    
    Server->>Server: Setup GitHub clients with auth
    
    Server->>MCP: InitializeResponse
    Note over MCP: {
    Note over MCP:   "result": {
    Note over MCP:     "protocolVersion": "2024-11-05", 
    Note over MCP:     "serverInfo": {
    Note over MCP:       "name": "github-mcp-server",
    Note over MCP:       "version": "1.0.0"
    Note over MCP:     },
    Note over MCP:     "capabilities": {
    Note over MCP:       "tools": {},
    Note over MCP:       "resources": {},
    Note over MCP:       "prompts": {}
    Note over MCP:     }
    Note over MCP:   }
    Note over MCP: }
    
    MCP-->>Client: Connection established
    
    Note over Client, Server: Tool discovery phase
    Client->>MCP: ListToolsRequest
    MCP->>Server: Get available tools
    Server->>Toolset: Get active tools from enabled toolsets
    
    loop For each enabled toolset
        Toolset->>Toolset: GetActiveTools()
        Toolset-->>Server: Tool definitions
    end
    
    Server->>MCP: ToolsResponse with tool list
    MCP-->>Client: Available tools
    
    Note over Client: Client now ready to call tools
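
A concrete by-product of the handshake is the User-Agent header shown above. A minimal sketch of assembling it from the client's InitializeRequest info (the clientInfo type and field names here are illustrative, not the SDK's own structs):

package main

import "fmt"

// clientInfo is the subset of the MCP InitializeRequest that matters for the
// outbound User-Agent; the type is illustrative.
type clientInfo struct {
    Name    string
    Version string
}

// userAgent combines the server's own version with the connected client's
// name and version, e.g. "github-mcp-server/1.0.0 (Claude/3.5)".
func userAgent(serverVersion string, c clientInfo) string {
    return fmt.Sprintf("github-mcp-server/%s (%s/%s)", serverVersion, c.Name, c.Version)
}

func main() {
    fmt.Println(userAgent("1.0.0", clientInfo{Name: "Claude", Version: "3.5"}))
}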

2. End-to-End Tool Call Sequence Diagrams

2.1 Typical Tool Call Flow (Example: get_issue)

sequenceDiagram
    participant Client as AI Client  
    participant MCP as MCP Protocol
    participant Server as MCP Server
    participant Handler as IssueHandler
    participant Validator as ParamValidator
    participant GitHubREST as GitHub REST API
    participant ErrorHandler as Error Handler
    participant Response as Response Builder
    
    Note over Client, Response: The complete get_issue tool call flow
    
    Client->>MCP: CallToolRequest("get_issue")
    Note over MCP: {
    Note over MCP:   "method": "tools/call",
    Note over MCP:   "params": {
    Note over MCP:     "name": "get_issue",
    Note over MCP:     "arguments": {
    Note over MCP:       "owner": "microsoft",
    Note over MCP:       "repo": "vscode", 
    Note over MCP:       "issue_number": 12345
    Note over MCP:     }
    Note over MCP:   }
    Note over MCP: }
    
    MCP->>Server: Route to tool handler
    Server->>Handler: GetIssue handler function
    
    Note over Handler, Validator: Parameter validation phase
    Handler->>Validator: RequiredParam[string](request, "owner")
    Validator->>Validator: Check parameter exists
    Validator->>Validator: Validate type string
    Validator->>Validator: Check not empty
    Validator-->>Handler: "microsoft"
    
    Handler->>Validator: RequiredParam[string](request, "repo")
    Validator-->>Handler: "vscode"
    
    Handler->>Validator: RequiredInt(request, "issue_number")
    Validator->>Validator: RequiredParam[float64] then convert to int
    Validator-->>Handler: 12345
    
    Note over Handler: Parameters validated; starting the API call
    
    Handler->>Handler: getClient(ctx)
    Handler->>Handler: Get configured GitHub REST client
    Handler-->>Handler: *github.Client with auth
    
    Note over Handler, GitHubREST: GitHub API call
    Handler->>GitHubREST: client.Issues.Get(ctx, "microsoft", "vscode", 12345)
    
    Note over GitHubREST: GET /repos/microsoft/vscode/issues/12345
    Note over GitHubREST: Authorization: Bearer <token>
    Note over GitHubREST: User-Agent: github-mcp-server/1.0.0 (Claude/3.5)
    
    alt Successful Response
        GitHubREST-->>Handler: Issue data + HTTP 200
        Handler->>Handler: Check resp.StatusCode == 200
        Handler->>Response: json.Marshal(issue)
        Response-->>Handler: JSON string
        Handler->>MCP: mcp.NewToolResultText(jsonString)
        
    else API Error (e.g. 404)
        GitHubREST-->>Handler: Error + HTTP 404
        Handler->>ErrorHandler: Check error type
        ErrorHandler->>ErrorHandler: ghErrors.NewGitHubAPIErrorResponse()
        ErrorHandler->>ErrorHandler: Map 404 to user-friendly message
        ErrorHandler-->>Handler: "Resource not found. Please verify the repository/resource exists."
        Handler->>MCP: mcp.NewToolResultError(errorMessage)
        
    else Network Error
        GitHubREST-->>Handler: Network error
        Handler->>ErrorHandler: fmt.Errorf("failed to get issue: %w", err)
        ErrorHandler-->>Handler: Wrapped error
        Handler->>MCP: Return nil, error (framework handles)
    end
    
    MCP-->>Client: CallToolResult
    Note over Client: Receives issue data or error message
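
The three branches of the diagram (success, API error, network error) correspond to a handler shaped roughly like the sketch below. The go-github call client.Issues.Get is real; the requiredString/requiredInt helpers stand in for the repository's generic RequiredParam/RequiredInt functions, the return of a plain string stands in for mcp.NewToolResultText/Error, and the go-github major version in the import path is illustrative:

package tools

import (
    "context"
    "encoding/json"
    "fmt"
    "net/http"

    "github.com/google/go-github/v66/github"
)

// getIssue sketches the handler body from the diagram: validate parameters,
// call the REST API, and return either marshalled JSON or a user-facing message.
func getIssue(ctx context.Context, client *github.Client, args map[string]any) (string, error) {
    owner, err := requiredString(args, "owner")
    if err != nil {
        return "", err
    }
    repo, err := requiredString(args, "repo")
    if err != nil {
        return "", err
    }
    number, err := requiredInt(args, "issue_number")
    if err != nil {
        return "", err
    }

    issue, resp, err := client.Issues.Get(ctx, owner, repo, number)
    if err != nil {
        if resp != nil && resp.StatusCode == http.StatusNotFound {
            // "API Error" branch: map 404 to a friendly message returned as a
            // tool result rather than a Go error.
            return "Resource not found. Please verify the repository/resource exists.", nil
        }
        // "Network Error" branch: wrap and let the framework report it.
        return "", fmt.Errorf("failed to get issue: %w", err)
    }

    out, err := json.Marshal(issue)
    if err != nil {
        return "", fmt.Errorf("failed to marshal issue: %w", err)
    }
    return string(out), nil
}

func requiredString(args map[string]any, key string) (string, error) {
    v, ok := args[key].(string)
    if !ok || v == "" {
        return "", fmt.Errorf("missing or empty required parameter: %s", key)
    }
    return v, nil
}

func requiredInt(args map[string]any, key string) (int, error) {
    // JSON numbers arrive as float64, matching the RequiredParam[float64] step.
    f, ok := args[key].(float64)
    if !ok {
        return 0, fmt.Errorf("missing required parameter: %s", key)
    }
    return int(f), nil
}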

2.2 Complex Tool Call Flow (Example: get_job_logs)

sequenceDiagram
    participant Client as AI Client
    participant Server as MCP Server  
    participant Handler as ActionsHandler
    participant GitHub as GitHub API
    participant LogProcessor as Log Processor
    participant Buffer as Ring Buffer
    participant Profiler as Performance Profiler
    
    Note over Client, Profiler: A complex log-retrieval tool call
    
    Client->>Server: get_job_logs(run_id=123, failed_only=true, return_content=true)
    Server->>Handler: Route to GetJobLogs handler
    
    Note over Handler: Parameter validation and dispatch
    Handler->>Handler: Validate parameters
    Handler->>Handler: failedOnly=true && runID=123 
    Handler->>Handler: Route to handleFailedJobLogs()
    
    Note over Handler, GitHub: Batch job query
    Handler->>GitHub: ListWorkflowJobs(ctx, owner, repo, runID)
    GitHub-->>Handler: WorkflowJobs{Jobs: [job1, job2, job3...]}
    
    Handler->>Handler: Filter failed jobs
    Note over Handler: jobs.Jobs.filter(job => job.GetConclusion() == "failure")
    Handler->>Handler: Found 2 failed jobs: [job1, job3]
    
    Note over Handler: Process each failed job concurrently
    
    par Job 1 Log Processing
        Handler->>Handler: getJobLogData(ctx, client, owner, repo, job1.ID)
        Handler->>GitHub: GetWorkflowJobLogs(ctx, owner, repo, job1.ID) 
        GitHub-->>Handler: Download URL for job1 logs
        
        Handler->>LogProcessor: downloadLogContent(url, tailLines=500, maxLines=5000)
        LogProcessor->>Profiler: prof.Start(ctx, "log_buffer_processing")
        LogProcessor->>LogProcessor: http.Get(logURL)
        LogProcessor->>Buffer: ProcessResponseAsRingBufferToEnd(response, 500)
        
        Note over Buffer: Ring buffer keeps large-file processing bounded
        Buffer->>Buffer: Create ring buffer[500]
        Buffer->>Buffer: Read line by line
        loop For each line in response
            Buffer->>Buffer: buffer[index] = line
            Buffer->>Buffer: index = (index + 1) % 500
        end
        Buffer->>Buffer: Reconstruct final 500 lines
        Buffer-->>LogProcessor: Last 500 lines + total line count
        
        LogProcessor->>LogProcessor: Split and trim to exact tailLines
        LogProcessor->>Profiler: finish(lineCount, byteCount)
        LogProcessor-->>Handler: {job_id: job1.ID, job_name: "build", logs_content: "...", original_length: 2341}
        
    and Job 3 Log Processing  
        Handler->>Handler: getJobLogData(ctx, client, owner, repo, job3.ID)
        Handler->>GitHub: GetWorkflowJobLogs(ctx, owner, repo, job3.ID)
        GitHub-->>Handler: Download URL for job3 logs
        
        Handler->>LogProcessor: downloadLogContent(url, tailLines=500, maxLines=5000)
        Note over LogProcessor: Same ring buffer processing as job1
        LogProcessor-->>Handler: {job_id: job3.ID, job_name: "test", logs_content: "...", original_length: 1205}
    end
    
    Note over Handler: Aggregate all job results
    Handler->>Handler: Combine results
    Handler->>Handler: Build comprehensive response
    
    Handler-->>Server: CallToolResult{
    Handler-->>Server:   message: "Retrieved logs for 2 failed jobs",
    Handler-->>Server:   run_id: 123,
    Handler-->>Server:   total_jobs: 5,
    Handler-->>Server:   failed_jobs: 2, 
    Handler-->>Server:   logs: [job1_result, job3_result],
    Handler-->>Server:   return_format: {content: true, urls: false}
    Handler-->>Server: }
    
    Server-->>Client: Formatted log results with content
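
The ring-buffer step is the key memory optimization in this flow: only the last tailLines lines are ever held, no matter how large the downloaded log is. A sketch of the idea (not the repository's exact ProcessResponseAsRingBufferToEnd implementation):

package logs

import (
    "bufio"
    "io"
)

// tailLines keeps only the last n lines of a potentially huge log stream using
// a fixed-size ring buffer, so memory stays bounded regardless of response size.
func tailLines(r io.Reader, n int) (lines []string, total int, err error) {
    ring := make([]string, n)
    scanner := bufio.NewScanner(r)
    // Allow long log lines (the default Scanner limit is 64KB).
    scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024)

    for scanner.Scan() {
        ring[total%n] = scanner.Text()
        total++
    }
    if err := scanner.Err(); err != nil {
        return nil, total, err
    }

    // Reconstruct the last min(total, n) lines in their original order.
    count := total
    if count > n {
        count = n
    }
    start := total - count
    out := make([]string, 0, count)
    for i := 0; i < count; i++ {
        out = append(out, ring[(start+i)%n])
    }
    return out, total, nil
}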

3. Dynamic Toolset Management Sequence Diagrams

3.1 Dynamic Toolset Enablement Flow

sequenceDiagram
    participant Client as AI Client
    participant Server as MCP Server
    participant Dynamic as Dynamic Toolset
    participant ToolsetGroup as ToolsetGroup
    participant NewToolset as Target Toolset
    
    Note over Client, NewToolset: Dynamically enabling a toolset
    
    Client->>Server: list_available_toolsets()
    Server->>Dynamic: ListAvailableToolsets handler
    Dynamic->>ToolsetGroup: Get all registered toolsets
    ToolsetGroup-->>Dynamic: Toolset definitions with enabled status
    
    Dynamic->>Dynamic: Build toolset list
    Dynamic-->>Server: JSON response with available toolsets
    Server-->>Client: {toolsets: [{name: "actions", enabled: false, description: "..."}, ...]}
    
    Note over Client: Client decides to enable "actions" toolset
    
    Client->>Server: enable_toolset(toolset="actions")
    Server->>Dynamic: EnableToolset handler
    
    Dynamic->>Dynamic: RequiredParam[string](request, "toolset")
    Dynamic->>ToolsetGroup: toolsetGroup.Toolsets["actions"]
    ToolsetGroup-->>Dynamic: actions toolset reference
    
    Dynamic->>Dynamic: Check if already enabled
    alt Toolset already enabled
        Dynamic-->>Server: "Toolset actions is already enabled"
    else Toolset not enabled
        Dynamic->>NewToolset: toolset.Enabled = true
        NewToolset-->>Dynamic: Enabled
        
        Note over Dynamic, Server: Register the new tools at runtime
        Dynamic->>NewToolset: GetActiveTools()
        NewToolset-->>Dynamic: List of active tools
        
        loop For each tool in toolset
            Dynamic->>Server: s.AddTools(tool)
            Server->>Server: Register tool with MCP framework
        end
        
        Dynamic-->>Server: "Toolset actions enabled"
    end
    
    Server-->>Client: Success response
    
    Note over Client: Client can now use actions tools
    Client->>Server: list_workflows(owner="microsoft", repo="vscode")
    Server->>NewToolset: Route to actions toolset
    NewToolset->>NewToolset: ListWorkflows handler
    Note over NewToolset: Now available for use!
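
The enable_toolset branch above reduces to: look up the toolset, return early if it is already enabled, otherwise flip its flag and register its active tools with the running server. A simplified sketch with illustrative stand-in types (the repository's Toolset/ToolsetGroup carry more state, and registration goes through the MCP server's AddTools):

package toolsets

import "fmt"

// Tool and Toolset are simplified stand-ins; only what the flow needs is shown.
type Tool struct{ Name string }

type Toolset struct {
    Name       string
    Enabled    bool
    readTools  []Tool
    writeTools []Tool
}

func (t *Toolset) GetActiveTools(readOnly bool) []Tool {
    if readOnly {
        return t.readTools
    }
    return append(append([]Tool{}, t.readTools...), t.writeTools...)
}

type ToolsetGroup struct {
    Toolsets map[string]*Toolset
    ReadOnly bool
}

// registerFunc stands in for the MCP server's AddTools.
type registerFunc func(tools ...Tool)

// EnableToolset flips the toolset on and registers its tools with the live
// server, matching the enable_toolset flow in the diagram above.
func (g *ToolsetGroup) EnableToolset(name string, register registerFunc) (string, error) {
    ts, ok := g.Toolsets[name]
    if !ok {
        return "", fmt.Errorf("toolset %s does not exist", name)
    }
    if ts.Enabled {
        return fmt.Sprintf("Toolset %s is already enabled", name), nil
    }
    ts.Enabled = true
    register(ts.GetActiveTools(g.ReadOnly)...)
    return fmt.Sprintf("Toolset %s enabled", name), nil
}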

3.2 Error Handling and Recovery Flow

sequenceDiagram
    participant Client as AI Client
    participant Server as MCP Server
    participant Handler as Tool Handler
    participant GitHub as GitHub API
    participant ErrorCollector as Error Context
    participant RateLimit as Rate Limit Handler
    
    Note over Client, RateLimit: API error handling and recovery mechanisms
    
    Client->>Server: get_issue(owner="invalid", repo="repo", issue_number=1)
    Server->>Handler: Route to GetIssue handler
    Handler->>Handler: Validate parameters (all valid)
    Handler->>GitHub: client.Issues.Get(ctx, "invalid", "repo", 1)
    
    Note over GitHub: Repository does not exist
    GitHub-->>Handler: 404 Not Found + github.ErrorResponse
    
    Handler->>ErrorCollector: ghErrors.NewGitHubAPIErrorResponse(ctx, "failed to get issue", resp, err)
    
    Note over ErrorCollector: Detailed error recording
    ErrorCollector->>ErrorCollector: Extract error details
    ErrorCollector->>ErrorCollector: gitHubError{
    ErrorCollector->>ErrorCollector:   Message: "failed to get issue",
    ErrorCollector->>ErrorCollector:   StatusCode: 404,
    ErrorCollector->>ErrorCollector:   RateLimit: {Limit: 5000, Remaining: 4999, Reset: time},
    ErrorCollector->>ErrorCollector:   Timestamp: now(),
    ErrorCollector->>ErrorCollector:   Error: "404 Not Found"
    ErrorCollector->>ErrorCollector: }
    
    ErrorCollector->>ErrorCollector: Add to context error collection
    
    Note over ErrorCollector: User-friendly error mapping
    ErrorCollector->>ErrorCollector: Map status code to user message
    ErrorCollector-->>Handler: "Resource not found. Please verify the repository/resource exists."
    
    Handler-->>Server: mcp.NewToolResultError(userFriendlyMessage)
    Server-->>Client: Clear error message for user
    
    Note over Client, RateLimit: Rate limit handling scenario
    
    Client->>Server: Multiple rapid API calls
    Server->>Handler: Process request #1000 within a short time window
    Handler->>GitHub: API call with rate limit approaching
    
    GitHub-->>Handler: 403 Forbidden + Rate Limit Headers
    Note over GitHub: X-RateLimit-Remaining: 0
    Note over GitHub: X-RateLimit-Reset: 1640995200
    
    Handler->>RateLimit: Check rate limit status
    RateLimit->>RateLimit: Parse rate limit headers
    RateLimit->>RateLimit: Calculate reset time
    RateLimit->>RateLimit: resetTime = time.Unix(1640995200, 0)
    RateLimit->>RateLimit: waitTime = resetTime.Sub(now())
    
    RateLimit-->>Handler: "Rate limit exceeded. Please wait 15 minutes before making more requests."
    Handler-->>Server: mcp.NewToolResultError(rateLimitMessage)
    Server-->>Client: Informative rate limit message
    
    Note over Client: Client can implement backoff strategy
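
Both branches of the diagram end in a status-code-to-message translation. A sketch of such a mapping using go-github's Response, which already exposes the parsed rate-limit headers (the wording and structure of the repository's ghErrors package may differ, and the import's major version is illustrative):

package gherrors

import (
    "fmt"
    "net/http"
    "time"

    "github.com/google/go-github/v66/github"
)

// friendlyMessage maps common GitHub API failures to the kind of user-facing
// text shown in the diagram.
func friendlyMessage(resp *github.Response, err error) string {
    if resp == nil {
        // Network-level failure: no HTTP response to inspect.
        return fmt.Sprintf("request failed: %v", err)
    }
    switch resp.StatusCode {
    case http.StatusNotFound:
        return "Resource not found. Please verify the repository/resource exists."
    case http.StatusUnauthorized:
        return "Authentication failed. Please check the GitHub token."
    case http.StatusForbidden:
        if resp.Rate.Remaining == 0 {
            // X-RateLimit-Reset is already parsed into resp.Rate.Reset.
            wait := time.Until(resp.Rate.Reset.Time).Round(time.Minute)
            return fmt.Sprintf("Rate limit exceeded. Please wait %s before making more requests.", wait)
        }
        return "Permission denied for this resource."
    default:
        return fmt.Sprintf("GitHub API error (%d): %v", resp.StatusCode, err)
    }
}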

4. Multi-API Coordination Sequence Diagrams

4.1 Coordinated Use of the REST, GraphQL, and Raw APIs

sequenceDiagram
    participant Client as AI Client
    participant Server as MCP Server
    participant Handler as Handler
    participant REST as GitHub REST API
    participant GraphQL as GitHub GraphQL API  
    participant Raw as GitHub Raw API
    participant Optimizer as API Optimizer
    
    Note over Client, Optimizer: API coordination example: update an issue, then fetch file contents
    
    Client->>Server: update_issue(owner="repo", repo="repo", issue_number=123, state="closed", state_reason="duplicate", duplicate_of=456)
    Server->>Handler: UpdateIssue handler
    
    Note over Handler: A complex update requires multiple APIs
    
    Handler->>Handler: Parse parameters and validate
    Handler->>Handler: Detect state change requires GraphQL
    Handler->>Handler: Other updates can use REST
    
    Note over Handler, REST: Phase 1: update basic attributes via the REST API
    Handler->>REST: client.Issues.Edit(ctx, owner, repo, issueNumber, issueRequest)
    REST-->>Handler: Updated issue (without state change)
    
    Note over Handler, GraphQL: Phase 2: handle the state change via the GraphQL API
    Handler->>GraphQL: Fetch GraphQL node IDs for the issue and its duplicate
    Handler->>GraphQL: fetchIssueIDs(ctx, gqlClient, owner, repo, 123, 456)
    
    GraphQL->>GraphQL: Execute query for both issues:
    Note over GraphQL: query {
    Note over GraphQL:   repository(owner: "repo", name: "repo") {
    Note over GraphQL:     issue(number: 123) { id }
    Note over GraphQL:     duplicateIssue: issue(number: 456) { id }
    Note over GraphQL:   }
    Note over GraphQL: }
    
    GraphQL-->>Handler: {issueID: "MDU6...", duplicateIssueID: "MDU6..."}
    
    Handler->>GraphQL: CloseIssue mutation with duplicate reference
    Handler->>GraphQL: Execute mutation:
    Note over GraphQL: mutation {
    Note over GraphQL:   closeIssue(input: {
    Note over GraphQL:     issueId: "MDU6...",
    Note over GraphQL:     stateReason: DUPLICATE,
    Note over GraphQL:     duplicateIssueId: "MDU6..."
    Note over GraphQL:   }) {
    Note over GraphQL:     issue { id number url state }
    Note over GraphQL:   }
    Note over GraphQL: }
    
    GraphQL-->>Handler: Issue closed as duplicate
    
    Handler-->>Server: Success response with issue URL
    Server-->>Client: Issue updated and closed as duplicate
    
    Note over Client, Raw: A second call: fetching large file contents
    
    Client->>Server: get_file_contents(owner="repo", repo="big-repo", path="large-dataset.json")
    Server->>Handler: GetFileContents handler
    
    Handler->>Optimizer: resolveGitReference() - determine best API approach
    Optimizer->>REST: Get file metadata
    REST-->>Optimizer: File SHA and metadata
    
    Optimizer->>Optimizer: File size > 1MB, use Raw API for efficiency
    Optimizer-->>Handler: Use Raw API strategy
    
    Note over Handler, Raw: Download the large file via the Raw API
    Handler->>Raw: GetRawContent(ctx, owner, repo, path, opts)
    Raw->>Raw: Direct download from raw.githubusercontent.com
    Raw-->>Handler: Large file content stream
    
    Handler->>Handler: Process content with ring buffer
    Handler->>Handler: Apply content window size limits
    Handler-->>Server: Optimized file content response
    
    Server-->>Client: Large file content (efficiently processed)
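
The GraphQL stage of the update_issue flow hinges on resolving both issue node IDs in one round trip. A sketch of that query using shurcooL/githubv4, with an alias for the duplicate issue; this is a reconstruction of the idea behind fetchIssueIDs, not the repository's exact code:

package issues

import (
    "context"

    "github.com/shurcooL/githubv4"
)

// fetchIssueIDs resolves the GraphQL node IDs of an issue and its duplicate in
// a single query, mirroring the first GraphQL step in the diagram.
func fetchIssueIDs(ctx context.Context, gql *githubv4.Client, owner, repo string, issue, duplicate int) (githubv4.ID, githubv4.ID, error) {
    var q struct {
        Repository struct {
            Issue struct {
                ID githubv4.ID
            } `graphql:"issue(number: $issueNumber)"`
            DuplicateIssue struct {
                ID githubv4.ID
            } `graphql:"duplicateIssue: issue(number: $duplicateNumber)"`
        } `graphql:"repository(owner: $owner, name: $name)"`
    }
    vars := map[string]interface{}{
        "owner":           githubv4.String(owner),
        "name":            githubv4.String(repo),
        "issueNumber":     githubv4.Int(issue),
        "duplicateNumber": githubv4.Int(duplicate),
    }
    if err := gql.Query(ctx, &q, vars); err != nil {
        return nil, nil, err
    }
    // The closeIssue mutation with stateReason: DUPLICATE would then be sent
    // via gql.Mutate using these two IDs.
    return q.Repository.Issue.ID, q.Repository.DuplicateIssue.ID, nil
}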

4.2 Batch Operation Optimization Sequence Diagram

sequenceDiagram
    participant Client as AI Client
    participant Server as MCP Server
    participant Handler as Handler
    participant BatchProcessor as Batch Processor
    participant GitHub as GitHub API
    participant Cache as Response Cache
    participant Aggregator as Result Aggregator
    
    Note over Client, Aggregator: Batch operation optimization: fetching logs for multiple failed jobs
    
    Client->>Server: get_job_logs(run_id=789, failed_only=true, return_content=true, tail_lines=200)
    Server->>Handler: GetJobLogs handler with failed_only mode
    
    Handler->>BatchProcessor: handleFailedJobLogs()
    BatchProcessor->>GitHub: ListWorkflowJobs(ctx, owner, repo, 789)
    GitHub-->>BatchProcessor: {Jobs: [job1, job2, job3, job4, job5]}
    
    Note over BatchProcessor: Identify failed jobs
    BatchProcessor->>BatchProcessor: Filter jobs by conclusion == "failure"
    BatchProcessor->>BatchProcessor: Found failed jobs: [job1, job3, job4]
    
    Note over BatchProcessor: Concurrency optimization
    BatchProcessor->>BatchProcessor: Create goroutine pool for concurrent processing
    
    par Concurrent Job Processing
        BatchProcessor->>GitHub: GetWorkflowJobLogs(job1.ID) 
        GitHub-->>BatchProcessor: job1 log URL
        BatchProcessor->>Cache: Check if logs already cached
        Cache-->>BatchProcessor: Cache miss
        BatchProcessor->>BatchProcessor: downloadLogContent(job1_url)
        BatchProcessor->>Aggregator: job1 results
        
    and 
        BatchProcessor->>GitHub: GetWorkflowJobLogs(job3.ID)
        GitHub-->>BatchProcessor: job3 log URL  
        BatchProcessor->>Cache: Check cache
        Cache-->>BatchProcessor: Cache miss
        BatchProcessor->>BatchProcessor: downloadLogContent(job3_url)
        BatchProcessor->>Aggregator: job3 results
        
    and
        BatchProcessor->>GitHub: GetWorkflowJobLogs(job4.ID)
        GitHub-->>BatchProcessor: job4 log URL
        BatchProcessor->>BatchProcessor: downloadLogContent(job4_url)
        BatchProcessor->>Aggregator: job4 results
    end
    
    Note over Aggregator: Result aggregation and optimization
    Aggregator->>Aggregator: Combine all job results
    Aggregator->>Aggregator: Calculate summary statistics
    Aggregator->>Aggregator: Build comprehensive response
    
    Aggregator-->>BatchProcessor: {
    Aggregator-->>BatchProcessor:   message: "Retrieved logs for 3 failed jobs",
    Aggregator-->>BatchProcessor:   run_id: 789,
    Aggregator-->>BatchProcessor:   total_jobs: 5,
    Aggregator-->>BatchProcessor:   failed_jobs: 3,
    Aggregator-->>BatchProcessor:   logs: [job1_logs, job3_logs, job4_logs],
    Aggregator-->>BatchProcessor:   performance: {
    Aggregator-->>BatchProcessor:     total_download_time: "2.3s",
    Aggregator-->>BatchProcessor:     total_lines_processed: 45231,
    Aggregator-->>BatchProcessor:     compression_ratio: 0.15
    Aggregator-->>BatchProcessor:   }
    Aggregator-->>BatchProcessor: }
    
    BatchProcessor-->>Handler: Aggregated results
    Handler-->>Server: Optimized batch response
    Server-->>Client: Complete failed jobs logs with performance metrics
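
The "par" block above is plain fan-out/fan-in concurrency: one goroutine per failed job, results collected into a slice, then aggregated. A minimal sketch, with fetchOne standing in for the getJobLogData/downloadLogContent pipeline:

package actions

import (
    "context"
    "sync"
)

// jobLogResult is an illustrative result shape for one job's processed logs.
type jobLogResult struct {
    JobID       int64
    LogsContent string
    Err         error
}

// fetchFailedJobLogs downloads and tails the logs of several failed jobs
// concurrently, one goroutine per job, then returns the collected results.
func fetchFailedJobLogs(ctx context.Context, jobIDs []int64,
    fetchOne func(ctx context.Context, jobID int64) (string, error)) []jobLogResult {

    results := make([]jobLogResult, len(jobIDs))
    var wg sync.WaitGroup
    for i, id := range jobIDs {
        wg.Add(1)
        go func(i int, id int64) {
            defer wg.Done()
            content, err := fetchOne(ctx, id)
            // Each goroutine writes to its own slot, so no mutex is needed.
            results[i] = jobLogResult{JobID: id, LogsContent: content, Err: err}
        }(i, id)
    }
    wg.Wait()
    return results
}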

5. Enterprise Deployment Sequence Diagrams

5.1 GitHub Enterprise Server Deployment Flow

sequenceDiagram
    participant Admin as System Admin
    participant Config as Configuration
    participant Server as MCP Server
    participant Resolver as Host Resolver
    participant GHES as GitHub Enterprise Server
    participant Validator as SSL Validator
    participant Monitor as Health Monitor
    
    Note over Admin, Monitor: Enterprise deployment and health checks
    
    Admin->>Config: Set GITHUB_HOST="https://github.enterprise.com"
    Admin->>Config: Set GITHUB_PERSONAL_ACCESS_TOKEN="ghes_token"
    Admin->>Config: Configure toolsets and security settings
    
    Config->>Server: Initialize with enterprise settings
    Server->>Resolver: parseAPIHost("https://github.enterprise.com")
    
    Note over Resolver: Enterprise environment detection and configuration
    Resolver->>Resolver: Detect enterprise deployment type
    Resolver->>Resolver: Parse URL scheme and hostname
    Resolver->>Resolver: Not github.com and not *.ghe.com, so classify as GHES
    
    Resolver->>Resolver: newGHESHost("https://github.enterprise.com")
    Resolver->>Resolver: Build API endpoints:
    Note over Resolver: REST: https://github.enterprise.com/api/v3/
    Note over Resolver: GraphQL: https://github.enterprise.com/api/graphql
    Note over Resolver: Upload: https://github.enterprise.com/api/uploads/
    Note over Resolver: Raw: https://github.enterprise.com/raw/
    
    Resolver-->>Server: Enterprise API endpoints configured
    
    Note over Server: Client creation and validation
    Server->>Server: Create GitHub clients with enterprise URLs
    Server->>Validator: Validate SSL certificates
    Validator->>GHES: Test connection to enterprise server
    GHES-->>Validator: SSL handshake successful
    Validator-->>Server: Enterprise connection validated
    
    Server->>GHES: Test API access with token
    Server->>GHES: GET /api/v3/user (health check)
    GHES-->>Server: User info (token valid)
    
    Server->>Monitor: Start health monitoring
    Monitor->>Monitor: Setup periodic health checks
    
    Note over Server: Toolset registration (enterprise-specific)
    Server->>Server: Register enterprise-compatible toolsets
    Server->>Server: Filter out unsupported features
    Server-->>Admin: Enterprise deployment ready
    
    Note over Admin, Monitor: Runtime health monitoring
    
    loop Health Check Cycle
        Monitor->>GHES: GET /api/v3/rate_limit
        GHES-->>Monitor: Rate limit status
        Monitor->>Monitor: Check API response times
        Monitor->>Monitor: Validate SSL certificate expiry
        
        alt Health Check Success
            Monitor->>Monitor: Log success metrics
        else Health Check Failure
            Monitor->>Admin: Alert: Enterprise connectivity issues
            Monitor->>Server: Graceful degradation mode
        end
    end
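
The endpoint derivation performed by parseAPIHost/newGHESHost can be illustrated as below. The github.com defaults are real public endpoints; the GHES paths follow the /api/v3, /api/graphql, /api/uploads, and /raw conventions shown in the diagram, and the *.ghe.com (data-residency) case is omitted for brevity. Treat this as a reconstruction, not the repository's exact function:

package ghmcp

import (
    "fmt"
    "net/url"
    "strings"
)

// apiHost groups the endpoint roots derived from GITHUB_HOST. Field names and
// exact paths mirror the diagram above and common GHES conventions.
type apiHost struct {
    REST    string
    GraphQL string
    Upload  string
    Raw     string
}

func parseAPIHost(host string) (apiHost, error) {
    if host == "" || host == "https://github.com" {
        // Public github.com defaults.
        return apiHost{
            REST:    "https://api.github.com/",
            GraphQL: "https://api.github.com/graphql",
            Upload:  "https://uploads.github.com/",
            Raw:     "https://raw.githubusercontent.com/",
        }, nil
    }
    u, err := url.Parse(host)
    if err != nil || u.Scheme == "" || u.Host == "" {
        return apiHost{}, fmt.Errorf("invalid GITHUB_HOST %q: %v", host, err)
    }
    base := strings.TrimSuffix(u.String(), "/")
    // GitHub Enterprise Server exposes its APIs under the instance hostname.
    return apiHost{
        REST:    base + "/api/v3/",
        GraphQL: base + "/api/graphql",
        Upload:  base + "/api/uploads/",
        Raw:     base + "/raw/",
    }, nil
}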

5.2 High Availability and Failover Flow

sequenceDiagram
    participant LoadBalancer as Load Balancer
    participant Primary as Primary MCP Server
    participant Secondary as Secondary MCP Server  
    participant HealthCheck as Health Check
    participant GitHub as GitHub API
    participant Client as AI Client
    participant Failover as Failover Manager
    
    Note over LoadBalancer, Failover: High-availability deployment architecture
    
    Note over LoadBalancer: Normal operation
    Client->>LoadBalancer: MCP requests
    LoadBalancer->>Primary: Route to primary server
    Primary->>GitHub: GitHub API calls
    GitHub-->>Primary: API responses
    Primary-->>LoadBalancer: MCP responses
    LoadBalancer-->>Client: Results
    
    Note over HealthCheck: Continuous health checks
    loop Health Monitoring
        HealthCheck->>Primary: /health endpoint check
        Primary->>Primary: Check GitHub API connectivity
        Primary->>Primary: Check toolset status
        Primary->>Primary: Check resource usage
        Primary-->>HealthCheck: Health status: OK
        
        HealthCheck->>Secondary: /health endpoint check
        Secondary-->>HealthCheck: Health status: Standby OK
    end
    
    Note over Primary, Failover: Failure scenario handling
    
    Primary->>GitHub: API request
    GitHub-->>Primary: 503 Service Unavailable (GitHub outage)
    Primary->>Primary: Detect GitHub API failure
    Primary->>Primary: Enable graceful degradation
    
    Note over Primary: Graceful degradation mode
    Primary->>Primary: Disable write operations
    Primary->>Primary: Use cached data where possible
    Primary->>Primary: Return informative error messages
    
    Client->>LoadBalancer: create_issue request
    LoadBalancer->>Primary: Forward request
    Primary->>Primary: Check GitHub API status
    Primary-->>LoadBalancer: "GitHub API temporarily unavailable. Read operations may work with cached data."
    LoadBalancer-->>Client: Service degradation notice
    
    Note over HealthCheck, Failover: Server failure handling
    
    HealthCheck->>Primary: Health check
    Primary->>Primary: Server overload/crash
    Note over Primary: No response (timeout)
    
    HealthCheck->>Failover: Primary server unresponsive
    Failover->>Failover: Initiate failover procedure
    Failover->>LoadBalancer: Remove primary from rotation
    Failover->>Secondary: Promote to primary
    
    Secondary->>Secondary: Initialize as primary
    Secondary->>GitHub: Test API connectivity
    GitHub-->>Secondary: Connection OK
    Secondary->>Secondary: Load full toolset configuration
    Secondary-->>Failover: Ready to serve
    
    Failover->>LoadBalancer: Route traffic to secondary (now primary)
    
    Note over Client, Secondary: Service restoration
    Client->>LoadBalancer: New MCP request
    LoadBalancer->>Secondary: Route to new primary
    Secondary->>GitHub: GitHub API call
    GitHub-->>Secondary: API response
    Secondary-->>LoadBalancer: MCP response
    LoadBalancer-->>Client: Service restored
    
    Note over Primary, Failover: Recovery of the original primary server
    Primary->>Primary: Restart/recovery complete
    Primary->>HealthCheck: Register as available
    HealthCheck->>Failover: Primary server recovered
    
    Failover->>Failover: Plan gradual traffic restoration
    Failover->>LoadBalancer: Add primary back to pool (reduced weight)
    Failover->>Failover: Monitor performance
    
    alt Recovery Successful
        Failover->>LoadBalancer: Restore normal traffic distribution
        Note over LoadBalancer: Both servers active (load sharing)
    else Recovery Issues  
        Failover->>LoadBalancer: Keep secondary as primary
        Failover->>Primary: Investigate issues further
    end
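
For an HTTP-fronted deployment, the /health endpoint the load balancer polls can be as small as the sketch below: report healthy only when GitHub answers an authenticated request within a short deadline. The endpoint itself is a deployment concern rather than part of the stdio server, and the go-github major version in the import path is illustrative:

package health

import (
    "context"
    "net/http"
    "time"

    "github.com/google/go-github/v66/github"
)

// healthHandler answers the load balancer's health probes. It mirrors the
// diagram's "Check GitHub API connectivity" step by issuing a lightweight
// authenticated request (GET /user) with a tight deadline.
func healthHandler(client *github.Client) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        ctx, cancel := context.WithTimeout(r.Context(), 3*time.Second)
        defer cancel()

        if _, _, err := client.Users.Get(ctx, ""); err != nil {
            http.Error(w, "degraded: GitHub API unreachable", http.StatusServiceUnavailable)
            return
        }
        w.WriteHeader(http.StatusOK)
        _, _ = w.Write([]byte("ok"))
    }
}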

6. Performance Monitoring and Debugging Sequence Diagrams

6.1 Profiling and Optimization Flow

sequenceDiagram
    participant Developer as Developer
    participant Profiler as Performance Profiler
    participant Server as MCP Server
    participant Monitor as Metrics Monitor
    participant Logger as Debug Logger
    participant Optimizer as Performance Optimizer
    
    Note over Developer, Optimizer: Performance monitoring and optimization flow
    
    Developer->>Server: Enable performance monitoring
    Server->>Profiler: profiler.New(nil, profiler.IsProfilingEnabled())
    Server->>Logger: Setup debug logging with --enable-command-logging
    Server->>Monitor: Initialize metrics collection
    
    Note over Server: Monitoring a normal tool call
    Server->>Server: Process get_job_logs request
    Server->>Profiler: prof.Start(ctx, "log_buffer_processing")
    
    Server->>Server: Execute log download and processing
    
    Note over Profiler: Detailed performance data collection
    Profiler->>Profiler: Measure download time
    Profiler->>Profiler: Measure buffer processing time
    Profiler->>Profiler: Track memory usage
    Profiler->>Profiler: Count processed lines/bytes
    
    Server->>Profiler: finish(lineCount, byteCount)
    Profiler->>Monitor: Record metrics{
    Profiler->>Monitor:   operation: "log_buffer_processing",
    Profiler->>Monitor:   duration: 2.3s,
    Profiler->>Monitor:   lines_processed: 15234,
    Profiler->>Monitor:   bytes_processed: 2456789,
    Profiler->>Monitor:   memory_peak: 45MB
    Profiler->>Monitor: }
    
    Monitor->>Logger: Log performance data
    Logger->>Logger: Write to debug log file
    
    Note over Developer: Identifying performance issues
    Developer->>Logger: Analyze debug logs
    Logger-->>Developer: grep "performance" debug.log
    
    Developer->>Monitor: Query performance metrics
    Monitor-->>Developer: {
    Monitor-->>Developer:   avg_response_time: 3.2s,
    Monitor-->>Developer:   p95_response_time: 8.1s,
    Monitor-->>Developer:   memory_usage_trend: increasing,
    Monitor-->>Developer:   bottleneck_operations: ["log_buffer_processing", "file_content_download"]
    Monitor-->>Developer: }
    
    Note over Developer: Bottlenecks identified
    Developer->>Optimizer: Analyze bottlenecks
    Optimizer->>Optimizer: Identify issues:
    Note over Optimizer: 1. Large log files causing memory spikes
    Note over Optimizer: 2. No connection pooling for HTTP requests  
    Note over Optimizer: 3. Ring buffer size not optimal
    
    Optimizer-->>Developer: Optimization recommendations
    
    Note over Developer: Implementing performance optimizations
    Developer->>Server: Implement optimizations:
    Developer->>Server: 1. Adjust content window size
    Developer->>Server: 2. Improve ring buffer algorithm
    Developer->>Server: 3. Add HTTP connection pooling
    Developer->>Server: 4. Implement response streaming
    
    Note over Server: Post-optimization performance test
    Server->>Profiler: Re-run performance tests
    Server->>Server: Process same get_job_logs request
    Profiler->>Monitor: Record improved metrics{
    Profiler->>Monitor:   duration: 1.1s (52% improvement),
    Profiler->>Monitor:   memory_peak: 12MB (73% reduction),
    Profiler->>Monitor:   lines_processed: 15234 (same),
    Profiler->>Monitor:   efficiency_score: 8.7/10
    Profiler->>Monitor: }
    
    Monitor-->>Developer: Performance improvement confirmed
    Developer->>Developer: Deploy optimized version to production
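
The Start/finish pair around "log_buffer_processing" boils down to a timing span that records duration plus line and byte counts. A minimal stand-in for the profiler package (the repository's profiler has a richer API; this only shows the measurement pattern from the diagram):

package profiler

import (
    "context"
    "log/slog"
    "time"
)

// Span measures one named operation. Start captures the wall-clock start;
// Finish logs the elapsed time together with how much work was processed.
type Span struct {
    name  string
    start time.Time
    log   *slog.Logger
}

func Start(ctx context.Context, log *slog.Logger, name string) *Span {
    return &Span{name: name, start: time.Now(), log: log}
}

func (s *Span) Finish(lines, bytes int) {
    s.log.Info("performance",
        "operation", s.name,
        "duration", time.Since(s.start).String(),
        "lines_processed", lines,
        "bytes_processed", bytes,
    )
}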

7. Summary

The sequence diagram analysis of GitHub MCP Server highlights the following key characteristics:

System Characteristics

  1. Layered architecture: a clear request-processing pipeline from the protocol layer down to the business-logic layer
  2. Asynchronous processing: support for concurrent operations and optimized batch processing
  3. Error recovery: comprehensive error handling and graceful degradation mechanisms
  4. Performance monitoring: built-in profiling and optimization support

Enterprise Characteristics

  1. High availability: support for load balancing and failover
  2. Multi-environment compatibility: adapts to different GitHub deployment environments
  3. Secure authentication: multiple authentication methods and permission management
  4. Monitoring and alerting: complete health checks and performance monitoring

Extensibility Design

  1. Dynamic toolset management: enable/disable functional modules at runtime
  2. API coordination: intelligently selects the optimal API calling strategy
  3. Batch optimization: performance optimizations for large-scale operations
  4. Pluggable architecture: easy to add new features and toolsets

This design allows GitHub MCP Server to run reliably across a wide range of environments and to give AI tools dependable, efficient access to the GitHub platform.