Lesson 8: Parallel Agents & Cloud Agents

Session Duration: 20 minutes
Audience: Embedded/C++ Developers (Intermediate to Advanced)
Environment: Windows, VS Code
Extensions: GitHub Copilot
Source Control: GitHub/Bitbucket


Overview

This lesson teaches you to multiply your productivity by running multiple Copilot agents simultaneously. You'll learn to coordinate parallel workflows, leverage cloud/background agents for asynchronous tasks, and orchestrate complex multi-domain features using the 6-agent + 9-skill architecture.

What You'll Learn

  • Parallel Agent Patterns - Run multiple agent instances with different task contexts
  • Cloud/Background Agents - Offload work to GitHub infrastructure for async processing
  • Coordination Strategies - Plan, launch, review, and integrate parallel outputs
  • Multi-Domain Development - Tackle hardware, firmware, control, and testing simultaneously

Key Concepts

| Concept | Description |
| --- | --- |
| Parallel Agents | Multiple chat windows running different agents with specialized task contexts |
| 6 Orchestrator Agents | ODrive Engineer, ODrive QA, ODrive Ops, ODrive Reviewer, ODrive Toolchain, Ada to C++ Migrator |
| 9 Specialized Skills | Agents invoke skills: control-algorithms, cpp-testing, foc-tuning, odrive-ops, odrive-toolchain, pcb-review, sensorless-control, signal-integrity, ada-cpp-migration |
| Cloud Agents | Asynchronous agents that work in the background on GitHub infrastructure |
| Coordination Phases | Design → Review → Implement → Integrate workflow |
| Skill Routing | Agents automatically invoke appropriate skills based on task context |


Prerequisites

Before starting this session, ensure you have:

  • Completed Agentic Development & Context Engineering - Understanding of agent selection and context layering
  • Visual Studio Code with GitHub Copilot extensions installed and enabled
  • Active Copilot subscription with access to all features
  • Custom agents configured - Verify .github/agents/ folder contains agent definitions
  • Multiple chat panels - Ability to open several Copilot Chat windows

Verify Your Setup

  1. Test multiple chat windows:

    • Open Chat view (Ctrl+Alt+I)
    • Right-click on Chat tab → Move to New Window (or split)
    • Verify you can have 2+ independent chat sessions
  2. Verify agent availability:

    • In each chat window, click the agent dropdown (top of chat panel)
    • Confirm all 6 custom agents appear in the list:
      • ODrive Engineer (primary development orchestrator)
      • ODrive QA (testing & quality assurance)
      • ODrive Ops (CI/CD & release operations)
      • ODrive Reviewer (code review specialist)
      • ODrive Toolchain (build & test operations)
      • Ada to C++ Migrator (Ada to C++ migration specialist)
  3. Test agent selection in parallel:

    • Window 1: Select ODrive Engineer from dropdown
    • Window 2: Select ODrive QA from dropdown
    • Send a test message in each simultaneously

Important: Custom agents are selected from the dropdown menu at the top of the chat panel, NOT via @mention syntax. The @mention syntax (like @workspace) is only for built-in Copilot features.


Why Parallel Agents Matter

Parallel agent workflows represent a significant productivity multiplier for complex development tasks.

Benefits of Parallel Agents

  1. Accelerated Development

    • Multiple tasks execute simultaneously
    • Reduce wait time from 4x to 1x
    • Get comprehensive feedback in minutes, not hours
  2. Domain Expertise Convergence

    • Each agent contributes specialized knowledge
    • Hardware, firmware, control, and QA perspectives
    • Better architecture decisions from multiple viewpoints
  3. Reduced Context Switching

    • Launch all tasks, then review all outputs
    • Focus on integration, not task management
    • Maintain mental flow state
  4. Scalable Complexity

    • Handle larger features without overwhelm
    • Break complex problems into parallel tracks
    • Coordinate via interfaces, not serial dependencies

Learning Path

This lesson covers three main topics in sequence:

| Topic | Focus | Time |
| --- | --- | --- |
| Running Multiple Agents | Patterns, coordination, launching | 8 min |
| Cloud/Background Agents | Async workflows, GitHub integration | 7 min |
| Guided Demo: Multi-Agent Workflow | Encoder calibration with parallel agents | 5 min |

1. Running Multiple Agents (8 min)

What Are Parallel Agents?

🎯 Copilot Modes: Agent (Multiple Instances)

Files to demonstrate:

Architecture Note: The ODrive system uses 6 specialized agents that can each invoke from 9 skills. For parallel workflows, run different agents in parallel (select from dropdown in each chat window) or multiple instances of the same agent with different task contexts.

How to invoke: Select the agent from the dropdown menu at the top of each Copilot Chat panel, then type your prompt.

Sequential vs Parallel Comparison

| Sequential Development | Parallel Agent Development |
| --- | --- |
| Task 1: Design API → wait | Task 1: ODrive Engineer designs API (firmware focus) |
| Task 2: Implement firmware → wait | Task 2: ODrive Toolchain builds & validates |
| Task 3: Write tests → wait | Task 3: ODrive QA plans tests and validation |
| Task 4: Code review → wait | Task 4: ODrive Reviewer checks code quality |
| Total: 4 × wait time | Total: 1 × wait time |

Key Insight: You can run different agents in parallel (ODrive Engineer, ODrive QA, ODrive Toolchain, ODrive Reviewer), or run multiple instances of the same agent with different task contexts.


When to Use Parallel Agents

| ✅ Good Use Cases | ❌ Poor Use Cases |
| --- | --- |
| Independent tasks - different agents on separate modules | Dependent tasks - Task B needs output from Task A |
| Cross-domain features - HW + FW + SW components | Same-file edits - potential merge conflicts |
| Multi-file refactoring - different subsystems | Simple tasks - overhead not worth it |
| Research & implementation - one researches, one implements | Sequential workflows - clear ordering required |

Parallel Agent Patterns

🎯 Copilot Mode: Agent

Pattern 1: Domain Separation

Scenario: Add new motor control feature

🤖 Agent Mode Prompts (Parallel):

| Window | Agent | Task Focus |
| --- | --- | --- |
| 1 | ODrive Engineer | Design the control algorithm (control-algorithms skill) |
| 2 | ODrive Engineer | Design the embedded implementation (firmware focus) |
| 3 | ODrive Engineer | Validate electrical constraints (pcb-review skill) |
| 4 | ODrive QA | Create test plan and fixtures (cpp-testing skill) |

Note: The same agent can be used in multiple windows with different task contexts. The agent will invoke the appropriate skill based on your request.

Pattern 2: Multi-Module Feature

Scenario: Implement over-the-air (OTA) firmware updates

🤖 Agent Mode Prompts (Parallel):

Window 1 - ODrive Engineer (Bootloader):
  "Design bootloader protocol for secure firmware updates"

Window 2 - ODrive Engineer (Application):
  "Add firmware update state machine to axis.cpp"

Window 3 - Regular Copilot (Python Tools):
  "Create upload utility in tools/firmware_update.py"

Window 4 - ODrive QA (Testing):
  "Design test strategy and test rig configuration"

Pattern 3: Refactoring Campaign

Scenario: Modernize legacy code across codebase

🤖 Agent Mode Prompts (Parallel):

Window 1: "Refactor Firmware/MotorControl/motor.cpp to modern C++17"
Window 2: "Improve error handling in Firmware/communication/can/odrive_can.cpp"
Window 3: "Add type hints to tools/odrive/*.py"
Window 4: "Regenerate API docs and update examples"

How to Launch Parallel Agents

Option 1: Multiple Chat Windows (VS Code)

  • Open multiple Copilot Chat panels
  • Assign each to a different task
  • Select the desired agent from the dropdown at the top of each panel

Option 2: GitHub Copilot Workspace (Web-based)

  • Use GitHub.com Copilot chat
  • Can run cloud agents in background
  • Results delivered to PR or issue

Option 3: CLI with Background Jobs (Advanced)

# Launch multiple agents via CLI (future capability)
gh copilot agent @firmware-engineer "task 1" --background
gh copilot agent @qa-engineer "task 2" --background
gh copilot agent list-jobs

Coordinating Parallel Agents

🎯 Copilot Mode: Agent

Challenge: Agents work independently - how do you coordinate?

Solution: Master Coordination Plan

| Phase | Activities | Duration |
| --- | --- | --- |
| Phase 1: Parallel Design | Launch all agents with their tasks | 5 min |
| Phase 2: Review & Align | Review outputs, identify integration points | 5 min |
| Phase 3: Parallel Implementation | Agents implement based on the aligned design | 10 min |
| Phase 4: Integration | Combine work, test end-to-end | 5 min |

💬 Chat Mode Prompt (Coordination Example):

Phase 1 - Launch in parallel:

Agent A: ODrive Engineer (Control Focus)
  "Design optimized Park transform using SIMD instructions"

Agent B: ODrive Engineer (Firmware Focus)
  "Research STM32 DSP library for fast trigonometry"

Agent C: ODrive QA (Testing Focus)
  "Create performance benchmarking harness"
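For context, the Park transform these Phase 1 prompts revolve around is compact enough to sketch in scalar form; the SIMD optimization Agent A is asked about would vectorize the sin/cos evaluation and the multiply-accumulates. A standalone reference version (illustrative, not ODrive's actual API):

```cpp
#include <cmath>

// Park transform: rotate stationary-frame (alpha, beta) currents into the
// rotor-aligned (d, q) frame at electrical angle theta (radians).
struct DQ { float d; float q; };

DQ park_transform(float i_alpha, float i_beta, float theta) {
    const float c = std::cos(theta);
    const float s = std::sin(theta);
    return DQ{
        c * i_alpha + s * i_beta,    // d-axis (flux-aligned) current
        -s * i_alpha + c * i_beta    // q-axis (torque-producing) current
    };
}
```

At theta = 0 the two frames coincide, so d equals alpha and q equals beta; since the transform is a pure rotation, it also preserves the current vector's magnitude - both make handy first test cases for any SIMD variant.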

2. Cloud/Background Agents (7 min)

What Are Cloud Agents?

🎯 Copilot Mode: Background/Cloud

| Local Agents (what we've used) | Cloud/Background Agents |
| --- | --- |
| Run in VS Code | Run on GitHub infrastructure |
| You wait for the response | Work asynchronously |
| Interactive chat session | Deliver results when done (PR, issue, notification) |

Use Cases for Background Agents

🎯 Copilot Mode: Background

Use Case 1: Code Review Automation

Example GitHub Actions Workflow:

# .github/workflows/copilot-review.yml
on:
  pull_request:
    types: [opened, synchronize]

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Copilot Review
        run: |
          gh copilot agent "ODrive QA" \
            "Review this PR for:
             - MISRA C++ compliance
             - Interrupt safety
             - Memory leaks
             - Test coverage"

Result: Agent invokes cpp-testing skill to analyze the PR

Use Case 2: Continuous Refactoring

Run background agents to gradually improve codebase:

Background Task 1: "Add Doxygen comments to all public APIs"
Background Task 2: "Convert raw pointers to smart pointers"
Background Task 3: "Add unit tests for uncovered functions"

Agents work overnight, create draft PRs for review

Use Case 3: Documentation Generation

🤖 Agent Mode Prompt (Background):

Background Task: ODrive Engineer
  "Generate API reference documentation for all classes 
   in Firmware/MotorControl/*.hpp and create markdown 
   files in docs/api/"

Agent processes all files, generates docs, opens PR

Use Case 4: Multi-Repository Updates

For organizations with multiple repos:

Background Task: "Update all repositories to use new 
  CAN protocol version. Repos: ODrive-Firmware, 
  ODrive-Tools, ODrive-GUI"

Agent creates coordinated PRs across repos


Benefits vs Limitations

| ✅ Benefits | ⚠️ Limitations |
| --- | --- |
| Asynchronous work - don't block development | Requires GitHub.com - not VS Code alone |
| Large-scale tasks - entire codebases | Less interactive - can't iterate in real time |
| Scheduled execution - run during off-hours | Review required - always review agent PRs |
| Audit trail - all changes tracked in PRs | Rate limits - subject to GitHub API limits |
| Team collaboration - results visible to all | Context size - limited by the model window |

Background Agent Workflow

graph LR
    A[Developer: Define Task] --> B[Cloud Agent: Execute]
    B --> C[Agent: Create PR/Issue Comment]
    C --> D[Team: Review]
    D --> E{Approve?}
    E -->|Yes| F[Merge]
    E -->|No| G[Refine Task]
    G --> B

Demo: Background Agent via GitHub.com

🎯 Copilot Mode: Cloud

Live Demo Steps:

  1. Navigate to the GitHub.com repository
  2. Open an issue: "Add temperature monitoring to motor.cpp"
  3. In issue comments, tag the agent:

🤖 Agent Mode Prompt (GitHub Issue):

@copilot ODrive Engineer 
Please implement temperature monitoring with NTC thermistor 
support in Firmware/MotorControl/motor.cpp

Requirements:
- Use Steinhart-Hart equation
- Add thermal shutdown at 85°C
- Configurable via axis.config_
- Update diagnostics struct
  4. Agent processes in background
  5. Agent responds with implementation plan or code
  6. Iterate in issue comments as needed
  7. Agent can open a PR with the implementation
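The Steinhart-Hart requirement in the issue above is straightforward to prototype on the host before the agent touches firmware. A minimal sketch, using generic example coefficients for a 10 kΩ NTC (the SH_A/SH_B/SH_C values here are illustrative and must come from the actual thermistor datasheet):

```cpp
#include <cmath>

// Steinhart-Hart: 1/T = A + B*ln(R) + C*ln(R)^3, with T in kelvin.
// Example coefficients for a generic 10k NTC - replace with datasheet values.
constexpr float SH_A = 1.009249522e-3f;
constexpr float SH_B = 2.378405444e-4f;
constexpr float SH_C = 2.019202697e-7f;

float ntc_temperature_c(float resistance_ohm) {
    const float ln_r = std::log(resistance_ohm);
    const float inv_t = SH_A + SH_B * ln_r + SH_C * ln_r * ln_r * ln_r;
    return 1.0f / inv_t - 273.15f;  // kelvin -> Celsius
}

// Thermal-shutdown check matching the issue's 85 degC requirement.
bool over_temperature(float resistance_ohm, float limit_c = 85.0f) {
    return ntc_temperature_c(resistance_ohm) >= limit_c;
}
```

With these example coefficients, 10 kΩ maps to roughly 25 °C, the nominal point of a 10k NTC, which makes a quick sanity check before wiring the function into the diagnostics struct.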

Note: As of early 2025, this feature is in beta. Check GitHub Copilot docs for latest capabilities.


3. Guided Demo: Multi-Agent Workflow (5 min)

Example: Implement Encoder Calibration Feature

🎯 Copilot Modes: Agent (Multiple Parallel Instances)

Scenario: Implement automatic encoder calibration

This feature requires:

  • Hardware knowledge - Encoder electrical specs
  • Firmware implementation - Calibration routine
  • Control theory - How calibration affects control
  • Testing - Validation strategy

Approach: Use multiple instances of ODrive Engineer with different task focuses, plus ODrive QA for testing. The agents will invoke appropriate skills based on the task context.

Files to work with:


Step-by-Step Guide

Step 1: Define the Feature (30 sec)

"We need to add automatic encoder calibration. This is complex because it touches hardware, firmware, control algorithms, and testing. Let's use parallel agents."

Step 2: Launch Parallel Agents (2 min)

Open 4 separate Chat windows (or tabs)

🤖 Agent Mode Prompt - Window 1 (Hardware Focus):

Select ODrive Engineer from agent dropdown, then paste:

What are the electrical requirements for 
encoder calibration on ODrive v3.6?

- Encoder type: Incremental with index
- Calibration involves rotating motor one full revolution
- Need to measure electrical angle vs mechanical angle
- Any constraints on rotation speed or current?

(This task will use the pcb-review skill internally)

🤖 Agent Mode Prompt - Window 2 (Firmware Focus):

Select ODrive Engineer from agent dropdown, then paste:

Design the calibration routine for encoder.cpp

Requirements:
- Rotate motor at constant velocity (e.g., 1 rev/sec)
- Record encoder counts vs electrical angle
- Detect index pulse
- Store calibration table in NVM
- Must be interrupt-safe
- Allow user to trigger via axis.requested_state

Show me the function signature and high-level algorithm.

🤖 Agent Mode Prompt - Window 3 (Control Focus):

Select ODrive Engineer from agent dropdown, then paste:

What's the optimal motor control 
strategy during encoder calibration?

- Need smooth, constant velocity
- Minimize torque ripple
- Open loop or closed loop?
- What if there's load on the motor?

(This task will use control-algorithms skill internally)

🤖 Agent Mode Prompt - Window 4 (Testing):

Select ODrive QA from agent dropdown, then paste:

Create a test plan for encoder calibration feature.

What to test:
- Calibration accuracy
- Repeatability (run 10 times, compare results)
- Behavior with load on motor
- Error cases (stall, overvoltage, etc.)
- Calibration data persistence (survives reboot)

(This invokes the cpp-testing skill)

Step 3: Review Outputs (1.5 min)

Review each agent's output:

  • ODrive Engineer (Hardware) → voltage/current limits and encoder specs
  • ODrive Engineer (Firmware) → calibration algorithm structure
  • ODrive Engineer (Control) → open-loop constant-current recommendation
  • ODrive QA (Testing) → comprehensive test cases

Key Insight: Each window's output informs the others, even though they use the same agents with different task contexts!

Step 4: Integration (1 min)

🤖 Agent Mode Prompt - Integration Window:

Select ODrive Engineer from agent dropdown, then paste:

Now implement the full calibration routine.

Context from parallel windows:
- Hardware: Max calibration current 10A, speed < 2 rev/sec
- Control: Use open-loop with constant Iq current
- Testing: Need to log calibration data for validation

Files: 
#file:src-ODrive/Firmware/MotorControl/encoder.cpp
#file:src-ODrive/Firmware/MotorControl/encoder.hpp

Implement calibrate() method following the design from earlier.

Acceptance Criteria:
- Static allocation only
- Interrupt-safe operations
- Error codes, no exceptions
- Doxygen documentation

The agent will synthesize outputs from all windows into a cohesive implementation.
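One piece of that synthesis worth sanity-checking on the host is the offset computation at the heart of encoder calibration: averaging the difference between measured and commanded electrical angles. A hedged sketch (function name and interface are illustrative, not the actual encoder.cpp API) that averages unit vectors so wrap-around at ±π is handled correctly:

```cpp
#include <cmath>
#include <cstddef>

// Mean angular offset between measured and commanded electrical angles.
// Vector averaging (sum of sin/cos of each difference, then atan2) avoids
// the discontinuity a naive arithmetic mean would hit at +/-pi.
float calibration_offset(const float* measured, const float* commanded,
                         std::size_t n) {
    float sum_sin = 0.0f, sum_cos = 0.0f;
    for (std::size_t i = 0; i < n; ++i) {
        const float diff = measured[i] - commanded[i];
        sum_sin += std::sin(diff);
        sum_cos += std::cos(diff);
    }
    return std::atan2(sum_sin, sum_cos);  // mean offset in radians
}
```

Feeding it samples that are all shifted by the same constant should recover that constant, which is an easy unit test to hand ODrive QA alongside the repeatability runs.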


Success Criteria

By the end of this exercise, you should have:

  • ✅ 4 windows worked simultaneously on different aspects
  • ✅ Each window contributed domain-specific expertise via the same agents with different contexts
  • ✅ Total time ~3-4 minutes vs. 15-20 minutes sequential
  • ✅ Better quality - each domain properly addressed through skill invocation

Key Takeaways

  1. Parallel = faster - Multiple perspectives simultaneously
  2. Task context matters - Same agent can handle different domains based on prompt
  3. Skills are invoked automatically - Agents route to appropriate skills
  4. Integration phase - You synthesize the outputs
  5. Coordination matters - Pre-define interfaces when possible

Practice Exercises

These exercises help you master parallel agent workflows. Complete them to build confidence with multi-agent coordination.

Exercise 1: Parallel Agent Setup

Objective: Configure your environment for parallel agent workflows

Steps:

  1. Open Copilot Chat (Ctrl+Alt+I)
  2. Create multiple windows:
    • Right-click on the Chat tab → "Move to New Window" or drag to split
    • Repeat until you have 4 chat windows/panels
  3. Configure each window for a different task focus:
    • Window 1: ODrive Engineer (firmware focus)
    • Window 2: ODrive Engineer (control focus)
    • Window 3: ODrive Engineer (hardware focus)
    • Window 4: ODrive QA (testing focus)

Verification Checklist:

  • 4 chat windows open and visible
  • Each window has the appropriate agent selected from dropdown
  • You can send messages independently in each window
Solution & Tips

Window Layout Options:

  • Split horizontally: View → Editor Layout → Split Down (Ctrl+K Ctrl+\)
  • Split vertically: View → Editor Layout → Split Right (Ctrl+\)
  • Floating windows: Drag a tab outside VS Code to create a new window
  • Grid layout: Combine splits for a 2x2 grid

Verification Test: Send a simple message to each window simultaneously:

Window 1 (ODrive Engineer): "What files define the motor controller?"
Window 2 (ODrive Engineer): "What control modes does ODrive support?"
Window 3 (ODrive Engineer): "What voltage ratings does ODrive v3.6 support?"
Window 4 (ODrive QA): "What testing frameworks are used in ODrive?"

All four should respond independently without blocking each other.

Pro Tip: Keep this 4-panel arrangement in a dedicated workspace; VS Code restores the editor layout per workspace, so the setup is one click away next session.


Exercise 2: Parallel Feature Design - CAN Heartbeat

Objective: Use parallel agents to design a multi-domain feature

Feature: Add CAN bus heartbeat monitoring to detect communication failures

Steps:

  1. Launch these prompts in parallel (one per window):

| Window | Agent | Prompt |
| --- | --- | --- |
| 1 | ODrive Engineer | "Design heartbeat packet structure for CAN bus on ODrive" |
| 2 | ODrive Engineer | "What are CAN bus timing constraints for heartbeat monitoring?" |
| 3 | ODrive Engineer | "How should heartbeat failure affect the motor state machine?" |
| 4 | ODrive QA | "Create test cases for CAN heartbeat monitoring feature" |

  2. Review each output and note key information
  3. Identify integration points between the outputs

Review Checklist:

  • All 4 windows responded with domain-specific information
  • Outputs address different aspects without contradiction
  • You can identify how outputs connect together
Solution: Expected Outputs

Window 1 (Packet Structure) - Expected Response:

Heartbeat Packet Design:
- CAN ID: 0x700 + node_id (NMT heartbeat standard)
- Payload: 1 byte state + 2 byte error flags + timestamp
- Rate: 100ms default, configurable
- Boot message on startup

struct HeartbeatPacket {
    uint8_t state;      // BOOT, PREOP, OP, STOPPED
    uint16_t errors;    // Error flags bitfield
    uint32_t uptime_ms; // Optional: system uptime
};

Window 2 (Timing Constraints) - Expected Response:

CAN Timing Constraints:
- Bus load: Heartbeat at 10Hz = ~0.1% bus load at 1Mbps
- Timeout detection: 3x heartbeat period (300ms default)
- Jitter tolerance: ±10% acceptable
- Priority: Lower than real-time control messages
- Watchdog: Independent hardware timer recommended

Window 3 (State Machine) - Expected Response:

Heartbeat Failure Response:
1. Detect timeout (3 missed heartbeats)
2. Set axis.error = ERROR_HEARTBEAT_TIMEOUT
3. Transition to IDLE state (configurable)
4. Options:
   - COAST: Free-spinning (safest)
   - BRAKE: Active braking then coast
   - HOLD: Maintain position (risky without commands)
5. Require explicit clear before restart

Window 4 (Test Cases) - Expected Response:

Test Plan:
1. Normal operation: Verify heartbeat sent at configured rate
2. Timeout detection: Stop heartbeat, verify timeout after 3x period
3. Recovery: Resume heartbeat, verify system recovers
4. Bus-off: Test behavior during CAN bus errors
5. Configuration: Verify period changes take effect
6. Multi-node: Test with multiple devices on bus
7. Boot sequence: Verify boot message sent on startup

Integration Points:

  • Packet structure defines what QA tests validate
  • Timing constraints inform test timing parameters
  • State machine behavior must match test expectations

Exercise 3: Integration Practice

Objective: Synthesize parallel outputs into a cohesive implementation request

Steps:

  1. Review outputs from Exercise 2 (or use the solution examples)
  2. Create an integration prompt that combines all domain knowledge:

Select ODrive Engineer from agent dropdown, then paste:

Implement CAN heartbeat monitoring.

Context from parallel analysis:
- Packet: 0x700+node_id, 1 byte state + 2 byte errors, 100ms default
- Timing: 3x period timeout, ±10% jitter tolerance, lower priority than control
- State: On timeout set ERROR_HEARTBEAT_TIMEOUT, transition to IDLE (configurable)
- Tests: Need boot message, timeout detection, recovery, multi-node support

Files:
#file:src-ODrive/Firmware/communication/can/odrive_can.cpp
#file:src-ODrive/Firmware/MotorControl/axis.hpp

Requirements:
- Static allocation only
- Interrupt-safe timeout check
- Configurable via axis.config_
- Error codes, no exceptions

Implement the heartbeat sender and timeout detector.
  3. Send the integration prompt and review the synthesized implementation
Solution: What to Look For

Good Integration Output Should Include:

  1. Configuration structure:
struct HeartbeatConfig {
    bool enable = true;
    uint32_t period_ms = 100;
    uint32_t timeout_ms = 300;  // 3x period
    AxisState timeout_action = AXIS_STATE_IDLE;
};
  2. Heartbeat sender (called from main loop):
void send_heartbeat() {
    if (!config_.heartbeat.enable) return;
    if (timer_ms_ - last_heartbeat_ms_ < config_.heartbeat.period_ms) return;
    
    HeartbeatPacket pkt = {
        .state = static_cast<uint8_t>(current_state_),
        .errors = static_cast<uint16_t>(error_)
    };
    can_send(0x700 + node_id_, &pkt, sizeof(pkt));
    last_heartbeat_ms_ = timer_ms_;
}
  3. Timeout detector:
void check_heartbeat_timeout() {
    if (!config_.heartbeat.enable) return;
    if (timer_ms_ - last_received_heartbeat_ms_ > config_.heartbeat.timeout_ms) {
        error_ |= ERROR_HEARTBEAT_TIMEOUT;
        requested_state_ = config_.heartbeat.timeout_action;
    }
}

Key Validation:

  • Static allocation ✅ (no new or malloc)
  • Interrupt-safe ✅ (simple comparisons, no blocking)
  • Configurable ✅ (via config struct)
  • Error codes ✅ (no exceptions)

Exercise 4: Cloud Agent Workflow Simulation

Objective: Understand how to structure requests for background/cloud agents

Scenario: You want to set up automated code review on PRs using cloud agents

Steps:

  1. Create a GitHub Actions workflow that would trigger Copilot review:
# .github/workflows/copilot-review.yml
name: Copilot Code Review

on:
  pull_request:
    types: [opened, synchronize]
    paths:
      - 'Firmware/**/*.cpp'
      - 'Firmware/**/*.hpp'

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Copilot Review
        run: |
          gh copilot agent "ODrive QA" \
            "Review this PR for:
             - MISRA C++ compliance
             - Interrupt safety issues
             - Memory leaks or unbounded allocations
             - Missing error handling
             - Test coverage gaps"
  2. Think about: What makes this effective for async review?
Solution: Cloud Agent Best Practices

Why This Workflow Works:

  1. Scoped trigger: Only runs on firmware changes (.cpp, .hpp files)
  2. Clear agent selection: ODrive QA is the right agent for code review
  3. Specific criteria: Bullet list tells agent exactly what to check
  4. Async-friendly: Results appear as PR comments, not blocking

Improvements to Consider:

# Enhanced version
- name: Run Copilot Review
  run: |
    gh copilot agent "ODrive QA" \
      "Review this PR for embedded firmware best practices.
       
       Check for:
       - MISRA C++ 2023 compliance (focus on rules 0-1, 5-x, 6-x)
       - ISR safety: no blocking calls, volatile for shared data
       - Memory: static allocation only, no heap usage
       - Error handling: all error paths covered
       - Thread safety: proper mutex usage around shared resources
       
       For each issue found:
       1. Cite the specific file and line
       2. Explain why it's a problem
       3. Suggest a fix
       
       Provide a summary at the end with pass/fail status."

Key Principles for Cloud Agents:

  • Be explicit about what "good" looks like
  • Request structured output (file, line, suggestion)
  • Ask for a summary/verdict for quick triage
  • Scope to specific domains (don't ask for everything)

Exercise 5: Coordination Challenge

Objective: Practice the full parallel workflow with a complex feature

Feature: Add sensorless motor startup (no encoder, estimate position from back-EMF)

Steps:

  1. Plan your parallel tasks (which windows, which agents, which focus areas)
  2. Launch 4 parallel prompts targeting different domains
  3. Review and identify conflicts or integration challenges
  4. Create integration prompt that resolves conflicts
  5. Document what you learned about coordination
Solution: Recommended Approach

Parallel Task Plan:

| Window | Agent | Focus | Prompt |
| --- | --- | --- | --- |
| 1 | ODrive Engineer | Control Theory | "Explain sensorless FOC startup: HFI vs I/F vs observer methods" |
| 2 | ODrive Engineer | Firmware | "What changes to motor.cpp for sensorless startup?" |
| 3 | ODrive Engineer | Tuning | "What parameters need tuning for sensorless operation?" |
| 4 | ODrive QA | Testing | "How to test sensorless startup without encoder?" |

Expected Conflicts:

  • Control may recommend HFI, firmware may not support it yet
  • Tuning parameters depend on method chosen
  • Testing needs to know which method to validate

Integration Strategy:

Select ODrive Engineer from agent dropdown, then paste:

Design sensorless startup for ODrive.

Parallel analysis summary:
- Control: Recommends I/F (inject current at angle) for simplicity
- Firmware: Can add state AXIS_STATE_SENSORLESS_STARTUP
- Tuning: Need ramp_current, ramp_time, transition_velocity
- Testing: Use known load, compare to encoder-based as reference

Decision: Use I/F method (simplest, most robust for initial implementation)

Files:
#file:src-ODrive/Firmware/MotorControl/motor.cpp
#file:src-ODrive/Firmware/MotorControl/axis.hpp

Implement sensorless startup state with I/F current injection.

Coordination Lessons:

  1. Pre-decide method when options exist (don't let agents conflict)
  2. Integration prompt resolves ambiguity by stating decisions
  3. Human is architect - you make the final design calls
  4. Testing informs design - QA constraints affect implementation

Quick Reference: Parallel Agent Patterns

Agent & Skill Assignment Guide

| Domain | Agent | Skill Invoked |
| --- | --- | --- |
| Low-level firmware | ODrive Engineer | Direct implementation + odrive-toolchain for builds |
| Control algorithms | ODrive Engineer | control-algorithms (🚧), foc-tuning (🚧), sensorless-control (🚧) |
| Hardware specs | ODrive Engineer | pcb-review (🚧), signal-integrity (🚧) |
| Testing & QA | ODrive QA | cpp-testing |
| CI/CD & releases | ODrive Ops | odrive-ops |
| Code review | ODrive Reviewer | N/A (reads and reviews only) |
| Build & test | ODrive Toolchain | odrive-toolchain |
| Ada migration | Ada to C++ Migrator | ada-cpp-migration |

Legend: 🚧 = Planned skill (not yet implemented)

Note: Use different agents in parallel for their specialized domains, or run multiple instances of the same agent with different task contexts. Agents route to appropriate skills automatically.

Parallel Workflow Checklist

Before launching:
├── [ ] Define feature scope
├── [ ] Identify domains involved
├── [ ] Assign agents to domains
└── [ ] Pre-define interfaces if possible

During execution:
├── [ ] Launch all agents simultaneously
├── [ ] Monitor for completion
└── [ ] Note any questions/conflicts

After completion:
├── [ ] Review each output individually
├── [ ] Identify integration points
├── [ ] Resolve conflicts
└── [ ] Create integration prompt

Coordination Phases

| Phase | Duration | Activities |
| --- | --- | --- |
| Design | 5 min | Launch parallel agents with design tasks |
| Review | 5 min | Review outputs, identify integration points |
| Implementation | 10 min | Parallel implementation based on the design |
| Integration | 5 min | Combine, test, refine |

Common Pitfalls

| ❌ Don't | ✅ Do |
| --- | --- |
| Launch without a plan | Pre-define interfaces between agents |
| Assign overlapping tasks | Partition work clearly by domain |
| Blindly merge all outputs | Review, validate, integrate systematically |
| Use parallel for sequential tasks | Use parallel only for independent tasks |

Troubleshooting

| Issue | Solution |
| --- | --- |
| Can't open multiple chat windows | Use View → Editor Layout → Split, or drag the chat tab to a new area |
| Agents giving conflicting advice | You're the architect - arbitrate or ask another agent to compare |
| One agent much slower than others | Continue with the faster ones, integrate the slow output later |
| Merge conflicts in generated code | Assign non-overlapping files/functions to each agent |
| Context not shared between windows | Each window is independent - copy relevant context into the integration prompt |
| Too many agents to coordinate | Limit to 3-4 agents; beyond that, coordination overhead increases |
| Cloud agents not available | Feature may be in beta - check GitHub Copilot documentation |
| Background job not completing | Check GitHub Actions logs, verify API rate limits |

Debug Tips

  1. Agent selection issues:

    • Verify agent files exist in src-ODrive/.github/agents/
    • Check agents dropdown shows ODrive Engineer and ODrive QA
    • Ensure agent file has correct .agent.md extension
  2. Coordination problems:

    • Start with 2 windows, then scale up
    • Use explicit interface definitions
    • Review sequentially before integrating
    • Same agent can handle multiple domains via different prompts
  3. Integration failures:

    • Be explicit about context from each window
    • Include file references in integration prompt
    • Ask for merge strategy if conflicts exist
  4. Skills not invoked:

    • Check src-ODrive/.github/skills/ for available skills
    • Some skills are planned (🚧) and not yet implemented
    • Agent decides skill invocation based on task context

Additional Resources

Prompt Templates

See demo-script.md for ready-to-use prompts

Coordination Strategies

See hands-on-exercise.md for more practice problems

Official Documentation


Frequently Asked Questions

How do I avoid merge conflicts with parallel agents?

Short Answer: Assign non-overlapping files or functions to each agent window.

Detailed Explanation: Parallel agents work independently and don't know what the other windows are generating. If two agents modify the same file, you'll need to manually merge their outputs.

Best Practices:

  1. Different files: Window 1 works on encoder.cpp, Window 2 on motor.cpp
  2. Different functions: Window 1 implements calibrate(), Window 2 implements validate()
  3. Interface-first: All windows agree on function signatures before implementing
  4. Review sequentially: Even if launched in parallel, review outputs one at a time
  5. Integration window: Use a final prompt to merge context from all windows

Example Partition:

Window 1: "Implement encoder calibration in encoder.cpp"
Window 2: "Implement calibration state machine in axis.cpp"
Window 3: "Implement calibration tests in test_encoder.cpp"
Window 4: "Document calibration in docs/calibration.md"

No overlapping files = no merge conflicts.
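The interface-first practice above can be made concrete: before launching any windows, pin down the shared signatures that every window treats as read-only. A minimal sketch of such a contract, with illustrative names (not actual ODrive APIs) and stub bodies so it compiles:

```cpp
#include <cassert>
#include <cstdint>

// calibration_interface: agreed by all windows BEFORE parallel work starts.
// Window 1 (encoder.cpp) owns calibrate_encoder(); Window 2 (axis.cpp) owns
// the state machine; Window 3 writes tests against these exact signatures.

enum class CalibResult : uint8_t { Ok, EncoderNoise, Timeout };

struct CalibReport {
    CalibResult result;
    float       offset_rad;  // measured encoder offset [rad]
};

// Stub so the contract compiles on day one; Window 1 replaces the body.
CalibReport calibrate_encoder(float current_a, uint32_t timeout_ms) {
    (void)current_a;
    (void)timeout_ms;
    return CalibReport{CalibResult::Ok, 0.0f};
}
```

Because each window codes against the frozen signatures rather than against another window's in-progress implementation, the outputs link together even though the agents never see each other's work.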


What if agents give contradictory advice?

Short Answer: You’re the architect - make the final call based on your requirements.

Why This Happens:

  • Different prompts lead to different assumptions
  • Agents optimize for their specific task context
  • Some questions have multiple valid answers

Resolution Strategies:

  1. Arbitration Prompt:

ODrive Engineer: I got two suggestions:
    - Option A: Use open-loop control for calibration
    - Option B: Use closed-loop with low gains
    
    Which is better for:
    - Motors with high cogging torque
    - Systems where encoder may have noise
  2. Requirements-Based Decision:

    • If safety is critical: Choose the more conservative approach
    • If performance is critical: Choose the faster approach
    • If simplicity is critical: Choose the simpler implementation
  3. Ask for Trade-offs:

ODrive Engineer: Compare these approaches for encoder calibration:
    1. Open-loop at constant current
    2. Closed-loop with observer
    
    What are the trade-offs in terms of accuracy, robustness, and complexity?

Remember: Conflicting advice often means there’s no single β€œright” answer. Your domain knowledge decides.


Can I use the same agent in multiple windows?

Short Answer: Yes! You can run the same agent with different task contexts, or use different specialized agents.

How It Works: The ODrive system has 6 specialized agents, each of which can invoke any of the 9 shared skills:

  • ODrive Engineer - Primary development orchestrator
  • ODrive QA - Testing & quality assurance
  • ODrive Ops - CI/CD & release operations
  • ODrive Reviewer - Code review specialist
  • ODrive Toolchain - Build & test operations
  • Ada to C++ Migrator - Ada to C++ migration

You can run multiple instances of the same agent with different task contexts, or run different agents in parallel:

| Window | Agent | Task Context |
| --- | --- | --- |
| 1 | ODrive Engineer | "Focus on firmware implementation…" |
| 2 | ODrive Engineer | "Focus on control algorithm design…" |
| 3 | ODrive Engineer | "Focus on hardware constraints…" |
| 4 | ODrive QA | "Focus on test strategy…" |

Why This Works:

  • Each window maintains its own conversation context
  • The agent adapts based on your prompt, not window identity
  • Skills are invoked automatically based on task content
  • No special configuration needed per window

Can I use parallel agents in an air-gapped environment?

Short Answer: Local agents work with Foundry Local; cloud agents require internet.

| Environment | Local Agents (VS Code) | Cloud Agents (GitHub) |
| --- | --- | --- |
| Internet connected | βœ… Full functionality | βœ… Full functionality |
| Air-gapped + Foundry Local | βœ… Works offline | ❌ Not available |
| Air-gapped, no local model | ❌ Requires connection | ❌ Not available |

For Air-Gapped Development:

  1. Set up Azure AI Foundry Local on an approved internal server
  2. Configure VS Code to use local endpoint
  3. Custom agents work because they’re just prompt files
  4. Skills work if they don’t require external services

Limitations:

  • Model quality may differ from cloud
  • No automatic updates
  • May need IT approval for internal deployment

How many windows can I run in parallel?

Short Answer: Technically unlimited; practically 3-4 is optimal.

Scaling Analysis:

| Windows | Coordination Overhead | Typical Use Case |
| --- | --- | --- |
| 2 | Low - Easy to track | Quick dual-domain task |
| 3-4 | Medium - Manageable | Multi-domain feature |
| 5-6 | High - Getting complex | Large refactoring |
| 7+ | Very High - Diminishing returns | Probably too many |

Why 3-4 is the Sweet Spot:

  • Most features touch 3-4 domains (firmware, control, testing, docs)
  • Human working memory handles ~4 contexts well
  • Review time scales linearly with window count
  • Integration complexity scales quadratically
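The quadratic claim follows from pairwise interfaces: with n windows there are n(n-1)/2 pairs of outputs that must agree at integration time. A quick illustration:

```cpp
#include <cassert>

// Number of pairwise interfaces between n parallel agent windows.
// Review effort grows linearly with n, but cross-window consistency
// checks grow with n*(n-1)/2.
constexpr int pairwise_interfaces(int n) {
    return n * (n - 1) / 2;
}
// 2 windows -> 1 pair; 4 windows -> 6 pairs; 7 windows -> 21 pairs.
```

Going from 4 windows to 7 nearly doubles the review work but more than triples the consistency checks, which is why coordination overhead dominates past the 3-4 window sweet spot.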

When to Use More:

  • Large-scale refactoring across many subsystems
  • Multi-repository updates
  • You’re very experienced with coordination

Pro Tip: Start with 2 windows. Add more only when you’re comfortable.


Do parallel agents cost more API credits?

Short Answer: Yes, but you save time, so cost-per-feature is often similar.

Cost Analysis:

| Metric | Sequential | Parallel (4 windows) |
| --- | --- | --- |
| API calls | 4 calls (one at a time) | 4 calls (simultaneous) |
| Total tokens | Similar | Similar |
| Wall-clock time | 4x wait | 1x wait |
| Your time | Higher (context switching) | Lower (batch review) |

Why Cost is Similar:

  • You’re asking the same questions either way
  • Parallel just changes the timing, not the content
  • Time saved has value too

Cost Optimization Tips:

  1. Don’t launch parallel for simple tasks
  2. Be specific in prompts (fewer follow-ups needed)
  3. Use integration prompts efficiently (combine context)
  4. Cloud/background agents may have different pricing (check GitHub docs)

When should I use cloud agents instead of local?

Short Answer: Use cloud for async, large-scale, or team-visible tasks.

Decision Matrix:

| Factor | Use Local Agents | Use Cloud Agents |
| --- | --- | --- |
| Iteration speed | βœ… Fast feedback loop | ❌ Async delay |
| Task size | Small to medium | Large (full codebase) |
| Team visibility | Just you | Results visible to team |
| Scheduling | During work hours | Overnight/scheduled |
| PR integration | Manual copy-paste | Automatic PR creation |
| Audit trail | Local chat history | GitHub issue/PR history |

Ideal Cloud Agent Tasks:

  • Overnight documentation generation
  • Scheduled code quality sweeps
  • PR review automation
  • Multi-repository updates

Stick with Local When:

  • You need to iterate quickly
  • Task requires back-and-forth refinement
  • Working on experimental ideas
  • Learning a new codebase

How do I coordinate if agents can’t see each other?

Short Answer: You’re the coordination layer - use integration prompts.

The Coordination Pattern:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                    YOU                          β”‚
β”‚            (Human Coordinator)                  β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚  Review  β”‚  Identify  β”‚  Resolve  β”‚  Integrate  β”‚
β”‚  outputs β”‚  conflicts β”‚  decisionsβ”‚  via prompt β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
      β–²           β–²           β–²           β–²
      β”‚           β”‚           β”‚           β”‚
β”Œβ”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”
β”‚ Window 1  β”‚ Window 2  β”‚ Window 3  β”‚ Window 4  β”‚
β”‚  Output   β”‚  Output   β”‚  Output   β”‚  Output   β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Integration Prompt Template:

Select ODrive Engineer from agent dropdown, then paste:

Implement [feature].

Context from parallel analysis:
- Firmware (Window 1): [key findings]
- Control (Window 2): [key findings]
- Hardware (Window 3): [key findings]
- Testing (Window 4): [key requirements]

Decisions made:
- [Conflict 1]: Chose approach A because [reason]
- [Conflict 2]: Chose approach B because [reason]

Requirements: [standard embedded constraints]

Files: #file:path/to/file.cpp

Implement the solution.

Pro Tip: Keep notes during review phase - you’ll need them for integration.


Summary: Key Takeaways

Core Concepts

  1. Parallel = Faster

    • Multiple perspectives generated simultaneously
    • 4 tasks in the time of 1
    • Total time reduced from 4x to 1x
  2. 6 Specialized Agents, 9 Skills

    • Use the right agent for the domain: ODrive Engineer, ODrive QA, ODrive Ops, ODrive Reviewer, ODrive Toolchain, Ada to C++ Migrator
    • Run different agents in parallel or same agent with different task contexts
    • Skills are invoked automatically based on prompt content
  3. You Are the Architect

    • Agents provide domain expertise
    • You make integration decisions
    • Human review is always required

Workflow Summary

Plan β†’ Launch β†’ Review β†’ Integrate
  β”‚       β”‚        β”‚         β”‚
  β”‚       β”‚        β”‚         └── Create synthesis prompt
  β”‚       β”‚        └── Identify conflicts, make decisions
  β”‚       └── Send prompts simultaneously (4 windows)
  └── Define tasks, assign agents, partition files

When to Use Parallel Agents

| βœ… Good Fit | ❌ Poor Fit |
| --- | --- |
| Multi-domain features | Simple single-file changes |
| Independent tasks | Sequential dependencies |
| Research + implementation | Same-file modifications |
| Complex refactoring | Trivial questions |

Quick Reference

| Agent | Primary Use |
| --- | --- |
| ODrive Engineer | Firmware, control, hardware (via different prompts) |
| ODrive QA | Testing, test generation, quality assurance |
| ODrive Ops | CI/CD workflows, releases, deployment |
| ODrive Reviewer | Code review, style, safety checks |
| ODrive Toolchain | Build firmware, run tests, symbol search |
| Ada to C++ Migrator | Ada to C++ migration specialist |

| Pattern | Description |
| --- | --- |
| Domain Separation | Different prompts for firmware, control, hardware, testing |
| Multi-Module | Different files/subsystems in each window |
| Refactoring Campaign | Parallel improvements across codebase |

Remember

  • Start small: 2 windows, then scale up
  • Partition clearly: Non-overlapping files/functions
  • Review before merging: Don’t blindly combine outputs
  • Coordination has cost: 3-4 windows is usually optimal
  • Integration is key: Your synthesis prompt ties everything together

GitHub Copilot Parallel Agents & Cloud Agents Guide
Last Updated: January 2026