Overview

AG2 (formerly AutoGen) is an open-source framework for building AI agents. AG2 has built-in support for the A2A protocol through A2aRemoteAgent, allowing your agents to communicate with remote A2A agents like StackOne’s. Use AG2 to build agents that:
  • Consume StackOne’s A2A agents as remote agents
  • Orchestrate multi-agent conversations with local and remote agents
  • Access StackOne platform actions without managing tool definitions
A2aRemoteAgent supports only asynchronous methods; this is a limitation of the underlying A2A client.

Installation

Install AG2 with A2A support using pip:
pip install ag2[a2a]

Quick Start

This example creates a local agent that delegates HR tasks to a StackOne A2A agent.
import asyncio
import base64
from autogen import ConversableAgent, LLMConfig
from autogen.a2a import A2aRemoteAgent, HttpxClientFactory

# StackOne A2A configuration
STACKONE_API_KEY = "<stackone_api_key>"
STACKONE_ACCOUNT_ID = "<account_id>"
BASE64_API_KEY = base64.b64encode(f"{STACKONE_API_KEY}:".encode()).decode()

# Create HTTP client with StackOne authentication headers
http_client = HttpxClientFactory(
    headers={
        "Authorization": f"Basic {BASE64_API_KEY}",
        "x-account-id": STACKONE_ACCOUNT_ID
    }
)

# Create a remote A2A agent pointing to StackOne
stackone_agent = A2aRemoteAgent(
    url="https://a2a.stackone.com",
    name="stackone_hr_agent",
    client=http_client
)

# Create a local agent to orchestrate conversations
llm_config = LLMConfig({"model": "gpt-4o-mini"})

orchestrator = ConversableAgent(
    name="hr_assistant",
    system_message="""
        You are an HR assistant that helps with employee management tasks.
        Delegate HR operations to the stackone_hr_agent.

        When the user asks about employees, time off, or other HR data,
        use the stackone_hr_agent to fetch or update the information.
    """,
    llm_config=llm_config,
)

# Start a conversation
async def main():
    await orchestrator.a_initiate_chat(
        recipient=stackone_agent,
        message={"role": "user", "content": "List the first 5 employees"}
    )

asyncio.run(main())
See Authentication Guide for details on obtaining your API key and account ID.
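The Basic auth header above is your API key followed by a colon, base64-encoded. A quick stdlib check (with a dummy key, for illustration only) that the header round-trips as expected:

```python
import base64

def basic_auth_header(api_key: str) -> str:
    """Build the Basic auth header value: base64 of "<key>:"."""
    token = base64.b64encode(f"{api_key}:".encode()).decode()
    return f"Basic {token}"

# Dummy key for illustration only
header = basic_auth_header("sk_test_123")
assert header.startswith("Basic ")

# Decoding recovers the "<key>:" form that HTTP Basic auth uses
decoded = base64.b64decode(header.removeprefix("Basic ")).decode()
assert decoded == "sk_test_123:"
```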

Architecture

AG2’s A2aRemoteAgent acts as a local proxy for a remote A2A agent: your orchestrator exchanges messages with it like any other ConversableAgent, while the underlying A2A client forwards each message to the remote server and returns the reply.

Complete Example

Here’s a complete example with a local tool and a remote StackOne agent:
import asyncio
import base64
from datetime import datetime
from autogen import ConversableAgent, LLMConfig
from autogen.a2a import A2aRemoteAgent, HttpxClientFactory

# Configuration
STACKONE_API_KEY = "<stackone_api_key>"
STACKONE_ACCOUNT_ID = "<account_id>"
BASE64_API_KEY = base64.b64encode(f"{STACKONE_API_KEY}:".encode()).decode()

# Local tool example
def get_current_date() -> str:
    """Get the current date."""
    return datetime.now().strftime("%Y-%m-%d")

# Create HTTP client with StackOne authentication headers
http_client = HttpxClientFactory(
    headers={
        "Authorization": f"Basic {BASE64_API_KEY}",
        "x-account-id": STACKONE_ACCOUNT_ID
    }
)

# Remote StackOne A2A agent
stackone_agent = A2aRemoteAgent(
    url="https://a2a.stackone.com",
    name="stackone_hr_agent",
    client=http_client
)

# Configure local agent with LLM
llm_config = LLMConfig({"model": "gpt-4o-mini"})

# Root orchestrator agent
orchestrator = ConversableAgent(
    name="hr_assistant",
    system_message="""
        You are an HR assistant that helps with employee management tasks.

        You have access to:
        1. get_current_date - Get today's date
        2. stackone_hr_agent - Delegate HR operations (employees, time off, etc.)

        When users ask about HR data, delegate to stackone_hr_agent.
        Always confirm actions before making changes.
    """,
    llm_config=llm_config,
)

# Register local tool
orchestrator.register_for_llm(name="get_current_date", description="Get the current date")(get_current_date)

async def main():
    # Start conversation with StackOne agent
    await orchestrator.a_initiate_chat(
        recipient=stackone_agent,
        message={"role": "user", "content": "List all employees"}
    )

    # Print the conversation history
    # chat_messages is keyed by the agent object, not its name
    messages = orchestrator.chat_messages[stackone_agent]
    for message in messages:
        print(f"{message.get('name', message['role'])}: {message['content']}")

asyncio.run(main())

Creating from AgentCard

If you already have an AgentCard (e.g., fetched from a discovery service), create an A2aRemoteAgent directly from it:
from a2a.types import AgentCard, AgentCapabilities
from autogen.a2a import A2aRemoteAgent

# Create or fetch an AgentCard
card = AgentCard(
    name="stackone_hr_agent",
    url="https://a2a.stackone.com",
    description="Agent that handles HR operations via StackOne integrations",
    version="0.1.0",
    default_input_modes=["text"],
    default_output_modes=["text"],
    capabilities=AgentCapabilities(streaming=True),
    skills=[],
    supports_authenticated_extended_card=False,
)

# Create the remote agent from the card
stackone_agent = A2aRemoteAgent.from_card(card)
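If you don’t have a card in hand, A2A servers conventionally publish one at a well-known path that you can fetch and pass to from_card. A minimal sketch of building that discovery URL (the exact path is an assumption; check which A2A spec version your server implements):

```python
from urllib.parse import urljoin

# Assumed discovery path; verify against your server's A2A version
WELL_KNOWN_PATH = "/.well-known/agent.json"

def agent_card_url(base_url: str) -> str:
    """Return the conventional discovery URL for an A2A agent card."""
    return urljoin(base_url.rstrip("/") + "/", WELL_KNOWN_PATH.lstrip("/"))

print(agent_card_url("https://a2a.stackone.com"))
# -> https://a2a.stackone.com/.well-known/agent.json
```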

Human in the Loop Support

A2aRemoteAgent automatically handles Human in the Loop (HITL) interactions when the remote agent requests human input. This happens transparently: the client loops until the remote agent completes its task or the conversation terminates.
from autogen.a2a import A2aRemoteAgent

# Connect to a remote agent that may request human input
stackone_agent = A2aRemoteAgent(
    url="https://a2a.stackone.com",
    name="approval_agent",
    client=http_client
)

# The client automatically handles any human input requests
await orchestrator.a_initiate_chat(
    recipient=stackone_agent,
    message={"role": "user", "content": "Should I approve this budget proposal?"}
)
# If the remote agent requests input, the user will be prompted automatically
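Conceptually, the HITL handling is a simple loop: keep relaying human replies while the remote task reports it needs input. The sketch below is illustrative only, not AG2’s actual implementation; the state shapes and callables are hypothetical.

```python
from typing import Callable

def run_until_complete(
    send: Callable[[str], dict],
    ask_human: Callable[[str], str],
    first_message: str,
) -> dict:
    """Loop until the remote task completes, relaying human input on request.

    `send` submits a message and returns a hypothetical task state, e.g.
    {"status": "input_required", "prompt": "..."} or
    {"status": "completed", "result": "..."}.
    """
    state = send(first_message)
    while state["status"] == "input_required":
        # Remote agent paused and asked for human input; prompt, then resume
        reply = ask_human(state["prompt"])
        state = send(reply)
    return state

# Fake remote agent: asks one follow-up question, then completes
script = iter([
    {"status": "input_required", "prompt": "Approve the budget? (yes/no)"},
    {"status": "completed", "result": "Budget approved"},
])
final = run_until_complete(
    send=lambda msg: next(script),
    ask_human=lambda prompt: "yes",
    first_message="Should I approve this budget proposal?",
)
print(final["result"])  # -> Budget approved
```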

Multiple StackOne Accounts

Connect to multiple StackOne accounts by creating separate HTTP clients and A2aRemoteAgent instances:
import base64
from autogen import ConversableAgent, LLMConfig
from autogen.a2a import A2aRemoteAgent, HttpxClientFactory

STACKONE_API_KEY = "<stackone_api_key>"
BASE64_API_KEY = base64.b64encode(f"{STACKONE_API_KEY}:".encode()).decode()

# HTTP client for HiBob account
hibob_http_client = HttpxClientFactory(
    headers={
        "Authorization": f"Basic {BASE64_API_KEY}",
        "x-account-id": "<hibob_account_id>"
    }
)

# HTTP client for BambooHR account
bamboo_http_client = HttpxClientFactory(
    headers={
        "Authorization": f"Basic {BASE64_API_KEY}",
        "x-account-id": "<bamboohr_account_id>"
    }
)

# Agent for HiBob integration
hibob_agent = A2aRemoteAgent(
    url="https://a2a.stackone.com",
    name="hibob_agent",
    client=hibob_http_client
)

# Agent for BambooHR integration
bamboo_agent = A2aRemoteAgent(
    url="https://a2a.stackone.com",
    name="bamboo_agent",
    client=bamboo_http_client
)

# Orchestrator with access to both agents
llm_config = LLMConfig({"model": "gpt-4o-mini"})

orchestrator = ConversableAgent(
    name="multi_hr_assistant",
    system_message="""
        You have access to multiple HR systems:
        - hibob_agent: For HiBob HR operations
        - bamboo_agent: For BambooHR operations

        Route requests to the appropriate system based on user context.
    """,
    llm_config=llm_config,
)
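At runtime the orchestrator’s LLM decides which agent to call, but the routing policy described in the system message can be sketched (and unit-tested) deterministically. The keyword mapping below is purely illustrative:

```python
def route_request(message: str) -> str:
    """Pick a target agent name from user context (illustrative keywords only)."""
    text = message.lower()
    if "hibob" in text:
        return "hibob_agent"
    if "bamboo" in text:
        return "bamboo_agent"
    # No system mentioned explicitly; fall back to a default
    return "hibob_agent"

assert route_request("List employees in BambooHR") == "bamboo_agent"
assert route_request("Create a HiBob time-off request") == "hibob_agent"
```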

Advanced Configuration

Custom HTTP Client

Set custom options on the HTTP client such as headers and timeout:
from autogen.a2a import A2aRemoteAgent, HttpxClientFactory

# Create custom HTTP client factory
http_client = HttpxClientFactory(
    timeout=30.0,
    headers={
        "User-Agent": "MyApp/1.0",
        "Authorization": f"Basic {BASE64_API_KEY}",
        "x-account-id": STACKONE_ACCOUNT_ID
    }
)

stackone_agent = A2aRemoteAgent(
    url="https://a2a.stackone.com",
    name="stackone_hr_agent",
    client=http_client
)

Client Configuration

Pass additional client configuration using ClientConfig:
from a2a.client import ClientConfig
from autogen.a2a import A2aRemoteAgent

stackone_agent = A2aRemoteAgent(
    url="https://a2a.stackone.com",
    name="stackone_hr_agent",
    client=http_client,
    client_config=ClientConfig(streaming=True),
)

Testing with MockClient

Mock remote agent replies for testing:
from autogen.a2a import A2aRemoteAgent, MockClient

stackone_agent = A2aRemoteAgent(
    url="https://a2a.stackone.com",
    name="stackone_hr_agent",
    client=MockClient(response_message="Mock response for testing"),
)

Best Practices

A2aRemoteAgent only supports asynchronous methods. Always use a_initiate_chat and other async methods:
# Correct: Use async methods
await orchestrator.a_initiate_chat(
    recipient=stackone_agent,
    message={"role": "user", "content": "List employees"}
)

# Incorrect: Sync methods won't work
# orchestrator.initiate_chat(...)  # This will fail
Provide detailed system messages for your orchestrator agent. Include clear instructions about when to delegate to remote agents:
orchestrator = ConversableAgent(
    name="hr_assistant",
    system_message="""
        You are an HR assistant. When users ask about:
        - Employee data -> delegate to stackone_hr_agent
        - Time off requests -> delegate to stackone_hr_agent
        - Local operations -> use local tools

        Always confirm before making changes to HR systems.
    """,
    llm_config=llm_config,
)
Wrap A2A calls in try-except blocks to handle network or protocol errors:
async def safe_hr_query(message: str):
    try:
        await orchestrator.a_initiate_chat(
            recipient=stackone_agent,
            message={"role": "user", "content": message}
        )
    except Exception as e:
        print(f"Error communicating with HR agent: {e}")
        # Handle fallback logic
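For transient network failures, you may want retries with exponential backoff rather than a single try/except. A stdlib sketch (the retry policy is an assumption, not part of AG2; delays are shortened for illustration):

```python
import asyncio

async def with_retries(coro_factory, attempts: int = 3, base_delay: float = 0.1):
    """Run an async call, retrying with exponential backoff on failure."""
    for attempt in range(attempts):
        try:
            return await coro_factory()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries; surface the error
            await asyncio.sleep(base_delay * 2 ** attempt)

# Example: a flaky call that fails twice before succeeding
calls = {"n": 0}

async def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(asyncio.run(with_retries(flaky)))  # -> ok
```

In your code, coro_factory would wrap the A2A call, e.g. `lambda: orchestrator.a_initiate_chat(recipient=stackone_agent, message=...)`.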
Always provide messages in the correct format with role and content:
# Correct format
await orchestrator.a_initiate_chat(
    recipient=stackone_agent,
    message={"role": "user", "content": "List employees"}
)

# Also valid: just content string
await orchestrator.a_initiate_chat(
    recipient=stackone_agent,
    message="List employees"
)

Next Steps