AG2 (formerly AutoGen) is an open-source framework for building AI agents. AG2 has built-in support for the A2A protocol through `A2aRemoteAgent`, allowing your agents to communicate with remote A2A agents like StackOne's.

Use AG2 to build agents that:
- Consume StackOne's A2A agents as remote agents
- Orchestrate multi-agent conversations with local and remote agents
- Access StackOne platform actions without managing tool definitions
`A2aRemoteAgent` supports only asynchronous methods; this is a limitation of the A2A client.
This example creates a local agent that delegates HR tasks to a StackOne A2A agent.
```python
import asyncio
import base64

from autogen import ConversableAgent, LLMConfig
from autogen.a2a import A2aRemoteAgent, HttpxClientFactory

# StackOne A2A configuration
STACKONE_API_KEY = "<stackone_api_key>"
STACKONE_ACCOUNT_ID = "<account_id>"
BASE64_API_KEY = base64.b64encode(f"{STACKONE_API_KEY}:".encode()).decode()

# Create HTTP client with StackOne authentication headers
http_client = HttpxClientFactory(
    headers={
        "Authorization": f"Basic {BASE64_API_KEY}",
        "x-account-id": STACKONE_ACCOUNT_ID,
    }
)

# Create a remote A2A agent pointing to StackOne
stackone_agent = A2aRemoteAgent(
    url="https://a2a.stackone.com",
    name="stackone_hr_agent",
    client=http_client,
)

# Create a local agent to orchestrate conversations
llm_config = LLMConfig({"model": "gpt-4o-mini"})
orchestrator = ConversableAgent(
    name="hr_assistant",
    system_message="""
    You are an HR assistant that helps with employee management tasks.
    Delegate HR operations to the stackone_hr_agent.
    When the user asks about employees, time off, or other HR data,
    use the stackone_hr_agent to fetch or update the information.
    """,
    llm_config=llm_config,
)

# Start a conversation
async def main():
    await orchestrator.a_initiate_chat(
        recipient=stackone_agent,
        message={"role": "user", "content": "List the first 5 employees"},
    )

asyncio.run(main())
```
If you already have an `AgentCard` (e.g., fetched from a discovery service), create an `A2aRemoteAgent` directly from it:
```python
from a2a.types import AgentCard, AgentCapabilities
from autogen.a2a import A2aRemoteAgent

# Create or fetch an AgentCard
card = AgentCard(
    name="stackone_hr_agent",
    url="https://a2a.stackone.com",
    description="Agent that handles HR operations via StackOne integrations",
    version="0.1.0",
    default_input_modes=["text"],
    default_output_modes=["text"],
    capabilities=AgentCapabilities(streaming=True),
    skills=[],
    supports_authenticated_extended_card=False,
)

# Create the remote agent from the card
stackone_agent = A2aRemoteAgent.from_card(card)
```
`A2aRemoteAgent` automatically handles Human in the Loop (HITL) interactions when the remote agent requests human input. This happens transparently: the client loops until the agent completes its task or the conversation terminates.
```python
from autogen.a2a import A2aRemoteAgent

# Connect to a remote agent that may request human input
stackone_agent = A2aRemoteAgent(
    url="https://a2a.stackone.com",
    name="approval_agent",
    client=http_client,
)

# The client automatically handles any human input requests
await orchestrator.a_initiate_chat(
    recipient=stackone_agent,
    message={"role": "user", "content": "Should I approve this budget proposal?"},
)
# If the remote agent requests input, the user will be prompted automatically
```
`A2aRemoteAgent` only supports asynchronous methods. Always use `a_initiate_chat` and other async methods:
```python
# Correct: use async methods
await orchestrator.a_initiate_chat(
    recipient=stackone_agent,
    message={"role": "user", "content": "List employees"},
)

# Incorrect: sync methods won't work
# orchestrator.initiate_chat(...)  # This will fail
```
Clear System Messages
Provide detailed system messages for your orchestrator agent. Include clear instructions about when to delegate to remote agents:
```python
orchestrator = ConversableAgent(
    name="hr_assistant",
    system_message="""
    You are an HR assistant. When users ask about:
    - Employee data -> delegate to stackone_hr_agent
    - Time off requests -> delegate to stackone_hr_agent
    - Local operations -> use local tools
    Always confirm before making changes to HR systems.
    """,
    llm_config=llm_config,
)
```
Error Handling
Wrap A2A calls in try-except blocks to handle network or protocol errors:
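As a minimal sketch of this pattern, the snippet below wraps any awaitable A2A call in a retrying try-except helper. The `call_with_handling` name and its retry parameters are illustrative, not part of AG2; in practice you would pass a closure over `orchestrator.a_initiate_chat(...)` and narrow the caught exception to your HTTP client's error type (e.g. `httpx.HTTPError` when using `HttpxClientFactory`).

```python
import asyncio


async def call_with_handling(send, retries=2, delay=1.0):
    """Await send(), retrying transient failures.

    `send` is any zero-argument coroutine function, e.g. a lambda
    wrapping orchestrator.a_initiate_chat(...). In real code, catch a
    narrower exception class than Exception (such as httpx.HTTPError).
    """
    for attempt in range(retries + 1):
        try:
            return await send()
        except Exception:
            # Re-raise once the retry budget is exhausted
            if attempt == retries:
                raise
            await asyncio.sleep(delay)
```

Usage would look like `await call_with_handling(lambda: orchestrator.a_initiate_chat(recipient=stackone_agent, message=...))`, with logging or user-facing error messages added in the `except` branch as needed.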