OpenAI Agents SDK / MCPServerStdio / agents tracing: a demo of using third-party APIs

Summary

While digging into MCP a while back, I noticed the official docs say very little about using a third-party api_base / api_key with Agents/MCP,
and I couldn't find a demo online either, so I'm writing this up as a reference for anyone with a similar need.

# If you'd rather not hard-code credentials, configure these environment variables instead
export OPENAI_BASE_URL=xxx
export OPENAI_API_KEY=xxx
export LANGCHAIN_API_KEY=xxx

Example 1:

Connect to a GPT model through OpenRouter, wire up the OpenAI Agents SDK together with LangSmith tracing, and build an agent that replies only in haikus.


Dependencies

pip install openai langsmith openai-agents

Example code

import asyncio
from agents import Agent, Runner, set_trace_processors
from agents import set_default_openai_client, OpenAIChatCompletionsModel
from langsmith import Client
from langsmith.wrappers import OpenAIAgentsTracingProcessor
from openai import AsyncOpenAI

# Custom OpenAI client: point at the OpenRouter endpoint (replace with your own API key)
custom_client = AsyncOpenAI(
    base_url="https://openrouter.ai/api/v1",  # OPENAI_BASE_URL
    api_key="sk-xxx",  # your OpenRouter API key
)

# Install the LangSmith tracing processor (replace with your LangSmith API key)
set_trace_processors([OpenAIAgentsTracingProcessor(
    client=Client(api_key='lsv2_xxx')
)])

# Make this the default client for the agents SDK
set_default_openai_client(custom_client)

# Model to use (e.g. "gpt-4o" / "gpt-4" / "gpt-3.5-turbo")
model = OpenAIChatCompletionsModel(
    model="gpt-4o",
    openai_client=custom_client,
)

# Main function: run an agent that only replies in haikus
async def main():
    agent = Agent(
        name="HaikuBot",
        instructions="You only respond in haikus.",
        model=model,
    )

    result = await Runner.run(agent, "Tell me about recursion in programming.")
    print(result.final_output)

# Script entry point
if __name__ == "__main__":
    asyncio.run(main())

Sample output

The output looks something like:

Endless self-calling,
Functions within functions grow—
Beauty in the loop.

Example 2:

This example shows how to drive a Claude model on OpenRouter through the OpenAI Agents SDK, combined with the MCPServerStdio filesystem server, to run natural-language file operations against a local directory.


Dependencies

pip install openai langsmith openai-agents
npm install -g @modelcontextprotocol/server-filesystem

Directory layout

Assume there is a subdirectory 0416/ containing a few .py files; the goal is to list all the Python files and write their names into python-files.txt.
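To reproduce this locally, a directory like that can be thrown together with a couple of throwaway files (the file names here are arbitrary placeholders):

```shell
mkdir -p 0416
touch 0416/hello.py 0416/utils.py 0416/readme.txt
ls 0416
```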


Example code

import asyncio
import pathlib
from openai import AsyncOpenAI
from openai.types.responses import ResponseTextDeltaEvent
from langsmith.client import Client
from langsmith.wrappers import OpenAIAgentsTracingProcessor
from agents import (
    Agent, Runner,
    set_default_openai_client,
    OpenAIChatCompletionsModel,
    set_trace_processors
)
from agents.mcp import MCPServerStdio

# Custom OpenAI client pointing at OpenRouter
client = AsyncOpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="sk-or-xxx"  # replace with your API key
)
set_default_openai_client(client)

# LangSmith tracing (optional; remove if you don't need it)
set_trace_processors([
    OpenAIAgentsTracingProcessor(
        client=Client(api_key='lsv2_xxx')
    )
])

# Use the Claude 3.7 Sonnet model
model = OpenAIChatCompletionsModel(
    model="anthropic/claude-3.7-sonnet",
    openai_client=client,
)

async def main():
    samples_dir = pathlib.Path(__file__).parent / "0416"

    async with MCPServerStdio(params={
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", str(samples_dir)],
    }) as fs_server:

        tools = await fs_server.list_tools()
        print("tools:", tools)

        agent = Agent(
            name="file agent",
            model=model,
            mcp_servers=[fs_server],
        )

        result = Runner.run_streamed(
            agent,
            "list all python files, and echo the basenames to a new file, named python-files.txt"
        )

        async for event in result.stream_events():
            if event.type == "raw_response_event" and isinstance(event.data, ResponseTextDeltaEvent):
                print(event.data.delta, end='', flush=True)

if __name__ == "__main__":
    asyncio.run(main())

Sample run (the prompt given to the agent):

list all python files, and echo the basenames to a new file, named python-files.txt