Question
Description
I'm using BedrockConverseModel to run models hosted on Bedrock. When working with Llama 3.3 70B, tool calls do not work.
This seems to be a similar issue to #1623, and following @DouweM's advice there I attempted to create a custom GenerateToolJsonSchema
to match Llama 3.3's tool schema (which is the same as Llama 3.1's), but found it confusing. I'm hoping someone with more knowledge of what's going on under the hood can help me out here. Considering that Llama 3 is one of the top 5 most popular LLMs, I hope others would find this useful too.
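For context, this is roughly the direction I went after reading #1623; I'm not confident the hook or the schema tweaks are right, which is a big part of why I'm asking. The schema_generator argument and the specific keys I strip below are assumptions on my part, not something I have working.

from pydantic.json_schema import GenerateJsonSchema, JsonSchemaMode, JsonSchemaValue
from pydantic_core import core_schema


class LlamaToolJsonSchema(GenerateJsonSchema):
    """Sketch: generate tool parameter schemas closer to what Llama 3.1/3.3 expects."""

    def generate(self, schema: core_schema.CoreSchema, mode: JsonSchemaMode = 'validation') -> JsonSchemaValue:
        json_schema = super().generate(schema, mode=mode)
        # Assumption: Llama's tool-calling prompt wants a plain, flat object
        # schema, so drop the extras pydantic's default generator adds.
        json_schema.pop('$defs', None)
        json_schema.pop('title', None)
        return json_schema


# Then, assuming the tool decorators accept a schema_generator argument
# (as suggested in #1623), register tools with it:
#
# @agent.tool_plain(schema_generator=LlamaToolJsonSchema)
# def roll_die() -> str:
#     ...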
Code snippet
Below is an example showing that tools work fine with Anthropic on Bedrock but not with Llama. This is how I convinced myself the problem isn't Bedrock itself or my prompt, but rather something about how the underlying model is handled.
from pydantic_ai.providers.bedrock import BedrockProvider
from pydantic_ai.models.bedrock import BedrockConverseModel
import boto3
import random
from pydantic_ai import RunContext, Agent


# Create an agent with the model id parameterized.
# Uses the "dice rolling" example from the Pydantic AI docs:
# https://ai.pydantic.dev/tools/#registering-function-tools-via-agent-argument
def agent_with_tool_use(model_id: str):
    bedrock_client = boto3.client("bedrock-runtime")
    model = BedrockConverseModel(
        model_id,
        provider=BedrockProvider(bedrock_client=bedrock_client),
    )
    agent = Agent(
        model,
        system_prompt=(
            "You're a dice game, you should roll the die and see if the number "
            "you get back matches the user's guess. If so, tell them they're a winner. "
            "Use the player's name in the response."
        ),
        instrument=True,
    )

    @agent.tool_plain
    def roll_die() -> str:
        """Roll a six-sided die and return the result."""
        return str(random.randint(1, 6))

    @agent.tool
    def get_player_name(ctx: RunContext[str]) -> str:
        """Get the player's name."""
        return ctx.deps

    return agent


# When calling with Anthropic, tools are invoked.
anthropic_agent = agent_with_tool_use("anthropic.claude-3-5-sonnet-20241022-v2:0")
anthropic_agent.run_sync('My guess is 4', deps='Anne')
# > AgentRunResult(output='Sorry Anne! You guessed 4, but the die rolled a 5. Better luck next time!')

# When calling with Llama, the tool is not invoked.
llama_agent = agent_with_tool_use("us.meta.llama3-3-70b-instruct-v1:0")
llama_agent.run_sync('My guess is 4', deps='Anne')
# > AgentRunResult(output='{"type": "function", "name": "roll_die", "parameters": {}}')
Any advice, code snippets, or help resolving this would be much appreciated. Thanks!
Logfire output
When calling Anthropic, you can see that the tools are registered and called.
Full conversation JSON
{
  "agent_name": "anthropic_agent",
  "all_messages_events": [
    {
      "content": "You're a dice game, you should roll the die and see if the number you get back matches the user's guess. If so, tell them they're a winner. Use the player's name in the response.",
      "role": "system",
      "gen_ai.message.index": 0,
      "event.name": "gen_ai.system.message"
    },
    {
      "content": "My guess is 4",
      "role": "user",
      "gen_ai.message.index": 0,
      "event.name": "gen_ai.user.message"
    },
    {
      "role": "assistant",
      "content": "Let me get your name and roll the die to see if you guessed correctly!",
      "tool_calls": [
        {
          "id": "tooluse_el55vVyWT2WM-h6oQo1zDA",
          "type": "function",
          "function": {
            "name": "get_player_name",
            "arguments": {}
          }
        }
      ],
      "gen_ai.message.index": 1,
      "event.name": "gen_ai.assistant.message"
    },
    {
      "content": "Anne",
      "role": "tool",
      "id": "tooluse_el55vVyWT2WM-h6oQo1zDA",
      "name": "get_player_name",
      "gen_ai.message.index": 2,
      "event.name": "gen_ai.tool.message",
      "functionName": "get_player_name"
    },
    {
      "role": "assistant",
      "tool_calls": [
        {
          "id": "tooluse_dWbCrxm6TteRD-S8e-0eVQ",
          "type": "function",
          "function": {
            "name": "roll_die",
            "arguments": {}
          }
        }
      ],
      "gen_ai.message.index": 3,
      "event.name": "gen_ai.assistant.message"
    },
    {
      "content": "2",
      "role": "tool",
      "id": "tooluse_dWbCrxm6TteRD-S8e-0eVQ",
      "name": "roll_die",
      "gen_ai.message.index": 4,
      "event.name": "gen_ai.tool.message",
      "functionName": "roll_die"
    },
    {
      "role": "assistant",
      "content": "Sorry Anne! You guessed 4, but the die landed on 2. Better luck next time!",
      "gen_ai.message.index": 5,
      "event.name": "gen_ai.assistant.message"
    }
  ],
  "final_result": "Sorry Anne! You guessed 4, but the die landed on 2. Better luck next time!",
  "gen_ai.usage.input_tokens": 1592,
  "gen_ai.usage.output_tokens": 119,
  "model_name": "anthropic.claude-3-5-sonnet-20241022-v2:0"
}
But when calling with Llama, the tool is never invoked; the model just returns the tool call as plain text in its final output:

Full conversation JSON
{
  "agent_name": "llama_agent",
  "all_messages_events": [
    {
      "content": "You're a dice game, you should roll the die and see if the number you get back matches the user's guess. If so, tell them they're a winner. Use the player's name in the response.",
      "role": "system",
      "gen_ai.message.index": 0,
      "event.name": "gen_ai.system.message"
    },
    {
      "content": "My guess is 4",
      "role": "user",
      "gen_ai.message.index": 0,
      "event.name": "gen_ai.user.message"
    },
    {
      "role": "assistant",
      "content": "{\"type\": \"function\", \"name\": \"roll_die\", \"parameters\": {}}",
      "gen_ai.message.index": 1,
      "event.name": "gen_ai.assistant.message"
    }
  ],
  "final_result": {
    "type": "function",
    "name": "roll_die",
    "parameters": {}
  },
  "gen_ai.usage.input_tokens": 242,
  "gen_ai.usage.output_tokens": 19,
  "model_name": "us.meta.llama3-3-70b-instruct-v1:0"
}
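In case it helps anyone reproduce or dig further, here's a small sketch for dumping the toolConfig that actually goes out to Bedrock, using boto3's event system on the same client passed to BedrockProvider. The exact event string is an assumption on my part (I'm assuming it follows boto3's usual provide-client-params.<service-id>.<Operation> pattern for bedrock-runtime).

import json
import boto3

bedrock_client = boto3.client("bedrock-runtime")


def _dump_converse_params(params, **kwargs):
    # Print the toolConfig pydantic-ai built, before boto3 serializes the call.
    print(json.dumps(params.get("toolConfig", {}), indent=2, default=str))


# Assumption: event name follows provide-client-params.<service-id>.<Operation>.
bedrock_client.meta.events.register(
    "provide-client-params.bedrock-runtime.Converse",
    _dump_converse_params,
)

# Then pass this client to BedrockProvider(bedrock_client=bedrock_client) as in
# the snippet above, so every Converse call logs the tools it was given.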
Configuration
Relevant libraries
pydantic 2.10.6
pydantic-ai 0.0.55
pydantic-ai-slim 0.1.9
pydantic_core 2.27.2
pydantic-evals 0.0.55
pydantic-graph 0.1.9
pydantic-settings 2.9.1
boto3 1.38.8
boto3-stubs 1.34.162
botocore 1.38.8
botocore-stubs 1.37.38
Python version 3.11.6
Additional Context
No response