Fix Assistant OpenAI adapter to handle message content structure returned by to_hash method #952
While using the gem for the first time, I was discovering `Langchain::Assistant` and experimenting with it. Using it naively, I came across an issue.

Here, `message_hash` is the hash returned by calling `to_hash` on an assistant message. From there I was thinking: great, I can just persist `messages_hash` somewhere and then use the method `@assistant.add_messages` to resume the conversation. But this happened:
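Roughly, the round-trip looks like this (an illustrative sketch, not the exact code or hashes from my app; the `llm` setup, the example message, and the commented output are assumptions):

```ruby
require "langchain"

llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])

assistant = Langchain::Assistant.new(llm: llm)
assistant.add_message(
  role: "user",
  content: "What is in this picture?",
  image_url: "https://example.com/picture.png"
)

# Serialize the conversation so it can be persisted somewhere (DB, file, ...).
messages_hash = assistant.messages.map(&:to_hash)
# => [{role: "user",
#      content: [{type: "text", text: "What is in this picture?"},
#                {type: "image_url", image_url: {url: "https://example.com/picture.png"}}]}]

# Later, resume the conversation from the persisted hashes.
resumed = Langchain::Assistant.new(llm: llm)
resumed.add_messages(messages: messages_hash)

resumed_hash = resumed.messages.map(&:to_hash)
# The content array is treated as if it were a plain string, so it ends up
# stringified and wrapped inside a brand new {type: "text", text: "..."} part.
```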
`resumed_hash` does not equal the original: the `message_hash` gets stringified and nested into a new hash.

After some investigation, I found out that this behaviour comes from the function `build_message` at `/lib/langchain/assistant/llm/adapters/openai.rb:42`, which ignores the fact that the method `to_hash`, from the same object, transforms `content` into a hash that merges the text message and the image URL.

This PR is extracted from a monkey patch I made in my project.
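The idea behind the patch is roughly the following (a sketch of the approach rather than a verbatim copy of the diff; the unpacking logic and variable names here are illustrative):

```ruby
# lib/langchain/assistant/llm/adapters/openai.rb (sketch)
def build_message(role:, content: nil, image_url: nil, tool_calls: [], tool_call_id: nil)
  # When content comes from a previously serialized message (Message#to_hash),
  # it is an array of parts rather than a plain string. Unpack it instead of
  # letting it be stringified into a new text part.
  if content.is_a?(Array)
    text_part  = content.find { |part| (part[:type] || part["type"]) == "text" }
    image_part = content.find { |part| (part[:type] || part["type"]) == "image_url" }

    content = text_part && (text_part[:text] || text_part["text"])
    image_url ||= image_part &&
      (image_part.dig(:image_url, :url) || image_part.dig("image_url", "url"))
  end

  Langchain::Assistant::Messages::OpenAIMessage.new(
    role: role,
    content: content,
    image_url: image_url,
    tool_calls: tool_calls,
    tool_call_id: tool_call_id
  )
end
```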
I also added tests for `#build_message` that cover both the previous behavior and the newly added one.
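They are along these lines (an illustrative RSpec shape, not the exact specs from the PR; the adapter class name and message readers are assumed):

```ruby
RSpec.describe Langchain::Assistant::LLM::Adapters::OpenAI do
  describe "#build_message" do
    let(:adapter) { described_class.new }

    it "keeps plain string content as-is (previous behavior)" do
      message = adapter.build_message(role: "user", content: "Hello")

      expect(message.content).to eq("Hello")
    end

    it "unpacks content arrays produced by Message#to_hash (new behavior)" do
      content = [
        {type: "text", text: "What is in this picture?"},
        {type: "image_url", image_url: {url: "https://example.com/picture.png"}}
      ]

      message = adapter.build_message(role: "user", content: content)

      expect(message.content).to eq("What is in this picture?")
      expect(message.image_url).to eq("https://example.com/picture.png")
    end
  end
end
```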