Fix image to text node, it was bugged #44

Merged: 3 commits merged into main from fix_image_to_text_node on Apr 3, 2024

Conversation

VinciGit00 (Collaborator)

No description provided.

VinciGit00 added the bug (Something isn't working) label on Apr 3, 2024
VinciGit00 requested a review from PeriniM on April 3, 2024 at 10:57

github-actions bot commented Apr 3, 2024

Dependency Review

✅ No vulnerabilities, license issues, or OpenSSF Scorecard issues found.



     Methods:
         execute(state, url): Execute the node's logic and return the updated state.
     """

-    def __init__(self, llm, node_name: str):
+    def __init__(self, input: str, output: List[str], model_config: dict,
+                 node_name: str = "GetProbableTags"):

Contributor:
change default node_name

VinciGit00 (Author):
did it

"""
super().__init__(node_name, "node")
self.llm = llm
super().__init__(node_name, "node", input, output, 2, model_config)

Contributor:
should accept only one input, which is the url or the list of urls present in the state

VinciGit00 (Author):
did it
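
Putting the two review points together, a rough sketch of what the revised constructor might look like; the default name "ImageToText", the BaseNode import path and positional signature, and the "llm_model" key are assumptions inferred from the diff hunks in this thread, not confirmed code:

    from typing import List

    from .base_node import BaseNode  # assumed import path for the project's base class

    class ImageToTextNode(BaseNode):
        def __init__(self, input: str, output: List[str], model_config: dict,
                     node_name: str = "ImageToText"):  # node-specific default, per the review
            # Only one expected input: the url (or list of urls) held in the state.
            super().__init__(node_name, "node", input, output, 1, model_config)
            self.llm_model = model_config["llm_model"]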

print("---GENERATING TEXT FROM IMAGE---")
text_answer = self.llm.run(url)
text_answer = self.llm_model.run(url)

Contributor:
missing the retrieval part from the state before this line to get the url

-        super().__init__(node_name, "node")
-        self.llm = llm
+        super().__init__(node_name, "node", input, output, 2, model_config)
+        self.llm_model = model_config["llm_model"]

     def execute(self, state: dict, url: str) -> dict:

Contributor:
here we can remove the url arg since we use our graph syntax for state retrieval

VinciGit00 (Author):
you are right
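
Following that suggestion, a sketch of how the node's execute method could look once the url argument is dropped and the value is read from the state instead; the "url" and "image_text" keys are placeholders for illustration, not the project's confirmed state layout:

    def execute(self, state: dict) -> dict:
        print("---GENERATING TEXT FROM IMAGE---")
        url = state["url"]  # hypothetical key; the real lookup goes through the graph's input syntax
        text_answer = self.llm_model.run(url)
        state["image_text"] = text_answer  # hypothetical output key
        return state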

@@ -71,8 +71,7 @@ def _create_llm(self, llm_config: dict):
             return OpenAI(llm_params)
         elif "gemini" in llm_params["model"]:
             return Gemini(llm_params)
-        else:
-            raise ValueError("Model not supported")
+        raise ValueError("Model not supported")

VinciGit00 (Author):
nope
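
For context, the change in this hunk works because every supported branch returns, so the final raise is reached only when no model matched and the explicit else adds nothing. A rough sketch of the dispatch pattern, with the first condition and the parameter handling assumed for illustration:

    def _create_llm(self, llm_config: dict):
        llm_params = dict(llm_config)  # placeholder for whatever normalisation the graph applies
        if "gpt" in llm_params["model"]:        # assumed condition for the OpenAI branch
            return OpenAI(llm_params)
        elif "gemini" in llm_params["model"]:
            return Gemini(llm_params)
        # Every supported branch returns, so this line is reached only when
        # no model matched; the removed `else:` added nothing.
        raise ValueError("Model not supported")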

PeriniM merged commit 3f56f05 into main on Apr 3, 2024
5 checks passed
VinciGit00 deleted the fix_image_to_text_node branch on April 3, 2024 at 11:37