Description
- Package Name: azure-ai-evaluation (`evaluate` function)
- Package Version: 1.5.0
- Operating System: Windows 11
- Python Version: Python 3.12.10
Describe the bug
When `evaluate` is called with a folder path as `output_path`, it does not create a file named `evaluation_results.json` inside that folder as documented. Instead, the results are written to a file whose name is the last folder segment of the path.
To Reproduce
Steps to reproduce the behavior:
- Call `evaluate` with `output_path` set to a valid folder path. No `evaluation_results.json` file is created in that folder per the documentation; instead, the results are piped into the parent directory as a file named after the last folder in the path.
Expected behavior
Behave per the documentation:

> :keyword output_path: The local folder or file path to save evaluation results to if set. If folder path is provided the results will be saved to a file named evaluation_results.json in the folder.