We should support sampling, where the server asks the client to make an LLM call on its behalf and return the result.
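
For illustration, a minimal sketch of what such an exchange could look like over the existing JSON-RPC connection. The method name and field names here are hypothetical, chosen only to show the shape of a server-initiated sampling round trip, not a finalized design:

```typescript
// Illustrative sketch only: method and field names are hypothetical,
// intended to show the shape of a server-initiated sampling exchange.

// Request the server sends to the client.
interface CreateMessageRequest {
  jsonrpc: "2.0";
  id: number;
  method: "sampling/createMessage";        // hypothetical method name
  params: {
    messages: { role: "user" | "assistant"; content: string }[];
    maxTokens?: number;                     // optional sampling controls
    temperature?: number;
  };
}

// Response the client returns after forwarding the request to whatever
// LLM it is configured to use (possibly after prompting the user).
interface CreateMessageResponse {
  jsonrpc: "2.0";
  id: number;                               // matches the request id
  result: {
    role: "assistant";
    content: string;                        // the model's completion
    model?: string;                         // which model the client used
  };
}
```

One presumable appeal of this direction is that the client stays in control of model access (API keys, model choice, any user approval step), while servers gain LLM capabilities without holding credentials themselves.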