Description
Is your enhancement related to a problem? Please describe
There will be one default configuration out of the box. Right now we're starting with llamacpp, but the goal is to move to vLLM as soon as possible.
As a user, I should be able to configure the default inference runtime from AI Lab's settings.
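
A minimal sketch of how the extension side might read such a setting, assuming a hypothetical `ai-lab.inferenceRuntime` configuration key and a VS Code-style configuration API from `@podman-desktop/api` (the key name and fallback behavior are illustrative, not the final design from #2501):

```typescript
import * as extensionApi from '@podman-desktop/api';

// Hypothetical setting key; the real name would come from the UX in #2501.
const CONFIG_SECTION = 'ai-lab';
const RUNTIME_KEY = 'inferenceRuntime';

// The two runtimes mentioned in this issue: llamacpp today, vLLM later.
type InferenceRuntime = 'llamacpp' | 'vllm';

export function getDefaultInferenceRuntime(): InferenceRuntime {
  const configured = extensionApi.configuration
    .getConfiguration(CONFIG_SECTION)
    .get<string>(RUNTIME_KEY);
  // Fall back to llamacpp, the out-of-the-box default described above.
  return configured === 'vllm' ? 'vllm' : 'llamacpp';
}
```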

Describe the solution you'd like
This issue covers the implementation of the UX described in #2501.
Describe alternatives you've considered
No response
Additional context
No response