diff --git a/api/v1alpha2/inferencemodel_types.go b/api/v1alpha2/inferencemodel_types.go
index 052683d88..7cd98a740 100644
--- a/api/v1alpha2/inferencemodel_types.go
+++ b/api/v1alpha2/inferencemodel_types.go
@@ -126,7 +126,7 @@ type PoolObjectReference struct {
}
// Criticality defines how important it is to serve the model compared to other models.
-// Criticality is intentionally a bounded enum to contain the possibilities that need to be supported by the load balancing algorithm. Any reference to the Criticality field must be optional(use a pointer), and set no default.
+// Criticality is intentionally a bounded enum to contain the possibilities that need to be supported by the load balancing algorithm. Any reference to the Criticality field must be optional (use a pointer), and set no default.
// This allows us to union this with a oneOf field in the future should we wish to adjust/extend this behavior.
// +kubebuilder:validation:Enum=Critical;Standard;Sheddable
type Criticality string
diff --git a/mkdocs.yml b/mkdocs.yml
index bdfffe057..e5927ed53 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -63,6 +63,7 @@ nav:
- Getting started: guides/index.md
- Adapter Rollout: guides/adapter-rollout.md
- Metrics: guides/metrics.md
+ - Replacing an Inference Pool: guides/replacing-inference-pool.md
- Implementer's Guide: guides/implementers.md
- Performance:
- Benchmark: performance/benchmark/index.md
diff --git a/site-src/api-types/inferencepool.md b/site-src/api-types/inferencepool.md
index baa604b61..1494d314e 100644
--- a/site-src/api-types/inferencepool.md
+++ b/site-src/api-types/inferencepool.md
@@ -7,28 +7,56 @@
## Background
-The InferencePool resource is a logical grouping of compute resources, e.g. Pods, that run model servers. The InferencePool would deploy its own routing, and offer administrative configuration to the Platform Admin.
+The **InferencePool** API defines a group of Pods dedicated to serving AI models. Pods within an InferencePool share the same compute configuration, accelerator type, base language model, and model server. This abstraction simplifies the management of AI model serving resources, providing a centralized point of administrative configuration for Platform Admins.
-It is expected for the InferencePool to:
+An InferencePool is expected to be bundled with an [Endpoint Picker](https://github.com/kubernetes-sigs/gateway-api-inference-extension/tree/main/pkg/epp) extension. This extension is responsible for tracking key metrics on each model server (e.g. KV-cache utilization, queue length of pending requests, active LoRA adapters) and routing incoming inference requests to the optimal model server replica based on these metrics. An EPP can only be associated with a single InferencePool. The associated InferencePool is specified by the [poolName](https://github.com/kubernetes-sigs/gateway-api-inference-extension/blob/main/config/manifests/inferencepool-resources.yaml#L54) and [poolNamespace](https://github.com/kubernetes-sigs/gateway-api-inference-extension/blob/main/config/manifests/inferencepool-resources.yaml#L56) flags. An HTTPRoute can have multiple backendRefs that reference the same InferencePool and therefore route to the same EPP, or backendRefs that reference different InferencePools and therefore route to different EPPs.
- - Enforce fair consumption of resources across competing workloads
- - Efficiently route requests across shared compute (as displayed by the PoC)
-
-It is _not_ expected for the InferencePool to:
+Additionally, any Pod that seeks to join an InferencePool would need to support the [model server protocol](https://github.com/kubernetes-sigs/gateway-api-inference-extension/tree/main/docs/proposals/003-model-server-protocol), defined by this project, to ensure the Endpoint Picker has adequate information to intelligently route requests.
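+
+For illustration, here is a minimal sketch of how an EPP Deployment might reference its InferencePool via these flags. The container image and resource names below are placeholders, not the canonical configuration; see the linked manifest for the authoritative version:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: vllm-llama3-8b-instruct-epp
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: vllm-llama3-8b-instruct-epp
+  template:
+    metadata:
+      labels:
+        app: vllm-llama3-8b-instruct-epp
+    spec:
+      containers:
+      - name: epp
+        image: epp-image:tag # placeholder image; see the linked manifest
+        args:
+        # Bind this EPP to its single associated InferencePool.
+        - -poolName
+        - vllm-llama3-8b-instruct
+        - -poolNamespace
+        - default
+```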
- - Enforce any common set of adapters or base models are available on the Pods
- - Manage Deployments of Pods within the Pool
- - Manage Pod lifecycle of pods within the pool
+## How to Configure an InferencePool
-Additionally, any Pod that seeks to join an InferencePool would need to support a protocol, defined by this project, to ensure the Pool has adequate information to intelligently route requests.
+The full spec of the InferencePool is defined [here](/reference/spec/#inferencepool).
-`InferencePool` has some small overlap with `Service`, displayed here:
+In summary, the InferencePoolSpec consists of 3 major parts:
+
+- The `selector` field specifies which Pods belong to this pool. The labels in this selector must match the labels applied to your model server Pods.
+- The `targetPortNumber` field defines the port number that the Inference Gateway should route to on model server Pods that belong to this pool.
+- The `extensionRef` field references the [endpoint picker extension](https://github.com/kubernetes-sigs/gateway-api-inference-extension/tree/main/pkg/epp) (EPP) service that monitors key metrics from model servers within the InferencePool and provides intelligent routing decisions.
+
+### Example Configuration
+
+Here is an example InferencePool configuration:
+
+```yaml
+apiVersion: inference.networking.x-k8s.io/v1alpha2
+kind: InferencePool
+metadata:
+ name: vllm-llama3-8b-instruct
+spec:
+ targetPortNumber: 8000
+ selector:
+ app: vllm-llama3-8b-instruct
+ extensionRef:
+ name: vllm-llama3-8b-instruct-epp
+  portNumber: 9002
+ failureMode: FailClose
+```
+
+In this example:
+
+- An InferencePool named `vllm-llama3-8b-instruct` is created in the `default` namespace.
+- It will select Pods that have the label `app: vllm-llama3-8b-instruct`.
+- Traffic routed to this InferencePool will call out to the EPP service `vllm-llama3-8b-instruct-epp` on port `9002` to make routing decisions. Because `failureMode` is set to `FailClose`, the request will be dropped if the EPP fails to pick an endpoint or is unresponsive.
+- Traffic routed to this InferencePool will be forwarded to port `8000` on the selected Pods.
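+
+To send traffic to this pool, an HTTPRoute references the InferencePool as a backendRef. A minimal sketch follows; the Gateway name `inference-gateway` is illustrative:
+
+```yaml
+apiVersion: gateway.networking.k8s.io/v1
+kind: HTTPRoute
+metadata:
+  name: llm-route
+spec:
+  parentRefs:
+  - group: gateway.networking.k8s.io
+    kind: Gateway
+    name: inference-gateway
+  rules:
+  - backendRefs:
+    # Route to the InferencePool instead of a Service.
+    - group: inference.networking.x-k8s.io
+      kind: InferencePool
+      name: vllm-llama3-8b-instruct
+```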
+
+## Overlap with Service
+
+**InferencePool** has some small overlap with **Service**, displayed here:
-The InferencePool is _not_ intended to be a mask of the Service object, simply exposing the absolute bare minimum required to allow the Platform Admin to focus less on networking, and more on Pool management.
-
-## Spec
+The InferencePool is not intended to be a mask of the Service object. It provides a specialized abstraction tailored for managing and routing traffic to groups of LLM model servers, allowing Platform Admins to focus on pool-level management rather than low-level networking details.
-The full spec of the InferencePool is defined [here](/reference/spec/#inferencepool).
\ No newline at end of file
+## Replacing an InferencePool
+
+Please refer to the [Replacing an InferencePool](/guides/replacing-inference-pool) guide for details on use cases and how to replace an InferencePool.
diff --git a/site-src/guides/replacing-inference-pool.md b/site-src/guides/replacing-inference-pool.md
new file mode 100644
index 000000000..212945706
--- /dev/null
+++ b/site-src/guides/replacing-inference-pool.md
@@ -0,0 +1,59 @@
+# Replacing an InferencePool
+
+## Background
+
+Replacing an InferencePool is a powerful technique for performing various infrastructure and model updates with minimal disruption and built-in rollback capabilities. This method allows you to introduce changes incrementally, monitor their impact, and revert to the previous state if necessary.
+
+## Use Cases
+
+Replacing an InferencePool is useful for:
+
+- Upgrading or replacing your model server framework
+- Upgrading or replacing your base model
+- Transitioning to new hardware
+
+## How to Replace an InferencePool
+
+To replace an InferencePool:
+
+1. **Deploy new infrastructure**: Create a new InferencePool configured with the new hardware, model server, or base model that you have chosen.
+1. **Configure traffic splitting**: Use an HTTPRoute to split traffic between the existing InferencePool and the new InferencePool. The `backendRefs.weight` field controls the traffic percentage allocated to each pool.
+1. **Maintain InferenceModel integrity**: Keep your InferenceModel configuration unchanged. This ensures that the system applies the same LoRA adapters consistently across both base model versions.
+1. **Preserve rollback capability**: Retain the original nodes and InferencePool during the rollout to facilitate a rollback if necessary.
+
+### Example
+
+You start with an existing InferencePool named `llm-pool-v1`. To replace the original InferencePool, you create a new InferencePool named `llm-pool-v2`. By configuring an **HTTPRoute**, as shown below, you can incrementally split traffic between the original `llm-pool-v1` and the new `llm-pool-v2`.
+
+1. Save the following sample manifest as `httproute.yaml`:
+
+ ```yaml
+ apiVersion: gateway.networking.k8s.io/v1
+ kind: HTTPRoute
+ metadata:
+ name: llm-route
+ spec:
+ parentRefs:
+ - group: gateway.networking.k8s.io
+ kind: Gateway
+ name: inference-gateway
+      rules:
+      - backendRefs:
+        - group: inference.networking.x-k8s.io
+          kind: InferencePool
+          name: llm-pool-v1
+          weight: 90
+        - group: inference.networking.x-k8s.io
+          kind: InferencePool
+          name: llm-pool-v2
+          weight: 10
+ ```
+
+1. Apply the sample manifest to your cluster:
+
+    ```bash
+ kubectl apply -f httproute.yaml
+ ```
+
+    The original `llm-pool-v1` InferencePool receives 90% of the traffic, while the `llm-pool-v2` InferencePool receives the remaining 10%.
+
+1. Increase the traffic weight gradually for the `llm-pool-v2` InferencePool to complete the new InferencePool rollout.
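+
+    For example, once you have validated `llm-pool-v2`, the final state of the rule might shift all traffic to the new pool (the weights below are illustrative):
+
+    ```yaml
+    rules:
+    - backendRefs:
+      # All traffic now goes to the new pool; the old pool is kept for rollback.
+      - group: inference.networking.x-k8s.io
+        kind: InferencePool
+        name: llm-pool-v1
+        weight: 0
+      - group: inference.networking.x-k8s.io
+        kind: InferencePool
+        name: llm-pool-v2
+        weight: 100
+    ```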
diff --git a/site-src/reference/spec.md b/site-src/reference/spec.md
index e16c113c1..d8e0c95bf 100644
--- a/site-src/reference/spec.md
+++ b/site-src/reference/spec.md
@@ -1,12 +1,14 @@
# API Reference
## Packages
-- [inference.networking.x-k8s.io/v1alpha1](#inferencenetworkingx-k8siov1alpha1)
+- [inference.networking.x-k8s.io/v1alpha2](#inferencenetworkingx-k8siov1alpha2)
-## inference.networking.x-k8s.io/v1alpha1
+## inference.networking.x-k8s.io/v1alpha2
+
+Package v1alpha2 contains API Schema definitions for the
+inference.networking.x-k8s.io API group.
-Package v1alpha1 contains API Schema definitions for the gateway v1alpha1 API group
### Resource Types
- [InferenceModel](#inferencemodel)
@@ -18,26 +20,152 @@ Package v1alpha1 contains API Schema definitions for the gateway v1alpha1 API gr
_Underlying type:_ _string_
-Defines how important it is to serve the model compared to other models.
+Criticality defines how important it is to serve the model compared to other models.
+Criticality is intentionally a bounded enum to contain the possibilities that need to be supported by the load balancing algorithm. Any reference to the Criticality field must be optional (use a pointer), and set no default.
+This allows us to union this with a oneOf field in the future should we wish to adjust/extend this behavior.
_Validation:_
-- Enum: [Critical Default Sheddable]
+- Enum: [Critical Standard Sheddable]
_Appears in:_
- [InferenceModelSpec](#inferencemodelspec)
| Field | Description |
| --- | --- |
-| `Critical` | Most important. Requests to this band will be shed last.
|
-| `Default` | More important than Sheddable, less important than Critical.
Requests in this band will be shed before critical traffic.
+kubebuilder:default=Default
|
-| `Sheddable` | Least important. Requests to this band will be shed before all other bands.
|
+| `Critical` | Critical defines the highest level of criticality. Requests to this band will be shed last.
|
+| `Standard` | Standard defines the base criticality level and is more important than Sheddable but less
important than Critical. Requests in this band will be shed before critical traffic.
Most models are expected to fall within this band.
|
+| `Sheddable` | Sheddable defines the lowest level of criticality. Requests to this band will be shed before
all other bands.
|
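+
+For example, a hypothetical InferenceModel marking its traffic as `Critical` might look like this (resource and model names are illustrative):
+
+```yaml
+apiVersion: inference.networking.x-k8s.io/v1alpha2
+kind: InferenceModel
+metadata:
+  name: chat-assistant
+spec:
+  modelName: llama3-8b-chat # illustrative model name
+  criticality: Critical     # requests in this band are shed last
+  poolRef:
+    name: vllm-llama3-8b-instruct
+```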
+
+
+#### EndpointPickerConfig
+
+
+
+EndpointPickerConfig specifies the configuration needed by the proxy to discover and connect to the endpoint picker extension.
+This type is intended to be a union of mutually exclusive configuration options that we may add in the future.
+
+
+
+_Appears in:_
+- [InferencePoolSpec](#inferencepoolspec)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `extensionRef` _[Extension](#extension)_ | Extension configures an endpoint picker as an extension service. | | Required: \{\}
|
+
+
+#### Extension
+
+
+
+Extension specifies how to configure an extension that runs the endpoint picker.
+
+
+
+_Appears in:_
+- [EndpointPickerConfig](#endpointpickerconfig)
+- [InferencePoolSpec](#inferencepoolspec)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `group` _[Group](#group)_ | Group is the group of the referent.
The default value is "", representing the Core API group. | | MaxLength: 253
Pattern: `^$\|^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$`
|
+| `kind` _[Kind](#kind)_ | Kind is the Kubernetes resource kind of the referent. For example
"Service".
Defaults to "Service" when not specified.
ExternalName services can refer to CNAME DNS records that may live
outside of the cluster and as such are difficult to reason about in
terms of conformance. They also may not be safe to forward to (see
CVE-2021-25740 for more information). Implementations MUST NOT
support ExternalName Services. | Service | MaxLength: 63
MinLength: 1
Pattern: `^[a-zA-Z]([-a-zA-Z0-9]*[a-zA-Z0-9])?$`
|
+| `name` _[ObjectName](#objectname)_ | Name is the name of the referent. | | MaxLength: 253
MinLength: 1
Required: \{\}
|
+| `portNumber` _[PortNumber](#portnumber)_ | The port number on the service running the extension. When unspecified,
implementations SHOULD infer a default value of 9002 when the Kind is
Service. | | Maximum: 65535
Minimum: 1
|
+| `failureMode` _[ExtensionFailureMode](#extensionfailuremode)_ | Configures how the gateway handles the case when the extension is not responsive.
Defaults to failClose. | FailClose | Enum: [FailOpen FailClose]
|
+
+
+#### ExtensionConnection
+
+
+
+ExtensionConnection encapsulates options that configure the connection to the extension.
+
+
+
+_Appears in:_
+- [Extension](#extension)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `failureMode` _[ExtensionFailureMode](#extensionfailuremode)_ | Configures how the gateway handles the case when the extension is not responsive.
Defaults to failClose. | FailClose | Enum: [FailOpen FailClose]
|
+
+
+#### ExtensionFailureMode
+
+_Underlying type:_ _string_
+
+ExtensionFailureMode defines the options for how the gateway handles the case when the extension is not
+responsive.
+
+_Validation:_
+- Enum: [FailOpen FailClose]
+
+_Appears in:_
+- [Extension](#extension)
+- [ExtensionConnection](#extensionconnection)
+
+| Field | Description |
+| --- | --- |
+| `FailOpen` | FailOpen specifies that the proxy should not drop the request and should forward the request to an endpoint of its picking.
|
+| `FailClose` | FailClose specifies that the proxy should drop the request.
|
+
+
+#### ExtensionReference
+
+
+
+ExtensionReference is a reference to the extension deployment.
+
+
+
+_Appears in:_
+- [Extension](#extension)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `group` _[Group](#group)_ | Group is the group of the referent.
The default value is "", representing the Core API group. | | MaxLength: 253
Pattern: `^$\|^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$`
|
+| `kind` _[Kind](#kind)_ | Kind is the Kubernetes resource kind of the referent. For example
"Service".
Defaults to "Service" when not specified.
ExternalName services can refer to CNAME DNS records that may live
outside of the cluster and as such are difficult to reason about in
terms of conformance. They also may not be safe to forward to (see
CVE-2021-25740 for more information). Implementations MUST NOT
support ExternalName Services. | Service | MaxLength: 63
MinLength: 1
Pattern: `^[a-zA-Z]([-a-zA-Z0-9]*[a-zA-Z0-9])?$`
|
+| `name` _[ObjectName](#objectname)_ | Name is the name of the referent. | | MaxLength: 253
MinLength: 1
Required: \{\}
|
+| `portNumber` _[PortNumber](#portnumber)_ | The port number on the service running the extension. When unspecified,
implementations SHOULD infer a default value of 9002 when the Kind is
Service. | | Maximum: 65535
Minimum: 1
|
+
+
+#### Group
+
+_Underlying type:_ _string_
+
+Group refers to a Kubernetes Group. It must either be an empty string or a
+RFC 1123 subdomain.
+
+This validation is based off of the corresponding Kubernetes validation:
+https://github.com/kubernetes/apimachinery/blob/02cfb53916346d085a6c6c7c66f882e3c6b0eca6/pkg/util/validation/validation.go#L208
+
+Valid values include:
+
+* "" - empty string implies core Kubernetes API group
+* "gateway.networking.k8s.io"
+* "foo.example.com"
+
+Invalid values include:
+
+* "example.com/bar" - "/" is an invalid character
+
+_Validation:_
+- MaxLength: 253
+- Pattern: `^$|^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$`
+
+_Appears in:_
+- [Extension](#extension)
+- [ExtensionReference](#extensionreference)
+- [PoolObjectReference](#poolobjectreference)
+
#### InferenceModel
-InferenceModel is the Schema for the InferenceModels API
+InferenceModel is the Schema for the InferenceModels API.
@@ -45,29 +173,31 @@ InferenceModel is the Schema for the InferenceModels API
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
-| `apiVersion` _string_ | `inference.networking.x-k8s.io/v1alpha1` | | |
+| `apiVersion` _string_ | `inference.networking.x-k8s.io/v1alpha2` | | |
| `kind` _string_ | `InferenceModel` | | |
| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `spec` _[InferenceModelSpec](#inferencemodelspec)_ | | | |
| `status` _[InferenceModelStatus](#inferencemodelstatus)_ | | | |
+
+
+
+
#### InferenceModelSpec
-InferenceModelSpec represents a specific model use case. This resource is
+InferenceModelSpec represents the desired state of a specific model use case. This resource is
managed by the "Inference Workload Owner" persona.
-
-The Inference Workload Owner persona is: a team that trains, verifies, and
+The Inference Workload Owner persona is someone who trains, verifies, and
leverages a large language model from a model frontend, drives the lifecycle
and rollout of new versions of those models, and defines the specific
performance and latency goals for the model. These workloads are
expected to operate within an InferencePool sharing compute capacity with other
InferenceModels, defined by the Inference Platform Admin.
-
InferenceModel's modelName (not the ObjectMeta name) is unique for a given InferencePool,
if the name is reused, an error will be shown on the status of a
InferenceModel that attempted to reuse. The oldest InferenceModel, based on
@@ -81,10 +211,10 @@ _Appears in:_
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
-| `modelName` _string_ | The name of the model as the users set in the "model" parameter in the requests.
The name should be unique among the workloads that reference the same backend pool.
This is the parameter that will be used to match the request with. In the future, we may
allow to match on other request parameters. The other approach to support matching on
on other request parameters is to use a different ModelName per HTTPFilter.
Names can be reserved without implementing an actual model in the pool.
This can be done by specifying a target model and setting the weight to zero,
an error will be returned specifying that no valid target model is found. | | MaxLength: 253
|
-| `criticality` _[Criticality](#criticality)_ | Defines how important it is to serve the model compared to other models referencing the same pool. | Default | Enum: [Critical Default Sheddable]
|
-| `targetModels` _[TargetModel](#targetmodel) array_ | Allow multiple versions of a model for traffic splitting.
If not specified, the target model name is defaulted to the modelName parameter.
modelName is often in reference to a LoRA adapter. | | MaxItems: 10
|
-| `poolRef` _[PoolObjectReference](#poolobjectreference)_ | Reference to the inference pool, the pool must exist in the same namespace. | | Required: \{\}
|
+| `modelName` _string_ | ModelName is the name of the model as it will be set in the "model" parameter for an incoming request.
ModelNames must be unique for a referencing InferencePool
(names can be reused for a different pool in the same cluster).
The modelName with the oldest creation timestamp is retained, and the incoming
InferenceModel's Ready status is set to false with a corresponding reason.
In the rare case of a race condition, one Model will be selected randomly to be considered valid, and the other rejected.
Names can be reserved without an underlying model configured in the pool.
This can be done by specifying a target model and setting the weight to zero,
an error will be returned specifying that no valid target model is found. | | MaxLength: 256
Required: \{\}
|
+| `criticality` _[Criticality](#criticality)_ | Criticality defines how important it is to serve the model compared to other models referencing the same pool.
Criticality impacts how traffic is handled in resource constrained situations. It handles this by
queuing or rejecting requests of lower criticality. InferenceModels of an equivalent Criticality will
fairly share resources over throughput of tokens. In the future, the metric used to calculate fairness,
and the proportionality of fairness will be configurable.
Default values for this field will not be set, to allow for future additions of new fields that may 'one of' with this field.
Any implementations that may consume this field may treat an unset value as the 'Standard' range. | | Enum: [Critical Standard Sheddable]
|
+| `targetModels` _[TargetModel](#targetmodel) array_ | TargetModels allow multiple versions of a model for traffic splitting.
If not specified, the target model name is defaulted to the modelName parameter.
modelName is often in reference to a LoRA adapter. | | MaxItems: 10
|
+| `poolRef` _[PoolObjectReference](#poolobjectreference)_ | PoolRef is a reference to the inference pool, the pool must exist in the same namespace. | | Required: \{\}
|
#### InferenceModelStatus
@@ -100,14 +230,14 @@ _Appears in:_
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
-| `conditions` _[Condition](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#condition-v1-meta) array_ | Conditions track the state of the InferencePool. | | |
+| `conditions` _[Condition](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#condition-v1-meta) array_ | Conditions track the state of the InferenceModel.
Known condition types are:
* "Accepted" | [map[lastTransitionTime:1970-01-01T00:00:00Z message:Waiting for controller reason:Pending status:Unknown type:Ready]] | MaxItems: 8
|
#### InferencePool
-InferencePool is the Schema for the Inferencepools API
+InferencePool is the Schema for the InferencePools API.
@@ -115,13 +245,17 @@ InferencePool is the Schema for the Inferencepools API
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
-| `apiVersion` _string_ | `inference.networking.x-k8s.io/v1alpha1` | | |
+| `apiVersion` _string_ | `inference.networking.x-k8s.io/v1alpha2` | | |
| `kind` _string_ | `InferencePool` | | |
| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `spec` _[InferencePoolSpec](#inferencepoolspec)_ | | | |
| `status` _[InferencePoolStatus](#inferencepoolstatus)_ | | | |
+
+
+
+
#### InferencePoolSpec
@@ -135,8 +269,9 @@ _Appears in:_
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
-| `selector` _object (keys:[LabelKey](#labelkey), values:[LabelValue](#labelvalue))_ | Selector uses a map of label to watch model server pods
that should be included in the InferencePool. ModelServers should not
be with any other Service or InferencePool, that behavior is not supported
and will result in sub-optimal utilization.
In some cases, implementations may translate this to a Service selector, so this matches the simple
map used for Service selectors instead of the full Kubernetes LabelSelector type. | | Required: \{\}
|
-| `targetPortNumber` _integer_ | TargetPortNumber is the port number that the model servers within the pool expect
to receive traffic from.
This maps to the TargetPort in: https://pkg.go.dev/k8s.io/api/core/v1#ServicePort | | Maximum: 65535
Minimum: 0
Required: \{\}
|
+| `selector` _object (keys:[LabelKey](#labelkey), values:[LabelValue](#labelvalue))_ | Selector defines a map of labels to watch model server pods
that should be included in the InferencePool.
In some cases, implementations may translate this field to a Service selector, so this matches the simple
map used for Service selectors instead of the full Kubernetes LabelSelector type.
If specified, it will be applied to match the model server pods in the same namespace as the InferencePool.
Cross-namespace selectors are not supported. | | Required: \{\}
|
+| `targetPortNumber` _integer_ | TargetPortNumber defines the port number to access the selected model servers.
The number must be in the range 1 to 65535. | | Maximum: 65535
Minimum: 1
Required: \{\}
|
+| `extensionRef` _[Extension](#extension)_ | Extension configures an endpoint picker as an extension service. | | Required: \{\}
|
#### InferencePoolStatus
@@ -152,33 +287,56 @@ _Appears in:_
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
-| `conditions` _[Condition](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#condition-v1-meta) array_ | Conditions track the state of the InferencePool. | | |
+| `parent` _[PoolStatus](#poolstatus) array_ | Parents is a list of parent resources (usually Gateways) that are
associated with the InferencePool, and the status of the InferencePool with respect to
each parent.
A maximum of 32 Gateways will be represented in this list. An empty list
means the InferencePool has not been attached to any Gateway. | | MaxItems: 32
|
+
+
+#### Kind
+
+_Underlying type:_ _string_
+
+Kind refers to a Kubernetes Kind.
+
+Valid values include:
+
+* "Service"
+* "HTTPRoute"
+
+Invalid values include:
+
+* "invalid/kind" - "/" is an invalid character
+
+_Validation:_
+- MaxLength: 63
+- MinLength: 1
+- Pattern: `^[a-zA-Z]([-a-zA-Z0-9]*[a-zA-Z0-9])?$`
+
+_Appears in:_
+- [Extension](#extension)
+- [ExtensionReference](#extensionreference)
+- [PoolObjectReference](#poolobjectreference)
+
#### LabelKey
_Underlying type:_ _string_
-Originally copied from: https://github.com/kubernetes-sigs/gateway-api/blob/99a3934c6bc1ce0874f3a4c5f20cafd8977ffcb4/apis/v1/shared_types.go#L694-L731
+LabelKey was originally copied from: https://github.com/kubernetes-sigs/gateway-api/blob/99a3934c6bc1ce0874f3a4c5f20cafd8977ffcb4/apis/v1/shared_types.go#L694-L731
Duplicated as to not take an unexpected dependency on gw's API.
-
LabelKey is the key of a label. This is used for validation
of maps. This matches the Kubernetes "qualified name" validation that is used for labels.
-
+Labels are case sensitive, so my-label and My-Label are considered distinct.
Valid values include:
-
* example
* example.com
* example.com/path
* example.com/path.html
-
Invalid values include:
-
* example~ - "~" is an invalid character
* example.com. - can not start or end with "."
@@ -202,10 +360,8 @@ of maps. This matches the Kubernetes label validation rules:
* unless empty, must begin and end with an alphanumeric character ([a-z0-9A-Z]),
* could contain dashes (-), underscores (_), dots (.), and alphanumerics between.
-
Valid values include:
-
* MyValue
* my.name
* 123-my-value
@@ -220,6 +376,25 @@ _Appears in:_
+#### ObjectName
+
+_Underlying type:_ _string_
+
+ObjectName refers to the name of a Kubernetes object.
+Object names can have a variety of forms, including RFC 1123 subdomains,
+RFC 1123 labels, or RFC 1035 labels.
+
+_Validation:_
+- MaxLength: 253
+- MinLength: 1
+
+_Appears in:_
+- [Extension](#extension)
+- [ExtensionReference](#extensionreference)
+- [PoolObjectReference](#poolobjectreference)
+
+
+
#### PoolObjectReference
@@ -234,9 +409,42 @@ _Appears in:_
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
-| `group` _string_ | Group is the group of the referent. | inference.networking.x-k8s.io | MaxLength: 253
Pattern: `^$\|^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$`
|
-| `kind` _string_ | Kind is kind of the referent. For example "InferencePool". | InferencePool | MaxLength: 63
MinLength: 1
Pattern: `^[a-zA-Z]([-a-zA-Z0-9]*[a-zA-Z0-9])?$`
|
-| `name` _string_ | Name is the name of the referent. | | MaxLength: 253
MinLength: 1
Required: \{\}
|
+| `group` _[Group](#group)_ | Group is the group of the referent. | inference.networking.x-k8s.io | MaxLength: 253
Pattern: `^$\|^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$`
|
+| `kind` _[Kind](#kind)_ | Kind is kind of the referent. For example "InferencePool". | InferencePool | MaxLength: 63
MinLength: 1
Pattern: `^[a-zA-Z]([-a-zA-Z0-9]*[a-zA-Z0-9])?$`
|
+| `name` _[ObjectName](#objectname)_ | Name is the name of the referent. | | MaxLength: 253
MinLength: 1
Required: \{\}
|
+
+
+#### PoolStatus
+
+
+
+PoolStatus defines the observed state of InferencePool from a Gateway.
+
+
+
+_Appears in:_
+- [InferencePoolStatus](#inferencepoolstatus)
+
+| Field | Description | Default | Validation |
+| --- | --- | --- | --- |
+| `parentRef` _[ObjectReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#objectreference-v1-core)_ | GatewayRef indicates the Gateway that observed the state of the InferencePool. | | |
+| `conditions` _[Condition](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#condition-v1-meta) array_ | Conditions track the state of the InferencePool.
Known condition types are:
* "Accepted"
* "ResolvedRefs" | [map[lastTransitionTime:1970-01-01T00:00:00Z message:Waiting for controller reason:Pending status:Unknown type:Accepted]] | MaxItems: 8
|
+
+
+#### PortNumber
+
+_Underlying type:_ _integer_
+
+PortNumber defines a network port.
+
+_Validation:_
+- Maximum: 65535
+- Minimum: 1
+
+_Appears in:_
+- [Extension](#extension)
+- [ExtensionReference](#extensionreference)
+
#### TargetModel
@@ -246,10 +454,10 @@ _Appears in:_
TargetModel represents a deployed model or a LoRA adapter. The
Name field is expected to match the name of the LoRA adapter
(or base model) as it is registered within the model server. Inference
-Gateway assumes that the model exists on the model server and is the
+Gateway assumes that the model exists on the model server and it's the
responsibility of the user to validate a correct match. Should a model fail
-to exist at request time, the error is processed by the Instance Gateway,
-and then emitted on the appropriate InferenceModel object.
+to exist at request time, the error is processed by the Inference Gateway
+and emitted on the appropriate InferenceModel object.
@@ -258,7 +466,7 @@ _Appears in:_
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
-| `name` _string_ | The name of the adapter as expected by the ModelServer. | | MaxLength: 253
|
-| `weight` _integer_ | Weight is used to determine the proportion of traffic that should be
sent to this target model when multiple versions of the model are specified. | 1 | Maximum: 1e+06
Minimum: 0
|
+| `name` _string_ | Name is the name of the adapter or base model, as expected by the ModelServer. | | MaxLength: 253
Required: \{\}
|
+| `weight` _integer_ | Weight is used to determine the proportion of traffic that should be
sent to this model when multiple target models are specified.
Weight defines the proportion of requests forwarded to the specified
model. This is computed as weight/(sum of all weights in this
TargetModels list). For non-zero values, there may be some epsilon from
the exact proportion defined here depending on the precision an
implementation supports. Weight is not a percentage and the sum of
weights does not need to equal 100.
If a weight is set for any targetModel, it must be set for all targetModels.
Conversely weights are optional, so long as ALL targetModels do not specify a weight. | | Maximum: 1e+06
Minimum: 1
|
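+
+As a worked example (adapter names are illustrative), the following spec sends 90% of requests to the base model and 10% to a LoRA adapter, since each target model receives weight/(sum of weights) of the traffic:
+
+```yaml
+targetModels:
+- name: llama3-8b-instruct      # base model as registered in the model server
+  weight: 90 # 90/(90+10) = 90% of requests
+- name: llama3-8b-instruct-lora # illustrative LoRA adapter name
+  weight: 10 # 10/(90+10) = 10% of requests
+```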