Add prefix aware routing proposal #602


Draft: wants to merge 1 commit into main
Conversation

@liu-cong (Contributor) commented Mar 28, 2025

This proposal was initially discussed in #498

@k8s-ci-robot (Contributor) commented:

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@k8s-ci-robot added the do-not-merge/work-in-progress (indicates that a PR should not merge because it is a work in progress) and cncf-cla: yes (indicates the PR's author has signed the CNCF CLA) labels on Mar 28, 2025
@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: liu-cong
Once this PR has been reviewed and has the lgtm label, please assign sergeykanzhelev for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

netlify bot commented Mar 28, 2025

Deploy Preview for gateway-api-inference-extension ready!

Latest commit: 2595221
Latest deploy log: https://app.netlify.com/sites/gateway-api-inference-extension/deploys/67e6d2c0828ed60008712031
Deploy Preview: https://deploy-preview-602--gateway-api-inference-extension.netlify.app

@k8s-ci-robot added the size/L label (denotes a PR that changes 100-499 lines, ignoring generated files) on Mar 28, 2025

1. **Prefix affinity consistent hashing**

This goes a step beyond session affinity by using a prefix aware hash function to route requests with similar prefixes to the same or similar servers. A naive hash function could simply hash the first N characters/tokens of the request, so that all requests sharing the same first N characters/tokens are routed to the same server. The [vLLM production stack](https://github.com/vllm-project/production-stack/issues/59) is exploring this strategy using simhash, and preliminary experiments showed mixed results. KubeAI uses a simpler strategy that hashes only the request prefix, up to a configurable `prefixCharLength`. Its effectiveness is likely highly dependent on the input length distribution.
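For illustration only (this is not part of the proposal or the PoC): a minimal Go sketch of the naive first-N-characters hashing described above. The `prefixCharLength` value, server names, and prompts are made-up examples.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// prefixCharLength mirrors KubeAI's configurable prefix length; 32 is an
// arbitrary value chosen for this example.
const prefixCharLength = 32

// pickServer hashes at most the first prefixCharLength bytes of the prompt
// and maps the hash onto one of the candidate servers, so requests sharing
// that prefix land on the same server.
func pickServer(prompt string, servers []string) string {
	prefix := prompt
	if len(prefix) > prefixCharLength {
		prefix = prefix[:prefixCharLength]
	}
	h := fnv.New32a()
	h.Write([]byte(prefix))
	return servers[h.Sum32()%uint32(len(servers))]
}

func main() {
	servers := []string{"pod-a", "pod-b", "pod-c"}
	// Both prompts share the same first 32 characters ("You are a helpful
	// assistant for "), so they hash to the same server.
	fmt.Println(pickServer("You are a helpful assistant for ACME. Question 1", servers))
	fmt.Println(pickServer("You are a helpful assistant for ACME. Question 2", servers))
}
```

As the proposal text notes, how well this works depends heavily on the input length distribution: prompts shorter than the cutoff are effectively hashed on the full prompt, while long shared prefixes beyond the cutoff are ignored.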
Contributor commented:

is that a moving window of up to prefixCharLength? or does it always have exactly prefixCharLength characters?


Pros:

* Easy to explain (compared to hashing) and likely more effective than the hashing strategy.
Contributor commented:

you mean "than consistent hashing strategy"?

1. Prefix affinity needs to be aware of the server load, otherwise we will create hot spots. We can use queue length and KV cache utilization to understand the server load. This is similar to the [queue depth threshold](https://github.com/kubernetes-sigs/gateway-api-inference-extension/blob/2a615e981228aa6ffc2a89219c986ac863dde776/pkg/epp/scheduling/scheduler.go#L40) for LoRA affinity.
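As a rough illustration (this is not the EPP scheduler implementation): a Go sketch of gating prefix affinity on server load, skipping pods whose queue depth or KV cache utilization crosses a threshold. The struct fields, threshold values, and prefix-match scores below are hypothetical.

```go
package main

import "fmt"

type podMetrics struct {
	Name           string
	QueueLen       int     // number of waiting requests
	KVCacheUtil    float64 // 0.0 - 1.0
	PrefixMatchLen int     // how many prompt tokens this pod likely has cached
}

const (
	queueThreshold   = 5   // in the spirit of the LoRA affinity queue depth threshold
	kvCacheThreshold = 0.8 // skip prefix affinity when the KV cache is nearly full
)

// pick prefers the pod with the longest cached prefix, but only among pods
// that are not overloaded; if every pod is overloaded it falls back to the
// least-queued pod. Assumes at least one pod.
func pick(pods []podMetrics) podMetrics {
	best := podMetrics{PrefixMatchLen: -1}
	fallback := pods[0]
	for _, p := range pods {
		if p.QueueLen < fallback.QueueLen {
			fallback = p
		}
		overloaded := p.QueueLen >= queueThreshold || p.KVCacheUtil >= kvCacheThreshold
		if !overloaded && p.PrefixMatchLen > best.PrefixMatchLen {
			best = p
		}
	}
	if best.PrefixMatchLen < 0 {
		return fallback
	}
	return best
}

func main() {
	pods := []podMetrics{
		{Name: "pod-a", QueueLen: 7, KVCacheUtil: 0.9, PrefixMatchLen: 512}, // best prefix, but hot
		{Name: "pod-b", QueueLen: 1, KVCacheUtil: 0.3, PrefixMatchLen: 128},
		{Name: "pod-c", QueueLen: 2, KVCacheUtil: 0.4, PrefixMatchLen: 0},
	}
	fmt.Println(pick(pods).Name) // pod-b: longest cached prefix among non-overloaded pods
}
```

If every pod is over the thresholds, this sketch falls back to the least-queued pod rather than insisting on the prefix match, which is one way to avoid the hot spots described above.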


## Proposal
Contributor commented:

+1 to start with this approach since it seems relatively simple to implement, but in theory it should also be more resilient than the other two options.

@smarterclayton (Contributor) commented:

Cong's PoC is in main...liu-cong:llm-instance-gateway:prefix-poc (or at least, a version of it is) for those interested.


## Design Options

1. **Session affinity**
Collaborator commented:
Consider switching the options to Header 3; it's easy for the options to blend together as is, and since they have sub-bullets, using 1. resets the count and they all have the value of 1.

Suggested change
1. **Session affinity**
### **Session affinity**
