fix(vllm): add ray to vllm extras (#3688) #3694
Open
kvr06-ai wants to merge 1 commit into EleutherAI:main from
Conversation
`vllm_causallms` imports `ray` at module level, but `ray` was missing from the `vllm` extras, so `pip install lm_eval[vllm]` fails with an ImportError.
Summary
`lm_eval/models/vllm_causallms.py` imports `ray` at module level (line 14), but `ray` is missing from the `[vllm]` extras in `pyproject.toml`. vllm itself does not pull `ray` in transitively (it is absent from vllm's `requirements/cuda.txt` and from vllm's PyPI `requires_dist`), so `pip install lm_eval[vllm]` currently produces an environment that fails with `ImportError: No module named 'ray'` the moment the backend loads.

Fixes #3688
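Concretely, the fix is a one-line addition to the extras table in `pyproject.toml`. A rough sketch of the resulting entry (illustrative only; the actual extras list and any version pins in the repository may differ):

```toml
[project.optional-dependencies]
vllm = [
    "vllm",  # existing entry (illustrative; the real pin may differ)
    "ray",   # added so the module-level `import ray` resolves
]
```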
Changes
`pyproject.toml`: add `"ray"` to the `vllm` extras entry

Testing

`pip install -e '.[vllm]'` now pulls in ray, and `python -c "import lm_eval.models.vllm_causallms"` succeeds in a clean environment.

Notes
The three ray call sites (`@ray.remote`, `ray.get`, `ray.shutdown`) sit inside the `data_parallel_size > 1 and not self.V1` branch, so ray is technically only needed for multi-GPU data-parallel runs. A follow-up could lazy-import ray if the added install footprint becomes a concern, but that is out of scope for this fix. This PR keeps the existing import contract and only corrects the packaging metadata to match it.
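For the lazy-import follow-up mentioned above, a minimal sketch of what such a helper could look like. The helper name, signature, and error message are hypothetical illustrations, not part of this PR or of lm-eval's codebase:

```python
import importlib


def lazy_import(module_name: str, install_hint: str):
    """Import a module on first use, with an actionable error if it is absent.

    Hypothetical helper: a module like vllm_causallms could call
    lazy_import("ray", "pip install lm_eval[vllm]") inside the
    data-parallel branch instead of importing ray at module level,
    so single-GPU users never need the dependency installed.
    """
    try:
        return importlib.import_module(module_name)
    except ImportError as exc:
        raise ImportError(
            f"{module_name!r} is required for this code path; "
            f"install it with `{install_hint}`"
        ) from exc
```

The trade-off is that the missing-dependency failure moves from install time to the first data-parallel run, which is why the PR keeps the eager import and fixes the metadata instead.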