
fix(vllm): add ray to vllm extras (#3688)#3694

Open
kvr06-ai wants to merge 1 commit into EleutherAI:main from kvr06-ai:fix/3688-ray-vllm-extras

Conversation

@kvr06-ai

Summary

lm_eval/models/vllm_causallms.py imports ray at module level (line 14), but ray is missing from the [vllm] extras in pyproject.toml. vllm itself does not pull in ray transitively (it is absent from vllm/requirements/cuda.txt and from vllm's PyPI requires_dist), so pip install lm_eval[vllm] currently produces an environment that fails with ImportError: No module named 'ray' the moment the backend loads.

Fixes #3688

Changes

  • pyproject.toml: add "ray" to the vllm extras entry
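For context, the resulting extras entry would look roughly like this (a sketch of the PEP 621 optional-dependencies table; the sibling entries and any version pins shown here are illustrative, not copied from the repo's actual pyproject.toml):

```toml
[project.optional-dependencies]
# The vllm extra now also pulls in ray, which the backend imports at module level.
vllm = [
    "vllm",
    "ray",
]
```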

Testing

  • pip install -e '.[vllm]' now pulls ray, and python -c "import lm_eval.models.vllm_causallms" succeeds in a clean environment

Notes

The three ray call sites (@ray.remote, ray.get, ray.shutdown) sit inside the data_parallel_size > 1 and not self.V1 branch, so ray is technically only needed for multi-GPU data-parallel runs. A follow-up could lazy-import ray if the added install footprint becomes a concern, but that is out of scope for this fix. This PR keeps the existing import contract and only corrects the packaging metadata to match it.

vllm_causallms imports ray at module level but ray was missing from the vllm extras, so pip install lm_eval[vllm] fails on ImportError.
@kvr06-ai kvr06-ai requested a review from 0xSMT as a code owner April 10, 2026 08:34
@kvr06-ai
Author

@baberabb Would you mind taking a look when you get a chance? One-line metadata fix for #3688 (ray missing from [vllm] extras).

@CLAassistant

CLAassistant commented Apr 10, 2026

CLA assistant check
All committers have signed the CLA.



Development

Successfully merging this pull request may close these issues.

ray missing as dependency of vllm backend
