cancel request and block new inputs when sleeping #4541
grimoire wants to merge 1 commit into InternLM:main from
PR: Guard PyTorch Engine Sleep Against In-flight and New Requests
Summary
This PR fixes a PyTorch engine sleep race where `sleep()` can release model/KV-cache resources while requests are still active or while new `EngineInstance` inputs are accepted. After sleep, those requests can resume or enter inference against invalid resources and break generation. The fix is scoped to `lmdeploy/pytorch/engine`.

Problem
Before this change, PyTorch engine sleep only delegated to executor/model-agent sleep. Direct PyTorch engine instances could still enqueue new inference work around the sleep transition, and existing scheduler sessions could remain alive even though sleep may release KV cache.
This creates unsafe cases: in-flight requests can resume against KV cache that sleep has already released, and newly enqueued requests can start inference on invalid resources.
Changes
- Add a request-admission gate in the PyTorch request manager.
  - Rejects `ADD_SESSION` and `ADD_MESSAGE` during explicit PyTorch engine sleep.
  - Keeps `STOP_SESSION` and `END_SESSION` enabled.
  - Each rejected request receives `ResponseType.CANCEL` and wakes the sender.
- Make `Engine.sleep()` perform PyTorch-engine cleanup before resource release.
- Make `Engine.wakeup()` re-enable inference only after all sleeping tags are restored.
  - Re-admits `ADD_SESSION` and `ADD_MESSAGE` and resumes scheduling.
- Add sleep-drain coordination in `EngineLoop`.
  - `Engine.sleep()` requests a drain and waits for the main loop to acknowledge a safe boundary.
- Add logs and comments for the sleep lifecycle.
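As a rough illustration (not the actual lmdeploy code; `AdmissionGate` and its method names are invented here), the admission-gate behavior described above can be sketched like this: while sleeping, session/message creation is cancelled, while stop/end requests still pass through so cleanup can proceed.

```python
from enum import Enum, auto


class RequestType(Enum):
    ADD_SESSION = auto()
    ADD_MESSAGE = auto()
    STOP_SESSION = auto()
    END_SESSION = auto()


class ResponseType(Enum):
    SUCCESS = auto()
    CANCEL = auto()


class AdmissionGate:
    """Toy request-admission gate: blocks new inference inputs while sleeping."""

    # Request types that must not enter the engine during sleep.
    _BLOCKED_WHILE_SLEEPING = {RequestType.ADD_SESSION, RequestType.ADD_MESSAGE}

    def __init__(self):
        self._sleeping = False

    def sleep(self):
        self._sleeping = True

    def wakeup(self):
        self._sleeping = False

    def admit(self, req_type: RequestType) -> ResponseType:
        # STOP_SESSION / END_SESSION stay enabled so sessions can be torn down.
        if self._sleeping and req_type in self._BLOCKED_WHILE_SLEEPING:
            # In the real fix, the sender is also woken with the CANCEL response.
            return ResponseType.CANCEL
        return ResponseType.SUCCESS
```

The key design point mirrored here is the asymmetry: only request types that would allocate new work are gated, so teardown paths remain usable during sleep.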
Tests
Focused unit tests were added/updated for:

- `ADD_SESSION` and `ADD_MESSAGE` returning `ResponseType.CANCEL` immediately.
- `Engine.sleep()` blocking input before executor sleep.
- `EngineInstance` requests returning `CANCEL` while sleeping.

A real Qwen3-8B corner-case smoke test was also run and passed. It covers:

- An `EngineInstance` request while sleeping.

Notes / Scope
- No changes to `lmdeploy/serve`, `AsyncEngine`, Turbomind, public response enums, or HTTP middleware behavior.
- `empty_init` is not treated as PyTorch engine sleep. The sleep guard is driven only by explicit `Engine.sleep()`/`Engine.wakeup()` calls.
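For readers unfamiliar with the drain handshake mentioned in the Changes section, here is a minimal, self-contained sketch of the pattern (class and method names are hypothetical, not lmdeploy's): the sleep path raises a drain-requested event and blocks until the main loop acknowledges it at a step boundary, so resources are only released when no forward pass is in flight.

```python
import asyncio


class ToyEngineLoop:
    """Toy main loop that acknowledges a drain request at a safe step boundary."""

    def __init__(self):
        self._drain_requested = asyncio.Event()
        self._drain_acked = asyncio.Event()
        self._running = True

    async def run(self):
        while self._running:
            # One scheduling/forward step would happen here.
            await asyncio.sleep(0)  # stand-in for real work
            if self._drain_requested.is_set():
                # Safe boundary reached: no step is in flight right now.
                self._drain_requested.clear()
                self._drain_acked.set()

    async def request_drain(self):
        """Called from the sleep path: returns only once the loop acknowledges."""
        self._drain_acked.clear()
        self._drain_requested.set()
        await self._drain_acked.wait()

    def stop(self):
        self._running = False


async def sleep_with_drain():
    loop = ToyEngineLoop()
    task = asyncio.create_task(loop.run())
    await loop.request_drain()  # blocks until a safe boundary is acknowledged
    # ... model/KV-cache resources could now be released safely ...
    loop.stop()
    await task
    return True
```

Using two events rather than one makes the handshake explicit in both directions: the requester cannot proceed early, and the loop cannot miss a pending request.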