[server] Minor fixes and add timeout for filedata fetch #9703
Conversation
@codex review
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: b37220c384
```go
if len(errFileIds) > 10 {
	return nil, stacktrace.Propagate(
```
Don't abort GetFilesData when >10 item fetches fail
This new guard turns a partially successful bulk fetch into a hard 500 once more than 10 objects fail, even though the API already models per-file failures via ErrFileIDs. In practice, transient S3 issues or timeout cancellations can easily push failures above 10 for large requests, causing clients to lose all successful results and retry the whole batch repeatedly instead of making progress on good items.
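The alternative the review is pointing at can be sketched as follows. This is a minimal, hypothetical reconstruction (the real Ente server code differs; `FilesDataResponse`, `fetchOne`, and `getFilesData` are illustrative names): failed IDs are accumulated in `ErrFileIDs` and returned alongside the successful payloads, so clients can retry only the failed subset instead of the whole batch.

```go
package main

import (
	"errors"
	"fmt"
)

// FilesDataResponse is a hypothetical bulk-fetch response that models
// per-file failures instead of failing the whole request.
type FilesDataResponse struct {
	Data       map[int64][]byte // successfully fetched payloads, keyed by file ID
	ErrFileIDs []int64          // IDs whose fetch failed (e.g. transient S3 errors, timeouts)
}

// fetchOne stands in for the real per-object fetch; here every third ID
// fails, simulating transient storage errors.
func fetchOne(id int64) ([]byte, error) {
	if id%3 == 0 {
		return nil, errors.New("transient fetch error")
	}
	return []byte(fmt.Sprintf("payload-%d", id)), nil
}

// getFilesData records failures in ErrFileIDs and keeps going, rather
// than aborting once failures cross a threshold.
func getFilesData(ids []int64) (*FilesDataResponse, error) {
	resp := &FilesDataResponse{Data: make(map[int64][]byte)}
	for _, id := range ids {
		payload, err := fetchOne(id)
		if err != nil {
			resp.ErrFileIDs = append(resp.ErrFileIDs, id)
			continue
		}
		resp.Data[id] = payload
	}
	return resp, nil
}

func main() {
	resp, _ := getFilesData([]int64{1, 2, 3, 4, 5, 6})
	// IDs 3 and 6 fail; the other four succeed.
	fmt.Println(len(resp.Data), len(resp.ErrFileIDs))
}
```

With this shape, a burst of more than 10 transient failures degrades the response rather than discarding the successful results with a 500.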