handle I;16 mode in pil_to_tensor #9457
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/vision/9457
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 Active SEV: there is 1 currently active SEV. If your PR is affected, please view it below.
✅ You can merge normally! (2 Unrelated Failures)
As of commit 2813bc1 with merge base b9ee001.
BROKEN TRUNK: the following jobs failed but were present on the merge base.
👉 Rebase onto the `viable/strict` branch to avoid these failures.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
Hi @knQzx! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA. Once the CLA is signed, our tooling will perform checks and validations; afterwards, the pull request will be tagged accordingly.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!
cm = pytest.warns(UserWarning, match="deprecated") if f is F.to_tensor else contextlib.nullcontext()
with cm:
    out = f(I16_pil_img)
assert out.dtype == torch.int32
hey, thanks for the feedback! updated to use uint16 for both to_tensor and pil_to_tensor. also fixed the failing tests - they were using signed ShortTensor data which doesn't make sense for I;16 (unsigned), so now they generate values in valid range and expect uint16 output
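As an aside, the reason signed ShortTensor data "doesn't make sense" for I;16 can be shown with a quick NumPy check (illustrative only, not the PR's actual test code):

```python
import numpy as np

# Illustrative check: I;16 pixels span 0..65535, but signed int16 only
# covers -32768..32767, so any pixel value above 32767 silently wraps
# when stored as signed 16-bit data.
value = 40000  # a legal I;16 pixel value
wrapped = np.array([value]).astype(np.int16)[0]   # wraps to a negative number
kept = np.array([value]).astype(np.uint16)[0]     # preserved as 40000
print(wrapped, kept)
```

This is why the updated tests generate values in the full unsigned range and expect uint16 output.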
if pic.mode == "I;16":
    img = img.astype(np.uint16)
I think this part is redundant because np.array(pic, copy=True) on an I;16 PIL image already returns a uint16 array. We can just keep the original code.
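The reviewer's claim is easy to verify with a minimal check (a blank I;16 image stands in for real data here):

```python
import numpy as np
from PIL import Image

# np.array on an I;16 PIL image already yields a uint16 array, so an
# explicit astype(np.uint16) afterwards is redundant.
pic = Image.new("I;16", (4, 3))
arr = np.array(pic, copy=True)
print(arr.dtype, arr.shape)
```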
Force-pushed from d3f956b to 3fbe21e.
you're right, removed the redundant astype - np.array already handles it. also rebased on main and added a docs note about uint16/32/64 not being officially supported
zy1git left a comment:
I left some comments on the note section.
inputs. If you're working with uint16 images (e.g. from 16-bit medical or
scientific imaging), consider converting to ``float32`` first using
:class:`~torchvision.transforms.v2.ToDtype`.
Is there any reason we mention the uint16-to-float32 conversion specifically?
no specific reason, dropped that bit in the latest commit. the note just lists the unsupported dtypes now.
.. note::

    ``torch.uint16``, ``torch.uint32``, and ``torch.uint64`` dtypes are not
    officially supported by the torchvision transforms. While some operations
    may work, most transforms expect ``torch.uint8`` or ``torch.float32``
    inputs. If you're working with uint16 images (e.g. from 16-bit medical or
    scientific imaging), consider converting to ``float32`` first using
    :class:`~torchvision.transforms.v2.ToDtype`.
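To make the note's recommendation concrete, here is a plain-torch sketch of the dtype-plus-range conversion; the int32 tensor stands in for uint16-range pixel values, and the division by 65535 is assumed to match what ``ToDtype(torch.float32, scale=True)`` does for 16-bit input:

```python
import torch

# uint16-range pixel values held in a well-supported container dtype (int32).
img = torch.tensor([[0, 12345, 65535]], dtype=torch.int32)

# Convert to float32 and rescale 0..65535 down to 0.0..1.0, the range
# most transforms expect for floating-point inputs.
img_f32 = img.to(torch.float32) / 65535.0
print(img_f32.dtype, float(img_f32.min()), float(img_f32.max()))
```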
We can put this note after "Use :class:`~torchvision.transforms.v2.ToDtype` to convert both the dtype and range of the inputs."
moved it there in the latest commit.
…16-mode # Conflicts: # test/test_transforms_v2.py
@zy1git moved the note after the ToDtype line and dropped the float32 specifics, also merged main
the 5 macos failures in TestErase::test_transform_image_correctness[*-cpu-dtype1-value-random] are pre-existing and reproduce on main without any of my changes — see https://github.com/pytorch/vision/actions/runs/24390062916 (same 5 tests fail on main). not related to this PR. my added test_I16_to_tensor[to_tensor] and [pil_to_tensor] both pass on all platforms.
fixes #8188
PIL images with I;16 mode use uint16 under the hood, which PyTorch doesn't support, so this converts them to int32 instead. Also fixed the same issue in to_tensor, where it was incorrectly using signed int16.
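The int32 conversion described in this summary can be sketched roughly as follows; this is an illustrative reimplementation, not torchvision's actual code, and `pil_i16_to_tensor` is a hypothetical helper name:

```python
import numpy as np
import torch
from PIL import Image

def pil_i16_to_tensor(pic: Image.Image) -> torch.Tensor:
    # Hypothetical helper (not torchvision's code): widen unsigned 16-bit
    # I;16 pixels to int32, which holds the full 0..65535 range losslessly
    # and is a well-supported torch dtype.
    arr = np.array(pic, copy=True)  # yields dtype uint16 for mode "I;16"
    return torch.from_numpy(arr.astype(np.int32)).unsqueeze(0)  # (1, H, W)

pic = Image.new("I;16", (2, 2))
pic.putpixel((0, 0), 65535)  # max I;16 value; would overflow signed int16
t = pil_i16_to_tensor(pic)
print(t.dtype, tuple(t.shape), int(t.max()))
```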