Bump the torch group in /requirements with 2 updates
Type: Pull Request
State: Open
Association: Contributor
Comments: 1
Labels: dependencies, python
Bumps the torch group in /requirements with 2 updates: torch and torchvision.
Updates torch from 2.7.0 to 2.7.1
Release notes
Sourced from torch's releases.
PyTorch 2.7.1 Release, bug fix release
This release is meant to fix the following issues (regressions / silent correctness):
Torch.compile
- Fix excessive cudagraph re-recording for HF LLM models (#152287)
- Fix torch.compile on some HuggingFace models (#151154)
- Fix crash due to an exception raised inside torch.autocast (#152503)
- Improve error logging in torch.compile (#149831)
- Mark mutable custom operators as cacheable in torch.compile (#151194)
- Implement a workaround for a graph break with older versions of einops (#153925)
- Fix an issue with tensor.view(dtype).copy_(...) (#151598) (see the sketch after this list)
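For context, a minimal sketch of the tensor.view(dtype).copy_(...) pattern targeted by #151598, assuming only stock PyTorch; the shapes and dtypes are illustrative, not the original reproducer:

```python
import torch

# Reinterpret an int32 buffer as raw uint8 bytes (same storage, new dtype)
# and copy byte data into it. This is the view(dtype).copy_() pattern the
# fix above refers to; the exact failing configurations are in #151598.
src = torch.arange(8, dtype=torch.uint8)   # 8 bytes of source data
dst = torch.zeros(2, dtype=torch.int32)    # 8 bytes of destination storage

dst.view(torch.uint8).copy_(src)           # byte-level copy through the view
print(dst)
```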
Flex Attention
- Fix an assertion error due to inductor permuting inputs to flex attention (#151959) (see the sketch after this list)
- Fix a performance regression on the nanogpt speedrun (#152641)
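As context for the flex attention items above, a minimal eager-mode flex_attention call, assuming torch.nn.attention.flex_attention is available (PyTorch 2.5+); this is a generic illustration, not the regressing workload:

```python
import torch
from torch.nn.attention.flex_attention import flex_attention

# (batch, heads, sequence, head_dim) - small illustrative shapes.
q = torch.randn(1, 2, 16, 8)
k = torch.randn(1, 2, 16, 8)
v = torch.randn(1, 2, 16, 8)

def causal(score, b, h, q_idx, kv_idx):
    # score_mod hook: keep the score where the query may attend, else -inf.
    return torch.where(q_idx >= kv_idx, score, -float("inf"))

out = flex_attention(q, k, v, score_mod=causal)
print(out.shape)  # torch.Size([1, 2, 16, 8])
```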
Distributed
- Fix extra CUDA context created by barrier (#149144)
- Fix an issue related to Distributed Fused Adam in ROCm/APEX when using the nccl_ub feature (#150010)
- Add a workaround for a random hang in non-blocking API mode in NCCL 2.26 (#154055)
MacOS
- Fix macOS compilation error with Clang 17 (#151316)
- Fix binary kernels producing incorrect results when one of the tensor arguments is a wrapped scalar on MPS devices (#152997)
Other
- Fix the increase in PyTorch wheel size due to the introduction of 128-bit vectorization (#148320) (#152396)
- Fix the fmsub function definition (#152075)
- Fix a floating point exception in torch.mkldnn_max_pool2d (#151848)
- Fix abnormal inference output with the XPU:1 device (#153067)
- Fix an illegal instruction caused by grid_sample on Windows (#152613) (see the sketch after this list)
- Fix ONNX decomposition not preserving custom CompositeImplicitAutograd ops (#151826)
- Fix an error with dynamic linking of the libgomp library (#150084)
- Fix a segfault in the profiler with Python 3.13 (#153848)
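For reference, a minimal grid_sample call of the kind affected by the Windows fix above; this is a generic usage sketch with an identity sampling grid, not the reported failing case:

```python
import torch
import torch.nn.functional as F

inp = torch.randn(1, 3, 8, 8)  # (N, C, H, W)

# Identity sampling grid in normalized [-1, 1] coordinates, shape (N, H, W, 2),
# where the last dimension is (x, y).
ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, 8), torch.linspace(-1, 1, 8), indexing="ij"
)
grid = torch.stack((xs, ys), dim=-1).unsqueeze(0)

out = F.grid_sample(inp, grid, align_corners=True)
print(out.shape)  # torch.Size([1, 3, 8, 8])
```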
Commits
- e2d141d set thread_work_size to 4 for unrolled kernel (#154541)
- 1214198 [c10d] Fix extra CUDA context created by barrier (#152834)
- 790cc2f [c10d] Add more tests to prevent extra context (#154179)
- 62ea99a [CI] Remove the xpu env source for linux binary validate (#154409)
- 941732c [ROCm] Added unit test to test the cuda_pluggable allocator (#154135)
- 769d5da [binary builds] Linux aarch64 CUDA builds. Make sure tag is set correctly (#1...)
- 306ba12 Fix uint view copy (#151598) (#154121)
- 1ae9953 [ROCm] Update CUDAPluggableAllocator.h (#1984) (#153974)
- 4a815ed ci: Set minimum cmake version for halide build (#154122)
- 4c7314e [Dynamo] Fix einops regression (#154053)
- Additional commits viewable in compare view
Updates torchvision from 0.22.0 to 0.22.1
Release notes
Sourced from torchvision's releases.
TorchVision 0.22.1 Release
Key info
⚠️ We are updating the areas that TorchVision will be prioritizing in the future. Please take a look at pytorch/vision#9036 for more details.
⚠️ We are deprecating the video decoding and encoding capabilities of TorchVision; they will be removed in version 0.25 (targeted for the end of 2025). We encourage users to migrate existing video decoding code to the TorchCodec project, where we are consolidating all media decoding/encoding functionality for PyTorch.
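As a rough guide to the migration mentioned above, a minimal sketch that reads frames with torchvision.io and, alternatively, with TorchCodec's VideoDecoder; the TorchCodec call assumes its documented decoders API and a local file named video.mp4 (both are assumptions, not part of this PR):

```python
import torch
from torchvision.io import read_video

# Deprecated path: torchvision's video reader (slated for removal in 0.25).
frames, _audio, info = read_video("video.mp4", output_format="TCHW")
print(frames.shape, info)

# Suggested replacement (assumed TorchCodec API; requires `pip install torchcodec`):
from torchcodec.decoders import VideoDecoder

decoder = VideoDecoder("video.mp4")
first_frame = decoder[0]  # decoded frame as a (C, H, W) uint8 tensor
print(first_frame.shape)
```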
This is a patch release, which is compatible with PyTorch 2.7.1. There are no new features added.
Commits
- 59a3e1f [release-only] Bump version to 0.22.1 (#9061)
- See full diff in compare view
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- @dependabot rebase will rebase this PR
- @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
- @dependabot merge will merge this PR after your CI passes on it
- @dependabot squash and merge will squash and merge this PR after your CI passes on it
- @dependabot cancel merge will cancel a previously requested merge and block automerging
- @dependabot reopen will reopen this PR if it is closed
- @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
- @dependabot ignore <dependency name> major version will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
- @dependabot ignore <dependency name> minor version will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
- @dependabot ignore <dependency name> will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
- @dependabot unignore <dependency name> will remove all of the ignore conditions of the specified dependency
- @dependabot unignore <dependency name> <ignore condition> will remove the specified ignore condition of the specified dependency
Pull Request Statistics
| Additions: | +0 |
| Deletions: | -0 |
Package Dependencies
| Package: | torchvision |
| Ecosystem: | pip |
| Update: | 0.22.0 → 0.22.1 |
| Update type: | Patch |
| Directory: | /requirements |
Technical Details
| ID: | 882869 |
| UUID: | 3119595863 |
| Node ID: | PR_kwDOClTaK86ZHhpd |
| Host: | GitHub |
| Repository: | qubvel-org/segmentation_models.pytorch |