Package Details: python-pytorch-opt-cuda12.9 2.10.0-2
| Git Clone URL: | https://aur.archlinux.org/python-pytorch-cuda12.9.git (read-only) |
|---|---|
| Package Base: | python-pytorch-cuda12.9 |
| Description: | Tensors and Dynamic neural networks in Python with strong GPU acceleration (Maxwell/Pascal/Volta support) (with CUDA 12.9 and AVX2 CPU optimizations) |
| Upstream URL: | https://pytorch.org |
| Licenses: | BSD-3-Clause-Modification |
| Conflicts: | python-pytorch |
| Provides: | python-pytorch, python-pytorch-cuda, python-pytorch-cuda12.9 |
| Submitter: | piernov |
| Maintainer: | piernov |
| Last Packager: | piernov |
| Votes: | 0 |
| Popularity: | 0.000000 |
| First Submitted: | 2025-10-19 10:58 (UTC) |
| Last Updated: | 2026-02-05 12:58 (UTC) |
Dependencies (48)
- abseil-cpp (abseil-cpp-git [AUR])
- cuda-12.9 [AUR]
- cudnn9.10-cuda12.9 [AUR]
- eigen (eigen-git [AUR], eigen3)
- gcc14-libs [AUR]
- gflags (gflags-git [AUR])
- glibc (glibc-git [AUR], glibc-eac [AUR], glibc-git-native-pgo [AUR])
- google-glog (ng-log [AUR], glog-git [AUR])
- intel-oneapi-mkl (intel-oneapi-hpckit [AUR], intel-oneapi-basekit-2025 [AUR], intel-oneapi-base-toolkit, intel-oneapi-basekit)
- libuv (libuv-git [AUR])
- magma-cuda
- nccl-cuda12.9 [AUR]
- numactl (numactl-git [AUR])
- onednn (onednn-git [AUR])
- openmp
- openmpi (openmpi-git [AUR])
- protobuf (protobuf-git [AUR])
- pybind11 (pybind11-git [AUR])
- python
- python-filelock
- …plus 28 more dependencies not shown.
Required by (341)
- baballonia (requires python-pytorch)
- baballonia (requires python-pytorch-cuda) (optional)
- chainner-bin (requires python-pytorch) (optional)
- clockwork-orange-git (requires python-pytorch-cuda) (optional)
- clockwork-orange-git (requires python-pytorch) (optional)
- coqui-tts (requires python-pytorch)
- fastnn (requires python-pytorch)
- final2x-bin (requires python-pytorch)
- freedv-gui (requires python-pytorch)
- handtex (requires python-pytorch)
- ik-llama.cpp (requires python-pytorch) (optional)
- ik-llama.cpp-cuda (requires python-pytorch) (optional)
- ik-llama.cpp-cuda-git (requires python-pytorch) (optional)
- ik-llama.cpp-vulkan (requires python-pytorch) (optional)
- kdenlive-git (requires python-pytorch-cuda) (optional)
- llama.cpp (requires python-pytorch) (optional)
- llama.cpp-aio (requires python-pytorch) (optional)
- llama.cpp-clblas-git (requires python-pytorch) (optional)
- llama.cpp-cublas-git (requires python-pytorch) (optional)
- llama.cpp-cuda (requires python-pytorch) (optional)
- …plus 321 more packages not shown.
Sources (46)
- 87773.patch
- add_gpu_targets_rocm.patch
- aotriton_disable_install.patch
- fix_cmake_prefix_path.patch
- fix_include_system.patch
- glog-0.7.patch
- pyproject.patch
- python-pytorch-cuda12.9-aiter
- python-pytorch-cuda12.9-benchmark
- python-pytorch-cuda12.9-composable_kernel
- python-pytorch-cuda12.9-cpp-httplib
- python-pytorch-cuda12.9-cpuinfo
- python-pytorch-cuda12.9-cudnn-frontend
- python-pytorch-cuda12.9-cutlass
- python-pytorch-cuda12.9-fbgemm
- python-pytorch-cuda12.9-fbjni
- python-pytorch-cuda12.9-flash-attention
- python-pytorch-cuda12.9-flatbuffers
- python-pytorch-cuda12.9-fmt
- python-pytorch-cuda12.9-FP16
- python-pytorch-cuda12.9-FXdiv
- python-pytorch-cuda12.9-gemmlowp
- python-pytorch-cuda12.9-gloo
- python-pytorch-cuda12.9-googletest
- python-pytorch-cuda12.9-ideep
- python-pytorch-cuda12.9-ittapi
- python-pytorch-cuda12.9-json
- python-pytorch-cuda12.9-kineto
- python-pytorch-cuda12.9-kleidiai
- python-pytorch-cuda12.9-mimalloc
- python-pytorch-cuda12.9-NNPACK
- python-pytorch-cuda12.9-NVTX
- python-pytorch-cuda12.9-onnx
- python-pytorch-cuda12.9-opentelemetry-cpp
- python-pytorch-cuda12.9-PeachPy
- python-pytorch-cuda12.9-pocketfft
- python-pytorch-cuda12.9-protobuf
- python-pytorch-cuda12.9-psimd
- python-pytorch-cuda12.9-pthreadpool
- python-pytorch-cuda12.9-pybind11
- python-pytorch-cuda12.9-sleef
- python-pytorch-cuda12.9-tensorpipe
- python-pytorch-cuda12.9-VulkanMemoryAllocator
- python-pytorch-cuda12.9-XNNPACK
- pytorch
- use-system-libuv.patch
Latest Comments
skualos commented on 2026-01-18 07:26 (UTC) (edited on 2026-01-18 07:26 (UTC) by skualos)
Thanks @piernov. A virtual env is a much better solution; I didn't know the packages were shipped with the CUDA dependencies included. I already have it up and running :) In the meantime, pytorch has been compiling with a single process for about 24h now, but it hasn't reached the failure point yet (just over 3700).
I think my ccache is working fine. But even when I just run `makepkg --noprepare`, all the third-party git repos get updated anyway, which I think in turn triggers re-compiling of most files?
piernov commented on 2026-01-17 11:05 (UTC)
@skualos You can try. If your `ccache` is working properly and your environment doesn't change, the next build should go through the already-built objects pretty fast. But the stuff starting around the mid-3000s is extremely demanding; I have not investigated why. Alternatively, I'd suggest using a virtual environment with e.g. `uv` and installing the PyTorch packages for CUDA 12.6. These still support Pascal for now.
skualos commented on 2026-01-17 10:50 (UTC)
@piernov Thanks for the info. I only have 32 GiB of memory... Do you think compiling could work with a single process and 32 GiB? That could take a few days to complete, I imagine.
The only alternative I see for now is using a pre-built Docker image, but I'd prefer to avoid Docker if possible. Any other suggestions? Maybe it's time for a laptop with a GPU newer than Pascal :'( lol
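piernov's virtual-environment suggestion from the comment above can be sketched as follows. The `cu126` index URL and the bundled-CUDA behavior are assumptions based on PyTorch's usual wheel layout, not something confirmed in this thread:

```shell
# Sketch: install PyTorch CUDA 12.6 wheels (which still support Pascal)
# into an isolated environment instead of building from source.
uv venv ~/torch-cu126
source ~/torch-cu126/bin/activate
uv pip install torch --index-url https://download.pytorch.org/whl/cu126
python -c "import torch; print(torch.cuda.is_available())"
```

The official wheels ship their own CUDA runtime libraries, so no system-wide cuda-12.x package should be needed — consistent with skualos's later remark that the packages come with the CUDA dependencies included.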
piernov commented on 2026-01-17 10:32 (UTC)
@skualos
OOM killer triggered, most likely.
Building this requires insane amounts of RAM, especially with several jobs in parallel. We're talking likely more than one or two hundred GiB. Good luck.
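Given RAM figures like these, one hedged way to pick a job count before running makepkg is to derive it from total memory. The roughly-4-GiB-per-compile-job figure below is an assumption to tune, not a measured value from this thread:

```shell
# Derive MAX_JOBS from total RAM, assuming each PyTorch compile job
# can peak at roughly 4 GiB (tune this divisor for your machine).
mem_gib=$(awk '/MemTotal/ {print int($2 / 1048576)}' /proc/meminfo)
jobs=$(( mem_gib / 4 ))
[ "$jobs" -lt 1 ] && jobs=1   # always allow at least one job
export MAX_JOBS="$jobs"
echo "Building with MAX_JOBS=$MAX_JOBS"
```

With 32 GiB this yields MAX_JOBS=8, which may still be too high for the heaviest translation units mentioned above; dropping toward 1 trades days of build time for a lower peak.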
skualos commented on 2026-01-17 04:36 (UTC)
I've been trying to compile python-pytorch-cuda12.9 for a few days now and would really appreciate some help. I was able to makepkg and install cuda-12.9, cudnn9.10-cuda12.9, and nccl-cuda12.9. But when compiling python-pytorch-cuda12.9, the first couple of times it would crash so badly that it would freeze my whole graphical session. I was able to solve that by setting MAX_JOBS=10 instead of 20 in the PKGBUILD file.
But now, after several hours of building, I get the error below. I'm not really sure how to debug or fix that. Full logs are here
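For the ccache discussion in this thread, a typical makepkg setup looks like the sketch below; the 50 GiB cache size is an assumption sized for PyTorch's large object output, not a value anyone here reported:

```shell
# Enable ccache for makepkg by adding "ccache" to BUILDENV
# in /etc/makepkg.conf, e.g.:
#   BUILDENV=(!distcc color ccache check !sign)
ccache --max-size=50G   # PyTorch builds produce many large objects
ccache --zero-stats     # reset counters before the next build
# After the next makepkg run, check the hit rate:
ccache --show-stats
```

Note that `makepkg --noprepare` only skips the prepare() function; VCS sources are still updated during the download step, which matches the repeated git-repo updates skualos observed.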