
uploading pytorch-cuda to experimental soon



Hi Team,

I'm preparing the upload of pytorch-cuda to experimental. The last
two steps are waiting for the build to finish and testing the result
with my NVIDIA GPU. I don't know how long this will take because I'm
struggling to find a suitable amd64 machine to build it on.

At the current stage, we only need two extra locally built dependencies.
If you cannot wait, they can be built as follows (a consolidated command
sketch follows the list):

1. src:tensorpipe-cuda (uploaded to NEW)
    To build this locally:
    (1) git clone the repo of src:tensorpipe
    (2) gbp export-orig --pristine-tar
    (3) cp tensorpipe.orig.tar.gz tensorpipe-cuda.orig.tar.gz
    (4) bash debian/cudabuild.sh   <-- if you skip this step, you get the
         standard build for the CPU version.
    (5) sbuild

2. src:gloo-cuda (uploaded to NEW)
    To build this locally:
    Completely the same steps as above, substituting gloo for tensorpipe.

3. src:pytorch-cuda (pending)
    To build this locally:
    Completely the same steps as above, substituting pytorch for tensorpipe.
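
For convenience, here is the same procedure collected into a single
shell sketch, using src:tensorpipe-cuda as the example. The clone URL
and the upstream version are placeholders, not the real values, and
depending on your gbp configuration the orig tarball may land in the
parent directory:

    #!/bin/bash
    # Consolidated sketch of steps (1)-(5) above for src:tensorpipe-cuda.
    # REPO_URL and UPSTREAM_VERSION are placeholders; substitute the real
    # src:tensorpipe packaging repository and the current upstream version.
    set -e
    REPO_URL="https://salsa.debian.org/<team>/tensorpipe.git"   # placeholder
    UPSTREAM_VERSION="x.y.z"                                    # placeholder

    git clone "$REPO_URL" tensorpipe                 # (1) clone the src:tensorpipe repo
    cd tensorpipe
    gbp export-orig --pristine-tar                   # (2) regenerate the pristine orig tarball
    cp ../tensorpipe_${UPSTREAM_VERSION}.orig.tar.gz \
       ../tensorpipe-cuda_${UPSTREAM_VERSION}.orig.tar.gz   # (3) reuse it under the -cuda name
    bash debian/cudabuild.sh                         # (4) switch to the CUDA variant of the
                                                     #     packaging; skip for the CPU build
    sbuild                                           # (5) build in a clean chroot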

There is no longer a cuda branch in any of these repositories. The
CUDA versions of all these packages are regarded as binary
rebuilds with different control files and flags.
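
I have not reproduced debian/cudabuild.sh here; purely to illustrate
the "different control files and flags" idea (the file names below are
assumptions, not the script's actual contents), such a variant switch
boils down to something like:

    #!/bin/sh
    # Hypothetical illustration only -- not the actual debian/cudabuild.sh.
    # Assumed files: debian/control.cuda and debian/rules.cuda carrying the
    # CUDA-specific dependencies and build flags.
    set -e
    cp debian/control.cuda debian/control    # swap in the CUDA control file
    cp debian/rules.cuda   debian/rules      # swap in the CUDA build flags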

The changelogs of the -cuda packages are truncated to an
automatically generated template. Any contributions should be
merged into the packaging of the CPU version.

Meanwhile, I have added placeholders for the ROCm version,
which will be treated similarly to the CUDA versions of all the
packages mentioned above.

This implementation differs from my previous plan of building the
different versions from pytorch-src. The current implementation
is, in any case, the simplest solution that won't drive me crazy.

