
Re: Bypassing the 2/3/4GB virtual memory space on 32-bit ports



Hi Aurelien,

On 8/8/19 10:38 PM, Aurelien Jarno wrote:

> 32-bit processes are able to address at most 4GB of memory (2^32),
> and often less (2 or 3GB) due to architectural or kernel limitations.
>
> [...]

Thanks for bringing this up.
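
For anyone reading along who hasn't hit the limit directly, the effect
is easy to reproduce: a 32-bit process simply runs out of virtual
address space somewhere between 2 and 4GB, no matter how much RAM or
swap the machine has. A rough sketch, for illustration only and not
part of any proposal below:

  /* Illustration only: map anonymous memory in 256MB chunks until the
   * kernel refuses, then report how much virtual address space the
   * process actually got.  Built as a 32-bit binary (e.g. with -m32)
   * the total tops out somewhere between ~2GB and ~4GB depending on
   * the kernel/user split. */
  #define _DEFAULT_SOURCE
  #include <stdio.h>
  #include <sys/mman.h>

  int main(void)
  {
      const size_t chunk = 256UL * 1024 * 1024;     /* 256MB per call */
      unsigned long long total = 0;

      for (;;) {
          void *p = mmap(NULL, chunk, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (p == MAP_FAILED)
              break;
          total += chunk;
      }
      printf("mapped %llu MB before mmap() failed\n",
             total / (1024 * 1024));
      return 0;
  }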

> 1) Build a 64-bit compiler targeting the corresponding 32-bit
>    architecture and install it in the 32-bit chroot with the other
>    64-bit dependencies. This is still a kind of cross-compiler, but
>    the rest of the build is unchanged and the testsuite can be run.
>    I guess it *might* be something acceptable. release-team, could
>    you please confirm?

As you noted, our current policy doesn't allow that. However, we could
certainly consider reevaluating this part of the policy if there is a
workable solution.

Some random notes (these are just my preliminary thoughts, not a new
release team policy):

- There would need to be a team of '32-bit porters' (probably
  overlapping with the porters for the remaining 32-bit architectures)
  who make the changes needed to get this working and who keep it
  working. Without a team committed to that, we can't really support
  it in a stable release.

- There would need to be a rough consensus that the solution is the way
  to go.

- The solution needs to work on the buildds. We still want all binaries
  to be built on the buildds.

- We are talking about having both native 32-bit and 64-bit packages in
  the same environment. We are NOT talking about emulated builds. The
  resulting (32-bit) binaries still need to run natively in the build
  environment (a trivial check for this is sketched after this list).

- It's not our intention to lower the bar for architectures in testing.
  On the contrary: we intend to raise the bar at some point. As we have
  already stated in the past, we would really prefer that more release
  architectures had some type of automated testing (piuparts,
  autopkgtests, archive rebuilds, etc.). Eventually, this will probably
  become a requirement for release architectures.

- For architectures to be included in a future stable release, they
  still need to be in good enough shape. I won't go into everything
  involved in architecture qualification in this mail, but I do want to
  mention that the buildd capacity for mipsel/mips64el is quite limited.
  During the buster release cycle, they had trouble keeping up. If this
  continues, we might be forced to drop (one of) these architectures in
  the near future.
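
Regarding the native-execution point above: a trivial check along these
lines, compiled with the default compiler of the proposed environment
and executed immediately, would confirm that the toolchain still
produces native 32-bit code. Just a sketch of the kind of check I mean,
nothing that exists today:

  /* Sketch of a sanity check: built with the default compiler of the
   * proposed environment, this should end up as a 32-bit binary that
   * runs natively in the build chroot.  It exits non-zero if the
   * compiler actually targeted a 64-bit ABI. */
  #include <stdio.h>

  int main(void)
  {
      printf("sizeof(void *) = %zu, sizeof(long) = %zu\n",
             sizeof(void *), sizeof(long));
      return sizeof(void *) == 4 ? 0 : 1;
  }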

> In the past it would have been enough to "just" do that for GCC, but
> nowadays it will also be needed for rustc, clang and many more. The
> clang case is interesting as it is already a cross-compiler
> supporting all the architectures, but it defaults to the native
> target. I wonder if we should make the "-target" option mandatory,
> just like we no longer call "gcc" but instead "$(triplet)-gcc".
> Alternatively, instead of creating new packages, we might just want
> to use the corresponding multiarch 64-bit package and use a wrapper
> to change the native target, i.e. passing -m32 to gcc or -target to
> clang.

I think a solution based on multiarch packages would probably be nicer
than the mess of having packages for the 32-bit arch that contain the
64-bit compiler.
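
To make the wrapper idea a bit more concrete, here is a minimal sketch
of what such a shim could look like (in C only for the sake of the
example; a small shell script would do just as well). The compiler name
and the target triplet are placeholders, nothing like this exists in
the archive today:

  /* Minimal sketch of a compiler wrapper: pretend to be the native
   * compiler, but exec the multiarch 64-bit clang and force a 32-bit
   * target.  The "--target" triplet below is only an example; a real
   * wrapper would pick it per architecture. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
      /* room for: clang, --target=..., the original arguments, NULL */
      char **args = malloc((argc + 2) * sizeof(*args));
      if (args == NULL)
          return 1;

      args[0] = "clang";                    /* 64-bit multiarch clang */
      args[1] = "--target=i686-linux-gnu";  /* force the 32-bit target */
      for (int i = 1; i < argc; i++)        /* forward the build's args */
          args[i + 1] = argv[i];
      args[argc + 1] = NULL;

      execvp(args[0], args);
      perror("execvp clang");               /* only reached on failure */
      return 127;
  }

The same approach would work for gcc by having the wrapper insert -m32
(or -mabi=32 on mips) instead of --target.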

Thanks,

Ivo

