
Re: Bypassing the 2/3/4GB virtual memory space on 32-bit ports



>>>>> "Luke" == Luke Kenneth Casson Leighton <lkcl@lkcl.net> writes:

    >> I even agree with you that we cannot address these challenges and
    >> get to a point where we have confidence a large fraction of our
    >> software will cross-build successfully.

    Luke> sigh.

I don't really see the need for a sigh.
I think we can address enough of the challenges that we are not
significantly harmed.

    >> But we don't need to address a large fraction of the source
    >> packages.  There are a relatively small fraction of the source
    >> packages that require more than 2G of RAM to build.

    Luke> ... at the moment.  with there being a lack of awareness of
    Luke> the consequences of the general thinking, "i have a 64 bit
    Luke> system, everyone else must have a 64 bit system, 32-bit must
    Luke> be on its last legs, therefore i don't need to pay attention
    Luke> to it at all", unless there is a wider (world-wide) general
    Luke> awareness campaign, that number is only going to go up, isn't
    Luke> it?

I'd rather say that, over time, we'll get better at cross-building more
things, and 32-bit systems will become less common.
Eventually, yes, we'll get to a point where 32-bit systems are
infrequent enough and the runtime software needs have increased enough
that 32-bit general-purpose systems don't make sense.
They will still be needed for embedded usage.

There are Debian derivatives that already deal better with building
subsets of the archive for embedded uses.
Eventually, Debian itself will need to either give up on 32-bit entirely
or deal with more of that itself.

I think my concern about your approach is that you're trying to change
how the entire world thinks.  You're trying to convince everyone to be
conservative in how much (virtual) memory they use.

Except I think that a lot of people really do only need to care about
64-bit environments with a reasonable amount of memory.  I think that
group will only grow over time.

I think approaches that concentrate the cost of supporting constrained
environments on the places that actually need them are better.

There are cases where it's actually easier to write code assuming you
have lots of virtual memory.  Human time is one of our most precious
resources, and it's reasonable for people to value their own time.  Even
when people are aware of the trade-offs, they may genuinely decide that
code that is faster to write and conceptually simpler is the right
choice for them.  And a flat address space is often conceptually simpler
than what amounts to multiple types or levels of addressing.  In this
sense, an on-disk record store or database, with an index and a library
to access it, is just a complex addressing mechanism.
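
To make that concrete, here is a minimal sketch in C (the record layout
and function names are invented for illustration, not taken from any
real package): the mmap version treats the whole dataset as one array,
which is the simple flat-address-space style, while the seek-and-read
version is the "complex addressing mechanism" that keeps the working set
small at the cost of more code.

    /* Build with large-file support so off_t is 64-bit even on 32-bit ports. */
    #define _FILE_OFFSET_BITS 64
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    struct record { int key; char payload[60]; };   /* hypothetical layout */

    /* Flat address space: map the whole file and index it like an array.
     * Simple to write, but the mapping has to fit in the 2/3/4GB virtual
     * address space of a 32-bit process. */
    struct record *load_flat(const char *path, size_t *count)
    {
        struct stat st;
        int fd = open(path, O_RDONLY);
        if (fd < 0 || fstat(fd, &st) < 0)
            return NULL;
        struct record *r = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        close(fd);
        if (r == MAP_FAILED)
            return NULL;          /* this is what fails once the data outgrows VM */
        *count = st.st_size / sizeof(struct record);
        return r;                 /* r[i] is plain pointer arithmetic from here on */
    }

    /* The indexed alternative: fetch one record at a time from disk.  More
     * code and more bookkeeping, but only sizeof(struct record) of memory
     * per lookup instead of the whole dataset's worth of address space. */
    int load_one(const char *path, size_t i, struct record *out)
    {
        FILE *f = fopen(path, "rb");
        if (!f)
            return -1;
        if (fseeko(f, (off_t)i * sizeof(*out), SEEK_SET) != 0 ||
            fread(out, sizeof(*out), 1, f) != 1) {
            fclose(f);
            return -1;
        }
        fclose(f);
        return 0;
    }

The second version is the one that keeps working on a 32-bit port, but
it is also more code to write, test, and get right, which is exactly the
human-time trade-off above.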

We see this trade-off all over the place as memory-mapped databases
compete with more complex relational databases, which compete with NoSQL
databases, which compete with sharded cloud databases spread across
thousands of nodes.  There are trade-offs involving complexity of code,
time to write code, latency, overall throughput, consistency, etc.

How much effort we put into supporting 32-bit architectures as our
datasets grow (and building is just another dataset) is the same set of
trade-offs in miniature.  And choosing to write code quickly is often
the best answer.  It gets us code, after all.

--Sam

