
Re: respectful slightly dumb question about 64 bit computing.....

On Sun, Oct 29, 2006 at 01:22:49PM +0000, Michael Fothergill wrote:
> I have been reading about the benefits of 64 bit computing on the web.  In 
> the old days I used to run some molecular dynamics calculations on a DEC 
> Alpha with a 64 bit chip in it and the developer there did get a definite 
> boost from it.
> "The emergence of the 64-bit architecture effectively increases the memory 
> ceiling to 2^64 addresses, equivalent to 17,179,869,184 gigabytes or 16 
> exabytes of RAM. 

> A recent Linux kernel (version 2.6.16) can be compiled with support
> for up to 64 gigabytes of memory."
> OK, here's the dumb question:
> Let's suppose that money was no object and we managed in some
> technical feat to construct a computer that could have a 64 bit chip
> in it that would be properly hooked up to 16 exabytes of RAM.
> If I had such a computer in my possession and I offered to donate to
> the Debian community how would it respond?
> Would it say
> 2. We would be delighted to receive the donated computer.  We think
> that we could configure our Debian OS to run on it and yes, there
> would be serious computing problems it could address.
> What sorts of problems would they be?  I suppose it could be one that
> would require, e.g., a huge database.
> The other question I have is: how much performance increase in
> database applications is typically seen using 64 bit computing?
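(The quoted figures check out: a flat 64-bit address space covers 2^64
bytes.  A quick sketch of the arithmetic, using binary units as the
quote does:)

```python
# A 64-bit address space spans 2**64 byte addresses.
ADDRESS_BITS = 64
total_bytes = 2 ** ADDRESS_BITS

gib = total_bytes // 2 ** 30   # gigabytes (GiB, binary)
eib = total_bytes // 2 ** 60   # exabytes (EiB, binary)

print(gib)  # 17179869184 -- the "17,179,869,184 gigabytes" quoted above
print(eib)  # 16 -- i.e. 16 exabytes
```

(In practice current x86-64 chips implement fewer physical address bits
than 64, so the real hardware ceiling is lower than this theoretical one.)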

As someone who is migrating from a 486 with 32 MB ram to an AMD Athlon
with 1 GB ram, I'll be interested in this discussion.  In my own
experience, anything over 16 MB is what Mozilla wastes as you use it.
Any time I need fast or small, I go to Fortran 77.

Seriously, this is the realm of high-performance computing.  At that
level, it's likely that beyond a certain amount of memory, it's faster to
add a processor with its own memory (or a cluster node with its own
processors, memory and possibly disks) than to just pile the memory onto
one processor (or a multi-core).  Also, the programmes written to solve
the problem tend to be hardware specific, i.e. the programmer will only
try to use the memory available.
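(To illustrate that last point -- sizing the program to the memory
actually present -- here is a minimal sketch using the POSIX sysconf
interface through Python's os module.  The half-of-RAM budget is an
arbitrary policy I made up for the example, not anything standard:)

```python
import os

# Query physical RAM via POSIX sysconf before sizing a large
# in-memory working set.
page_size = os.sysconf("SC_PAGE_SIZE")    # bytes per page
phys_pages = os.sysconf("SC_PHYS_PAGES")  # pages of physical memory
total_ram = page_size * phys_pages

# Cap the in-memory data at, say, half of physical RAM.
budget = total_ram // 2
print(f"physical RAM: {total_ram} bytes, budget: {budget} bytes")
```

An HPC code would typically do this per node (or take the figure from a
batch scheduler) rather than assume one giant flat memory.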

Your scenario suggests a large-data-set problem that does not
partition easily across separate machines.

I suppose this could be an extremely huge database where you want
everything in memory for fast access.
