Re: Advice on cluster hardware
Wow! I didn't know that. Very cool.
Anyway, Ross, I would get some testing time on a cluster from
one of the vendors and test your code with some dummy data
sets. Where I work, we talk to the vendors, get them to sign
an NDA (to cover our proprietary code) and then test on their
machines. A number of companies, such as Racksaver, IBM, and
Appro, have been pretty good about getting us testing time on
their clusters. Also, AMD is setting up its Opteron cluster
test center. It should be at least partially up by now; you can
check their website for how to request testing time.
Be careful of companies that want to do the testing for you.
Somehow I just don't really trust them. A couple of companies
wanted to do that for us, but I was never really sure about the
results. I'd prefer to do the testing myself.
"Jeffrey B. Layton" <email@example.com> writes:
Ross Boylan wrote:
Although this list seems to have been quiet recently, perhaps there are
some folks out there with wisdom to share. I didn't turn up much in the
archives.
The group I am in is about to purchase a cluster. If anyone on this
list has any advice on what type of hardware (or software) would be
best, I'd appreciate it.
We will have two broad types of uses: simulation studies for
epidemiology (with people or cases as the units) and genetic and protein
studies, with which I am less familiar. The simulation studies are
likely to make heavy use of R. I suspect that the two uses have very
different characteristics, e.g., in terms of the size of the datasets to
manipulate and the best tradeoffs outlined below.
Is the code MPI-enabled at all? I've only looked at R in passing,
so I don't know whether it's parallel or not.
R can use LAM-MPI, sockets, or PVM to pass data and distribute jobs to
other message-passing-enabled R/C/Fortran/etc. processes.
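For what it's worth, here's a rough sketch of what that can look like
from the R side using the 'snow' package (one option among several; the
host names and cluster size below are made up, and you'd need snow plus
a working socket, LAM-MPI, or PVM setup for this to actually run):

```r
# Sketch only: assumes the 'snow' package is installed and the listed
# hosts are reachable.  snow can also sit on top of LAM-MPI
# (makeMPIcluster) or PVM (makePVMcluster) instead of raw sockets.
library(snow)

# Start two R worker processes over sockets (hypothetical host names).
cl <- makeSOCKcluster(c("node1", "node2"))

# Farm a toy simulation out to the workers: each worker runs the
# function on its share of the input, and the results come back as a
# list on the master.
results <- clusterApply(cl, 1:10, function(i) i^2)

stopCluster(cl)
```

The same clusterApply call works unchanged whichever transport you pick,
which is handy when comparing vendor test machines.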