
Re: running CUDA cards



From the recent network install CD, I have set up software RAID1 with

an ext2 /boot in a first RAID array,

and root, usr, opt, var, tmp and home (ext3), plus swap, in a second RAID array under LVM.
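
To double-check the arrays before going further, a quick look I take
(a sketch only; the md device names md0/md1 are an assumption and may
differ here):

cat /proc/mdstat     # both arrays should show state [UU]
pvs; vgs; lvs        # the LVM volumes living on the second array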

Installed the 'base system' only for the moment, with sources.list
pointing to 'wheezy' (which is now testing, no longer unstable). The
only failure was fetching the multimedia repository, despite requesting
the key (timeout). Perhaps I should have installed
<debian-multimedia_2008.1016_all.deb>, or whatever more recent version
exists, with dpkg. Not tried.
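
For reference, the sources.list I am aiming at looks roughly like this
(the mirror URL is only an example; 'contrib non-free' follows your
earlier suggestion):

deb http://ftp.debian.org/debian wheezy main contrib non-free
deb-src http://ftp.debian.org/debian wheezy main contrib non-free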

Installed <xterm>.

Created hidden file <.Xsession> as follows

#!/bin/sh
xrdb -load $HOME/.Xresources
exec xterm

in my home directory, as I am used to working from the Linux prompt
without starting the X server (I don't know whether that will still be
possible with CUDA).

Using your kind suggestions for a squeeze installation:


Installed <nvidia-kernel-dkms>.
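
As a quick sanity check that the dkms build went through for the running
kernel (just a sketch; the exact module name, nvidia or nvidia-current,
may differ between releases):

dkms status            # the nvidia module should be listed as installed
lsmod | grep nvidia    # after modprobe, the module should show up here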

At

apt-get install nvidia-glx-dev libcuda1-dev libcuda1

the first two packages were not available: apt reported the first as
replaced by libgl1-nvidia-glx, the second by libcuda1. Therefore I ran

apt-get install libgl1-nvidia-glx libcuda1

which also installed <libnvidia-ml1> and <nvidia-smi>.
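
With the module and libcuda1 in place, a minimal check that the two
GTX470 cards are visible (nvidia-smi comes from the package above; the
output format varies between driver versions):

nvidia-smi             # should list both GPUs
nvidia-smi -q | head   # first lines of the detailed query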

My .bashrc - as obtained from the installation - had only
<alias rm='rm -i'> added to it.  In an i386 squeeze installation (not in
lenny amd64 installations) .bashrc contains

VEGADIR=/usr/local/bin/Vega
LD_LIBRARY_PATH=/usr/local/bin/Vega
export VEGADIR
export LD_LIBRARY_PATH
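
Following the NAMD notes quoted below, I suppose something analogous
will be needed here, so that the libcudart.so.2 shipped with the NAMD
binary is found first. A sketch, assuming NAMD gets unpacked in
/usr/local/NAMD (the path is only an example):

NAMDDIR=/usr/local/NAMD
LD_LIBRARY_PATH=$NAMDDIR:$LD_LIBRARY_PATH
export NAMDDIR
export LD_LIBRARY_PATH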

At the console, the command

X

crashes the system, while the command <startx> is not recognized.
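
My guess is that only the driver bits came in, not the pieces that start
an X session. A sketch of what I would try next (assuming <startx> is
missing simply because xinit is not installed):

apt-get install xinit xserver-xorg   # xinit provides startx
startx                               # should use the .Xsession above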

I apologize for these last naive reports; I am perhaps tired from the
attention required to set up RAID1, a procedure repeated only every
several months and therefore not fresh in memory.

Thanks a lot for any advice (the case is built for overclocked gaming,
hence the powerful 14 cm fans).

francesco






On Tue, May 31, 2011 at 5:56 PM, Lennart Sorensen
<lsorense@csclub.uwaterloo.ca> wrote:
> On Tue, May 31, 2011 at 05:20:35PM +0200, Francesco Pietra wrote:
>> I have just set up a gaming machine with
>>
>> Gigabyte GA 890FXUD5
>> AMD Phenom II 10775T
>> 2 x GTX470 GPU cards
>> 4 x 4GB RAM
>> 2 x 1 Tb HD for RAID1
>>
>> and need to install amd64 to run molecular dynamics using (free for
>> non-commercial use) NAMD software (released binary below or
>> compilation from source). All that is experimental, with little
>> experience and I have no experience whatsoever with CUDA cards. My
>> question is about the version of amd64 to be best used (lenny or
>> squeeze) and what should be added to the typical server installation
>> according to the requirements:
>
> Lenny will stop having support soon, so absolutely go with squeeze.
> The nvidia-glx package in squeeze supports the GTX470 card.  Lenny does
> not.  So install squeeze.
>
> To install the driver, add 'contrib non-free' to your lines in
> /etc/apt/sources.list then do:
>
> apt-get update
> apt-get install nvidia-kernel-dkms
> apt-get install nvidia-glx nvidia-glx-dev libcuda1-dev libcuda1
>
>> (1) NVIDIA Linux driver version 195.17 or newer (released Linux
>> binaries are built with CUDA 2.3, but can be built with newer versions
>> as well).
>
> squeeze has 195.36.31 so that should work.
>
>> (2) libcudart.so.2 included with the binary (the one copied from the
>> version of CUDA it was built with) must be in a directory in your
>> LD_LIBRARY_PATH before any other libcudart.so libraries. For example:
>>
>>   setenv LD_LIBRARY_PATH ".:$LD_LIBRARY_PATH"
>>   (or LD_LIBRARY_PATH=".:$LD_LIBRARY_PATH"; export LD_LIBRARY_PATH)
>>   ./namd2 +idlepoll <configfile>
>>   ./charmrun ++local +p4 ./namd2 +idlepoll <configfile>
>
> The libcuda1 package should probably take care of the library I think,
> but maybe not.
>
> wheezy has a lot more cuda packages available, and much newer drivers too,
> but is of course testing, not stable.
>
>> THE FOLLOWING CAN BE SKIPPED, unless one is specifically interested in
>> the matter: The +idlepoll in the command line is needed to poll the
>> GPU for results rather than sleeping while idle, i.e. NAMD does not
>> use any non-specified GPU card. Each namd2 process can use only one
>> GPU. Therefore you will need to run at least one process for each GPU
>> you want to use. Multiple processes can share a single GPU, usually
>> with an increase in performance. NAMD will automatically distribute
>> processes equally among the GPUs on a node. Specific GPU device IDs
>> can be requested via the +devices argument on the namd2 command line,
>> for example:
>>   ./charmrun ++local +p4 ./namd2 +idlepoll +devices 0,2 <configfile>
>
> --
> Len Sorensen
>

