To put my question more specifically: does upgrading from amd64 wheezy to amd64 jessie bring an NVIDIA driver capable of PCI Express 3.0 with Ivy Bridge?
If so, is apt-get update/upgrade enough, or should a suitable kernel also be installed? With dist-upgrade I had an unpleasant experience in the past: a huge variety of applications got installed, while my interest lies entirely outside graphical interfaces, which are of no use for MD with GPUs.
Thanks so much. It is a pity to run MD with GPUs at the rate allowed by PCIe 2.0 when the hardware should allow PCIe 3.0 (8 GT/s vs 5 GT/s).
On a Gigabyte X79-UD3 motherboard I have replaced a Sandy Bridge i7-3930 with an Ivy Bridge i7-4930K (and increased the RAM speed to 1866 MHz), with a view to activating PCIe 3.0 between RAM and the two GTX 680 cards, both on 16 lanes.
As expected, with the current CUDA driver 304.88 on amd64 wheezy there was no speed increase in executing parallelized molecular dynamics (NAMD 2.9 code, compiled against CUDA 4 and running without the X server). Both GPUs are working correctly, each handling the parts of the protein assigned to it by the code, and both engaging the same amount of memory. As far as I know, NVIDIA PCIe 3.0 worked with the 295.33 drivers under Linux, but NVIDIA then disabled it from 295.xx up to (at least) 310.xx.
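Whether the links are actually training at Gen3 can be checked directly from the kernel's view of the bus, independently of what the driver reports. A minimal sketch using lspci (10de is NVIDIA's PCI vendor ID; the bus addresses are whatever your GTX 680s enumerate as):

```shell
# Show the PCIe link capability and current link status of all NVIDIA devices.
# In LnkSta, "Speed 8GT/s" means PCIe 3.0 is active; "Speed 5GT/s" means PCIe 2.0.
for dev in $(lspci -d 10de: | awk '{print $1}'); do
    echo "== $dev =="
    sudo lspci -vv -s "$dev" | grep -E 'LnkCap:|LnkSta:'
done
```

LnkCap shows what the slot/card pair can negotiate; LnkSta shows what was actually negotiated, so comparing the two tells you whether the driver (rather than the hardware) is holding the link at Gen2.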
What could I do now to get PCIe 3.0?
(a) Upgrading to testing, if it is true that the NVIDIA CUDA drivers there (331.xx, I believe) enable PCIe 3.0. This would not be the best for me with regard to stability.
(b) Backporting the NVIDIA drivers from testing to wheezy. Is that possible?
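Regarding option (b): the usual channel for this is the wheezy-backports archive rather than hand-backporting packages from testing. A sketch, assuming the nvidia-driver packages are actually available in wheezy-backports at a new enough version (check with apt-cache policy first):

```shell
# Enable wheezy-backports (contrib and non-free are needed for the NVIDIA
# packages) and install the driver from it. Run as root.
echo 'deb http://http.debian.net/debian wheezy-backports main contrib non-free' \
    > /etc/apt/sources.list.d/wheezy-backports.list
apt-get update
# -t selects the backports release; nvidia-kernel-dkms rebuilds the NVIDIA
# kernel module against the running wheezy kernel.
apt-get -t wheezy-backports install nvidia-driver nvidia-kernel-dkms
```

Note that even drivers newer than 310.xx reportedly keep PCIe 3.0 disabled by default on X79 boards; the nvidia kernel module accepts a parameter, NVreg_EnablePCIeGen3=1 (set via a file in /etc/modprobe.d), to force it, but NVIDIA treats Gen3 on this platform as unsupported, so stability should be verified afterwards with the lspci link-status check.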
Thanks for any advice.
PS: in carrying out the above benchmark, which NAMD itself provides, I ran both a light job (a small protein) and a very heavy job (a large protein in a lot of water). Only in the latter case did the new Ivy configuration give an advantage, and a marginal one at that: 0.12 s/step instead of 0.14 s/step. The higher RAM speed gave no advantage. Clearly there is a bottleneck between the GPUs and RAM with both the Sandy and the Ivy hardware at driver 304.xx.