Re: calculix-ccx multithreaded not working



On Saturday, 10 October 2015, 15:09:30, Felix Hagemann wrote:
> > I used the tutorial from here[1]. I run ccx (compiled and packaged from
> > the git-repo on alioth) with the following command for X threads, with X
> > being 1, 2 and 4.
> > 
> > export OMP_NUM_THREADS=X; /usr/bin/time -o TimingLog_CPUX.log -a ccx -i hook
> > 
> > As far as I can see, everything runs as expected. I see additional
> > threads in htop for each CPU. So for OMP_NUM_THREADS=2 I see 3 threads,
> > but only when the calculation is handed over to spooles, a few seconds
> > after the output of "Using up to 4 cpu(s) for spooles."
> 
> I have been trying to get that tutorial example working, but it fails with
> *ERROR in allocont: element slave surface SLAVE does not exist
> and unfortunately I do not have the time to investigate further. Can you
> provide that example in a working state?

Sorry, I forgot that there were some changes between the ccx versions.
'SURFACE TO SURFACE' is the default contact type now. I have added a patch
that changes the contact type to 'NODE TO SURFACE'.
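
For reference, the corrected card then reads as follows (the full diff is
in the P.S. below):

*CONTACT PAIR,INTERACTION=contact,TYPE=NODE TO SURFACE, ADJUST=0.01, SMALL SLIDING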

> >> As you can see, I tried with 2, but have done it with 4 and 6 as well.
> >> The output messages and the messages in spooles.out change
> >> accordingly, but the runtime and the CPU usage as reported by htop
> >> remain the same.
> > 
> > htop has an option to list each thread as a separate process. You can
> > toggle it by pressing H, or via "Hide userland threads" in the
> > "Setup / Display options" menu. Can you please test whether switching
> > this option on/off changes anything?
> 
> No, it does not change anything. I still see a single ccx thread, even
> while spooles is doing its job.

Ok, then I have to investigate this issue further. At the moment I really 
don't know why it doesn't work for you. Normally I would ask you to file a 
bug report, but as long as there is no package, that doesn't work. So my 
next step will be to add ccx to the Debian repository. After that we can 
test again and, if necessary, file a bug report.
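
In the meantime, as a cross-check that is independent of htop, you could
count the threads of the running ccx process directly. A minimal sketch,
assuming a single ccx process is running (do this while spooles is busy):

# Number of threads of the ccx process, read from the kernel:
grep Threads /proc/$(pgrep -x ccx)/status

# Alternatively via ps (nlwp = number of lightweight processes/threads):
ps -o nlwp= -p $(pgrep -x ccx)

Both should report more than 1 during the spooles phase if OpenMP is
actually active.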

> > Another question: how large is your case? Is it large enough to show a
> > significant difference between single and multiple cpu usage?
> 
> That is a difficult question for me, as I know very little about FEM. Our
> test case - when run to the end - needs about 1h to finish. ccx tells me
> the following about our case:
> nodes: 20320
> elements: 12102
> number of equations: 2463
> number of nonzero lower triangular matrix elements: 47409
> 
> Would you expect a significant difference for that case?

I think your case is large enough. There should be a measurable difference 
in computing time when using multiple CPUs.
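
To make the comparison systematic, you can run the whole series in one go
with a small shell loop. A minimal sketch, assuming the hook.inp deck sits
in the current directory (file names are just examples):

#!/bin/sh
# Time ccx on the same deck with 1, 2 and 4 OpenMP threads.
for N in 1 2 4; do
    export OMP_NUM_THREADS=$N
    # -a appends, so repeated runs accumulate in the same log file.
    /usr/bin/time -o "TimingLog_CPU${N}.log" -a ccx -i hook
done

If multithreading works, the elapsed times in the logs should drop
noticeably going from 1 to 2 and 4 threads.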

Kind regards,
Wolfgang

P.S. The patch:

--- "Analysis 1/hook.inp"	2015-10-07 11:58:18.025613936 +0200
+++ TimeAnalysis/hook.inp	2015-10-07 12:05:50.587394193 +0200
@@ -24,7 +24,7 @@
 
 *SURFACE,NAME=Slave,TYPE=NODE
 NNslave
-*CONTACT PAIR,INTERACTION=contact, ADJUST=0.01, SMALL SLIDING
+*CONTACT PAIR,INTERACTION=contact,TYPE=NODE TO SURFACE, ADJUST=0.01, SMALL SLIDING
 Slave,SSmaster
 *SURFACE INTERACTION,NAME=contact
 *SURFACE BEHAVIOR,PRESSURE-OVERCLOSURE=EXPONENTIAL
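
To apply it, save the diff to a file next to hook.inp (the name
hook-contact.diff is just an example) and run:

patch hook.inp < hook-contact.diff

Passing the target file explicitly also side-steps the differing paths in
the diff header.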
