
Re: calculix-ccx multithreaded not working



Hi Wolfgang,

thanks for checking up.

> I used the tutorial from here[1]. I run ccx (compiled and packaged from the
> git repo on alioth) with the following command for X threads, with X being 1,
> 2 and 4.
>
> export OMP_NUM_THREADS=X; /usr/bin/time -o TimingLog_CPUX.log -a ccx -i hook
>
> As far as I can see, everything runs as expected. I see an additional thread
> in htop for each CPU, so for OMP_NUM_THREADS=2 I see 3 threads. But only when
> the calculation is handed over to spooles, a few seconds after the output of
> "Using up to 4 cpu(s) for spooles."

I have been trying to get that tutorial example working, but it fails with
*ERROR in allocont: element slave surface SLAVE does not exist
and unfortunately I do not have the time to investigate further. Can you
provide that example in a working state?
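
For our own case I use essentially the same pattern as your command line.
A rough sketch of the loop I run (our input deck stands in for hook, so the
name run.inp below is just a placeholder):

  # sketch: time the same job with 1, 2, 4 and 6 threads
  for X in 1 2 4 6; do
      export OMP_NUM_THREADS=$X
      /usr/bin/time -o TimingLog_CPU$X.log -a ccx -i run
  done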

>> As you can see, I tried with 2, but have done it with 4 and 6 as well.
>> The output messages and the messages in spooles.out change accordingly,
>> but the runtime and the CPU usage as reported by htop remain the same.
>
> There is an option in htop to list each thread as a separate process. You can
> toggle this behaviour by pressing H, or via "Hide userland threads" in the
> "Setup / Display options" menu. Can you please test whether switching this
> option on/off changes anything?

No, it does not change anything. I still see only a single ccx thread, even
while spooles is doing its job.
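
In case it is useful, this is roughly how I watch the thread count while
spooles is running (only a sketch; it assumes there is exactly one ccx
process on the machine):

  # sketch: print the number of threads (NLWP) of the running ccx once per second
  watch -n 1 'ps -o nlwp=,comm= -p "$(pgrep -x ccx)"'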

> Another question: How large is your case? Is it large enough to show a
> significant difference between single and multiple CPU usage?

That is a difficult question for me, since I know very little about FEM. Our
test case needs about 1 h to finish when run to the end. ccx reports the
following about our case:
nodes: 20320
elements: 12102
number of equations:  2463
number of nonzero lower triangular matrix elements: 47409

Would you expect a significant difference for that case?
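
To compare the runs I simply look at the elapsed wall-clock times that
/usr/bin/time appended to the logs, roughly like this (a sketch, assuming
the TimingLog_CPUX.log naming from your command above):

  # sketch: show the recorded elapsed time for each thread count
  grep -H elapsed TimingLog_CPU*.log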

Kind regards
Felix

