<div dir="ltr"><div>yes sure the solution that i have send out was done for cluster mode, just give other option to do the same it works too in a computer multicore but you need N times memory than the single simulation needs.<br><br></div><div>Togheter with that i dont know if gatemultithread touch the topic about the parallel random number generation. i need to re-introduce myselft to this looks like is a lot better than when i see the multithread version of Geant4<br></div><div><br></div>Saludos<br></div><div class="gmail_extra"><br><div class="gmail_quote">2015-07-19 15:10 GMT-03:00 Alex Vergara Gil <span dir="ltr"><<a href="mailto:alexvergaragil@gmail.com" target="_blank">alexvergaragil@gmail.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Dear @Copernicus<br>
<br>
I see that you have followed the MPI solution, but that is like rebuilding<br>
everything when simply using G4MTRunManager should be enough. Please give the<br>
recommendations the Geant4 team has made for multithreading support a try,<br>
and maybe afterwards we can take MPI further.<br>
<br>
Basically, running in cluster mode takes a lot of memory that does not scale<br>
linearly with the number of processes, so you cannot run gatempi on a Xeon Phi<br>
card. With G4MTRunManager it is straightforward: there is a single shared copy<br>
of the large data (the geometry), with only a small additional memory<br>
consumption per thread.<br>
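<br>
Just to illustrate what this looks like on the Geant4 side, a minimal<br>
multithreaded main() is roughly the following sketch (MyDetectorConstruction<br>
and MyActionInitialization are only placeholders for the user classes, not the<br>
Gate code; FTFP_BERT is one of the Geant4 reference physics lists):<br>
<pre>
// Minimal sketch of a multithreaded Geant4 main() using G4MTRunManager.
// MyDetectorConstruction and MyActionInitialization are placeholder user
// classes; FTFP_BERT is a standard Geant4 reference physics list.
#include "G4MTRunManager.hh"
#include "FTFP_BERT.hh"

int main()
{
    G4MTRunManager* runManager = new G4MTRunManager();

    // All worker threads share one copy of the geometry and physics tables,
    // so memory grows only slightly with the thread count.
    runManager->SetNumberOfThreads(12);   // e.g. one thread per core

    runManager->SetUserInitialization(new MyDetectorConstruction());
    runManager->SetUserInitialization(new FTFP_BERT());
    runManager->SetUserInitialization(new MyActionInitialization());

    runManager->Initialize();
    runManager->BeamOn(1000000);          // events are shared out over the threads

    delete runManager;
    return 0;
}
</pre>
Everything before BeamOn is built once; each additional thread only adds its<br>
own per-event working memory.<br>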
<br>
Regards<br>
Alex<br>
<span class=""><br>
2015-07-16 13:46 GMT-04:00, Copernicus <<a href="mailto:copernicus231@gmail.com">copernicus231@gmail.com</a>>:<br>
> This was done using MPI on a cluster, and it also runs on a single<br>
> machine; it is a distributed-memory model.<br>
> <a href="http://www.cmpbjournal.com/article/S0169-2607%2813%2900265-4/abstract?cc=y=" rel="noreferrer" target="_blank">http://www.cmpbjournal.com/article/S0169-2607%2813%2900265-4/abstract?cc=y=</a><br>
</span>> <a href="https://github.com/copernicus231/gatempi" rel="noreferrer" target="_blank">https://github.com/copernicus231/gatempi</a><br>
> <a href="https://github.com/copernicus231/clhep-sprng" rel="noreferrer" target="_blank">https://github.com/copernicus231/clhep-sprng</a><br>
<span class="">> <a href="https://github.com/copernicus231/thirdparty" rel="noreferrer" target="_blank">https://github.com/copernicus231/thirdparty</a><br>
><br>
> Next month I will have some time; I will see if I can update this code to<br>
> the newest version of Gate.<br>
><br>
><br>
> Regards<br>
> Nicolas<br>
><br>
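For comparison, the distributed-memory approach gatempi takes boils down to<br>
something like the following sketch (an illustration only, not the gatempi<br>
code itself; gatempi presumably obtains its independent streams from SPRNG<br>
through the clhep-sprng patch linked above, rather than the naive seed offset<br>
shown here):<br>
<pre>
// Illustration of the distributed-memory idea: every MPI rank is a separate
// process with its own full copy of the simulation (hence N times the memory),
// and each rank needs its own independent random number stream.
#include <mpi.h>
#include "CLHEP/Random/Randomize.h"

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int rank = 0;
    int size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Naive scheme for illustration: offset one base seed by the rank.
    // A library such as SPRNG provides statistically independent streams instead.
    long baseSeed = 123456789L;
    CLHEP::HepRandom::setTheSeed(baseSeed + rank);

    // Each rank would then simulate its share of the primaries
    // (roughly N_total / size) and the partial outputs are merged afterwards.

    MPI_Finalize();
    return 0;
}
</pre>
The price of this model is that the geometry, materials and physics tables are<br>
duplicated in every rank, which is exactly the memory issue discussed above.<br>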
</span>> 2015-07-16 9:46 GMT-03:00 Alex Vergara Gil <<a href="mailto:alexvergaragil@gmail.com">alexvergaragil@gmail.com</a>>:<br>
><br>
>> Please see my project <a href="https://github.com/BishopWolf/Gate" rel="noreferrer" target="_blank">https://github.com/BishopWolf/Gate</a>, and also the<br>
<span class="">>> thread on this list, "Adding G4MTRunManager Support to GATE".<br>
>> You are welcome to contribute.<br>
>><br>
>> Regards<br>
>> Alex<br>
>><br>
</span><span class="">>> 2015-07-16 4:43 GMT-04:00, Shubham Rai <<a href="mailto:subam.rai93@gmail.com">subam.rai93@gmail.com</a>>:<br>
>> > I am new to GATE and am learning it as part of my postgraduate<br>
>> > project. I am using the DELL PRECISION T5600 workstation with 12 cores. The<br>
>> > problem at hand is that when I run the code and monitor my processor, I<br>
>> > find that only one core is being used. The simulation takes a long time<br>
>> > to run, even for simple problems.<br>
>> ><br>
>> > The manual has instructions for parallel processing using clusters, but<br>
>> > there is no mention of multi-core systems.<br>
>> ><br>
>> > Is it possible to divide the work among the cores of my system, as is<br>
>> > done on clusters?<br>
>> ><br>
</span>>><br>
><br>
</blockquote></div><br></div>