Hi David,<br><br>1°/ Yes, it is possible with Gate. As you said, the cluster tools just provide some 'splitting' and 'seeding' facilities.<br>Also, you don't have to link any of the programs (CLHEP, Geant4, ROOT, Gate, ...) against the Condor libraries, so Gate will run without any problem in the vanilla universe.<br>
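For reference, a vanilla-universe submit description of the kind David describes might look like the sketch below. The executable name, macro files, and data file are illustrative assumptions, not taken from an actual Gate/Condor setup; a real job would also need to ship Gate's shared libraries and the Geant4 data files it reads at run time.

```
universe                = vanilla
executable              = Gate
arguments               = macro_part_$(Process).mac
transfer_input_files    = macro_part_$(Process).mac, GateMaterials.db
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
output                  = gate_$(Process).out
error                   = gate_$(Process).err
log                     = gate.log
queue 10
```

With `should_transfer_files = YES`, Condor copies everything to the execute node's scratch directory and copies the outputs back, so the remote machines need neither a Gate install nor a shared drive.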
<br>2°/ 3°/ We have two different clusters in our team, but they are not linked. Each one is built from machines that are identical in hardware and software, so on each cluster we use an NFS share on which Gate is installed only once. We use the same NFS share to collect the output data on a single disk during the simulation, to avoid huge network transfers afterwards...<br>
Then I think that if your network supports at least 100 Mbit/s (which is very common), it should be sufficient. And if you have about 50, 100 or more processors reading and writing on your NFS share, you may notice some <span class="clickable" onclick="dr4sdgryt(event,"Ox")">slowdown between the two parts of your cluster.<br>
Moreover, concerning the write side of the I/O, you can enlarge the output buffers so that they are flushed only once they reach, say, 1 MB (the C library default is 4 or 8 KB); this avoids constant network traffic (and the hospital staff will still be able to check their e-mail ;-).<br>
</span><br>Good luck,<br>Cheers,<br>Simon<br><br><div class="gmail_quote">On Sat, Feb 21, 2009 at 8:44 PM, David Roberts <span dir="ltr"><<a href="mailto:David.Roberts@icr.ac.uk">David.Roberts@icr.ac.uk</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">Hello,<br>
<br>
I'm setting up Gate to work on our condor cluster and wanted to check<br>
which is the best way (or only way) to implement this. At present our<br>
BEAMnrc/EGSnrc simulations are submitted to our condor cluster using the<br>
vanilla universe. All executables, libraries, data files etc. are<br>
transferred to the remote computers by condor. Therefore the remote<br>
computers DO NOT need BEAMnrc/EGSnrc installed or require access to any<br>
shared drives. This enables additional computers to be added easily.<br>
<br>
Questions:<br>
<br>
1. Is the above possible with Gate? Looking at the cluster tools, all<br>
this seems to do is split the simulation up and provide different random<br>
seeds.<br>
<br>
2. If not. Does Gate therefore have to be installed on each remote<br>
computer?<br>
<br>
3. If some of the remote computers are the same (same architecture, same<br>
OS) could I just do an NFS share to the geant4/gate install directories?<br>
This might not be ideal if the I/O between remote computers is high<br>
during the simulations (note that half of our cluster is physically<br>
separated from the other and therefore communicates over the<br>
college/hospital network)<br>
<br>
Many Thanks<br>
Dave<br>
<br>
David Roberts<br>
<br>
Radiotherapy PhD Student<br>
Joint Physics Department<br>
Institute of Cancer Research and Royal Marsden NHS Trust<br>
Downs Road,<br>
Sutton, Surrey<br>
UK, SM2 5PT<br>
Tel : 020 8661 3490<br>
<br>
<br>
The Institute of Cancer Research: Royal Cancer Hospital, a charitable Company Limited by Guarantee, Registered in England under Company No. 534147 with its Registered Office at 123 Old Brompton Road, London SW7 3RP.<br>
<br>
This e-mail message is confidential and for use by the addressee only. If the message is received by anyone other than the addressee, please return the message to the sender by replying to it and then delete the message from your computer and network.<br>
_______________________________________________<br>
Gate-users mailing list<br>
<a href="mailto:Gate-users@lists.healthgrid.org">Gate-users@lists.healthgrid.org</a><br>
<a href="http://lists.healthgrid.org/mailman/listinfo/gate-users" target="_blank">http://lists.healthgrid.org/mailman/listinfo/gate-users</a><br>
</blockquote></div><br>