[Gate-users] CT simulation, save multiple outputs, parallelization

Simon Rit simon.rit at creatis.insa-lyon.fr
Thu Feb 9 16:13:29 CET 2017


Hi,
For the uncertainty, it's computed using the history-by-history method, see
http://www.sciencedirect.com/science/article/pii/S0360301606005311
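In short, each pixel of the uncertainty image is the standard error of the mean of the per-history scores, usually reported relative to the mean. A minimal NumPy sketch of the history-by-history estimator (variable names are mine, not GATE's):

```python
import numpy as np

def history_by_history_uncertainty(scores):
    """Relative standard error of the mean, scored history by history.

    `scores` has shape (N_histories, ...): one entry per primary history
    and per pixel/voxel. Returns sigma / mean, i.e. the relative
    uncertainty reported in the uncertainty image (illustrative names).
    """
    n = scores.shape[0]
    mean = scores.sum(axis=0) / n
    mean_sq = (scores ** 2).sum(axis=0) / n
    # variance of the mean, history-by-history formula:
    # s^2 = (sum(x^2)/N - (sum(x)/N)^2) / (N - 1)
    var_of_mean = (mean_sq - mean ** 2) / (n - 1)
    return np.sqrt(var_of_mean) / mean

# example: 10^5 histories for a single pixel
rng = np.random.default_rng(0)
x = rng.exponential(1.0, size=(100000, 1))
print(history_by_history_uncertainty(x))  # ~ 1/sqrt(N), so about 0.003 here
```

A pixel value of 0.01, for instance, means the Monte Carlo estimate in that pixel has about 1% relative statistical uncertainty, which is the number to watch when choosing the photon count.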
You don't need to change the seed for each projection, but you do need to
change it for each repeated simulation of the same projection if you want
several independent realizations. I'd suggest setting it to auto instead of
a fixed number to make it truly stochastic.
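In macro form this is a one-liner (assuming the standard random-engine command; check the spelling against your GATE version):

```
# let GATE pick a new, truly random seed for every execution
/gate/random/setEngineSeed auto
```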
Simon

On Thu, Feb 9, 2017 at 4:07 PM, Triltsch, Nicolas <nicolas.triltsch at tum.de>
wrote:

> Hey,
>
> Thanks for the useful hints and the literature.
>
> Can you explain a bit more how the uncertainty image is created and how
> best to interpret it? What do the numbers or values in the image tell me?
> I think this is important to understand in order to choose the number of
> photons correctly.
>
> Yes, that's what I am doing already. I scaled the detector resolution and
> the voxelized volume down by a factor of 4 and simulate the scatter images
> accordingly.
>
> One more question concerning the engine seed. Do you choose a new seed for
> every projection by changing the order of the digits (123456), or is that
> not necessary?
>
> Cheers,
>
> Nico
> On 02/09/2017 01:42 PM, Simon Rit wrote:
>
> Hi,
> Python is probably easier indeed.
> The stochastic part is for scatter and secondary radiation (Compton,
> Rayleigh and fluorescence). FFD uses a low-statistics Monte Carlo
> simulation (therefore stochastic) and combines it with a deterministic
> calculation. Useful (unordered) references to understand the technique:
> dx.doi.org/10.1088/0031-9155/54/12/016
> dx.doi.org/10.1109/TMI.2004.825600
> doi.org/10.1109/TNS.2005.858223
> 1000 is not sufficient; I typically use at least 10^5 photons for one
> projection. It is best to record the uncertainty image to get an estimate
> of the precision of your Monte Carlo simulation (using the
> *enableUncertaintySecondary* option). You'll probably want to limit the
> number of pixels of your projection to accelerate the computation of the
> scatter images. I typically use finer lattices for primary images than for
> scatter images.
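As a macro sketch (the actor name ffda follows the thread; `secondaryFilename` is my assumption for where the scatter image is written, so check the exact option names against your GATE version):

```
# write the scatter estimate together with its uncertainty image
/gate/actor/ffda/secondaryFilename          output/secondary.mha
/gate/actor/ffda/enableUncertaintySecondary true
```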
> Simon
>
>
> On Thu, Feb 9, 2017 at 1:11 PM, Triltsch, Nicolas <nicolas.triltsch at tum.de
> > wrote:
>
>> Hey Simon,
>>
>> Thanks for your always very helpful answers. For your last point, I found
>> a little workaround: I used an alias of the form "Gate
>> [rot_angle,$(angle*i)][run_id,$i] mymacro.mac" combined with a for loop
>> over the parameter i. In the macro I named the files in the output folder
>> "output_files{i}". I highly recommend not using bash scripts to execute
>> Gate commands generated in a for loop. It's better to use a Python script
>> that defines the variables (especially floats, which are handy there) and
>> executes the Gate macros via subprocess.call(); the multiprocessing module
>> is needed to run them in parallel.
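A minimal sketch of that approach. The macro name and alias names come from the thread; the angle step is an illustrative value, real Gate runs pass aliases with the -a flag, and the actual call is commented out so the sketch runs without Gate installed:

```python
import subprocess  # noqa: F401  (used in a real run, see comment below)
from multiprocessing import Pool

ANGLE_STEP = 0.3  # degrees per projection, illustrative value

def run_projection(i):
    """Build (and, in a real run, launch) one Gate run for projection i."""
    cmd = [
        "Gate",
        "-a", "[rot_angle,{:.4f}][run_id,{}]".format(i * ANGLE_STEP, i),
        "mymacro.mac",
    ]
    # in a real run: subprocess.call(cmd)
    return " ".join(cmd)

if __name__ == "__main__":
    # run several projections in parallel worker processes
    with Pool(4) as pool:
        for line in pool.map(run_projection, range(3)):
            print(line)
```

Using a Pool also sidesteps the float arithmetic that is awkward in bash: the angle is computed in Python and handed to Gate as a ready-made alias string.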
>>
>> Some more questions popped into my mind while I was reading your email.
>>
>> - I think you didn't understand me correctly. I used 1000 photons per
>> projection, not in total. I am NOT only interested in the primary image,
>> but also in the images compton.mha and rayleigh.mha. Where does the
>> non-deterministic part enter the calculation? And do you have any
>> experience with how many photons are necessary for a trustworthy result?
>>
>> Thanks in advance,
>>
>> Nico
>> On 02/07/2017 05:37 PM, Simon Rit wrote:
>>
>> Hi Nicolas,
>> Good to see that ffda is used. To answer your questions:
>> - yes, there is an "intrinsic parallelization". The number of threads is
>> set by the environment variable ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS. If
>> you haven't set it, it will use all your cores. The Monte Carlo part is
>> still single-threaded, but the ray casting is multi-threaded using RTK
>> (based on ITK).
>> - you use 1000 photons. I guess you're only interested in the primary
>> image? In this case, 1 photon per projection is enough since the primary
>> part is deterministic.
>> - you can use the printf format to set the run id in the primary filename
>> (see line 842 of GateFixedForcedDetectionActor.cc
>> <https://github.com/OpenGATE/Gate/blob/develop/source/digits_hits/src/GateFixedForcedDetectionActor.cc#L842>
>> ):
>> /gate/actor/ffda/primaryFilename    output/primary%0d.mha
>> - for further parallelization, I would suggest running Gate on several
>> machines, each machine starting at a different angle and covering a
>> limited angle range. This requires some specific development and careful
>> handling of all the outputs (they all start with a run id of 0, so you
>> will need to rename the outputs).
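The renaming step can be scripted; a hedged Python sketch, where the filename pattern `primary{:04d}.mha` and the per-machine offset are my assumptions for illustration:

```python
import os
import tempfile

def rename_with_offset(folder, offset, pattern="primary{:04d}.mha"):
    """Shift the run ids of one machine's outputs by a global offset.

    Every machine starts its runs at id 0, so machine k's files must be
    shifted into a disjoint id range before merging (illustrative scheme).
    """
    ids = sorted(
        int(f[len("primary"):-len(".mha")])
        for f in os.listdir(folder)
        if f.startswith("primary") and f.endswith(".mha")
    )
    renamed = []
    # rename highest id first so we never overwrite a not-yet-renamed file
    for i in reversed(ids):
        src = os.path.join(folder, pattern.format(i))
        dst = os.path.join(folder, pattern.format(i + offset))
        os.rename(src, dst)
        renamed.append(os.path.basename(dst))
    return sorted(renamed)

# demo with empty placeholder files
d = tempfile.mkdtemp()
for i in range(3):
    open(os.path.join(d, "primary{:04d}.mha".format(i)), "w").close()
print(rename_with_offset(d, 100))
# ['primary0100.mha', 'primary0101.mha', 'primary0102.mha']
```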
>> I hope this helps.
>> Simon
>>
>>
>> On Tue, Feb 7, 2017 at 10:58 AM, Triltsch, Nicolas <
>> nicolas.triltsch at tum.de> wrote:
>>
>>> Hello Gate community,
>>>
>>> I am using the fixed forced detection actor (ffda) and I am trying to run
>>> a full CT simulation with 1201 projections. My first question concerns the
>>> possibilities for parallelization. I noticed that if I run a single
>>> projection, all 4 cores of my local computer run at almost 100%. Is there
>>> already some intrinsic parallelization step when using the ffda, and what
>>> further parallelization steps are possible to speed up the simulation for
>>> 1 projection? If it helps: I use a voxelized phantom, a cone-beam setup,
>>> an X-ray spectrum histogram, an integrating detector, and 1000 photons.
>>>
>>> My second question is how to save several .mha images in the output
>>> folder when simulating all 1201 projections. I am still using the ffda
>>> actor, and with the command "/gate/actor/ffda/primaryFilename
>>> output/primary.mha" the primary image gets overwritten for each
>>> projection. How can I save a different primary image for each projection?
>>>
>>> Any help is appreciated!
>>>
>>> Nico
>>>
>>> --
>>> B.Sc. Nicolas Triltsch
>>> Masterand
>>>
>>> Technische Universität München
>>> Physik-Department
>>> Lehrstuhl für Biomedizinische Physik E17
>>>
>>> James-Franck-Straße 1
>>> 85748 Garching b. München
>>>
>>> Tel: +49 89 289 12591
>>> nicolas.triltsch at tum.de
>>> www.e17.ph.tum.de
>>>
>>>
>>> _______________________________________________
>>> Gate-users mailing list
>>> Gate-users at lists.opengatecollaboration.org
>>> http://lists.opengatecollaboration.org/mailman/listinfo/gate-users
>>>
>>
>>
>>
>>
>
>
>

