<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<p>Hey,<br>
<br>
Thanks for the useful hints and the literature.</p>
    <p>Can you explain in a little more detail how the uncertainty image is
      created and how it is best understood? What do the numbers or
      values in the image tell me? I think this is important to understand
      in order to choose the number of photons correctly.<br>
    </p>
    <p>Yes, that's what I am doing already. I scaled the detector
      resolution and the voxelized volume down by a factor of 4 and
      simulate the scatter images on that coarser grid.<br>
</p>
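    <p>For illustration, the volume downsampling can be done with a few
      lines of Python (a minimal sketch, assuming SimpleITK and
      placeholder file names):</p>
    <pre>import SimpleITK as sitk

# Shrink the voxelized volume by a factor of 4 in each dimension by
# averaging 4x4x4 blocks of voxels; file names are placeholders.
volume = sitk.ReadImage("phantom.mha")
small = sitk.BinShrink(volume, [4, 4, 4])
sitk.WriteImage(small, "phantom_small.mha")</pre>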
    <p>One more question concerning the engine seed. Do you choose a new
      seed for every projection by changing the order of the digits
      (123456), or is that not necessary?</p>
<p>Cheers,</p>
<p>Nico<br>
</p>
<div class="moz-cite-prefix">On 02/09/2017 01:42 PM, Simon Rit
wrote:<br>
</div>
<blockquote
cite="mid:CAF0oig3ykop+JyiF5fVw8hG2m4hmgFt6HfS2isMZcYUQuHK7YQ@mail.gmail.com"
type="cite">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<div dir="ltr">
<div>
<div>
<div>
<div>Hi,<br>
</div>
Python is probably easier indeed.<br>
</div>
            The stochastic part is for scatter and secondary radiation
            (Compton, Rayleigh and fluorescence). FFD uses a low-statistics
            Monte Carlo simulation (hence stochastic) and
            combines it with a deterministic calculation. Useful,
            unordered references to understand the technique:<br>
<a moz-do-not-send="true"
href="http://dx.doi.org/10.1088/0031-9155/54/12/016">dx.doi.org/10.1088/0031-9155/54/12/016</a><br>
<a moz-do-not-send="true"
href="http://dx.doi.org/10.1109/TMI.2004.825600">dx.doi.org/10.1109/TMI.2004.825600</a><br>
<a moz-do-not-send="true"
href="http://doi.org/10.1109/TNS.2005.858223">doi.org/10.1109/TNS.2005.858223</a><br>
</div>
          1000 is not sufficient; I typically use at least 10^5 photons
          per projection. It is best to record the uncertainty
          image to get an estimate of the precision of your Monte Carlo
          simulation (using the <b>enableUncertaintySecondary</b>
          option). You'll probably want to limit the number of pixels of
          your projection to accelerate the computation of your scatter
          images. I typically use finer lattices for primary images than
          for scatter images.<br>
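          For instance, a minimal Python sketch for inspecting the recorded
          uncertainty image could look like this (the file name and the use
          of SimpleITK are assumptions, adapt them to your setup):<br>
          <pre>import SimpleITK as sitk

# File name is an assumption; use whatever you configure for the ffda actor.
unc = sitk.GetArrayFromImage(sitk.ReadImage("output/uncertainty_secondary0.mha"))
print("uncertainty image size:", unc.shape)
print("min / mean / max:", unc.min(), unc.mean(), unc.max())
# Compare these values with the scatter signal (compton.mha + rayleigh.mha)
# from the same run: if the uncertainty is only a small fraction of the
# scatter signal, the chosen number of photons is probably sufficient.</pre>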
</div>
Simon<br>
<div>
<div>
<div><br>
</div>
</div>
</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Thu, Feb 9, 2017 at 1:11 PM,
Triltsch, Nicolas <span dir="ltr"><<a
moz-do-not-send="true"
href="mailto:nicolas.triltsch@tum.de" target="_blank">nicolas.triltsch@tum.de</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
<p>Hey Simon,</p>
          <p>Thanks for your always very helpful answers. For your
            last point, I found a little workaround. I used an
            alias call of the form "Gate [rot_angle,$(angle*i)][run_id,$i]
            mymacro.mac" combined with a for loop over the parameter i.
            In the macro I named the files in the output folder
            "output_files{i}". I can highly recommend not using
            bash scripts to execute the Gate commands generated in
            such a for loop; defining variables (especially floats)
            is much handier in Python. It's better to use a Python
            script and execute the Gate macros with subprocess.call();
            the multiprocessing module is required here to run several
            projections in parallel. <br>
</p>
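          <p>A minimal sketch of such a Python driver (the number of
            projections, the angle step and the "Gate -a [alias,value]
            macro.mac" call are assumptions to adapt to your own setup):</p>
          <pre>import subprocess
from multiprocessing import Pool

N_PROJ = 1201                    # number of projections
ANGLE_STEP = 360.0 / N_PROJ      # degrees per projection (placeholder)

def run_projection(i):
    # Pass the rotation angle and run id as Gate aliases; the output file
    # names inside mymacro.mac then use the run_id alias.
    angle = i * ANGLE_STEP
    subprocess.call(["Gate", "-a", "[rot_angle,%f][run_id,%d]" % (angle, i),
                     "mymacro.mac"])

if __name__ == "__main__":
    with Pool(4) as pool:        # one Gate process per core
        pool.map(run_projection, range(N_PROJ))</pre>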
<p>Some more questions popped up my mind while I was
reading your email.</p>
          <p>- I think you didn't understand me correctly. I used
            1000 photons per projection, not in total. I am NOT only
            interested in the primary image, but also in the images
            compton.mha and rayleigh.mha. Where does the
            non-deterministic part come into the calculation? And do you
            have any experience with how many photons are necessary for a
            trustworthy result?<br>
</p>
<p>Thanks in advance,</p>
<p>Nico<br>
</p>
<div>
<div class="h5">
<div class="m_-5837947688201281229moz-cite-prefix">On
02/07/2017 05:37 PM, Simon Rit wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">
<div>
<div>
<div>
<div>
<div>
<div>
<div>Hi Nicolas,<br>
</div>
Good to see that ffda is used. To
answer your questions:<br>
</div>
                          - yes, there is an "intrinsic
                          parallelization". The number of threads
                          is set by the environment variable
                          ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS.
                          If you haven't set it, it will use all
                          your cores. The Monte Carlo part is
                          still single-threaded, but the ray
                          casting is multi-threaded using RTK
                          (based on ITK); a sketch after these answers
                          shows how to set the variable before
                          launching Gate.<br>
</div>
- you use 1000 photons. I guess you're
only interested in the primary image? In
this case, 1 photon per projection is
enough since the primary part is
deterministic.<br>
</div>
                      - you can use the printf format to set the
                      run id in the primary file name (see line
                      842 of <a moz-do-not-send="true"
href="https://github.com/OpenGATE/Gate/blob/develop/source/digits_hits/src/GateFixedForcedDetectionActor.cc#L842"
                        target="_blank">GateFixedForcedDetectionActor.cc</a>):<br>
                      /gate/actor/ffda/primaryFilename
                      output/primary%0d.mha<br>
</div>
                    - for further parallelization, I would suggest
                    running Gate on several machines, each machine
                    starting at a different angle and covering a
                    limited angle range (see the sketch after these
                    answers). This requires some specific development
                    and careful handling of all the outputs (they all
                    start with a run id of 0, so you will need to rename
                    the outputs).<br>
</div>
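                  A minimal sketch combining the thread setting and the
                  per-machine angle range (machine index, counts and the
                  alias syntax are placeholders to adapt):<br>
                  <pre>import os
import subprocess

# Cap the number of threads used by the RTK/ITK ray casting in each Gate process.
os.environ["ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS"] = "4"

N_PROJ, N_MACHINES, MACHINE_ID = 1201, 4, 0   # use a different MACHINE_ID per machine
ANGLE_STEP = 360.0 / N_PROJ                   # degrees per projection (placeholder)

# Each machine covers a limited, contiguous range of projection indices.
first = MACHINE_ID * N_PROJ // N_MACHINES
last = (MACHINE_ID + 1) * N_PROJ // N_MACHINES
for i in range(first, last):
    # The run id restarts at 0 in every Gate process, so keep the global
    # projection index i in the output file names and rename/merge afterwards.
    subprocess.call(["Gate", "-a", "[rot_angle,%f][run_id,%d]" % (i * ANGLE_STEP, i),
                     "mymacro.mac"])</pre>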
I hope this helps.<br>
</div>
Simon<br>
<div>
<div>
<div>
<div>
<div>
<div>
<div><br>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Tue, Feb 7, 2017 at
10:58 AM, Triltsch, Nicolas <span dir="ltr"><<a
moz-do-not-send="true"
href="mailto:nicolas.triltsch@tum.de"
target="_blank">nicolas.triltsch@tum.de</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0
0 0 .8ex;border-left:1px #ccc
solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
<p>Hello Gate community,</p>
                  <p>I am using the fixed forced detection
                    actor (ffda) and I am trying to run a full CT
                    simulation with 1201 projections. My first
                    question concerns the possibilities of
                    parallelization. I noticed that if I run a
                    single projection, all 4 cores of my local
                    computer run at almost 100%. Is
                    there already some intrinsic
                    parallelization step when using the ffda,
                    and what further parallelization steps are
                    possible to speed up the simulation for one
                    projection? If it helps, I use a voxelized
                    phantom, a cone-beam setup, an X-ray spectrum
                    histogram, an integrating detector and 1000
                    photons.<br>
</p>
                  <p>My second question is how to save several
                    .mha images in the output folder when
                    simulating all 1201 projections. I am still
                    using the ffda actor, and with the
                    command <i>"/gate/actor/ffda/primaryFilename
                      output/primary.mha"</i> the primary
                    image gets overwritten for each
                    projection. How can I save a different
                    primary image for each projection?<br>
</p>
<p>Any help is appreciated!<br>
</p>
<p>Nico<br>
</p>
<pre class="m_-5837947688201281229m_4700849550904366916moz-signature" cols="72">--
B.Sc. Nicolas Triltsch
Masterand
Technische Universität München
Physik-Department
Lehrstuhl für Biomedizinische Physik E17
James-Franck-Straße 1
85748 Garching b. München
Tel: <a moz-do-not-send="true" href="tel:+49%2089%2028912591" value="+498928912591" target="_blank">+49 89 289 12591</a>
<a moz-do-not-send="true" class="m_-5837947688201281229m_4700849550904366916moz-txt-link-abbreviated" href="mailto:nicolas.triltsch@tum.de" target="_blank">nicolas.triltsch@tum.de</a>
<a moz-do-not-send="true" class="m_-5837947688201281229m_4700849550904366916moz-txt-link-abbreviated" href="http://www.e17.ph.tum.de" target="_blank">www.e17.ph.tum.de</a></pre>
</div>
<br>
              _______________________________________________<br>
              Gate-users mailing list<br>
              <a moz-do-not-send="true"
                href="mailto:Gate-users@lists.opengatecollaboration.org"
                target="_blank">Gate-users@lists.opengatecollaboration.org</a><br>
              <a moz-do-not-send="true"
                href="http://lists.opengatecollaboration.org/mailman/listinfo/gate-users"
                rel="noreferrer" target="_blank">http://lists.opengatecollaboration.org/mailman/listinfo/gate-users</a><br>
</blockquote>
</div>
<br>
</div>
</blockquote>
<br>
<pre class="m_-5837947688201281229moz-signature" cols="72">--
B.Sc. Nicolas Triltsch
Masterand
Technische Universität München
Physik-Department
Lehrstuhl für Biomedizinische Physik E17
James-Franck-Straße 1
85748 Garching b. München
Tel: <a moz-do-not-send="true" href="tel:+49%2089%2028912591" value="+498928912591" target="_blank">+49 89 289 12591</a>
<a moz-do-not-send="true" class="m_-5837947688201281229moz-txt-link-abbreviated" href="mailto:nicolas.triltsch@tum.de" target="_blank">nicolas.triltsch@tum.de</a>
<a moz-do-not-send="true" class="m_-5837947688201281229moz-txt-link-abbreviated" href="http://www.e17.ph.tum.de" target="_blank">www.e17.ph.tum.de</a></pre>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</blockquote>
<br>
<pre class="moz-signature" cols="72">--
B.Sc. Nicolas Triltsch
Masterand
Technische Universität München
Physik-Department
Lehrstuhl für Biomedizinische Physik E17
James-Franck-Straße 1
85748 Garching b. München
Tel: +49 89 289 12591
<a class="moz-txt-link-abbreviated" href="mailto:nicolas.triltsch@tum.de">nicolas.triltsch@tum.de</a>
<a class="moz-txt-link-abbreviated" href="http://www.e17.ph.tum.de">www.e17.ph.tum.de</a></pre>
</body>
</html>