Carlos E de Almeida* and Eder Aguirre
Received: August 07, 2024; Published: August 16, 2024
*Corresponding author: Carlos E de Almeida, Radiological Sciences Laboratory, State University of Rio de Janeiro, Brazil, Email ID: cea71@yahoo.com.br
DOI: 10.26717/BJSTR.2024.58.009124
The Monte Carlo code uses the probability functions of the different interaction processes that may occur between the radiation and the medium, characterized by its structure and composition, obtaining expected values of macroscopic quantities as an approximate solution through simulation. The method combines statistics and computing to reproduce the behavior of real systems using mathematical models. Calculation time can be a constraint, but modern and now more easily accessible computers with multiple processors, forming powerful clusters, have reduced this time and made the approach more viable. The basis of the Monte Carlo method is to sample, using random numbers, the probability density functions (PDF) that describe the behavior of the stochastic variables involved in particle transport. These variables allow the trajectory of each particle to be constructed, simulating each one individually, together with its secondary products and the physical processes that occur along its path; this is called a history. A history is essentially a sequence of free flights and interaction events in which the particle changes its direction of motion, may lose energy, and may produce secondary particles. Because of its stochastic nature, many histories must be simulated to obtain consistent results.
There are five basic modules of the Monte Carlo method that must be implemented, as described by Andreo, et al. [1]; a minimal sketch combining them is given after the list.
1. First, use a random number generator that produces numbers uniformly distributed between 0 and 1.
2. Second, consider all PDFs of the physical processes included in the simulation.
3. Third, use PDF sampling methods and thus obtain the value of the stochastic variable in question. The most common are the direct, rejection and mixed methods.
4. Fourth, store the values of the variables of interest during the simulation, such as, for example, the absorbed dose.
5. Fifth, estimate the statistical uncertainties associated with the quantities determined during the simulation.
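To make the five modules concrete, the following is a minimal Python sketch applied to a toy problem: sampling the free path of a photon in a medium with a hypothetical attenuation coefficient. The problem, the parameter values and the names are illustrative assumptions, not taken from this paper.

```python
import math
import random

MU = 0.2                 # hypothetical interaction probability per cm
N_HISTORIES = 100_000

random.seed(12345)       # module 1: uniform pseudorandom numbers in [0, 1)

total, total_sq = 0.0, 0.0
for _ in range(N_HISTORIES):
    xi = random.random()                  # module 1: random number
    # modules 2 and 3: the PDF p(s) = MU*exp(-MU*s), sampled by the direct method
    s = -math.log(1.0 - xi) / MU
    total += s                            # module 4: accumulate the score
    total_sq += s * s

mean = total / N_HISTORIES
# module 5: statistical uncertainty of the estimated mean
var_mean = (total_sq / N_HISTORIES - mean ** 2) / N_HISTORIES
print(f"mean free path = {mean:.3f} +/- {math.sqrt(var_mean):.3f} cm (exact: {1.0 / MU})")
```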
It is important to examine in detail what these components mean and to review two important laws that form the mathematical basis of the Monte Carlo method.
Pseudorandom Number Generator
A vital part of the MC method is the so-called pseudorandom number generator: the numbers are produced by deterministic algorithms, but they must behave as if they were stochastic for the simulations to give consistent results. Normally, a pseudorandom number generator starts from a number called the seed, and from there a series of numbers is produced whose sequence begins to repeat itself after a certain period. If the same seed is used, the same sequence of pseudorandom numbers is always obtained. The sequence produced must be uniformly distributed over a certain interval (typically between 0 and 1), with a sufficiently long period and little correlation between the numbers. The period of a generator depends mainly on the size of the binary words used by the computer's processor to perform its operations, generally 2^b, where b is the word size in bits. Initially, when computers used 32-bit processors, the generator periods (~2^32) could limit certain simulations. Now that processors are 64 bits, the period of the generators (~2^64) is long enough in most situations.
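As an illustration of seeding and periodicity, here is a deliberately small linear congruential generator in Python; the parameters are chosen only to keep the example readable and are not those of any production generator.

```python
def lcg(seed, m=2**16, a=75, c=74):
    """Toy linear congruential generator; its period is at most m = 2**16."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m              # map the integer state to [0, 1)

gen = lcg(seed=1)
print([round(next(gen), 4) for _ in range(5)])

# Restarting with the same seed reproduces exactly the same sequence:
gen2 = lcg(seed=1)
print([round(next(gen2), 4) for _ in range(5)])
```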
Probability Density Functions (PDF)
A PDF, p(x), is a measure of the probability of observing the stochastic variable in each interval (x, x+dx); for example, the distance traveled by a particle in a material before it interacts. The domain of the function p(x) is, in general, [xmin, xmax], and the function must satisfy two properties: p(x) ≥ 0, and its integral over [xmin, xmax] must equal 1 (the normalization condition).
From the PDF, the cumulative probability function or, simply, probability function can be obtained according to the relationship c(x) = ∫_{xmin}^{x} p(x′) dx′, which represents the probability of the stochastic variable taking any value in the interval (xmin, x), given the normalization condition c(xmax) = 1.
Sampling Methods
The sampling methods are of three types: the direct method, the rejection method and the mixed method.
Direct Method: Initially, the PDF must be converted into its corresponding cumulative probability function, and a pseudorandom number ε is then generated that represents the probability of observing a corresponding value of the stochastic variable. From there, the equation ε = c(x) is solved to find the value of x, that is, x = c−1(ε).
Since cumulative functions are non-decreasing, they can be inverted, whether analytically or numerically, and the inverse function provides a practical mechanism for generating values of the stochastic variable from random values of ε. Figure 1 shows the result of the direct method applied to the case of a normal distribution with domain [a,b].
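A minimal sketch of the direct method in Python, for the assumed toy PDF p(x) = 2x on [0, 1]; its cumulative function is c(x) = x², so x = c−1(ε) = √ε.

```python
import random

def sample_direct():
    eps = random.random()    # uniform random number in [0, 1)
    return eps ** 0.5        # invert the cumulative function c(x) = x**2

samples = [sample_direct() for _ in range(100_000)]
print(sum(samples) / len(samples))   # should approach the expected value 2/3
```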
Rejection Method: The first step is to obtain a new distribution function by dividing the probability distribution function by its maximum value: p(x)/p(xmax).
As a second step, a random number, r1, uniformly distributed in the interval [0,1], is chosen to obtain a value of x uniformly distributed in the interval [a,b] of the probability distribution function. Finally, another random number, r2, is chosen to perform the following evaluation: if r2 < p(x)/p(xmax) (region under p(x)/p(xmax) in Figure 2), then x is accepted; if not, x is rejected (shaded region above p(x)/p(xmax) in Figure 2) and the procedure returns to step 2. Clearly, this method is limited to finite values of a and b.
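A sketch of the rejection method in Python, for an assumed example: a Gaussian shape truncated to a finite interval [a, b], as the method requires.

```python
import math
import random

A, B = -2.0, 2.0

def p(x):
    """Unnormalized Gaussian shape; any bounded PDF works the same way."""
    return math.exp(-0.5 * x * x)

P_MAX = p(0.0)   # maximum of p(x) on [A, B]

def sample_rejection():
    while True:
        x = A + (B - A) * random.random()   # step 2: x uniform in [a, b]
        r2 = random.random()                # step 3: r2 uniform in [0, 1]
        if r2 < p(x) / P_MAX:               # accept if under the scaled curve
            return x                        # otherwise return to step 2

samples = [sample_rejection() for _ in range(10_000)]
print(sum(samples) / len(samples))   # should be close to 0 by symmetry
```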
Mixed Method: This method results from a combination of the two methods described above, expressing the PDF as a product p(x) = f(x)·g(x),
where f(x) is invertible and g(x) is not.
Both functions must then be normalized to obtain f′ and g′. The direct method is applied to f′ to obtain a value x′, and the rejection method is then applied to g′: a random number r, uniformly distributed in the interval [0,1], is generated and, if r ≤ g′(x′), the value x′ is accepted; if not, a new value of x′ must be generated.
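A sketch of the mixed method in Python for an assumed factorization p(x) ∝ f(x)·g(x) on [0, 1], with f(x) = 2x sampled directly and g(x) = (1 + cos(πx))/2, already scaled so that its maximum is 1, treated by rejection.

```python
import math
import random

def sample_f_direct():
    return random.random() ** 0.5        # direct method: inverse of c_f(x) = x**2

def g(x):
    return 0.5 * (1.0 + math.cos(math.pi * x))   # bounded, maximum g(0) = 1

def sample_mixed():
    while True:
        x = sample_f_direct()            # sample x' from the normalized f'
        if random.random() <= g(x):      # rejection step on g'
            return x                     # otherwise generate a new x'

samples = [sample_mixed() for _ in range(10_000)]
print(sum(samples) / len(samples))
```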
Estimation of Statistical Uncertainties in MC
In principle, the uncertainty associated with the mean value of a stochastic variable determined by MC is inversely proportional to N^0.5, N being the number of simulated histories; consequently, to reduce the overall uncertainty by half, the number of histories must be increased four times. To estimate the uncertainty for N histories, the conventional history-by-history approach is used. Initially, the sums of the values x_i and x_i^2 are accumulated, where the subindex i denotes the i-th simulated history. The two sums are then divided by N, and the estimated variance of the mean, s_x̄^2, is calculated from the following expression: s_x̄^2 = (1/(N−1)) [ (1/N) Σ x_i^2 − ((1/N) Σ x_i)^2 ].
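A short Python rendering of this history-by-history estimate; the per-history score used here is an arbitrary stand-in for a quantity such as the absorbed dose.

```python
import math
import random

N = 100_000
s, s2 = 0.0, 0.0
for _ in range(N):
    x_i = random.random()    # stand-in for the quantity scored in history i
    s += x_i
    s2 += x_i * x_i

mean = s / N
var_mean = (s2 / N - mean * mean) / (N - 1)   # estimated variance of the mean
print(f"{mean:.5f} +/- {math.sqrt(var_mean):.5f}")
```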
The Law of Large Numbers
The law of large numbers refers to the behavior of the sum of numerous random variables. If n numbers are chosen at random with a uniform PDF in the range [a,b], and a function f is evaluated at each of these values, the law determines that the average of these function values converges to the expected value of the function, according to James (1980) [2]. This statement can be written as: (1/n) Σ_{i=1}^{n} f(x_i) ≈ (1/(b−a)) ∫_a^b f(x) dx, for n large.
Under certain conditions, the estimator on the left-hand side of the expression above converges to the exact value of the integral as n tends to infinity. Since the left-hand side is just a Monte Carlo estimate of the integral on the right-hand side, the law of large numbers can be interpreted as stating that the Monte Carlo estimate of an integral is, under "certain conditions", a consistent estimate, i.e., it converges to the correct answer as the random sample size becomes large enough.
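A small numerical illustration of this convergence, for an assumed integral with a known answer, ∫_0^π sin(x) dx = 2:

```python
import math
import random

A, B = 0.0, math.pi
for n in (100, 10_000, 1_000_000):
    acc = sum(math.sin(A + (B - A) * random.random()) for _ in range(n))
    print(n, (B - A) * acc / n)   # estimates approach the exact value 2
```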
Central Limit Theorem
This important theorem determines that the sum of many independent random variables tends to follow a Gaussian distribution. It does not matter how the individual random variables are distributed, provided their expected values (or mathematical expectations, or means) and variances are finite and the value of n is large enough, according to James [2]. Whereas the law of large numbers establishes that the Monte Carlo estimate of an integral is correct for "infinite" n, the central limit theorem also indicates how that estimate is distributed: the Gaussian distribution is completely defined by its expected value and its variance s².
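The theorem can be illustrated numerically: averages of n uniform random numbers (a distinctly non-Gaussian distribution) cluster into a Gaussian around 0.5 with standard deviation √(1/12n). The sample sizes below are arbitrary.

```python
import random
import statistics

n, trials = 50, 20_000
means = [sum(random.random() for _ in range(n)) / n for _ in range(trials)]
print(statistics.mean(means))    # close to 0.5
print(statistics.stdev(means))   # close to (1 / (12 * n)) ** 0.5 ≈ 0.0408
```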
The most frequently used MC codes in radiation transport, and their associated fundamental references for further reading, are:
a) ETRAN (Electron TRANsport), Seltzer [3]
b) EGSnrc (Electron Gamma Shower), Kawrakow and Rogers [4]
c) GEANT4 (GEometry ANd Tracking), Agostinelli, et al. [5]
d) MCNP (Monte Carlo N-Particle), Brown, et al. [6]
e) PENELOPE (PENetration and Energy LOss of Positrons and Electrons), Salvat, et al. [7].
ETRAN (Electron TRANsport)
It was developed at the National Bureau of Standards by the legendary Seltzer [3], and its initial application was the transport of electrons in the energy range from 1 keV to 100 GeV. The code is based on Molière's theory for describing multiple scattering, in user-defined geometries of modest complexity. As the first code created to simulate the transport of electrons and photons in matter, it was widely used by several research groups. Examples of such work can be found in Borasi [8], where different aspects of the interaction of photon and electron beams in different media are presented and the results are compared with experimental data.
EGSnrc (Electron Gamma Shower)
A general-purpose Monte Carlo code developed by Kawrakow and Rogers [4] for simulating the coupled transport of electrons and photons in arbitrary geometries, for particle energies in the range 1 keV to 10 GeV. It incorporates significant improvements to the condensed history technique for the simulation of charged particle transport, as well as substantially improved low-energy cross sections. The code has a multiplatform version with several user codes, such as DOSRZnrc, which scores the dose in a generalized cylindrical geometry, FLURZnrc, which scores particle fluence in the same geometry, and EGSPP, a geometry package for implementing nearly arbitrary geometries. There is a wide variety of works that use this code, particularly in medical physics, as described by Rogers [9], where it is widely used to model different beams for clinical use in radiotherapy, including brachytherapy sources. Campos and de Almeida [10] studied different dosimetric parameters of the BEBIG 60Co High Dose Rate (HDR) source using the AAPM (2004) TG-43U1 formalism necessary for the treatment planning of patients with this type of source.
GEANT4 (GEometry ANd Tracking)
Developed as the result of a worldwide collaboration of physicists and software engineers, Geant4 combines a set of tools to simulate the interaction of particles with matter (Agostinelli, et al. [5]). It has a full range of functionalities, including tracking, geometry and physical models, among others. The physical processes available in the code include electromagnetic, hadronic and optical processes, as well as a large set of particles, materials and long-lived elements, covering energies from 250 eV up to the order of TeV. It was structured to handle complex geometries and to allow easy adaptation to different applications by exploiting software engineering and object-oriented technology; it is implemented in the C++ programming language and is regularly used in particle physics, nuclear physics, accelerator design, space engineering and medical physics. The article by Archambault, et al. (2003) exemplifies a set of applications of the Geant4 code in different areas, such as medical physics (radiotherapy and brachytherapy), studies at the cellular level, by Bernal, et al. [11], and radiodiagnosis, among others.
MCNP (Monte Carlo N-Particle)
The MC code implemented in MCNP was developed by scientists at the Los Alamos National Laboratory and is described by Brown, et al. [6]. This software is widely used in projects and research involving uncharged particles, such as neutrons and photons, as well as electrons, tracking these particles over wide energy ranges. The code also allows an arbitrary three-dimensional configuration of materials to be built from geometric cells delimited by first- and second-degree surfaces and fourth-degree elliptical tori. The code offers important standard features that make it very versatile and easy to use, including different source types; geometry and output tally plotters; a wide collection of variance reduction techniques; a flexible tally structure; and an extensive collection of cross-section data. Examples of works using this code in medical physics are described by Kim, et al. [12], who characterized the multileaf collimators (MLC) of an accelerator for dynamic IMRT treatments, and by Ajaj and Ghassal [13], who simulated a specific linear accelerator.
PENELOPE (PENetration and Energy Loss of Positrons and Electrons)
PENELOPE, a code developed by Salvat, et al. [7], is used to perform coupled simulations of photon-electron transport in practically any medium. The code can handle the transport of photons, electrons and positrons from 50 eV to approximately 1 GeV. A mixed simulation strategy can be used for electrons and positrons, in which one part of the charged particle history is condensed and the other is treated in detail; alternatively, the user can configure the code to follow the charged particles step by step, i.e., in a fully detailed simulation. PENELOPE is a well-documented MC code with a relatively simple structure. The parameters C1 and C2 control the mean free path between hard, or catastrophic, interactions in the mixed simulation algorithm, while Wcr and Wcc define the threshold energies for the creation of secondary particles through radiative emission and collision events, respectively. These parameters determine the accuracy and speed of the simulation: small values of C1 and C2 (of the order of 0.05, the value generally recommended for each) guarantee good precision, while larger values give faster simulations. The Wcr and Wcc parameters mainly influence the simulated energy distributions, with faster simulations the higher their values, although very large values may distort the dose distribution.
Material systems with a defined geometry are designed using the PENGEOM package, with some examples available to the user. Furthermore, PENELOPE has a database of 280 pre-defined materials, each with its own identification number, and a cross-section file can be created for any material medium whose chemical composition is known. The user must provide one of the two "user codes", or a master program, for their problem. The user code controls the evolution of the histories (tracks) within the geometry and scores the relevant quantities. The PENELOPE distribution package includes two user codes, PENCYL and PENMAIN: the first simulates particle transport in cylindrical media and the second in quadric geometries. Both have similar structures and formats, producing output files with generic information, such as the number of primary particles simulated, the speed of the simulation in primary particles per second, the average number of secondary particles generated, the average energy deposited, etc. The statistical uncertainties of all quantities and distributions evaluated are reported at the 3σ level.
The main PENELOPE subroutine packages are:
Penelope.f: for simulating the coupled transport of photons and electrons in a homogeneous medium.
Pengeom.f: for tracking particles in any geometry made up of different homogeneous bodies limited by quadric surfaces.
Penvared.f: contains the variance reduction subroutines.
Timer.f: consists of timing subroutines based on standard Fortran 95 intrinsic procedures.
Material.f: the main program for generating files with the effective interaction cross sections of the materials considered in the simulation.
In the penelope.f subroutine package, the history of each particle is simulated as a sequence of segments or steps, called "free flights" or "jumps". At the end of each segment, the particle interacts with the medium, where it can lose energy, change direction and create secondary particles. The KNOCK subroutine is responsible for explicitly simulating the collision event at the end of each step. The coupled electron-photon transport is simulated by successively calling the following subroutines found in the penelope.f file (a schematic rendering of this loop is sketched after the list):
1. Subroutine CLEANS: initializes the secondary stack, where the initial states of the secondary particles are stored.
2. Subroutine START: forces the next interaction event to be a soft event before starting a new history or sub-history, whether of a primary or a secondary particle, respectively, and whenever a particle crosses an interface. The START subroutine is strictly necessary for electrons and positrons only.
3. Subroutine JUMP: calculates the step length of the particle up to the next catastrophic event.
4. Subroutine KNOCK: calculates the new energy and direction of motion, and stores the initial states of any secondary particles generated.
5. Subroutine SECPAR: establishes the initial state of a secondary particle and removes it from the secondary stack.
6. Subroutine STORES: stores a particle in the secondary particle stack.
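The following is a schematic, self-contained Python rendering of the control flow described in this list. The function names echo the PENELOPE subroutines, but the physics is a deliberately trivial stand-in (exponential jumps and random energy losses); it illustrates the loop structure only, not the real Fortran interface.

```python
import math
import random

E_ABS = 0.05   # absorption threshold energy (arbitrary units)
MU = 1.0       # hypothetical interaction probability per unit length

def cleans(stack):            # 1. empty the secondary stack
    stack.clear()

def start(particle):          # 2. prepare the transport of a (new) particle
    pass                      # the real routine initializes soft-event bookkeeping

def jump(particle):           # 3. free flight to the next catastrophic event
    particle["z"] += -math.log(1.0 - random.random()) / MU

def knock(particle, stack):   # 4. sample the interaction at the end of the step
    loss = particle["e"] * random.random()
    particle["e"] -= loss
    if loss > E_ABS and random.random() < 0.3:
        stack.append({"e": loss, "z": particle["z"]})   # real code: STORES

def secpar(stack):            # 5. pop the next secondary particle, if any
    return stack.pop() if stack else None

def simulate_history(e0):
    stack = []
    cleans(stack)
    particle = {"e": e0, "z": 0.0}
    while particle is not None:
        start(particle)
        while particle["e"] > E_ABS:
            jump(particle)
            knock(particle, stack)
        particle = secpar(stack)

for _ in range(1000):
    simulate_history(e0=1.0)
```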
Simulations using the MC method have been used in different areas of science, and in medical physics in particular, during the last decades (Rogers [9]). Radiotherapy uses ionizing radiation to treat diseases such as cancer, delivering a lethal dose of radiation to tumor tissues while preserving normal tissues as much as possible. From a physical point of view, the main task in planning radiotherapy treatments is to determine the distribution of radiation doses in the desired target volume, and Monte Carlo is the most accurate method so far for determining these distributions (Andreo [14]). For example, electron beams are commonly used in radiotherapy for cases of superficial tumors, or in combination with photon beams for a better distribution in depth. For these beams in particular, because their interactions as charged particles are dominated by processes different from those of photons, dose distributions in inhomogeneous media are more difficult to estimate owing to variations in the density of the material; for tissues such as lung and bone, for example, electron penetration is a function of the density of atoms in the material (Carron [15]). International documents recommend an approximate way of modifying dose distributions using the coefficient of equivalent thickness (CET), defined, for a parallel beam, as the ratio of the thickness of water to the thickness of the inhomogeneity that produces the same transmission of absorbed dose rate, according to ICRU (1984) [16]. The agreement with reality is, however, very poor, and little progress has been made in its clinical implementation. Furthermore, there are cases in which cerrobend or lead blocks are used to define fields with special shapes, and cases in which the patient has some type of prosthesis in the treatment region, where a significant level of secondary scattering may modify the dose distribution. Because these situations need to be handled in a physically correct way, the Monte Carlo method can be used to transport the radiation for such clinical situations, validating each beam beforehand through direct comparison with experimental measurements, so that it can then be used in different cases of heterogeneities.
Several additional applications can be mentioned, such as the dosimetric characterization of 125I sources, reported by Rodrigues, et al. [17]; the determination of correction factors for the attenuation of the wall of cylindrical chambers used in high dose rate brachytherapy, according to Ferreira, et al. [18]; the determination of perturbation factors for cylindrical and parallel-plate chambers in gamma radiation beams in a homogeneous medium, according to Ferreira [19]; and the spectra and depth dose deposition in a PMMA breast phantom, as reported by David, et al. [20].
In the examples described below, the PENELOPE code version 2008 was used for electron beam simulations. The initial simulations were carried out using beams with energies of 6, 9 and 12 MeV, validated against experimentally measured percentage depth-dose (PDP) curves, as described by Aguirre, et al. [21]. The values of the intrinsic transport parameters of the PENELOPE code, such as the particle absorption threshold energy and the cutoff energies for inelastic and bremsstrahlung collisions, used in all simulations were: Eabs = 10 keV, C1 = C2 = 0.1 and Wcc = Wcr = 100 keV.
Three examples with different heterogeneities were simulated:
1. Water-air,
2. Hemi-blocked field with lead and
3. A case of breast cancer treatment planning.
All geometries were defined using quadric surfaces from the PENELOPE package. In the second case, the hemi-blocked field, dose distributions for a 25 MeV polyenergetic beam are reported in the literature by Pohlit [22], and it is therefore of interest to simulate that beam for comparison with the respective reference.
Example 1: Water-Air Interface
The hypothetical case presented consists of an air cavity in a homogeneous medium, located 2 cm deep, representing the typical position of the larynx, irradiated with an electron beam of nominal energy 6 MeV, the most appropriate for this anatomical situation. What stands out in the results is the displacement of the isodose curves and of the PDP by almost the thickness of the air cavity (2 cm), as shown in Figure 3, due to the small amount of interaction, and corresponding dose delivery, in the air region. A decrease in dose can also be seen on the sides of the water-air interface, which could lead to underdosage of the treatment in this region. The comparison between the two types of beams, perpendicular and divergent, indicates a displacement of most of the PDP of the beam, with an intersection at approximately 20% of the dose beyond the region of maximum dose. This is due to the low spectral energies being absorbed in the region closest to the surface (Figure 4).
Example 2: Hemi-Blocked Field
Simulations carried out with a field hemi-blocked with lead are presented, using perpendicular beams and a lead thickness suitable for each energy, such that below the block no primary electrons deposit dose; thus, the "hot" region is purely a product of the scattering of electrons at different angles at the edge of the lead. As the energy of the primary beam increases, this effect becomes more pronounced, reaching 120% of the relative dose for an energy of 25 MeV, as shown in Figure 5, considering that all beams were normalized at the maximum off-axis distance of 2.5 cm. In both cases that hot region exists; however, there is a difference in the simulations, where the hot region lies to the side of the edge and not below it, as presented in the literature.
Example 3: Breast Cancer Treatment
A clinical situation of a breast cancer treatment was simulated using two nominal electron energies, 6 MeV and 9 MeV, and a heterogeneous medium composed of four layers: water (0.5 cm), adipose tissue (1.5 cm), bone (0.5 cm) and lung (2.5 cm). Simulations carried out for each beam in a homogeneous water phantom served as the standard for comparing the PDP. The results are similar for the two simulated energies, and only the results for the 9 MeV energy and a field size of 6 x 6 cm2 are presented as an example, in Figures 6 & 7. In this last, more complex example, spectra validated in previous simulations were used and, as a result, the isodose curves presented an unusual shape: the electrons interact with the bone, generating backscattering caused by the change in density, which influences the dose value and can be better appreciated in the PDP curves. However, after crossing the lung region, less interaction is observed and, therefore, a greater range of dose deposition, generating a "tail" that becomes more pronounced for the beam with a nominal energy of 9 MeV. The influence of field size on the PDP and isodose curves is of little relevance; as the field size increases, the isodose pattern approaches the one already known in the literature, because of the interaction with water in this region.
In radiotherapy, accuracy in the dose calculation performed by the TPS is of vital importance as part of the quality assurance program. This dose calculation is an important component in optimizing the therapeutic gain, that is, maximizing the dose to the tumor while sparing dose to healthy tissue in patients treated with radiation. Although the clinical benefit of more accurate dose distributions, and how they impact tumor recurrence, has not yet been quantified or demonstrated, there is evidence that dose differences of the order of 7% are clinically detectable, according to Dutreix [23]. Many studies have demonstrated that a 5% dose variation can result in changes of 10 to 20% in the Tumor Control Probability (TCP), or of up to 20 or 30% in the Normal Tissue Complication Probability (NTCP), if the dose prescription falls along the steepest region of the dose-effect curves, according to Orton [24], Stewart [25] and Goitein [26]. The Monte Carlo method is indeed a very useful tool for calculating doses in different media and is currently the best option when the highest accuracy is sought. The amount of research using this method has increased exponentially in recent decades, signaling that its applications will continue to expand for beams of photons, electrons and other types of particles in medical physics [27-29].