2021 IEEE Symposium on Radiation Measurements and Applications

Poster Session

   
Shortcut: PS-02
Date: Thursday, 27 May, 2021, 10:30 AM
Room: Room 1
Session type: Poster Session

Contents


10:30 AM PS-02-01

Artifact reduction in region-of-interest (ROI) digital tomosynthesis using a deep convolutional neural network (#4)

S. Park1, G. Kim2, H. Cho3

1 Korea Institute of Science and Technology, Brain Science Institute, Seoul, Republic of Korea
2 Korea Institute of Radiological & Medical Sciences, Seoul, Republic of Korea
3 Yonsei University, Department of Radiation Convergence Engineering, Wonju, Republic of Korea

Digital tomosynthesis (DTS) is a limited-angle tomographic technique that is widely used in both medical and industrial x-ray imaging applications. It provides some of the tomographic benefits of computed tomography (CT) at reduced imaging dose and time. However, conventional DTS based on filtered-backprojection (FBP) reconstruction requires a full field-of-view (FOV) scan and relatively dense projections to reconstruct high-quality DTS images, which results in high doses for medical imaging purposes. To overcome these difficulties, we investigated region-of-interest (ROI) DTS reconstruction, in which the x-ray beam span covers only a small ROI containing a target area. In some medical diagnostic situations, for example chest, dental, and cardiac imaging, physicians are interested in a local area of the examined structure containing suspicious lesions. Figure 1 shows a schematic illustration of the ROI-DTS scan geometry. Here, an x-ray tube and a small-area flat-panel detector move together in an arc around the rotational center during projection data acquisition, covering only a small target ROI; this yields imaging benefits such as reduced scatter, system cost, and dose. To put this new DTS examination to practical use, an advanced reconstruction algorithm is needed because ROI-DTS measures incomplete (i.e., truncated, limited-angle) projection data, for which conventional FBP-based algorithms fail to produce clinically feasible images. Several techniques have been proposed to circumvent the interior and limited-angle tomography problem, including sinogram extension and compressed-sensing (CS)-aided methods. However, most of these techniques are typically unstable when the data are both completely interior-truncated and limited in angle.
In this study, we propose an artifact reduction method for FBP-based ROI-DTS using U-Net, a deep convolutional neural network (DCNN) originally proposed for low-dose and sparse-view CT. Figure 2 shows the U-Net architecture used in this study for image improvement in ROI-DTS. We implemented the DCNN method to extract bright-band and limited-angle artifacts by mapping the ROI-DTS images to the ROI-CT images. Figure 3 shows the FBP-reconstructed ROI-CT image, the FBP- and CS-reconstructed ROI-DTS images, and the proposed ROI-DTS images for the XCAT phantom. Using the proposed DCNN method, we successfully reconstructed ROI-DTS images of substantially higher accuracy without truncation artifacts, preserving image homogeneity, edge sharpness, and in-plane spatial resolution while reducing imaging dose compared to typical full-FOV DTS images.
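
The residual-learning setup described above can be sketched numerically. This is a hypothetical illustration, not the authors' code: the network's training target is the artifact image (contaminated input minus clean reference), so subtracting a perfect artifact prediction recovers the clean slice. The simulated bright-band shape is invented.

```python
import numpy as np

def simulate_bright_band(clean, band_width=4, amplitude=0.5):
    """Toy model: add a bright band around the truncated ROI border."""
    contaminated = clean.copy()
    contaminated[:band_width, :] += amplitude   # top band
    contaminated[-band_width:, :] += amplitude  # bottom band
    contaminated[:, :band_width] += amplitude   # left band
    contaminated[:, -band_width:] += amplitude  # right band
    return contaminated

clean = np.random.default_rng(0).random((64, 64))  # stands in for the ROI-CT slice
dts = simulate_bright_band(clean)                  # stands in for the ROI-DTS slice
artifact_label = dts - clean                       # training target for the DCNN

# At inference, a perfectly trained network would predict artifact_label:
corrected = dts - artifact_label
print(np.allclose(corrected, clean))  # True
```

In practice the U-Net's artifact prediction is only approximate, but this residual formulation is why the corrected image inherits the input's homogeneity and resolution.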

Keywords: Low-dose X-ray imaging, Region-of-interest, Deep learning approach
10:30 AM PS-02-02

Partially sampled digital tomosynthesis reconstruction with a multislit collimator using deep learning technique (#5)

S. Park1, H. Cho2

1 Korea Institute of Science and Technology, Brain Science Institute, Seoul, Republic of Korea
2 Yonsei University, Department of Radiation Convergence Engineering, Wonju, Republic of Korea

Digital tomosynthesis (DTS) is a well-established multiplanar imaging technique that uses limited angular scanning to produce cross-sectional images of the scanned object with a moderate cross-plane resolution. DTS images are typically reconstructed using the computationally cheap analytic filtered-backprojection (FBP) algorithm. This popular technique has been used in a variety of clinical applications such as chest imaging, mammographic imaging, and dental imaging because it provides tomographic benefits at reduced radiation dose and scan time. DTS reconstruction at low radiation dose is an important field of research, and several methods for dose reduction have been studied, including sparse-view DTS, region-of-interest (ROI) DTS, and low-dose DTS. In a previous study, we investigated low-dose DTS reconstruction with partial sampling using a multislit collimation technique, in which a multislit collimator placed between the x-ray tube and the patient oscillates during projection data acquisition, partially blocking the x-ray beam to the patient and thereby reducing radiation dose. Figure 1 shows (a) the schematic illustration of the proposed DTS scan with a multislit collimator and (b) three multislit collimator layouts designed for the simulation: C(2/2), C(3/3), and C(4/4). Here C(n/n) denotes a collimator layout that blocks the x-ray beam over n detector pixels vertically with an n-pixel interval. Partially sampled DTS images reconstructed using the analytic FBP algorithm usually suffer from severe bright-band artifacts around the multislit edges of the collimator due to incomplete spatial sampling. Thus, we revisited the FBP algorithm with a new prior sinogram interpolation method in an attempt to obtain reasonable image quality in partially sampled DTS reconstruction.
Figure 2 shows (a) a simplified diagram of the prior sinogram interpolation based DTS reconstruction process (recovered) and (b) the U-Net architecture used in this study for partially sampled DTS reconstruction (proposed). Figure 3 shows the FBP-reconstructed CT images using the fully sampled projections, the FBP-reconstructed DTS images using the original and the recovered projections for the C(4/4) collimator, and the proposed DTS images using the XCAT phantom. As shown in Fig. 3, the reconstruction quality of the proposed DTS images was close to that of the CT images. Furthermore, the proposed DTS reconstruction more closely recovered the phantom structure in under-sampled situations (i.e., truncated and partially sampled imaging) compared to the FBP-based reconstruction. In this study, we investigated an effective method for using a deep learning scheme with a convolutional neural network to reduce bright-band artifacts in partially sampled DTS.
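
The C(n/n) sampling pattern and the sinogram-interpolation idea can be sketched as follows. This is an illustrative toy (not the authors' code): detector rows blocked by the collimator are re-estimated by 1-D linear interpolation from the open rows; the smooth toy projection and the choice to keep edge rows open are assumptions for simplicity.

```python
import numpy as np

def cnn_mask(n_rows, n):
    """C(n/n): block n detector rows vertically with an n-row interval."""
    mask = np.ones(n_rows, dtype=bool)
    for start in range(n, n_rows - n, 2 * n):
        mask[start:start + n] = False  # False = blocked by the collimator
    return mask

rows = np.arange(64)
projection = np.sin(rows / 7.0) + 2.0      # smooth toy projection column
mask = cnn_mask(64, 4)                     # C(4/4) layout
sampled = np.where(mask, projection, 0.0)  # blocked rows record nothing

# Recover the blocked rows from the open ones:
recovered = sampled.copy()
recovered[~mask] = np.interp(rows[~mask], rows[mask], projection[mask])

print(np.max(np.abs(recovered - projection)))  # small residual on smooth data
```

Real projections contain sharp structure across the blocked bands, which is why simple interpolation leaves residual bright-band artifacts that the U-Net stage then suppresses.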

Keywords: Partially sampled imaging, Deep learning technique, Digital tomosynthesis
10:30 AM PS-02-03

Development and Implementation of a Machine Learning Algorithm for Pulse Shape Discrimination (#7)

R. Garnett1, S. H. Byun1

1 McMaster University, Physics, Hamilton, Ontario, Canada

In any application where a radiation detector is utilized in a mixed radiation field, there is an inherent issue of separating the signal from the background. This research focuses on the separation of neutron and gamma-ray events in a liquid scintillator. The standard solution to this issue leverages the fact that the secondary particles generated in neutron and photon interactions are different; the former being nuclear recoils, typically protons in organic scintillators, and the latter being electrons or positrons in the case of pair production. The charge deposition characteristics of these secondary particles provide the basis for Pulse Shape Discrimination (PSD) as a means of event classification.

The work presented focuses on the development of a machine learning algorithm for the problem of pulse shape discrimination. The signals generated from gamma-ray and neutron interactions inside the liquid scintillator used comprise three characteristic scintillation decay times: 3.16 ns, 32.3 ns, and 270 ns. The proportion of scintillation light generated in these three characteristic components is dictated by the stopping power of the particle traversing the scintillator, generating distinguishable signals for photon interactions relative to neutron interactions. Typically, PSD is performed by defining two different charge-collection windows and comparing the amount of charge collected in each (tail-to-total integration). This work applies current-generation machine vision algorithms to the task of event classification, with the PMT output from the scintillator used as the input to the machine vision algorithm. Current-generation algorithms allow feature extraction to be performed simultaneously at differing spatial extents, which enables the algorithm to extract valuable information about the signal from all three time scales present in the scintillator: 3, 30, and 300 ns. This new application of machine vision algorithms will be coupled with a current-generation digitizer operating with 12 bit precision and sampling at 3.2 GS/s. This opens up the possibility of utilizing information from the rising edge of the signal, which is on the order of a few nanoseconds.
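
The tail-to-total baseline mentioned above can be illustrated numerically. The decay times below are those quoted for the scintillator, but the fast/slow intensity fractions and the tail gate are invented for demonstration only; this is not the authors' analysis code.

```python
import numpy as np

t = np.arange(0, 600, 0.3125)  # ns; 3.2 GS/s sampling, as quoted for the digitizer

def pulse(fractions, taus=(3.16, 32.3, 270.0)):
    """Sum of exponential decays with the three characteristic decay times."""
    return sum(f / tau * np.exp(-t / tau) for f, tau in zip(fractions, taus))

gamma = pulse((0.80, 0.15, 0.05))    # mostly fast component (toy fractions)
neutron = pulse((0.55, 0.25, 0.20))  # larger slow (delayed) light fraction

def tail_to_total(p, tail_start_ns=30.0):
    """Charge-comparison figure of merit: tail integral over total integral."""
    return p[t >= tail_start_ns].sum() / p.sum()

print(tail_to_total(neutron) > tail_to_total(gamma))  # True: neutron tails are heavier
```

A machine vision classifier ingests the full sampled waveform instead of this single scalar, which is how it can also exploit the rising edge and all three decay scales.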

Training of this algorithm was performed in a supervised manner, which requires that labeled sets of data be provided for the algorithm to learn from. Any improperly labeled data diminishes the performance of the algorithm, making correctly labeled data paramount to success. Performing this labeling for neutron and gamma-ray events in the scintillator is quite difficult due to the complications mentioned above for event classification. The solution used in this project is to utilize an isotopic neutron source, PuBe. The PuBe source produces prompt gamma rays, with energies greater than 4 MeV, in conjunction with neutrons in some decays. These prompt gamma rays can serve as an event trigger for detection of neutrons in a secondary detector, providing relative certainty that detected events within an appropriate time window are indeed neutron events and producing a high degree of purity in the training data.
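
The tagging logic can be sketched as a coincidence-window selection. This is an illustrative toy, not the authors' acquisition code: the trigger rate, time-of-flight values, and window bounds are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)
trigger_times = np.sort(rng.uniform(0, 1e6, 200))  # ns, prompt-gamma triggers

# Tagged neutrons arrive tens of ns after their trigger (toy time-of-flight),
# mixed with uncorrelated background events:
neutron_times = trigger_times[:100] + rng.uniform(15, 80, 100)
background_times = rng.uniform(0, 1e6, 300)
events = np.concatenate([neutron_times, background_times])

def label_neutrons(events, triggers, window=(10.0, 100.0)):
    """True where an event falls 10-100 ns after any trigger (toy window)."""
    dt = events[:, None] - triggers[None, :]
    return ((dt > window[0]) & (dt < window[1])).any(axis=1)

labels = label_neutrons(events, trigger_times)
print(labels[:100].all())  # every true neutron falls inside a trigger window
```

Accidental coincidences (background events that happen to fall in a window) set the floor on label purity, so the window should be kept as short as the time-of-flight spread allows.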

Keywords: Machine Learning, Pulse Shape Discrimination, Radiation Detection
10:30 AM PS-02-04

Approaches to Wide Area Sensor Networks for Distributed Radioactivity (#13)

D. Raji1, 2, R. Cooper2, J. Hayward1, T. Joshi2, M. Salathe2

1 University of Tennessee, Nuclear Engineering, Knoxville, Tennessee, United States of America
2 Lawrence Berkeley National Laboratory, Nuclear Science Division / Applied Nuclear Physics, Berkeley, California, United States of America

We present aspects of a theoretical wireless sensor network for mapping radiation distributed across a wide area. Our evaluation is carried out via an original software modelling framework built to assess parameterizations of and algorithms for the proposed network. Realism is increased by deriving the source terms used for the analysis from maps of deposited material concentrations produced by atmospheric simulations that use real-world topographical and wind information for a selected area of interest.

Reconstructions of the source term intensity distribution are created via bicubic interpolation, Gaussian process regression, or deep generative methods using the individual node measurements as input data. The performance for each method or parameterization is compared quantitatively using summed L2-norm loss.
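
One of the three reconstruction methods named above, Gaussian process regression, can be sketched in a few lines. The smooth "deposition" field, kernel, node count, and jitter below are invented stand-ins for the study's atmospheric-simulation source terms, not its actual configuration.

```python
import numpy as np

rng = np.random.default_rng(2)

def field(xy):  # toy ground-truth intensity map: a single deposition hot spot
    return np.exp(-((xy[:, 0] - 0.6) ** 2 + (xy[:, 1] - 0.4) ** 2) / 0.05)

def rbf(a, b, ls=0.15):
    """Squared-exponential kernel between two point sets."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

nodes = rng.uniform(0, 1, (60, 2))  # sensor-node positions
y = field(nodes)                    # node measurements (noise-free toy)

grid = np.stack(np.meshgrid(np.linspace(0, 1, 25), np.linspace(0, 1, 25)),
                axis=-1).reshape(-1, 2)

K = rbf(nodes, nodes) + 1e-6 * np.eye(len(nodes))  # jitter for stability
recon = rbf(grid, nodes) @ np.linalg.solve(K, y)   # GP posterior mean

l2_loss = np.sqrt(((recon - field(grid)) ** 2).sum())      # summed L2-norm loss
baseline = np.sqrt(((y.mean() - field(grid)) ** 2).sum())  # constant-mean reference
print(l2_loss < baseline)  # True: GPR beats the trivial reconstruction
```

The same summed L2-norm loss applied here is what allows the interpolation, GPR, and deep generative reconstructions to be compared on equal footing.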

This analysis steps beyond the basic framework functionality by seeking to optimize the network node placement scheme and to evaluate deep generative models for potential improvements to the network's distributed source term reconstruction. We will present the results of the study on the models developed and tested for each of these two thrusts.

Keywords: Modeling and Simulation, Sensor Network, Radiation Mapping
10:30 AM PS-02-05

Non-destructive interrogation of nuclear waste barrels through muon tomography: A Monte Carlo study based on dual-parameter analysis via GEANT4 simulations (#15)

A. I. Topuz1, 2, M. Kiisk1, A. Giammanco2

1 University of Tartu, Institute of Physics, Tartu, Estonia
2 UCLouvain, Centre for Cosmology, Particle Physics and Phenomenology, Louvain-la-Neuve, Belgium

The structural characterization of sealed or shielded nuclear materials is indispensable in fields including, but not limited to, nuclear waste management, nuclear forensics, and nuclear non-proliferation, and it demands careful transportation, limited interaction, and, under certain circumstances, on-site investigation. Among the promising non-destructive, non-hazardous techniques for interrogating nuclear materials is muon tomography, in which target materials are discriminated through the interplay between atomic number, material density, and material thickness, on the basis of the scattering angle and the absorption during muon propagation through the target volume. In this study, we employ Monte Carlo simulations with the GEANT4 code to demonstrate the capability of muon tomography, based on a dual-parameter analysis, for the examination of nuclear waste barrels. Our current hodoscope setup consists of three top and three bottom plastic scintillators made of polyvinyltoluene with a thickness of 0.4 cm. The composite target is a cylindrical nuclear waste drum with a height of 96 cm and a radius of 29.6 cm, in which the outermost layer is stainless steel with a lateral thickness of 3.2 cm and the filling material is ordinary concrete that encapsulates nuclear materials of dimensions 20x20x20 cm3. By bombarding the drum with a narrow planar muon beam of 1x1 cm2 over a uniform energy interval between 0.1 and 8 GeV, we determine the variation of the average scattering angle together with its standard deviation using a 0.5-GeV bin length, the counts of the scattering angle using a 5-mrad step, and the number of absorption events for five prevalent nuclear materials ranging from cobalt to plutonium.
Via the dual-parameter analysis founded on the scattering angle and the absorption, we show that the presence of nuclear materials in the waste barrels is numerically visible in comparison with a concrete-filled waste drum containing no nuclear material, and that muon tomography is capable of distinguishing these nuclear materials by coupling the scattering-angle information with the number of absorption events in cases where one of these two parameters alone yields strong similarity for certain nuclear materials.
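
The dual-parameter bookkeeping described above can be sketched numerically: mean scattering angle (with standard deviation) per 0.5 GeV energy bin, a scattering-angle histogram in 5 mrad steps, and an absorption count. The 1/E angular scaling and the absorption cut below are invented toys; the real values come from the GEANT4 simulations.

```python
import numpy as np

rng = np.random.default_rng(3)
energy = rng.uniform(0.1, 8.0, 20_000)           # GeV, uniform interval as in the setup
theta = np.abs(rng.normal(0.0, 0.015 / energy))  # rad; toy multiple-scattering width
absorbed = energy < 0.3                          # toy absorption criterion

e_bins = np.arange(0.0, 8.5, 0.5)                # 0.5-GeV bin length
idx = np.digitize(energy, e_bins) - 1
mean_theta = np.array([theta[idx == i].mean() for i in range(len(e_bins) - 1)])
std_theta = np.array([theta[idx == i].std() for i in range(len(e_bins) - 1)])

# Scattering-angle counts in 5 mrad steps for transmitted (non-absorbed) muons:
angle_counts, _ = np.histogram(theta[~absorbed], bins=np.arange(0.0, 0.2, 0.005))

print(mean_theta[0] > mean_theta[-1])  # True: low-energy muons scatter more
```

Keeping both the per-bin angle statistics and the absorption count is what resolves materials whose scattering angles alone are too similar to separate.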

Keywords: Muon tomography, Nuclear materials, Monte Carlo simulations
10:30 AM PS-02-06

AI-driven Analytics for Radiological Source Detection and Localization (#38)

S. Volkova1, E. Ayton1, S. Soni1, M. Bandstra2, B. Quiter2, N. Abgrall2, R. Cooper2

1 Pacific Northwest National Laboratory, Richland, Washington, United States of America
2 Lawrence Berkeley National Laboratory, Berkeley, California, United States of America

In this work we present AI-driven analytics capable of learning from both historical sensor signals and non-physical contextual data extracted from public sources to forecast potential background signatures across sensor locations and to mitigate the operational burden posed by nuisance alarms during the deployment of unattended radiological sensors. Our novel descriptive and predictive analytics rely on machine learning, deep learning, and natural language processing models to quantitatively estimate (1) to what extent historical sensor data help anticipate future sensor and isotope signatures (Tc-99m, I-131, and 511 keV from positron emission tomography (PET) isotopes), and (2) whether the incorporation of contextual data can inform and explain physical sensor data. Our preliminary experimental results demonstrate a clear ability to predict isotope and detector signatures from historical data and to identify correlations between linguistic terms extracted from construction permits and Cs-137 industrial alerts. We found that detecting isotopes in Fairfax is easier than in DC, that learning from monthly features outperforms daily and weekly features, and that 511 keV signatures are more difficult to predict than Tc-99m and I-131 in both locations. We discovered that isotopes have more distinct signatures than sensors; thus, predictive model performance is higher for the source detection task than for the sensor localization task. Finally, we identified positive correlations between linguistic terms extracted from construction permits and Cs-137 alerts: company names (Wash Gas and Light, AT&T) in DC; demolition, electrical, and residential terms, and people's names in Fairfax; and construction terms and types of work in both locations. To extend our predictive modeling experiments, we will move from classification to regression tasks to anticipate the number of alerts from each isotope at a specific hour/day/week at each location, up to several days or weeks in advance.
To further explain the role of domain knowledge extracted from construction permits, we will apply causal structure learning and cause-treatment effect estimation approaches to automatically infer causal relationships between linguistic terms and industrial alerts across locations.

Keywords: machine learning, sensor networks, open-source data analytics
10:30 AM PS-02-07

Optimization of fast neutron detection using pulse shape discrimination at high flux (#43)

G. Song1, H. Kim2, W. Kim1, S. Lee1, H. Choi1, J. Park1, G. Cho1

1 Korea Advanced Institute of Science and Technology, Dept. of Nuclear and Quantum Engineering, Daejeon, Republic of Korea
2 IRIS Co., Ltd., Daejeon, Republic of Korea

Organic crystalline, liquid, and plastic scintillators composed of low-Z materials are used to detect fast neutrons. These detectors are also sensitive to gamma rays while measuring fast neutrons, and pulse shape discrimination (PSD) is used to distinguish the two. PSD has been studied in various ways; here it is performed using a charge-comparison method that compares the total charge (Qbody) and the delayed charge (Qtail) relative to the peak. Most studies have been conducted at the laboratory level with sources such as 252Cf or 241Am-Be. In our case, we will perform pulse shape discrimination with a D-T generator and a 15 MeV electron accelerator, which are very high-flux conditions. For the D-T generator, the neutron flux is up to 10^9 #/s. For the 15 MeV electron accelerator, the total flux is 6.72 × 10^14 #/s, and the ratio of neutrons to gamma rays is 1:4071. These conditions require optimization of the detection system due to pile-up. Pulse shape discrimination performance was optimized by changing the pixel pitch of the Hamamatsu MPPC silicon photomultiplier (SiPM) and the geometry of the EJ276G plastic scintillator before measuring under high-flux conditions. In general, the larger the pixel pitch, the higher the count rate due to the higher photon detection efficiency (PDE). As photon detection efficiency increases, however, the noise-causing dark counts and crosstalk also increase, which can degrade pulse shape discrimination. Thus, experiments were conducted to control the voltage and parameters such as the total pulse width and the delay time relative to the peak to improve pulse shape discrimination performance. In this study, optimization was performed in terms of counts per second (cps) and pulse shape discrimination performance for measuring fast neutrons under high-flux conditions.

Keywords: Pulse Shape Discrimination (PSD), Plastic scintillator, High flux condition
10:30 AM PS-02-08

10:30 AM PS-02-09

A Compact Stilbene-Strontium Iodide Based Radioxenon Detection System (#56)

H. R. Gadey1, 2, A. T. Farsoni2, S. A. Czyz3

1 Pacific Northwest National Laboratory, Richland, Washington, United States of America
2 Oregon State University, School of Nuclear Science and Engineering, Corvallis, Oregon, United States of America
3 Lawrence Livermore National Laboratory, Livermore, California, United States of America

The Comprehensive Nuclear-Test-Ban Treaty (CTBT) prohibits the testing of nuclear weapons anywhere on the Earth. Measuring the concentration of various radioxenon isotopes in the atmosphere has been instrumental in the identification of clandestine sub-surface nuclear weapon detonations. The ratios between various xenon isotopes help discriminate xenon produced by regular reactor operations from that produced by nuclear weapon tests. The state-of-the-art detection systems deployed in the field suffer from several drawbacks, including memory effect, high cost, and environmentally susceptible electronics. In this research, an effort has been made to improve upon some of these shortcomings while achieving performance similar to the state of the art. A stilbene-SrI2(Eu) detection system with near-4π solid angle for electron detection and a high-resolution medium for sensing photons is presented.

 

A literature review indicated that the crystalline structure of stilbene makes it an excellent candidate to mitigate the memory effect. Therefore, a custom well-type stilbene gas cell was designed for this application. Optical photons from the stilbene cell were read out by a SensL J-Series SiPM array. Two custom-built D-shaped SrI2(Eu) scintillators were used as photon detectors; this material was chosen for its superior light yield and excellent energy resolution. A lateral cut was machined on top of each cylindrical ingot to accommodate SiPM arrays for optical photon readout. Signals from the detectors were routed through a custom-designed pulse processor for coincidence identification and pulse processing. Calibration and characterization of both the electron and photon detectors were carried out using lab check sources. The functionality of the coincidence identification module was tested using coincident backscatter between the two SrI2(Eu) detectors with a 137Cs source. A MATLAB user interface was developed to control various parameters of the pulse processing and coincidence identification, including the coincidence time window, the threshold for each channel, and the pulse offset. Stable ultra-pure xenon samples were irradiated in the Oregon State University TRIGA reactor before being injected into the stilbene gas cell to record the detector response. Based on the high-resolution characteristics of the SrI2(Eu) detectors and the near-4π solid angle of the stilbene cell, this system is expected to serve as a compact and cost-effective alternative to the detectors deployed in the field while meeting the Minimum Detectable Concentration (MDC) requirement of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) of no more than 1 mBq/m3 for 133Xe. The detector response, MDC, and memory effect results are presented in this work.

Keywords: Radioxenon, Stilbene, Strontium Iodide
10:30 AM PS-02-10

Sub-Pixel Sensing for Charge Sharing Events in Pixelated CdZnTe Detectors (#58)

Y. Zhu1, Z. He1

1 University of Michigan, Department of Nuclear Engineering and Radiological Sciences, Ann Arbor, Michigan, United States of America

Pixelated CdZnTe detectors have achieved excellent spectroscopic performance and shown advanced capability for coded-aperture and Compton imaging. Sub-pixel sensing is a technique to further improve their imaging capabilities. For single-pixel events, sub-pixel sensing was demonstrated years ago. This paper proposes a method to calculate the sub-pixel location of charge-sharing events.
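
The underlying sub-pixel idea can be illustrated with a toy model (not the authors' algorithm): the transient signals induced on neighboring pixels grow as the electron cloud drifts closer to them, so the amplitude asymmetry encodes the lateral position within the collecting pixel. The falloff model and the 1.72 mm pitch below are assumptions for illustration.

```python
import numpy as np

def neighbor_amplitudes(x_sub, pitch=1.72):
    """Toy model: transient amplitude falls off with distance to each neighbor.
    x_sub in [-1, 1] spans the collecting pixel from its left to right edge."""
    left = 1.0 / (pitch / 2 + x_sub * pitch / 2 + 0.5)
    right = 1.0 / (pitch / 2 - x_sub * pitch / 2 + 0.5)
    return left, right

def subpixel_estimate(left, right):
    """Asymmetry ratio of the two neighbor amplitudes."""
    return (right - left) / (right + left)

xs = np.linspace(-0.9, 0.9, 19)
est = np.array([subpixel_estimate(*neighbor_amplitudes(x)) for x in xs])
print(np.all(np.diff(est) > 0))  # True: the estimator is invertible to position
```

For charge-sharing events, both pixels collect charge directly, so the single-pixel asymmetry picture breaks down; handling that case is the subject of the proposed method.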

Keywords: Pixelated, CdZnTe, Sub-Pixel
10:30 AM PS-02-11

Importance of Filter Design for Radiation Therapy using Monte Carlo Simulation (#65)

J. Park1, K. Choi3, H. Kim2, G. Song1, H. Choi1, G. Cho1

1 Korea Advanced Institute of Science and Technology, Nuclear & Quantum Engineering, KS015, Republic of Korea
2 IRIS Co.LTD, KS015, Republic of Korea
3 University of California San Francisco, Department of Radiation Oncology, 94133, California, United States of America

Charged Particle Therapy (CPT), or hadron therapy, is a form of external beam therapy used to treat tumors. To conduct hadron therapy, it is important to deliver energy precisely to the cancer cells; for this reason, Monte Carlo (MC) simulation is a necessary step in hadron therapy. Hadron beams are composed of particles such as protons, neutrons, and helium, carbon, or neon ions. The application of He ions provides distinct clinical advantages: dose distributions with a sharper Bragg peak and lateral penumbra (reduced range straggling and scattering) compared to protons, and similar potential for tumor control with a substantially reduced fragmentation tail compared to carbon ions. A certain amount of beam energy spread is needed to deliver energy uniformly to the tumor, and one method of producing this spread is to pass the beam through a filter. We set up two filter shapes, with and without ripples, and two thicknesses, 2 mm and 3 mm. The Bragg peak width evaluated at the 80% dose level (BPW80) is wider for the rippled filter than for the filter without ripples, while the Bragg peak depth and height decrease with the rippled filter. These characteristics become more prominent as the filter thickness increases. The Bragg peak characteristics produced by the rippled filter are necessary to obtain a homogeneous Spread-Out Bragg Peak (SOBP).

We studied the importance of the filter design for dose uniformity in the SOBP region by means of Monte Carlo simulations using TOPAS. We simulated helium beams at different energies and performed a detailed analysis of the proposed filter designs (thickness and shape).

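
The BPW80 metric used above can be sketched as follows. The Gaussian depth-dose peaks are invented stand-ins for illustration; the study's actual curves come from the TOPAS Monte Carlo simulations.

```python
import numpy as np

depth = np.linspace(0.0, 20.0, 2001)  # cm

def toy_bragg(peak_depth=15.0, sigma=0.3):
    """Invented Gaussian stand-in for a Bragg peak depth-dose curve."""
    return np.exp(-((depth - peak_depth) ** 2) / (2 * sigma ** 2))

def bpw80(dose):
    """Bragg peak width evaluated at the 80% dose level."""
    above = depth[dose >= 0.8 * dose.max()]
    return above.max() - above.min()

no_ripple = toy_bragg(sigma=0.3)    # sharper peak
with_ripple = toy_bragg(sigma=0.6)  # a ripple filter spreads the beam energy

print(bpw80(with_ripple) > bpw80(no_ripple))  # True: ripples widen BPW80
```

Widening each individual Bragg peak in this way reduces the number of energy layers needed to stack them into a homogeneous SOBP.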
Keywords: Radiation Therapy, Monte Carlo Simulation, Spread Out Bragg Peak
10:30 AM PS-02-12

Photoneutron detection in active interrogation scenarios using small organic scintillators (#71)

C. A. Meert1, A. T. MacDonald1, A. J. Jinia1, W. M. Steinberger1, S. Clarke1, S. Pozzi1

1 University Of Michigan, Nuclear Engineering and Radiological Sciences, Ann Arbor, Michigan, United States of America

A persistent challenge in photon active interrogation is the effect of pulse pile-up on neutron detection rates. During interrogation, the neutron signatures from illicit special nuclear material (SNM) can provide a characteristic signal; however, the intense radiation environment can cause pulse pile-up in detectors. Organic scintillator detectors are favorable in active interrogation due to their fast-neutron efficiency and fast timing characteristics, and they rely on pulse shape discrimination to compare pulse shapes and classify detection events. Pile-up events are typically rejected during analysis because they produce pulses with relatively large tail integrals that mimic single neutron pulses. Thus, it is essential that pulse pile-up events are minimized to produce accurate results. In this work, we compare the performance of a 0.216 cm3 stilbene cube to a 103 cm3 stilbene cylindrical detector during photon active interrogation. We use a Varian linear accelerator (linac) to produce an interrogation beam of photons up to 9 MeV in energy, which induces photofission and photodisintegration in depleted uranium. By reducing detector size, detection rates and pile-up rates will decrease. Preliminary tests with a strong 137Cs source indicate that the 0.216 cm3 detector is robust against pile-up, i.e., in gamma-ray fluxes greater than 10^5 cm^-2 s^-1, neutron detection rates varied < 10% from the ground truth. Preliminary linac data show lower total detection rates in the 0.216 cm3 stilbene detector, but the ratio of pile-up events to total detections is much lower than for the 103 cm3 detector, resulting in higher-confidence results. Our analysis will serve as a starting point for a detector size optimization for detection of fast, prompt neutrons during photon active interrogation.

Keywords: active interrogation, stilbene, pile-up
10:30 AM PS-02-13

Multi-Gate Pulse Shape Discrimination and Pileup Rejection (#73)

W. G. J. Langeveld1, M. King1

1 Rapiscan Systems, Inc., Rapiscan Laboratories, Fremont, California, United States of America

Pulse Shape Discrimination (PSD) is generally used to distinguish neutron-induced signals from gamma-ray-induced signals in scintillation detectors with intrinsic PSD capability, i.e., detectors where the two signal types have different shapes. There are numerous PSD algorithms, of which the most commonly used is two-gate (or simply “gated”) PSD. In two-gate PSD, integrals I0 and I1 of the detector signal are computed over a long and a short time window, respectively. One measure of the PSD value is the ratio (I0 - I1)/I0. In PSD-capable detectors, a plot of PSD value vs. energy typically yields two more-or-less horizontal bands, with the top band representing neutron events and the bottom band gamma-ray events.

In high count-rate environments, signal pileup is a significant problem. With PSD, piled-up signals typically show up as events outside of these bands. As discussed elsewhere, two-gate (or any) PSD can readily be used for pileup rejection with detectors lacking intrinsic PSD capability, but since piled-up signals associated with gamma-ray events contaminate the neutron band, two-gate PSD cannot entirely resolve the pileup problem for PSD-capable detectors. Other PSD methods, such as wavelet-based PSD, can be adapted to include pileup rejection, and pulse-shape-fitting PSD intrinsically rejects pileup. These methods are, however, generally more difficult to implement, especially in hardware.

It is, however, also possible to compute integrals for more than two time windows (“gates”), and use the additional information to verify that the signal corresponds to either a gamma-ray event or a neutron event using pulse-shape-fitting methods. This sparse pulse-shape-fitting technique thus can be used for simultaneous PSD and pileup rejection.
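
The multi-gate idea can be sketched as follows. This is an illustrative toy, not Rapiscan's implementation: the template parameters, gate boundaries, and residual threshold are invented. The pulse is integrated over several gates, the gate vector is fit to scaled gamma/neutron templates, and events whose best-fit residual is large (as happens for piled-up pulses) are rejected.

```python
import numpy as np

t = np.arange(0, 400, 2.0)  # ns

def shape(tau_fast, tau_slow, f_slow):
    """Two-component scintillation pulse, normalized to unit area."""
    p = (1 - f_slow) * np.exp(-t / tau_fast) + f_slow * np.exp(-t / tau_slow)
    return p / p.sum()

gamma_tpl = shape(5.0, 100.0, 0.05)    # invented template parameters
neutron_tpl = shape(5.0, 100.0, 0.30)

gates = [(0, 20), (20, 60), (60, 150), (150, 400)]  # four gates instead of two

def gate_vector(p):
    return np.array([p[(t >= a) & (t < b)].sum() for a, b in gates])

templates = {name: gate_vector(tpl)
             for name, tpl in (("gamma", gamma_tpl), ("neutron", neutron_tpl))}

def classify(pulse, max_residual=0.02):
    g = gate_vector(pulse)
    best, best_res = None, np.inf
    for name, tpl in templates.items():
        a = (g @ tpl) / (tpl @ tpl)                      # least-squares amplitude
        res = np.linalg.norm(g - a * tpl) / np.linalg.norm(g)
        if res < best_res:
            best, best_res = name, res
    return best if best_res < max_residual else "pileup"

pileup = gamma_tpl + 0.8 * np.roll(gamma_tpl, 40)  # second pulse 80 ns later
print(classify(3.0 * neutron_tpl), classify(2.0 * gamma_tpl), classify(pileup))
```

Because the fit uses only a handful of gate integrals rather than every waveform sample, this sparse form of pulse-shape fitting maps naturally onto hardware.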

In this work we present sparse pulse-shape fitting and multi-gate PSD and pileup rejection results from data obtained using the test platform for the Platform Integrated and Robotic Active Neutron Interrogation Apparatus (PIRANIA). We also show how this approach can lead to a relatively straightforward hardware implementation.

This work has been supported by the U.S. Department of Homeland Security (DHS), Countering Weapons of Mass Destruction office (CWMD), under competitively awarded contract No. 70RDND18C00000007. This support does not constitute an express or implied endorsement on the part of the Government.

Keywords: Pulse Shape Discrimination, Pileup Rejection, Neutron Detection
10:30 AM PS-02-14

The non-proportionality and energy resolution of thallium-doped cesium iodide calculated using analysis of raw scintillation light pulse events (#84)

Z. Mianowska1, M. Moszynski1, K. Brylew1, A. V. Gektin2, S. Mianowski1, A. Syntfeld-Kazuch1

1 National Centre for Nuclear Research, Otwock, Poland
2 Institute for Scintillation Materials, Kharkiv, Ukraine

The performance of thallium-doped cesium iodide (CsI:Tl) scintillation material in gamma spectroscopy was analysed based on raw scintillation light pulse events. The crystal was excited using X-ray and gamma-ray sources with energies from 17 keV up to 662 keV. The raw signals were collected using a high-end digital oscilloscope, and the analysis was performed off-line using Python scripts. The light response comparison was based on light pulses selected from the full-energy-peak region, convolved with the single-photoelectron response of the photomultiplier tube. The crystal was tested under variable temperature conditions ranging from 293 K (+20°C) down to 203 K (-70°C). The authors show and discuss the non-proportionality phenomenon, light output, and energy resolution as a function of the raw-pulse integration time for the chosen scintillator.

Keywords: non-proportionality, scintillation light pulse, energy resolution
10:30 AM PS-02-15

10:30 AM PS-02-16

Development of a digital data acquisition system capable of pulse-pileup recovery for HPGe detectors (#94)

T. Domingo1, S. H. Byun1

1 McMaster University, Radiation Sciences Graduate Program, Hamilton, Ontario, Canada

The versatility of gamma-ray spectroscopy has given rise to its many applications, from quantification of trace elements in a sample to maintaining nuclear material safeguards. Depending on the application and gamma-ray detector, a compromise is often made between detection efficiency and energy resolution. When characterizing or quantifying trace radionuclide concentrations in an unknown sample, energy resolution is often the more important property. In these situations, high-purity germanium (HPGe) detectors are the detectors of choice, as their superior energy resolution significantly reduces measurement uncertainties and improves minimum detection limits.

 

Applications using HPGe detectors are limited to counting rates on the order of a few tens of thousands of counts per second (cps) before the performance of the detector is severely diminished. The limiting factor for high-counting-rate measurements is the need to shape signals with a relatively long shaping time, on the order of 1–6 μs, to maintain good energy resolution. At higher counting rates, however, if a signal arrives while a previous signal is still being shaped, pulse-pileup occurs. Pulse-pileup distorts the energy measurement of the first signal and drops the measurement of the second signal entirely. This situation is traditionally handled with a pileup rejection method. At ultra-high counting rates, on the order of a million cps, the fraction of rejected pileup events is so high that the spectroscopy system suffers from extremely long deadtimes.

 

To solve this critical problem, the present study aims to develop a data acquisition system capable of deconvoluting pileup signals into two or more recovered signals in real time. This technique has been applied to NaI(Tl) and silicon drift detectors with very promising results. However, applications of this technique to HPGe detectors have been unsuccessful because it relies on a fixed signal rising-edge shape, an assumption that does not hold for HPGe detectors owing to the variation in their rising-edge shapes. Our study has focused on developing a deconvolution algorithm to identify and recover piled-up signals using a planar HPGe detector. During this development stage, signal waveforms are analyzed offline, optimizing the algorithm over a range of counting rates while building a library of rising-edge shapes. Once completed, the deconvolution algorithm will be benchmarked for various high-rate measurements. Preliminary results of the pileup deconvolution performance will be presented.
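To make the idea of pileup recovery concrete, here is a minimal, generic sketch (not the authors' algorithm): if the arrival times and rising-edge shapes of two overlapping pulses are known, say from a template library, their amplitudes can be recovered by linear least squares. The pulse model and all parameters are invented for illustration.

```python
import numpy as np

def step_pulse(n, t0, rise, tau=5000.0):
    # simplified preamplifier-style pulse: linear rise, then exponential decay
    t = np.arange(n, dtype=float)
    p = np.zeros(n)
    rising = (t >= t0) & (t < t0 + rise)
    p[rising] = (t[rising] - t0) / rise
    decaying = t >= t0 + rise
    p[decaying] = np.exp(-(t[decaying] - t0 - rise) / tau)
    return p

def recover_amplitudes(trace, arrivals, rises):
    # linear least squares: trace ~= sum_i a_i * template_i
    A = np.column_stack([step_pulse(len(trace), t0, r)
                         for t0, r in zip(arrivals, rises)])
    amps, *_ = np.linalg.lstsq(A, trace, rcond=None)
    return amps

n = 4000
components = [(1.2, 500, 120), (0.7, 900, 180)]   # (amplitude, arrival, rise)
trace = sum(a * step_pulse(n, t0, r) for a, t0, r in components)
trace = trace + np.random.default_rng(0).normal(0.0, 0.005, n)
amps = recover_amplitudes(trace, [500, 900], [120, 180])
print(np.round(amps, 2))
```

The hard part for HPGe, as the abstract notes, is that the rising-edge templates are not fixed, which is exactly what the library-building stage addresses.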

Keywords: High-rate gamma-ray spectroscopy, Pulse pile-up recovery, HPGe
10:30 AM PS-02-17

Investigating the Interpretability of ML-Guided Radiological Source Searches (#101)

G. R. Romanchek1, S. Abbaszadeh2

1 University of Illinois at Urbana-Champaign, Department of Nuclear, Plasma, and Radiological Engineering, Urbana, Illinois, United States of America
2 University of California Santa Cruz, Department of Electrical and Computer Engineering, Santa Cruz, California, United States of America

Reinforcement learning (RL) techniques are an effective strategy for radiological source localization with single-detector systems, as they decrease source localization time relative to uniform search protocols, require little on-site computation, and can intrinsically handle attenuating structures. Interpretability and explainability, however, are sacrificed when transitioning from a statistical technique to a machine learning (ML) one. In general, little feedback is given to the user during search efforts, and the derivation of results may be difficult to understand. Additionally, RL algorithms so far do not provide a clear search-termination point, and thus do not provide an adequate source-location estimate. In this work, an RL convolutional neural network (CNN) is implemented as an algorithm for detector navigation and source localization in a single, mobile-detector system. This solution is outfitted with a search-termination action, to be taken when the network predicts that the detector is at the source location. For user-friendly, live feedback, the output values are converted into a confidence metric for each action via the softmax function. A second feedback technique is an input perturbation scheme used to generate a pseudo-likelihood map of the source location. The combination of these two approaches affords users insight into the confidence of the discrete actions the network takes and a forecast of where it thinks the search will end. Network inputs include: (i) a map of the environment and detector location, (ii) a map of the mean counts recorded at each location, and (iii) a map of the number of times each location has been visited. A double Q-learning approach was used, so the output vector returns the expected cumulative reward (q-values) for taking each of the five actions: move up, down, left, right, and terminate.
Each step consists of the detector arriving at a new location, collecting a 1-s measurement, updating the three input maps, and predicting which action to take next. Training took place in a 10-m × 10-m simulated environment, with random source and detector location initializations, an average source intensity of 5000 cnts/s, a background intensity of 25 cnts/s, random wall lengths and locations, and one million episodes. The trained network localized the source with a median error of 0 m and an IQR of 1 m, with attenuating obstacles present, over 1000 test searches. The input perturbation strategy is not used during training, as it is only intended to provide the user with interpretable information during network use. For the deployed network, during each step, and for every detector-legal location, the network yields the confidence of terminating at that point. After normalization, this map, or a variation of it, is presented to the user to help inform them of what the network predicts will happen. Additionally, the confidence measures of all actions are presented to the user for each step taken. The information provided through these techniques is typically unavailable to the user, who must otherwise wait for the search to end to receive feedback.
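The softmax conversion from q-values to per-action confidences can be sketched as follows; the q-values are hypothetical, and the temperature parameter is an added knob not mentioned in the abstract.

```python
import numpy as np

ACTIONS = ["up", "down", "left", "right", "terminate"]

def action_confidences(q_values, temperature=1.0):
    # softmax: map raw q-values to a normalized confidence per action
    z = np.asarray(q_values, dtype=float) / temperature
    z -= z.max()          # subtract the max for numerical stability
    p = np.exp(z)
    return p / p.sum()

q = [2.1, 0.3, -0.5, 1.0, 3.4]   # hypothetical network output for one step
conf = action_confidences(q)
print(ACTIONS[int(np.argmax(conf))])  # highest-confidence action: terminate
```

Because softmax is monotonic, the greedy action is unchanged; the benefit is that the user sees a normalized confidence for every action rather than raw q-values.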

Keywords: reinforcement learning, source search, interpretability and explainability
10:30 AM PS-02-18

Estimation of Aerial Gamma-ray Background Using Deep Denoising Autoencoder (#105)

K. Hojik1, C. Gyuseong1, K. Jinhwan1, J. Byoungil2

1 Korea Advanced Institute of Science and Technology, Department of Nuclear & Quantum Engineering, Daejeon, Republic of Korea
2 Korea Atomic Energy Research Institute, Intelligent Computing Laboratory, Daejeon, Republic of Korea

Aerial radiation detection using gamma-ray spectroscopy has been widely used since the early days of uranium exploration for mineral exploration, environmental monitoring, and measuring soil contamination from man-made radiation sources in the event of a radioactive material leak. Eliminating the background spectrum, such as the radon and cosmic-ray contributions, is one of the most important issues in aerial radiation detection. However, conventional methods require complex calibration of coefficients or the installation of an additional upward-looking detector. Furthermore, they are not applicable if other man-made radiation sources, such as Cs-137, are present on the soil surface. In this manuscript, we present a deep denoising autoencoder that reconstructs the cosmic-ray and radon spectra, which constitute the background, from the aerial gamma-ray spectrum. To evaluate the proposed method, we generated spectra in the Monte Carlo N-Particle Transport Code 6 (MCNP6) using an experimentally obtained Gaussian energy broadening (GEB) and a detector structure based on NaI(Tl) and SiPMs. The simulated spectra were generated at altitudes of 100–300 m, including cosmic-ray, radon, and terrestrial contributions over an energy range of 50 keV to 3 MeV. The dataset used for training the deep denoising autoencoder consisted of 8,000 training, 2,000 validation, and 500 test spectra. The composition ratio of the spectra varied from 2–5% for cosmic-ray spectra, 9–30% for radon spectra, and 65–89% for terrestrial spectra. Despite the statistical noise of the input, the deep denoising autoencoder successfully estimated the background spectrum.
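As a much-simplified stand-in for the proposed network, the sketch below trains a single-hidden-layer network (plain NumPy, gradient descent) to regress the smooth background component out of noisy synthetic "spectra". The toy spectra, network size, and training schedule are all invented for illustration; the actual work uses a deep architecture and MCNP6-simulated data.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_spectra(n, bins=64):
    # synthetic spectra: a narrow "terrestrial" peak on a smooth "background"
    x = np.linspace(0, 1, bins)
    peaks = np.exp(-((x[None, :] - rng.uniform(0.3, 0.7, (n, 1))) ** 2) / 0.002)
    background = rng.uniform(0.5, 1.5, (n, 1)) * np.exp(-3 * x[None, :])
    return peaks + background, background

X, B = toy_spectra(512)

# one-hidden-layer network trained to output the background component
d, h, lr = X.shape[1], 16, 0.05
W1 = rng.normal(0, 0.1, (d, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.1, (h, d)); b2 = np.zeros(d)

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

init_loss = ((forward(X)[1] - B) ** 2).sum(axis=1).mean()
for _ in range(2000):
    H, Y = forward(X)
    gY = 2 * (Y - B) / len(X)              # gradient of mean per-sample SSE
    gH = gY @ W2.T * (1 - H ** 2)          # backprop through tanh
    W2 -= lr * (H.T @ gY); b2 -= lr * gY.sum(axis=0)
    W1 -= lr * (X.T @ gH); b1 -= lr * gH.sum(axis=0)

final_loss = ((forward(X)[1] - B) ** 2).sum(axis=1).mean()
print(round(float(init_loss), 2), round(float(final_loss), 2))
```

Subtracting the predicted background from the measured spectrum then leaves an estimate of the terrestrial component, which is the quantity of interest in a survey.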

Keywords: deep denoising autoencoder, background correction, deep learning
10:30 AM PS-02-19

Development of a portable neutron spectrometer (#109)

T. C. Borgwardt1, K. Meierbachtol1, K. Smith1, K. Bartlett1

1 Los Alamos National Laboratory, Los Alamos, New Mexico, United States of America

A portable neutron spectrometer system has been developed at Los Alamos National Laboratory for passive interrogation of nuclear material. The system utilizes four EJ-301 liquid scintillators, allowing measurement of the neutron energy spectrum. The energy spectrum is derived from the measured light output spectrum by an unfolding procedure that utilizes a measured detector response matrix. This unfolding procedure allows the neutron spectrum to be characterized without utilizing time-of-flight techniques. The portable nature of the system, combined with the characterization of the neutron energy spectrum without time-of-flight, creates the opportunity for the system to find uses in areas of national and homeland security.

In this presentation, an overview of the system will be presented. The measurements to characterize the detectors and build a detector response matrix will be explained. Several sources with different shielding combinations were measured and unfolded to test the system. These results will be presented with a discussion of some of the current limitations of the system. Additionally, some potential upgrades will be discussed, which include deuterated materials for the detectors and silicon photomultipliers.
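For readers unfamiliar with unfolding, the sketch below shows the general shape of such a procedure using a deliberately crude, invented response matrix and an MLEM-style multiplicative update; it is not the unfolding code used for this system, and the EJ-301-like response is a simplistic assumption (flat recoil distribution up to an energy-dependent maximum light output).

```python
import numpy as np

rng = np.random.default_rng(2)

# invented response matrix R: light-output bin (row) vs neutron energy (column)
n_light, n_energy = 40, 20
E = np.linspace(1.0, 10.0, n_energy)   # neutron energy, MeV
L = np.linspace(0.1, 8.0, n_light)     # light output, arbitrary units
R = np.array([[1.0 if l < 0.6 * e else 0.0 for e in E] for l in L])
R /= R.sum(axis=0, keepdims=True)      # normalize each energy column

true_spectrum = np.exp(-E / 2.0)       # toy falling spectrum
measured = R @ true_spectrum
measured = rng.poisson(measured * 1e5) / 1e5   # counting statistics

# MLEM-style unfolding: multiplicative updates keep the spectrum non-negative
phi = np.ones(n_energy)
for _ in range(500):
    pred = np.maximum(R @ phi, 1e-12)
    phi *= (R.T @ (measured / pred)) / R.sum(axis=0)

rel_err = np.abs(R @ phi - measured).sum() / measured.sum()
print(round(float(rel_err), 4))
```

The multiplicative form is one common choice because it preserves non-negativity of the unfolded spectrum without explicit constraints.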

Keywords: Neutron Spectroscopy, Passive Interrogation
10:30 AM PS-02-20

Evaluation and Simulation of a Vest-Wearable Gamma-Neutron Detection System Based on CLLBC Scintillators (#114)

M. McClish1, A. Gueorguiev1, C. Sosa1, H. Yu1, J. Glodo1

1 Radiation Monitoring Devices, Inc (RMD), Watertown, Massachusetts, United States of America

RMD developed a wearable gamma-neutron detection system based on RMD’s Cs2LiLa(Br,Cl)6 (CLLBC) dual-mode gamma and neutron scintillation technology. The system is intended to replace the large and heavy Backpack Radiation Detectors (BRD) at a fraction of the size and weight. We performed a system evaluation based on the ANSI 42.53 Backpack Radiation-Detection Systems Standard. The focus was the Static Backpack Characterization in the ANSI 42.53 standard, including system response to gamma and neutron radiation; false alarm rate; identification of shielded and unshielded radionuclides; simultaneous radionuclide identification; overload characteristics for identification; gamma exposure rate accuracy; gamma and neutron sensitivity; and angular dependence of the directional indication. The system was modeled using GEANT, and the measurements were compared to the simulations. Feedback from the simulations was used to improve the system's angular response, gamma and neutron radionuclide detection, and isotope identification at close to background radiation levels.

This work has been supported by the US Defense Threat Reduction Agency, under competitively awarded contracts DTRA # HDTRA1-17-S-0003 and HDTRA1-20-C-0039.  This support does not constitute an express or implied endorsement on the part of the Government.  DISTRIBUTION A: Approved for public release.

Keywords: wearable system, directionality, CLLBC
10:30 AM PS-02-21

Unsupervised Probability Density Expectation Maximization Algorithm for Particle Identification in Pulse-Shape Discrimination Applications (#118)

J. Cole1, B. V. Egner2, B. Frandsen2, D. Holland2, J. E. Bevins2

1 Bard College, Department Of Physics, Annandale-on-Hudson, New York, United States of America
2 The Air Force Institute of Technology, Department of Engineering Physics, Wright-Patterson AFB, Ohio, United States of America

For detectors with sensitivity to multiple types of radiation, separating these events becomes important for a range of applications from nuclear security to nuclear physics. For organic scintillators, pulse-shape discrimination is a popular approach, but current methods generally suffer from limited to no separation below a couple of hundred keVee. This work proposes an unsupervised probability density expectation maximization algorithm for determining particle identification probabilities from one-dimensional charge-integration pulse-shape discrimination distributions. Specifically, 10B(n,α) events are separated from a fast-neutron and gamma-ray background in a 10B-enriched deuterated toluene liquid scintillator. Proof of concept is achieved using Gaussian and truncated-Gaussian distributions with time-tagged data, where the model determined particle identification to within 3–4% of nominal values. For application to un-tagged data, the model achieved a Kolmogorov-Smirnov two-sample test p-value of 1.00, indicating that the model-reconstructed distribution was statistically consistent with the measured data. Future work will implement a flexible Beta distribution model with a more robust set of tagged data to verify and explore model performance.
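The expectation-maximization idea can be illustrated on a toy one-dimensional PSD metric with an ordinary two-component Gaussian mixture; the distributions, class fractions, and the metric itself are invented here, and the actual work uses truncated-Gaussian (and, in future, Beta) models on 10B-capture data.

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic 1-D PSD metric (e.g., tail-to-total): two overlapping populations
psd = np.concatenate([rng.normal(0.15, 0.02, 3000),   # gamma-like events
                      rng.normal(0.30, 0.03, 1000)])  # neutron-like events

def gauss(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

# EM for a two-component Gaussian mixture
w = np.array([0.5, 0.5])
mu = np.array([0.1, 0.4])
sig = np.array([0.05, 0.05])
for _ in range(200):
    # E-step: responsibility of each component for each event
    p = w[:, None] * gauss(psd[None, :], mu[:, None], sig[:, None])
    r = p / p.sum(axis=0)
    # M-step: update weights, means, and widths
    nk = r.sum(axis=1)
    w = nk / len(psd)
    mu = (r * psd).sum(axis=1) / nk
    sig = np.sqrt((r * (psd[None, :] - mu[:, None]) ** 2).sum(axis=1) / nk)

print(np.round(mu, 2))  # component means near 0.15 and 0.30
```

The per-event responsibilities `r` are exactly the particle-identification probabilities the abstract describes, obtained without any labeled training data.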

Keywords: Pulse shape discrimination, radiation detection, unsupervised models
10:30 AM PS-02-22

Machine Learning-Based Characterization of Nuclear Fuel Cycle Operations (#122)

M. W. Brinker1, A. A. Bickley1, B. J. Borghetti2, A. L. Franz1, B. F. L. Goldblum3, 4, J. H. Whetzel5, J. E. Bevins1

1 Air Force Institute of Technology, Department of Engineering Physics, WPAFB, Ohio, United States of America
2 Air Force Institute of Technology, Department of Electrical and Computer Engineering, WPAFB, Ohio, United States of America
3 Lawrence Berkeley National Laboratory, Nuclear Science Division, Berkeley, California, United States of America
4 University of California, Berkeley, Department of Nuclear Engineering, Berkeley, California, United States of America
5 Sandia National Laboratories, Livermore, California, United States of America

Deterring nuclear proliferation has been a fundamental U.S. policy since before the advent of nuclear weapons. Fundamentally, nonproliferation policies depend on detecting proliferation activities at fuel cycle facilities, which requires linking facility operations to field measurement data.
One potential approach to providing this link is machine learning-based models. In this work, a one-dimensional convolutional neural network was developed to identify a facility’s level of operation using magnetometer data. The network achieved a test accuracy of 0.892±0.014 in a 2-class (“on/off”) classification scenario, representing a ~15% increase over previous methods. The network also achieved test accuracies of 0.547±0.054 and 0.507±0.015 in 3-class and 5-class classification scenarios, indicating that facility operations could be identified at finer granularity than “on/off.” Future work on the generalizability of these models between facilities could enhance U.S. proliferation detection capabilities and advance nonproliferation policies.
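The flavor of the approach — a one-dimensional convolutional feature over a magnetometer time series — can be sketched with a single hand-built kernel rather than a trained network. The "facility signature" (a periodic component present when the facility is on) and all parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def magnetometer_trace(on, n=256):
    # toy magnetometer record: facility "on" adds a periodic component
    t = np.arange(n)
    signal = 2.0 * np.sin(2 * np.pi * t / 16) if on else 0.0
    return signal + rng.normal(0.0, 1.0, n)

# one hand-built 1-D convolution filter plus max-pooling over time,
# standing in for the learned filters of a 1-D CNN
kernel = np.sin(2 * np.pi * np.arange(16) / 16)

def pooled_feature(trace):
    return np.max(np.abs(np.convolve(trace, kernel, mode="valid")))

on_f = np.array([pooled_feature(magnetometer_trace(True)) for _ in range(200)])
off_f = np.array([pooled_feature(magnetometer_trace(False)) for _ in range(200)])
thresh = (on_f.mean() + off_f.mean()) / 2
acc = ((on_f > thresh).mean() + (off_f <= thresh).mean()) / 2
print(round(float(acc), 2))
```

A trained 1-D CNN learns many such filters jointly, which is what allows it to go beyond "on/off" to finer operational classes.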

Keywords: machine learning, convolutional neural networks, nonproliferation
10:30 AM PS-02-23

Automatic Spectral Identification of Uranium using Laser-induced Breakdown Spectroscopy and Non-negative Matrix Factorization (#127)

E. H. Kwapis1, C. J. Hunter1, K. C. Hartig1

1 University of Florida, Department of Materials Science and Engineering, Nuclear Engineering Program, Gainesville, Florida, United States of America

In the event of a large-scale radiological release, rapid and accurate knowledge of the release location as well as the composition and behavior of the contamination plume is of utmost importance for the continuation of effective operations with improved situational awareness. Laser-induced breakdown spectroscopy (LIBS) is a robust, field-deployable analytical technique capable of standoff, isotopically resolved, and phase-identifiable (e.g., UO, UO2) detection of elements across the periodic table. LIBS uses a high-powered pulsed laser to produce a luminous micro-plasma; this plasma is then imaged to produce a spectrum of characteristic atomic and molecular emission lines that characterize a sample. To enable the rapid mapping and compositional analysis of contamination plumes following a nuclear detonation or nuclear accident scenario, non-negative matrix factorization (NMF) has been applied to the analysis of LIBS spectra of uranium.

NMF is an unsupervised machine learning technique based on the constrained factorization of a data matrix to provide an alternative representation of that data [1]. This method can be used to separate superimposed signals assuming that the sources are independent and that their mixing is linear. NMF is constrained to reconstruct signals as non-negative, which is an important requirement for the analysis of LIBS spectra, of which strict positivity is an innate characteristic. Over 300,000 lines of U I and U II have been reported [2]. Previous work on the analysis of LIBS signals of uranium has been limited to intensive, template-based matching procedures; hence, by necessity, historically only a small subsample of these lines has been selected for analysis. Recent research has shifted towards multivariate partial least squares regression to perform spectral fitting [3,4], yet this still requires the manual selection of uranium lines and is limited to small sample sizes. NMF removes the need to label emission lines, can separate the spectral signatures of different elements, and outputs the relative abundances of those elements. In addition to demonstrating the success of the NMF algorithm on the automatic spectral identification of uranium, an investigation into the physical limitations (i.e., noise and line broadening) that the environment and instrumentation impose on the algorithm will be provided. There is significant potential in the combination of LIBS and NMF for supporting the real-time measurement and analysis of a contaminated area following a radiological release.

[1] D.D. Lee and H.S. Seung, Neural Inf. Process. Syst. 13 (2000). [2] P. Voigt, Phys. Rev. A. 11 (1975). [3] J. Song et al., Spectrochim. Acta Part B 150 (2018). [4] G. Chan et al., Spectrochim. Acta Part B 89 (2013).
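The factorization itself can be sketched with the classic Lee-Seung multiplicative updates of reference [1] on synthetic spectra; the "element" line positions, shot count, and rank are invented, and a production analysis would of course use real LIBS data and a tuned implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

# synthetic "LIBS" spectra: non-negative mixtures of two element signatures
bins = 200
x = np.arange(bins)

def lines(centers):
    # hypothetical emission lines modeled as narrow Gaussians
    return sum(np.exp(-0.5 * ((x - c) / 1.5) ** 2) for c in centers)

S = np.vstack([lines([30, 75, 140]),          # hypothetical element A
               lines([50, 110, 160, 185])])   # hypothetical element B
A = rng.uniform(0.2, 1.0, (50, 2))            # per-shot abundances
V = A @ S + rng.uniform(0, 0.01, (50, bins))  # observed spectra, >= 0

# Lee-Seung multiplicative updates for V ~= W H with W, H >= 0
k = 2
W = rng.uniform(0.1, 1.0, (50, k))
H = rng.uniform(0.1, 1.0, (k, bins))
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(round(float(rel_err), 3))
```

After convergence, the rows of H play the role of recovered element signatures and the rows of W the relative abundances, with non-negativity guaranteed by the multiplicative form of the updates.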

Keywords: Uranium, Laser-induced Breakdown Spectroscopy, Non-negative Matrix Factorization
10:30 AM PS-02-24

Design and Simulation of a Next-Generation Dual n-gamma Detector Array at Los Alamos National Laboratory (#128)

E. Bennett1, K. Kelly1, M. Devlin1, J. O'Donnell1

1 Los Alamos National Laboratory, P-3, Los Alamos, New Mexico, United States of America

Neutron elastic scattering cross sections and angular distributions remain among the largest sources of uncertainty in simulations of fission-driven nuclear systems. Current experimental techniques are limited by issues arising from measuring only neutrons or only gammas or, where both particles are measured, by the relatively small number of angles that current dual n-gamma detectors can cover. Next-generation elpasolite detectors offer near-perfect n-gamma pulse shape discrimination across a wide range of energies, and the availability of these detectors in sufficient quantities makes the construction of a large, highly segmented detector array possible for the first time. Preliminary studies have identified CLYC detectors as the ideal candidate for measuring neutron elastic scattering. We describe the mechanical design and simulation of the next-generation dual n-gamma detector array utilizing these detectors, currently under development at Los Alamos National Laboratory. Additionally, we discuss an upcoming campaign utilizing CLYC detectors in conjunction with Chi-Nu’s existing array of 54 liquid scintillator detectors.

Keywords: neutron elastic scattering, detector development, nuclear data
10:30 AM PS-02-25

Preliminary measurements with D-D Associated Particle Imaging (API) Neutron Generator (#133)

M. D. Coventry1

1 Starfire Industries, Champaign, Illinois, United States of America

One challenge in many fast-neutron-based active interrogation scenarios, whether based on fast neutron detection or inelastic-scatter gamma measurement, is the substantial interference generated by unrelated or undesired events, such as thermal captures in, or scattering from, surrounding materials, which produce baseline noise or even the desired signals from the wrong sources. The associated particle imaging (API) technique is a way to track single fast neutrons in order to isolate fast-neutron-induced signals from undesired sources. API generator systems detect the timing and location of the charged particle produced in the neutron-production reaction, which constrains the corresponding neutron to a finite spatial and temporal range. Tracking induced events (fast neutron or gamma ray) and time-correlating them to the associated particle events improves the signal-to-noise ratio and localizes the source of these events.

Due to the small market size and technical challenges with the associated particle imaging detector, few off-the-shelf options exist. As part of a DOE SBIR effort, Starfire Industries has developed an API variant of an existing neutron generator platform, the nGen 400. The nGen 400 was developed for portable fast neutron radiography; it has the requisite small spot size for neutron imaging applications and API requirements, and plenty of headroom in neutron output capability.

A preliminary D-D API detector was built and tested using a 50-mm diameter, 200-μm thick YAG:Ce scintillator mounted on a 2-mm thick sapphire viewport with a single PMT mounted on the air side. Signals from this large, single-pixel API detector, from an EJ-301 2×2″ liquid scintillator for fast neutron detection, and from a 3×3″ NaI detector for gamma rays were connected to a CAEN DT5730 digitizer for fast signal acquisition. D-D API requires isolation of the He-3 events from the proton and triton events of the other fusion branch. Data are presented showing time-of-flight measurements using the liquid scintillator detector, and inelastic-scatter gammas were clearly distinguished within and outside of the interrogation cone.

 

Supported by Department of Energy, DNN R&D SBIR Phase II, Award: DE-SC0018677.  The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressly or implied, of the DOE or the U.S. Government.

Keywords: Neutron generator, Associated Particle Imaging, Active interrogation
10:30 AM PS-02-26

Feature Engineering for Stationary Directional Gamma Ray Detection (#135)

M. Durbin1, A. Lintereur1

1 The Pennsylvania State University, Nuclear Engineering, State College, Pennsylvania, United States of America

An array of close-packed detectors can provide directional information about a gamma-ray source by way of differences in solid angle and the effects of self-occlusion between detectors within the array. This modality does not depend on temporal variations or on movement of the source or array. To extract directional information in this static but potentially complicated scenario, sophisticated data analysis must be used. Machine learning (ML) has demonstrated promise as a powerful data analysis tool for various radiation detection applications and, in previous work, has been shown to outperform reference-table-based methods for stationary directional gamma-ray detection. While modern software libraries have made ML easily deployable on a variety of data types, many facets of the overall ML approach must be optimized to reap the greatest benefits. These facets include model selection and hyper-parameter tuning, but in some cases, domain-aware feature engineering may have the greatest effect on the overall usefulness of an ML model. Good feature engineering can improve model performance, provide insight into the physical domain, and guide the process of incorporating ML into a problem space efficiently. For data produced by an array of detectors, the simplest choice of input features (IFs) would be the normalized counts received in each detector. However, for detectors with spectroscopic capabilities, additional IFs could be extracted, such as total peak and Compton continuum counts, or spectral binning schemes. To test the effects of IF choice, a dataset was simulated using MCNP with 60Co, 137Cs, and 192Ir point sources located up to five meters away from a four-detector NaI array. Preliminary results using a Random Forest model showed that using just total counts as input features gave 47% accuracy in angular predictions, whereas using photopeak and Compton continuum regions as IFs increased localization accuracy to 58%.
Further, including features to indicate the isotope and using a simple spectral binning scheme led to accuracies of 60% and 68%, respectively. This notable improvement in directional capability demonstrates that, even for the same model and dataset, feature selection is critically important and can have a large impact on model performance. The benefit of spectral binning led to investigations with convolutional neural networks (CNNs). CNNs involve convolution and pooling layers that extract features from the input data before it is fed through a series of fully connected neural layers. In this sense, the CNN can be thought of as an additional feature engineering technique. Preliminary results with a CNN architecture yielded an accuracy of 70%. This work investigates a variety of IF options for a range of directional detection scenarios and examines the correlation between the IFs and the effect they have on the model’s predictions. These studies are intended to demonstrate the importance of domain-aware feature engineering for source localization problems and, by extension, radiation detection applications for homeland and national security.
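As a baseline illustrating the simplest input feature mentioned above (normalized per-detector counts), the toy sketch below localizes a source with a nearest-centroid reference table over eight angular classes. The array geometry, count model, and resulting accuracy are all invented and far cleaner than any realistic scenario; the actual work uses Random Forests and CNNs on MCNP-simulated spectra.

```python
import numpy as np

rng = np.random.default_rng(5)
N_DET = 4

def detector_counts(angle_deg, strength=2000.0, bg=20.0):
    # toy close-packed array: each detector's rate falls off with angular offset
    # (a crude stand-in for solid-angle and self-occlusion effects)
    det_angles = np.arange(N_DET) * 90.0
    offset = np.abs(((angle_deg - det_angles) + 180) % 360 - 180)
    return rng.poisson(strength * np.exp(-offset / 60.0) + bg)

def normalized(counts):
    c = counts.astype(float)
    return c / c.sum()   # simplest input feature: normalized per-detector counts

# build a nearest-centroid "reference table" over 8 angular classes
angles = np.arange(8) * 45.0
centroids = np.array([np.mean([normalized(detector_counts(a))
                               for _ in range(200)], axis=0) for a in angles])

trials, correct = 400, 0
for _ in range(trials):
    k = rng.integers(8)
    f = normalized(detector_counts(angles[k]))
    pred = np.argmin(((centroids - f) ** 2).sum(axis=1))
    correct += pred == k
acc = correct / trials
print(acc)
```

Richer IFs (photopeak vs. Compton regions, spectral bins) would extend each feature vector beyond four numbers, which is precisely where the reported accuracy gains come from.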

Keywords: Machine Learning, Feature Engineering, Directional Detection
10:30 AM PS-02-27

New type of ultra-high ( (#137)

V. Gayshan1, A. Gektin2, V. Suzdal2, P. Steinmeyer3

1 Scintitech, Shirley, Massachusetts, United States of America
2 Amcrys, Ltd., KHARKIV, Ukraine
3 Radiation Safety Associates, Inc., Hebron, Connecticut, United States of America

 

The results of the first phase of development of an ultra-high energy resolution acquisition system, “Eagle Eye,” are presented. The goal of this phase was to modify an existing commercial multichannel analyzer (MCA) to utilize an Eagle Eye type of spectrum acquisition; to conduct measurements of Cs137, Ba133, Co60 isotopes with NaI(Tl), CsI(Na), CsI(Tl), SrI2(Eu) and LBC crystals; and to validate the results. Also, spectra from different types of damaged crystals (hydration, mechanical damage, or otherwise non-optimal light collection) were studied with the goal of using Eagle Eye-equipped acquisition systems to develop a detector diagnostic tool.

Keywords: Resolution <3%, ultra-high resolution for halide scintillators, digital pulse analysis
10:30 AM PS-02-28

Simulation of Radiation Source Localization in An Unknown Environment Through Circular Approach (#148)

H. M. Durukan1, S. Kalafatis1

1 Texas A&M University, Department of Electrical and Computer Engineering, College Station, Texas, United States of America

Manual localization (e.g., with Geiger-Müller counters) of radioactive sources in an unknown environment is time consuming and hazardous. Recent developments in autonomous robotics enable automated radioactive source localization, thus protecting human workers from harmful radiation. Recently, a radiation source model of Cs-137 and Am-241 with a pixel detector enabled researchers to simulate various environments despite the absence of a real detector and source [1]. To estimate the location of the radiation source, a circular approach used the probability density function (PDF) of the normal distribution and the geometric efficiency of the detector. The true position of a single radiation source intersects the circle of smallest circumference if the first region of interest is properly compressed using the inverse-square law and is re-centered with respect to new counts in each iteration. Once the map of the environment is acquired using a G-map algorithm, the first exploration is performed along a zig-zag path employing waypoints to navigate the robot [Fig. 1]. The robot collects count readings during the exploration. The gathered data are then filtered to obtain the five highest count readings. The circular algorithm takes the average of these highest counts with 1σ uncertainty to decide the interval, and the counts are assigned as hotspots. Finally, the algorithm records these coordinates in an (x,y) plane. The angle between the source and the detector surface is critical in count determination. In a real experiment, the maximum count reading is seen when the largest surface of the detector is perpendicular to the single point source. Given this, we draw a circle around each hotspot assigned by the circular algorithm [Fig. 2]. The intersection of the circles can be considered an estimate of the true position of the radiation source, but this approach relies on a perfect path planner and on the orientation of the robot with respect to the source, so in actuality an error is observed.
In the circular algorithm presented, the hotspot positions are centered and the largest radius is chosen in order to cover all hotspots with assigned (x,y) coordinates. This step is iteratively repeated until the minimum circular area (limited by the size of the robot base) is reached [Fig. 3]. The compression rate of the circle is determined by comparing the current mean value of the highest counts to that of the previous iteration, consistent with the inverse-square law. The PDF needs to be updated in each iteration as the circular algorithm approaches the minimum radius. After the estimation of the true position, the accuracy [Table 1] and the duration [Table 2] required to complete the mission are tested. In our research, a Gazebo simulator has been employed to emulate an empty, attenuation-free environment of 36 m² in which to test the circular algorithm using a Timepix detector with an Am-241 radiation source.
References
[1]        P. Stibinger, T. Baca, and M. Saska, “Localization of Ionizing Radiation Sources by Cooperating Micro Aerial Vehicles With Pixel Detectors in Real-Time,” IEEE Robot. Autom. Lett., vol. 5, no. 2, pp. 3634–3641, Apr. 2020, doi: 10.1109/LRA.2020.2978456.
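The iterative circle-shrinking idea can be sketched with a bare inverse-square count model; the survey grid, shrink factor, and stopping radius below are invented stand-ins for the paper's PDF-based compression, and no attenuation, detector orientation, or robot kinematics are modeled.

```python
import numpy as np

rng = np.random.default_rng(6)
SOURCE = np.array([4.2, 1.7])   # hypothetical true source position (m)

def count_at(pos, activity=5000.0, background=5.0):
    # inverse-square counting model with Poisson statistics
    d2 = max(float(np.sum((pos - SOURCE) ** 2)), 0.01)
    return rng.poisson(activity / d2 + background)

# zig-zag-style survey over a 6 m x 6 m area
grid = np.array([[x, y] for x in np.linspace(0, 6, 13)
                         for y in np.linspace(0, 6, 13)])
counts = np.array([count_at(p) for p in grid])

# circle-shrinking sketch: keep the 5 highest readings as hotspots,
# re-center on them, and compress the search region each iteration
hot = grid[np.argsort(counts)[-5:]]
center = hot.mean(axis=0)
radius = float(np.max(np.linalg.norm(hot - center, axis=1)))
while radius > 0.3:
    radius *= 0.7
    probe = center + rng.uniform(-radius, radius, (20, 2))
    c = np.array([count_at(p) for p in probe])
    hot = probe[np.argsort(c)[-5:]]
    center = hot.mean(axis=0)

print(np.round(center, 1))
```

Because counts grow as the inverse square of distance, each re-centering pulls the shrinking region toward the source, which is the essence of the compression step described above.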

Keywords: Radiation Localization, Mapping, Homeland Security
10:30 AM PS-02-29

Pixelated CdZnTe Systems for Spectroscopy and Imaging of Gamma Rays Above 3 MeV (#149)

F. Zhang1, K. Moran1, Y. Boucher1

1 H3D, Inc., Ann Arbor, Michigan, United States of America

Cadmium zinc telluride (CdZnTe) has become a standard material for gamma-ray detection, identification, and imaging for applications between 50 keV and 3 MeV. Recent advances in read-out electronics by the University of Michigan have enabled the extension of the dynamic range to 9 MeV, which allows a wide variety of additional applications to be explored. These applications include active interrogation for homeland security as well as proton therapy beam verification. To enable this type of research, H3D has developed a pair of prototype systems, each containing 16 CdZnTe detectors and a new application-specific integrated circuit (ASIC) that can handle gamma rays up to 9 MeV. This work will discuss the performance and characteristics of these systems as well as some of the potential applications of the technology.

Keywords: Gamma-ray Detection, Radiation Imaging, Semiconductor Radiation Detectors
10:30 AM PS-02-30

Optical Surface Model Selection in GEANT4 for High Aspect Ratio EJ-200 (#162)

C. Redding1, C. Delzer1, J. Hayward1

1 University of Tennessee, Dept. of Nuclear Engineering, Knoxville, Tennessee, United States of America

The optical response of scintillators is directly coupled to many of the performance characteristics of these materials. As such, it is important that designers of scintillation detectors be able to adequately predict the detector response for a given configuration before constructing the final detector assembly. This process is usually carried out using optical transport modeling codes such as Zemax or GEANT4. Despite significant advances in the modeling of optical surfaces from the 1980s through the 2000s, many of the prepackaged optical surface models in GEANT4 are not well understood by many users of the code and its derivatives (such as GATE). With parameterized models containing up to nine parameters, and look-up-table models derived from measurements on select scintillator materials, choosing an appropriate model is not straightforward. In response to this, and in an effort to understand which model performs better in a practical application, experiments were performed on polished EJ-200 samples with aspect ratios of 2:1, 8:1, and 16:1 using a calibrated photomultiplier tube. Three cross-sectional area types are investigated along with three surface reflectors: Teflon, aluminum foil, and titanium dioxide paint. The experimental results are compared with the model results for both the Unified and LUT (LBNL) models with reasonably selected parameters. Early results show that the Unified model performs better in absolute terms than the LUT model for the chosen parameters.

Keywords: Optical surface model, Light Collection Modeling, Plastic Scintillator
10:30 AM PS-02-31

Smart Data Analytics Transform of Low-Count Gamma-Ray Spectra for Enhancing Isotope Detection using a Self-Learning Window Driven Relevance Vector Regression (#164)

M. Alamaniotis1

1 University of Texas at San Antonio, Electrical and Computer Engineering, San Antonio, Texas, United States of America

Detection of isotope signatures in low-count gamma-ray spectra measured with a scintillation detector is of great importance in non-proliferation and nuclear security applications. Of special interest is the case of radioactive source search scenarios, where the challenge is to detect illicit use and/or movement of nuclear materials. In source search scenarios, where a mobile detector acquires consecutive measurements in very short time intervals (for instance, every second), the measured spectra exhibit low count rates and, likely, high fluctuation. The statistical fluctuation observed in all gamma-ray spectra, a result of several factors, may mask signature patterns that indicate the presence of nuclear materials of interest. One way to overcome the challenges in radioisotope detection is to develop data analytics methods that process the acquired spectrum and provide a clean signal. A fundamental barrier in data analysis is imposed by the detection limit, which requires a minimum amount of radioactivity before a pattern is detectable. However, the time constraints, together with the dynamically varying environment (and conditions), impose significant limitations on data acquisition, leading to cases where the measurement consists of a very low count rate (often below the detection limits). Thus, advanced solutions are required to process the low-count spectra and enhance detection of nuclear material signatures. The recent explosion in machine learning analytics has paved the way for developing smart methods that are more efficient than traditional data analysis. In this work, a new smart method for processing low-count gamma-ray spectra obtained with portable low-resolution detectors is presented. The method makes use of a machine learning tool, namely, relevance vector regression (RVR), to transform the initial raw measurement into a signal with enhanced features.
Spectrum transformation is conducted by means of a specific window that slides over the detector channels and applies the RVR model within that window. In particular, the RVR model, equipped with a Gaussian kernel function, initially uses the values within the window as training data for self-learning, and subsequently provides a set of values that replace those within the window. This process is repeated until the sliding window reaches the end of the spectrum, providing a completely transformed spectrum. RVR utilizes a subset of the available data to train itself, and is thus less susceptible to high variance within the window while providing fast processing. The proposed data analytics method has been tested on a set of real-world low-count spectra obtained with a low-resolution NaI detector in extremely short time intervals. Results demonstrate that the data analysis method transforms the spectrum into a form that enhances detection of photopeaks by i) removing a significant amount of the signal fluctuation, making the photopeaks easier to resolve, and ii) significantly reducing the number of spectral maxima (i.e., potential photopeaks) identified by a maxima localization algorithm.
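The window-driven transform described above can be sketched roughly as follows (a minimal NumPy illustration, not the authors' method: Gaussian-kernel ridge regression stands in for RVR, which additionally prunes basis functions for sparsity, and the window advances block-by-block rather than channel-by-channel for brevity):

```python
import numpy as np

def window_smooth(spectrum, win=16, gamma=0.05, alpha=1.0):
    """Fit a Gaussian-kernel regression to each window of detector
    channels using only the window's own values (self-learning), then
    replace the window contents with the fitted curve.  gamma sets the
    kernel width over channel positions; alpha is the ridge penalty."""
    out = np.asarray(spectrum, dtype=float).copy()
    n = len(out)
    x = np.arange(win, dtype=float)[:, None]
    K = np.exp(-gamma * (x - x.T) ** 2)        # RBF Gram matrix
    for start in range(0, n - win + 1, win):
        y = out[start:start + win]
        coef = np.linalg.solve(K + alpha * np.eye(win), y)
        out[start:start + win] = K @ coef       # fitted replacement values
    return out
```

On a noisy low-count spectrum this suppresses channel-to-channel fluctuation while leaving broad photopeak shapes in place, which is the effect the abstract attributes to the RVR transform.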

Keywords: machine learning, RVR, gamma-ray spectra
10:30 AM PS-02-32

Locally Competitive Algorithm for Sparse Radiation Source Imaging (#171)

G. Landon1, D. Holland2, A. Lintereur3, J. Bevins2

1 Cedarville University, School of Engineering and Computer Science, Cedarville, Ohio, United States of America
2 Air Force Institute of Tech, Department of Engineering Physics, Wright Patterson AFB, Ohio, United States of America
3 Penn State University, Department of Nuclear Engineering, University Park, Pennsylvania, United States of America

It has been shown that Rotating Scatter Mask (RSM) systems provide a portable, directional radiation detection system with a ~4π field-of-view over a broad range of photon energies. However, accurately reconstructing the source image from an RSM response is difficult due to spatial similarities in the mask design. This work utilizes a Locally Competitive Algorithm (LCA) to take advantage of the sparse nature of radiation sources in a given field-of-view expected for most applications. LCA and maximum-likelihood expectation-maximization reconstruction performance is compared for simulated radiation sources of varying shapes and sizes. Accurate reconstructions of the simulated detector response were obtained with both methods. However, when compared to the known source distribution, image reconstruction quality varied. Point sources were found to reconstruct well with both methods, but LCA obtains improved performance (higher resolution and correctly identified activated pixels) for challenging source distributions, such as ring sources. These improved imaging characteristics demonstrate promise for LCA as a useful method to reconstruct high-resolution images of sparse radiation sources.
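The LCA referred to here can be sketched in its generic (Rozell-style) form as follows (an illustrative NumPy implementation, not the authors' reconstruction code; the dictionary `Phi`, threshold `lam`, and step size `tau` are assumed names):

```python
import numpy as np

def lca(Phi, y, lam=0.1, tau=0.1, steps=200):
    """Locally Competitive Algorithm for sparse coding: internal neuron
    states u evolve under a leaky integrator driven by Phi^T y, with
    mutual inhibition between overlapping dictionary elements, and the
    active coefficients a are a soft-thresholded readout of u."""
    b = Phi.T @ y                              # driving input
    G = Phi.T @ Phi - np.eye(Phi.shape[1])     # inhibition (no self term)
    u = np.zeros(Phi.shape[1])
    for _ in range(steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft threshold
        u += tau * (b - u - G @ a)             # leaky integrator update
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
```

The soft threshold is what enforces sparsity: only neurons whose state exceeds `lam` become active, matching the assumption of a few localized sources in the field-of-view.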

Keywords: radiation imaging, locally competitive algorithm, encoded imagers