15th European Molecular Imaging Meeting

Data Processing & Quantification

Session chair: Wuwei Ren (Zurich, Switzerland); Joanna Polanska (Gliwice, Poland)
 
Shortcut: PW14
Date: Wednesday, 26 August, 2020, 5:30 p.m. - 7:00 p.m.
Session type: Poster


900

ANTx2 – a toolbox for atlas registration of rodent CT and MR images

Stefan P. Koch1, 2, Susanne Mueller1, 2, Yijen Wu3, Anja M. Oelschlegel4, 5, Ulrich Dirnagl1, 6, Christoph Harms1, Philipp Boehm-Sturm1, 2

1 Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Department of Experimental Neurology and Center for Stroke Research Berlin, Berlin, Germany
2 Charité - Universitätsmedizin Berlin, Cluster of Excellence NeuroCure and Charité Core Facility 7T Experimental MRIs, Berlin, Germany
3 University of Pittsburgh School of Medicine, Department of Developmental Biology, Pittsburgh, United States of America
4 Leibniz Institute for Neurobiology, Magdeburg, Germany
5 Otto-von-Guericke-University Magdeburg, Institute of Anatomy, Magdeburg, Germany
6 Berlin Institute of Health (BIH), QUEST Center for Transforming Biomedical Research, Berlin, Germany

Introduction

Atlas registration is a prerequisite for state-of-the-art postprocessing workflows of neuroimaging data, e.g. voxel-based statistics, automated volume-of-interest analyses and connectomics. To this end, we recently introduced ANTx, a MATLAB toolbox to register mouse MR images to the Allen mouse brain atlas. Here, we present ANTx2 (https://github.com/ChariteExpMri/antx2), which implements several key improvements for preclinical imaging researchers working with mouse MR, rat MR and mouse SPECT/CT neuroimaging data.

Methods

The following adaptations to the previous version of ANTx were made. First, since the previously used tissue probability maps (TPMs) provided by SPMMouse1 neglect the lateral ventricles, we added the more recent TPMs of Hikishima et al.2. Second, the latest (2017) version of the Allen brain atlas was implemented, including more regions and the ventricular system. Third, to enable processing of rat MRI data with the two most commonly used atlases, the TPMs and rat brain atlas of Valdes-Hernandez et al. (VA)3 were nonlinearly registered to the Waxholm (WA) atlas4. Fourth, workflows were developed for ex vivo MR images of excised fixed brains. Fifth, a pipeline for registration of mouse SPECT/CT data to the Allen brain atlas was developed, as outlined in Fig. 1.
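
Once an image is in atlas space, the automated volume-of-interest analysis that such a registration enables reduces to aggregating intensities per atlas label. A minimal sketch of that step (not ANTx2 code; the toy arrays are invented for illustration):

```python
import numpy as np

def voi_statistics(image, atlas_labels):
    """Mean image intensity per atlas region.

    image, atlas_labels: arrays of identical shape; atlas_labels holds
    integer region IDs (0 = background) already resampled into the
    image's space, i.e. after atlas registration.
    """
    labels = atlas_labels.ravel().astype(np.int64)
    values = image.ravel().astype(np.float64)
    sums = np.bincount(labels, weights=values)   # per-label intensity sums
    counts = np.bincount(labels)                 # per-label voxel counts
    # report non-empty regions, skipping background (label 0)
    return {lab: float(sums[lab] / counts[lab])
            for lab in range(1, len(counts)) if counts[lab] > 0}

# toy example: a 2x2 image with two labeled regions
img = np.array([[1.0, 2.0], [3.0, 4.0]])
atl = np.array([[1, 1], [2, 0]])
print(voi_statistics(img, atl))  # {1: 1.5, 2: 3.0}
```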

Results/Discussion

Inclusion of the Hikishima TPMs substantially improved the quality of image registration close to the lateral ventricles. Masks generated from the most recent Allen atlas now match the publicly available Allen database on the homepage (http://mouse.brain-map.org/). Qualitative inspection of overlays of the Waxholm atlas and rat TPMs of Valdes-Hernandez et al. revealed accurate registration of T2-weighted images of the Wistar rat brain (Fig. 2). Background removal and watershed-based segmentation algorithms of MR images of excised brains allowed fully automated Allen atlas registration of MR images of two mouse brains in one tube scanned in one session. The SPECT/CT pipeline enabled automated measurement of SPECT tracer content in Allen brain atlas regions.

Conclusions

Based on the widely used software packages SPM and elastix, ANTx2 presents a comprehensive toolbox to process rodent neuroimaging data including image segmentation, registration, quantification and statistics.

Acknowledgment

Work was supported by the Deutsche Forschungsgemeinschaft Cluster of Excellence NeuroCure (Exc 257), the German Federal Ministry of Education and Research (BMBF; 01EO0801, Center for Stroke Research Berlin), the BMBF under the ERA-NET NEURON scheme (01EW1811), and the German Research Foundation (DFG 428869206).
References
[1] Sawiak SJ, Wood NI, Williams GB, Morton AJ, Carpenter TA. SPMMouse: A New Toolbox for SPM in the Animal Brain. In: Proc Int’l Soc Mag Res Med. 2009, p 1086.
[2] Hikishima K, Komaki Y, Seki F, Ohnishi Y, Okano HJ, Okano H. In vivo microscopic voxel-based morphometry with a brain template to characterize strain-specific structures in the mouse brain. Sci Rep 2017; 7: 85.
[3] Valdes Hernandez P, Sumiyoshi A, Nonaka H, Haga R, Aubert Vasquez E, Ogawa T et al. An in vivo MRI Template Set for Morphometry, Tissue Segmentation, and fMRI Localization in Rats. Front Neuroinform 2011; 5: 26.
[4] Papp EA, Leergaard TB, Calabrese E, Johnson GA, Bjaalie JG. Waxholm Space atlas of the Sprague Dawley rat brain. Neuroimage 2014; 97: 374–386.
Fig. 1: Pipeline for image registration of SPECT/CT data to the Allen brain atlas.
The first step includes watershed-based segmentation of a brain mask from individual CT scans. These masks are registered to a brain mask template for which a transformation to the Allen space is known. The mask template and corresponding nonlinear transformation to the Allen brain atlas were generated based on a T2-weighted image from an animal of the same cohort. Finally, the CT and aligned SPECT image are transformed into Allen brain atlas space.
Fig. 2: Quality of rat brain MR atlas registration.
(A) T2w image in WA space overlaid with the template TPMs of gray matter (red), white matter (blue) and cerebrospinal fluid (green), and the hemispheric mask (light/dark shades of the TPMs). T2w image in WA space overlaid with (B) the WA atlas and (C) the VA atlas. (D) Deformation field (Jacobian) describing the amount of transformation (yellow: compression; blue: stretching). (E) T2w image with tissue segmentations in native space. T2w image in native space overlaid with (F) the WA atlas and (G) the VA atlas.
Keywords: mri, spect/ct, atlas registration, image registration
901

A free-time-point pharmacokinetic model for Dynamic Contrast Enhanced exploration

Julie Fefebvre1, Ikram Djebali1, Augustin Lecler1, Afef Bouchouicha1, Mailyn Perez-Liva1, Joevin Sourdon1, Isma Bentoumi1, Charles-André Cuenod1, 2, Daniel Balvay1

1 Université de Paris / Inserm, Parcc, Paris, France
2 AP-HP, Hôpital Européen Georges Pompidou, Service de radiologie, Paris, France

Introduction

Dynamic Contrast-Enhanced (DCE) MRI has been widely studied to identify microcirculatory disorders caused by pathologies such as tumors or ischemia. The capacity of DCE as a diagnostic aid has been demonstrated in many monocentric studies [1]. However, the interpretation of DCE curves remains problematic, which leads to difficulties in quantification [2]. In this work, we propose a minimal-hypothesis, free-time-point model (FTPM) to explore DCE data and test the hypotheses assumed by other models. The consistency of FTPM was evaluated by comparison with two current pharmacokinetic models (PMs).

Methods

FTPM generates tissue impulse responses (TIRs) by cubic Hermite spline interpolation between free time points constrained to be decreasing [Fig. 1]. Two incompatible pharmacokinetic models from the literature (2CX and AATH) [3] were used to generate DCE data, each considered in turn as the ground truth. Both datasets were fitted by FTPM to test whether the FTPM modelling complies with these two reasonable assumptions. The TIRs generated by the PMs were compared with those estimated by FTPM (TIRe) using a normalized quadratic error (QE). Qualitatively, readers were surveyed to determine whether observing TIRe was sufficient to identify the PM (i.e. its hypothesis) underlying the observations. FTPM performance was compared with that of the B-spline model [4].
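
The interpolation step can be sketched with SciPy's PCHIP, a monotone cubic Hermite interpolant which, like the FTPM constraint, cannot oscillate between decreasing reference points. The time points and values below are illustrative, not taken from the study:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# illustrative reference points (time in s, TIR value), constrained to decrease
t_ref = np.array([0.0, 5.0, 12.0, 25.0, 45.0, 80.0])
tir_ref = np.array([1.0, 0.90, 0.60, 0.30, 0.12, 0.02])

# monotone cubic Hermite interpolation: preserves the decreasing shape,
# i.e. no non-physiological oscillation between reference points
tir = PchipInterpolator(t_ref, tir_ref)

t = np.linspace(0.0, 80.0, 401)
values = tir(t)
```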

Results/Discussion

Whatever the PM used for data generation, QE was lower than 3% without noise and remained below 10% for a signal-to-noise ratio (SNR) of 20. Overall, the TIR patterns remained identifiable in the TIRe curves [Fig. 2], with respect to both the sharpness and the regularity of the curves to be evaluated. This compatibility between sharpness and stability was possible thanks to the temporal freedom of the model; it is not accessible with conventional deconvolution techniques. For SNR = 20, the sensitivity and specificity of PM identification from TIRe by readers were over 80%, meaning eight exams sufficed to identify the PM with 95% confidence. FTPM was more efficient and neutral than the B-spline model.

Conclusions

In DCE exams, TIR contains the information of the tissue microcirculation. Its challenging evaluation was performed efficiently with FTPM in a preliminary simulation study which included two pharmacokinetic assumptions. It resulted in TIR curves that were sufficiently accurate and precise to identify these assumptions qualitatively and quantitatively.

References
[1] O’Connor JPB, Jackson A, Parker GJM et al. DCE-MRI biomarkers in the clinical evaluation of antiangiogenic and vascular disrupting agents. Br J Cancer 2007; 96:189–95.
[2] Heye T, Davenport MS, Horvath JJ et al. Reproducibility of Dynamic Contrast-enhanced MR Imaging. Part I. Perfusion Characteristics in the Female Pelvis by Using Multiple Computer-aided Diagnosis Perfusion Analysis Solutions. Radiology 2013.
[3] Sourbron SP, Buckley DL. Classic models for dynamic contrast-enhanced MRI: CLASSIC MODELS FOR DCE-MRI. NMR Biomed 2013;26:1004–27.
[4] Jerosch-Herold M, Swingen C, Seethamraju RT. Myocardial blood flow quantification with MRI by model-independent deconvolution. Med Phys 2002;29:886–97.
Figure 2
Examples of deconvolution provided by FTPM on tissue data generated by two pharmacokinetic models, without noise (PMs: 2CX and AATH with equal parameter values). A: tissue enhancement provided by the 2CX and AATH models for the same arterial input function (AIF, in red); in green, the data fitting with FTPM. B: tissue impulse responses assumed by 2CX and AATH and their assessment by FTPM. The shapes of the curves reveal the hypothesis underlying the data.
Figure 1
FTPM principle. Including the bolus arrival time (τo), six reference points (blue crosses) are evaluated in time and intensity, assuming a decreasing constraint (green arrows). TIR values at any time are evaluated by cubic Hermite spline interpolation, to avoid oscillations between reference points that would be incompatible with physiology.
Keywords: DCE, pharmacokinetic, deconvolution, spline
902

A comparison of different CEST quantification techniques in vitro and in vivo

Daniel Schache1, Solène Bardin2, Henriette Lambers1, Julien Flament3, Fawzi Boumezbeur2, Cornelius Faber1, Luisa Ciobanu2, Verena Hoerr1

1 University Hospital Münster, Department of Clinical Radiology, Münster, Germany
2 CEA, CEA/DRF/Joliot/Neurospin/UNIRS, Gif-sur-Yvette, France
3 CEA, CEA/DRF/Jacob/MIRCen, Fontenay-aux-Roses, France

Introduction

CEST MRI is based on the chemical exchange of protons between water molecules and solutes, which allows measuring low-concentration metabolites with enhanced sensitivity [1]. CEST data are typically quantified by calculating the asymmetry ratio from measured Z-spectra, which contain overlapping signals from different solutes [2].

Fitting the Z-spectrum with Lorentzian functions, based on the Bloch-McConnell equations, was proposed as an alternative to quantify dissolved solutes and metabolites [3].

Here, we assess these two quantification methods both in vitro and in vivo.

Methods

Phantoms of glucose (10-60 mM) and lactate (15-90 mM) were measured on a 9.4 T Biospec MR system (Bruker) equipped with a 72 mm quadrature coil. Datasets were acquired with a CEST RARE sequence [4].

In vivo mouse datasets were recorded under 1-2% isoflurane on an 11.7 T Biospec MR system (Bruker) equipped with a cryoprobe. Datasets were acquired with a CEST RARE sequence: TE/TR 30 ms/5000 ms, FOV (12.8 mm)2, matrix 128x128, resolution (0.1 mm)2. 23 saturation offsets from -6.25 to 6.25 ppm were sampled, and two S0 measurements at ±50 ppm were recorded.

For Lorentzian fitting, CEST datasets were corrected using WASSR [5] and first fitted locally for the individual components (metabolites, direct saturation (DS), magnetization transfer (MT)). These results were used as starting values for a global fit.
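
The fitting model can be sketched as a sum of Lorentzians over the inverted Z-spectrum; the version below is simplified to direct saturation plus a single CEST pool, with purely synthetic amplitudes, widths and sampling (not the study's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, center, fwhm):
    # amplitude-normalized Lorentzian line
    return amp * (fwhm / 2) ** 2 / ((x - center) ** 2 + (fwhm / 2) ** 2)

def model(x, a_ds, w_ds, a_cest, c_cest, w_cest):
    # inverted Z-spectrum: direct water saturation (at 0 ppm) + one CEST pool
    return lorentzian(x, a_ds, 0.0, w_ds) + lorentzian(x, a_cest, c_cest, w_cest)

# synthetic noisy spectrum with a CEST pool at 2.0 ppm
offsets = np.linspace(-6.25, 6.25, 101)
rng = np.random.default_rng(0)
data = model(offsets, 0.8, 1.5, 0.1, 2.0, 0.8) + rng.normal(0, 0.002, offsets.size)

# local-then-global fitting reduces here to one fit with starting values p0
popt, _ = curve_fit(model, offsets, data, p0=[0.5, 1.0, 0.05, 1.8, 1.0])
a_ds, w_ds, a_cest, c_cest, w_cest = popt
auc_cest = a_cest * np.pi * w_cest / 2  # analytic area under the CEST Lorentzian
```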

Results/Discussion

Fig. 1a shows the corrected and inverted Z-spectrum of an in vitro phantom containing a mixture of glucose and lactate at a concentration of 50 mM. In the asymmetry spectrum (Fig. 1b), the two compounds could not be quantified separately due to their close resonance frequencies (glucose: 0.66, 1.28, 2.08 and 2.88 ppm; lactate: 0.4-0.5 ppm). In contrast, the contributions of glucose and lactate to the signal could be resolved by Lorentzian fit functions (Fig. 1a) and quantified by calculating the area under the curve (AUC): 59.0±0.6 mM (glucose); 45.2±0.9 mM (lactate). In the ranges of 15-90 mM (lactate) and 10-60 mM (glucose), an approximately linear correlation between concentration and AUC was measured.

Lorentzian fitting also enabled selective assignment of major metabolites in vivo (Fig. 2). In the CEST spectrum of the hippocampus of a mouse brain, glutamate was identified, glucose was present only in small amounts, and lactate was not detected.

Conclusions

Lorentzian fitting offers the potential for a more precise determination of the individual contributions of the CEST spectrum compared to the asymmetry spectrum. However, especially for in vivo quantification, the development and simulation of suitable boundary conditions for the fit parameters on the basis of measurable quantities like exchange rate, relaxation times, metabolite concentrations and saturation intensities is necessary.

Acknowledgment

This work is supported by the ANR and DFG under the project BAMBOO.
References
[1] Ward KM et al. (2000), J Magn. Reson. 143(1): 79-87.
[2] Guivel-Scharen V et al. (1998), J Magn. Reson. 133(1): 36-45.
[3] Zaiss M et al. (2011) J Magn. Reson. 211(2): 149-155.
[4] Kentrup D et al. (2017) Kidney International 92(3): 757-764.
[5] Kim M et al. (2009) Magn. Reson. Med. 61(6):1441-1450.
In vitro analysis of a glucose-lactate phantom.
(a) Inverted CEST spectrum (1-normalized signal) with Lorentzian fits of the individual components (least square: 0.0066), (b) asymmetry spectrum MTRasym.
In vivo analysis of a mouse brain.
(a) Inverted CEST spectrum (1-normalized signal) with Lorentzian fits of the major metabolites (least square: 0.7027), (b) asymmetry spectrum MTRasym.
Keywords: CEST, Lorentzian Fitting, CEST Quantification
903

Application of Deep Learning approaches to segmentation of white matter hyperintensities

Lorenzo Carnevale1, Giuseppe Lembo1, 2

1 I.R.C.C.S. INM Neuromed, Department AngioCardioNeurology and Translational Medicine, Pozzilli, Italy
2 University of Rome, Department of Molecular Medicine, Rome, Italy

Introduction

White matter hyperintensities (WMH) are a major hallmark of cerebrovascular diseases such as small vessel disease or cerebral atherosclerosis. The link between the onset and progression of cognitive decline in middle age and the presence of white matter hyperintensities calls for better methods to segment the lesions and extract quantitative parameters of volume and position, to overcome the qualitative grading common in clinical practice. To this aim, we implement a deep learning approach, which has proven an unparalleled tool for solving computer vision and pattern recognition problems.

Methods

The segmentation approach implements a deep neural network trained on publicly available images (White Matter Hyperintensities Segmentation Challenge, MICCAI 2017 [1]). The network is a Fractal U-Net [2-3], an architecture which leverages the structure of U-Net without passing any non-filtered residual to the subsequent layers. The network is trained to segment white matter hyperintensities on 80 x 80 patches extracted from T2-FLAIR images, in both original and standard space. The model was implemented in TensorFlow using the Keras programming libraries, with the Adam optimizer, a variable learning rate (0.001 with decay), 150 training epochs, and a Dice coefficient loss function. Data augmentation was performed by reflecting patches along the X and Y axes.
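
The Dice coefficient loss used for training can be sketched framework-agnostically; the NumPy version below (rather than Keras, to keep it self-contained) shows the quantity being optimized, on invented toy masks:

```python
import numpy as np

def dice_coefficient(y_true, y_pred, eps=1e-7):
    """Soft Dice overlap between a binary mask and a probability map."""
    y_true = y_true.astype(np.float64).ravel()
    y_pred = y_pred.astype(np.float64).ravel()
    intersection = np.sum(y_true * y_pred)
    return (2.0 * intersection + eps) / (np.sum(y_true) + np.sum(y_pred) + eps)

def dice_loss(y_true, y_pred):
    # minimized during training: perfect overlap gives loss 0
    return 1.0 - dice_coefficient(y_true, y_pred)

# toy 80x80 patch: ground-truth lesion vs. a shifted prediction
mask = np.zeros((80, 80)); mask[20:40, 20:40] = 1
pred = np.zeros((80, 80)); pred[25:45, 20:40] = 1
print(round(dice_coefficient(mask, pred), 3))  # 0.75
```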

Results/Discussion

With our network we achieved a 0.944 Dice score on the validation dataset, with a precision of 0.951, a recall of 0.938 and a segmentation accuracy of 0.994 (Figure 1). To apply the segmentation model in standard space, we compared three interpolation approaches for the affine registration moving images into MNI152 standard space: trilinear, B-spline and sinc-window interpolation. The network was retrained on patches extracted from the co-registered images with the same architecture parameters. The non-linear interpolation methods performed better on average, achieving Dice coefficients of 0.872 and 0.868, in contrast with 0.836 for trilinear interpolation. A similar trend is evident in precision (0.883 and 0.879 versus 0.851), with no differences in recall (0.865 and 0.855 versus 0.869) (Figure 2).

Conclusions

With our work we present a novel approach to segmenting white matter hyperintensities on FLAIR imaging, without the need for 3D isometric voxels thanks to the 2D patch approach. Moreover, we compared different co-registration strategies to evaluate the impact of image resampling on deep learning segmentation, pointing out the better preservation of patterns with non-linear interpolations.

Acknowledgment

This work has been supported by the Ministry of Health "Ricerca corrente" and "5x1000" funds.
References
[1] H. J. Kuijf et al., "Standardized assessment of automatic segmentation of white matter hyperintensities; results of the wmh segmentation challenge," IEEE transactions on medical imaging, 2019.
[2] D. Liciotti, M. Paolanti, R. Pietrini, E. Frontoni, and P. Zingaretti, "Convolutional networks for semantic heads segmentation using top-view depth data in crowded environment," in 2018 24th International Conference on Pattern Recognition (ICPR), 2018, pp. 1384-1389: IEEE.
[3] G. Larsson, M. Maire, and G. Shakhnarovich, "Fractalnet: Ultra-deep neural networks without residuals," arXiv preprint arXiv:1605.07648, 2016.
Figure 1
Original Patch, Ground Truth, Predicted Lesion and Differences Map of the segmented patch
Figure 2
Evaluation of the model at each epoch on test data.
Keywords: Deep Learning, Segmentation, White Matter Hyperintensities
904

Improved machine learning based diagnosis of paediatric brain tumours combining Magnetic Resonance Spectroscopy, relaxation measurements, and targeted feature selection

James T. Grist1, Dadi Zhao1, Andrew Peet1

1 University of Birmingham, Institute of Cancer and Genomic Sciences, Birmingham, United Kingdom

Introduction

Childhood brain tumours are the leading cause of oncological mortality in the paediatric population. Magnetic resonance imaging (MRI) and spectroscopy (MRS) studies have been undertaken to use non-invasive methods to automatically discriminate between the three most common tumour types: Pilocytic astrocytoma (PA), medulloblastoma (M), and ependymoma (E)(1,2). Here we use MRS and relaxometry with a combination of different feature selection methods to discriminate between the three major tumour types.

Methods

Patients with confirmed brain tumours underwent single-voxel spectroscopy (M=33, E=11, PA=39; ethics reference 04/MRE0/41). Spectroscopy data were processed using TARQUIN (4.3.11), and water T2* was quantified using a Laplacian fit (3). Multiple feature selection methods were used to determine the optimal combination of metabolites for classification: 3T features (citrate, glutathione, glycine, lactate, taurine, total (T) NAA, TCholine (TCho), TCreatine (TCr), with/without T2*), ex vivo features (TCr, combined glutamate/glutamine (Glx), glycine, TCho, scyllo-inositol, with/without T2*), and principal component analysis (PCA)-reduced metabolites from the full 1H brain basis.
Supervised machine learning using an SVM, AdaBoost, and a neural network was performed with 5-fold cross-validation (Orange).
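
The cross-validated classification step can be sketched with scikit-learn (the study used Orange; the synthetic feature matrix below merely stands in for the metabolite features, with classes 0/1/2 standing in for M, E and PA):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# toy stand-in for the metabolite feature matrix (83 patients x 8 features)
X, y = make_classification(n_samples=83, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)

# scale features, then fit an SVM; evaluate with 5-fold cross-validation
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```

Swapping `SVC` for `AdaBoostClassifier` or `MLPClassifier` reproduces the other two learners compared in the study.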

Results/Discussion

Previous results combining PCA with linear discriminant analysis to classify brain tumours yielded learners with 70-85% precision (1,2). Here, all forms of feature selection provided accurate (>90% overall precision) classification of tumours. Combining the 1.5T data used in this study with the 3T features from previous work (2) provided the best classifier for predicting PA (98%); ex vivo features with T2* were best for M, and both ex vivo and PCA features for E. The classifier with the highest overall precision (averaged over all classes) and F-statistic was the 3T feature set combined with a neural network (94% precision, 0.94 F-statistic). All other results are detailed in Tables 1 and 2. Adding T2* data to the metabolite features provided a moderate increase in learner accuracy (range 0-5%).
These results show that a number of feature selection methods, combined with supervised machine learning, can provide a significant increase in predictive power for the classification of paediatric brain tumours.

Conclusions

This study has shown the potential of many different feature selection methods to discriminate between the three main paediatric brain tumour groups. Further work showed that relaxometry provided an additional informative feature for classification. Finally, this work can be expanded by incorporation into a decision support tool to aid clinical practice (4).

Acknowledgment

This study was funded by the Little Princess Trust, and Help Harry Help One PhD student fund.

References
[1] Zarinabad N, Abernethy LJ, Avula S, et al. Application of pattern recognition techniques for classification of pediatric brain tumors by in vivo 3T 1 H-MR spectroscopy-A multi-center study. Magn. Reson. Med. 2017;2366:2359–2366.
[2] Vicente J, Fuster-Garcia E, Tortajada S, et al. Accurate classification of childhood brain tumours by in vivo 1H MRS-A multi-centre study. Eur. J. Cancer 2013;49:658–667.
[3] Provencher SW. CONTIN : A general purpose constrained regularization program for inverting noisy linear algebraic and integral equations. Comput. Phys. Commun. 1982;27:229–242.
[4] Zarinabad N, Meeus EM, Manias K, Foster K, Peet A. Automated modular magnetic resonance imaging clinical decision support system (MIROR): An application in pediatric cancer diagnosis. J. Med. Internet Res. 2018;20:1–16.
Table 1
Overall precision, area under the curve (AUC) and F-statistic results for each feature selection method with and without T2* relaxation data.
Table 2
Individual class accuracies for each feature set.
Keywords: MRS, Machine Learning, Paediatric, Tumour, Cancer
905

SAMRI — A Comprehensive Workflow Collection for Small Animal Magnetic Resonance Imaging

Horea-Ioan Ioanas1, Markus Marks2, Fatih Mehmet Yanik2, Markus Rudin1

1 ETH and University of Zurich, Institute for Biomedical Engineering, Zurich, Switzerland
2 ETH and University of Zurich, Institute of Neuroinformatics, Zurich, Switzerland

Introduction

Magnetic Resonance Imaging (MRI) is a measurement method with high depth penetration, which enjoys extensive use in imaging organs with an intricate holistic mode of function, such as the brain.
MRI has significant translational potential, constituting a mainstay of human brain imaging, and finding increased use in preclinical research on model animals, such as mice and rats.
A main challenge in the analysis of preclinical MRI data is the adaptation of existing toolkits, designed primarily for human use, to the constraints and additional experimental capabilities accessible in small animals.

Methods

Workflows for the most common variations of MRI data processing and analysis are available in an automatically manageable software package (SAMRI).
The front end interfaces providing access to the workflows are available in Bash and Python, with the workflow control being implemented in Python, using workflow construction and execution support from the nipype package.
Established toolkits from human brain imaging, including ANTs, FSL, AFNI, SciPy, scikit-learn, and nilearn, are leveraged for atomized data processing applications.
Plotting functions (using matplotlib and Blender) are distributed alongside the workflows, enabling operator quality control and the generation of publication-ready plots without the need of graphical editing.

Results/Discussion

We have created a collection of high-level workflows, which distribute extensive procedural knowledge associated with MRI data handling, as well as domain knowledge regarding small animal neuroimaging, in a transparent and reusable form.
These workflows encompass the conversion from the proprietary Bruker format to the Brain Imaging Data Structure1,2, preprocessing3, as well as scan-level and population-level neuroimaging statistics.
The workflow package is integrated with automatically manageable data sources (e.g. standardized mouse brain atlases, as well as connectivity and gene expression data from the Allen Institute), which can automatically be queried and used for feature contrasts between e.g. function and structure.
To improve comparability with model animal as well as human neuroimaging results, we ship slice-wise plotting functions (fig.1, more common in model animal MRI and histology), as well as 3D plotting functions (fig.2, more common in human MRI).

Conclusions

Our work enables the analysis of preclinical MRI data in a standardized, automated, and quality-controlled fashion.
It leverages state-of-the-art preclinical data repositories, as well as cutting-edge analysis functionalities from human MRI.
Further, it permits publishing neuroimaging articles as fully reproducible software, a crucial functionality for the improvement of translational neuroimaging and scientific reproducibility.

References
[1] Gorgolewski, Krzysztof J and Auer, Tibor and Calhoun, Vince D and Craddock, R Cameron and Das, Samir and Duff, Eugene P and Flandin, Guillaume and Ghosh, Satrajit S and Glatard, Tristan and Halchenko, Yaroslav O and others 2016, ‘The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments’, Scientific Data, 3, 160044, Nature Publishing Group
[2] Ioanas, Horea-Ioan and Marks, Markus and Garin, Clément and Dhenain, Marc and Yanik, Mehmet Fatih and Rudin, Markus 2019, ‘An Automated Open-Source Workflow for Standards-Compliant Integration of Small Animal Magnetic Resonance Imaging Data’, bioRxiv, Cold Spring Harbor Laboratory
[3] Ioanas, Horea-Ioan and Marks, Markus and Yanik, Mehmet Fatih and Rudin, Markus 2019, ‘An Optimized Registration Workflow and Standard Geometric Space for Small Animal Brain Imaging’, bioRxiv, Cold Spring Harbor Laboratory
Coronal Slice Comparison between VTA Functional and Structural Projection Areas.
Slice-wise plot depicting t-statistic scores thresholded at t ≥ 3, for both structural (green contour) and functional (orange-red heat map) VTA projection areas, with structural data sourced from the Allen Brain Institute database.
Plotting, as well as data preprocessing and analysis, are fully controlled via workflows from the SAMRI package.
3D and Parcellation-Region Density Visualizations of Seed-Based Connectivity from the VTA.
Visualization of VTA seed-based connectivity during block optogenetic stimulation of light-sensitive dopaminergic cells of the VTA. The top-level plotting, as well as the data conversion, preprocessing, and analysis, are fully controlled via workflows distributed in the SAMRI package.
Keywords: data processing, MRI, fMRI, preclinical, translational
906

Determining Adequate Quantitative Indocyanine Green Fluorescence of Parathyroid Glands to Avoid Hypoparathyroidism in Thyroid Surgery

Milou Noltes1, 2, Madelon Metman2, Wido Heeman2, 3, 4, Lorne Rotstein1, Adrienne Brouwers5, Gooitzen M. van Dam2, Schelto Kruijff2, Jesse Pasternak1

1 University Health Network, Endocrine Surgery, Toronto, Canada
2 University Medical Center Groningen, Surgical Oncology, Groningen, Netherlands
3 University of Groningen, Faculty Campus Fryslân, Leeuwarden, Netherlands
4 LIMIS Development BV, Leeuwarden, Netherlands
5 University Medical Center Groningen, Nuclear Medicine & Molecular Imaging, Groningen, Netherlands

Introduction

The goal of total thyroidectomy (TTx) is to remove all thyroid tissue while ensuring adequate perfusion of the adjacent parathyroid glands. Iatrogenic hypoparathyroidism is a critical complication of TTx. The use of indocyanine green (ICG) has been proposed to assess parathyroid perfusion during TTx, aiming to guide autotransplantation and prevent hypoparathyroidism. To date, a method for quantifying ICG uptake that predicts hypoparathyroidism is lacking. This study aims to quantitatively assess parathyroid perfusion patterns using ICG angiography in order to develop a predictive model for hypoparathyroidism.

Methods

This is a prospective proof-of-concept study using ICG angiography during TTx. ICG (10 mg total) was injected intravenously following surgical dissection, and the fluorescence intensity of parathyroid flow was measured in real time for 2 minutes. Perfusion graphs were produced, plotting time versus fluorescence signal per parathyroid gland. Parathyroid perfusion patterns were analyzed based on the perfusion graph curves. The main quantification endpoints, characterizing the inflow-to-outflow curve, were the perfusion slope (in arbitrary flow units [AFU]/second) and the inflow to 80% outflow peak range (IOPR), in seconds.
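
One plausible computation of the two endpoints from a sampled perfusion curve is sketched below; the exact definitions used in the study may differ, and the onset threshold, curve shape and numbers are assumptions for illustration only:

```python
import numpy as np

def perfusion_metrics(t, f, onset_frac=0.1, outflow_frac=0.8):
    """Perfusion slope (AFU/s) and inflow-to-80%-outflow peak range (IOPR, s).

    Assumed definitions: slope = peak-above-baseline amplitude over the time
    from inflow onset to peak; IOPR = time from onset until `outflow_frac`
    of that amplitude has washed out again.
    """
    baseline = f[0]
    peak_idx = int(np.argmax(f))
    amp = f[peak_idx] - baseline
    # inflow onset: first sample exceeding onset_frac of the amplitude
    onset_idx = int(np.argmax(f > baseline + onset_frac * amp))
    slope = amp / (t[peak_idx] - t[onset_idx])
    # outflow: first post-peak sample at or below the residual level
    target = baseline + (1.0 - outflow_frac) * amp
    post = np.where(f[peak_idx:] <= target)[0]
    iopr = (t[peak_idx + post[0]] - t[onset_idx]) if post.size else float("nan")
    return slope, iopr

# synthetic fluorescence curve: flat baseline, fast inflow, slower outflow
t = np.linspace(0.0, 120.0, 601)
f = np.interp(t, [0, 5, 15, 60, 120], [10, 10, 110, 20, 12])
slope, iopr = perfusion_metrics(t, f)
```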

Results/Discussion

Nine patients were included in this ongoing study. Three patients were found to have postoperative hypoparathyroidism (PTH <14.1 pg/mL [<1.5 pmol/L]) and were started on calcium and calcitriol supplementation, which was monitored in the ambulatory setting. The perfusion graphs of patients without postoperative hypoparathyroidism showed a steeper slope and faster outflow than those of patients with postoperative hypoparathyroidism (Figure 1). In post-dissection well-perfused parathyroid glands, the fluorescence slope was >7.1 AFU/second with a visualized IOPR <17 seconds. In patients with postoperative hypoparathyroidism, the slope of the best-perfused gland was significantly lower (<4.0 AFU/second) and the IOPR was >22 seconds.

Conclusions

Analyzing ICG perfusion patterns using perfusion graphs may be a feasible method to quantitatively assess parathyroid perfusion and predict short-term hypoparathyroidism. The next phase of investigation is correlation of this fluorescence model to predict long-term viability of parathyroid glands, providing the surgeon a roadmap for autotransplantation.

Figure 1. Parathyroid ICG fluorescence images and corresponding perfusion graphs
Parathyroid (arrow) ICG fluorescence images of a well perfused parathyroid gland (A) and a potentially compromised parathyroid gland (B), with corresponding perfusion graphs of a patient without postoperative hypoparathyroidism (C) and a patient with postoperative hypoparathyroidism (D). Each curve represents a single parathyroid gland. In the patient with postoperative hypoparathyroidism (B), one parathyroid gland was not identified intra-operatively, and was later identified in the thyroid specimen (intrathyroidal parathyroid gland). 
Keywords: Quantification, indocyanine green, hypoparathyroidism, thyroid surgery
907

The effect of external factors on the calibration quality and optical property determination in quantitative reflectance and fluorescence spectroscopy

Iris Schmidt1, Wouter B. Nagengast1, Dominic Robinson2

1 University Medical Center Groningen, Department of Gastroenterology and Hepatology, Groningen, Netherlands
2 Erasmus Medical Center Rotterdam, Department of Otorhinolaryngology & Head and Neck Surgery, Rotterdam, Netherlands

Introduction

Quantitative MDSFR/SFF spectroscopy is performed in multiple clinical studies to quantify fluorescence intensity both in vivo and ex vivo. The technique determines the optical properties of the tissue and corrects the measured fluorescence for these parameters, yielding a measurement of the intrinsic fluorescence. Accurate quantification of the optical properties and fluorescence signal requires a calibration of the system before each procedure. This calibration accounts for the spectral illumination, transmission and detection efficiencies of the measurement system.

Methods

The aim was to determine the variability in the calibration and measurements of the system over time by assessing the calibration quality (SNR) and the optical property determination. Calibration of the system was performed as described previously [1, 2] and comprises an integrating sphere calibration, a calibrated light source measurement, and measurements in water and in a liquid phantom containing 6.6% Intralipid-20%. The two parameters were determined by measuring the variability in external factors such as fiber condition, liquid phantom quality and the device's fiber-optic components. The same standardized calibration procedure, including a tripod, was performed with different fibers, different liquid phantoms of Intralipid-20% and various sets of fiber-optic components.

Results/Discussion

The condition and performance of the fiber-optic components and of the clinically used fiber itself showed a considerable influence on the calibration measurement. A deterioration in condition reduces the measured spectrometer counts and therefore the overall calibration performance. Five measurements per liquid phantom are plotted for five different clinical fibers (fig. 1). The counts/ms of each measurement per fiber correlate with the fiber's condition. The fluctuation within a fiber is the result of variance in the calibration procedure. The %SD of the mean for fibers 1 to 5 is 2.27, 10.67, 2.82, 1.57 and 4.87, respectively. The outlier in fiber 2 is possibly the result of illumination differences during measurement, caused by a miscalibration. Detailed knowledge of the effects of external factors on the calibration could improve the correction and analysis of clinical measurements, as variances could be better attributed to either external factors or the tissue.
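As an illustration, the per-fiber %SD of the mean (the coefficient of variation of the repeat calibration counts) reduces to a short calculation; the counts below are hypothetical sample values, not the measured data:

```python
import statistics

def percent_sd(counts_per_ms):
    """%SD of the mean: sample standard deviation as a percentage of the mean."""
    mean = statistics.mean(counts_per_ms)
    sd = statistics.stdev(counts_per_ms)
    return 100.0 * sd / mean

# Five repeat calibration measurements for one fiber (illustrative values).
fiber_counts = [101.0, 99.5, 100.8, 98.9, 100.3]
print(f"%SD of the mean: {percent_sd(fiber_counts):.2f}")
```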

Conclusions

We demonstrated the effect of day-to-day variance on the calibration quality (SNR) and therefore the optical property measurement. The influence of these variations on in vivo and ex vivo clinical measurements of both optical properties and intrinsic fluorescence will be presented.

References
[1] Gamm, U. A., Kanick, S. C., Sterenborg, H. J. C. M., Robinson, D. J., & Amelink, A. (2012). Quantification of the reduced scattering coefficient and phase-function-dependent parameter γ of turbid media using multidiameter single fiber reflectance spectroscopy: experimental validation. Optics Letters, 37(11), 1838-1840.
[2] Hoy, C. L., Gamm, U. A., Sterenborg, H. J., Robinson, D. J., & Amelink, A. (2013). Method for rapid multidiameter single-fiber reflectance and fluorescence spectroscopy through a fiber bundle. Journal of Biomedical Optics, 18(10), 107005.
Calibration measurements for five different fibers
Keywords: MDSFR/SFF spectroscopy, calibration, validation
908

Impact of varying CT reconstruction parameters on measured Hounsfield Units

Wendy McDougald1, 2, Sam Watson3, Adriana A. S. Tavares1, 2

1 University of Edinburgh, BHF-Centre for Cardiovascular Science, College of Medicine & Veterinary Medicine, Edinburgh, United Kingdom
2 University of Edinburgh, Edinburgh Preclinical Imaging, Edinburgh, United Kingdom
3 University of Aberdeen, School of Medicine and Dentistry, Aberdeen, United Kingdom

Introduction

Preclinical computed tomography (CT) is a widely used in vivo imaging technique providing anatomical information, expanding its usefulness as a quantitative tool. For example, cardiovascular research evaluating calcium deposits in vessels uses CT Hounsfield Units (HU) to help quantify the level and vulnerability of atherosclerotic plaque. Reconstructing CT images with different filters and parameters to improve spatial resolution or signal-to-noise ratio is standard procedure. This study focuses on the impact such variations have on measured HUs, regardless of the filter's primary purpose.

Methods

A CT image was acquired of the air/water quality control and the tissue equivalent material (TEM) phantoms on the Mediso nanoPET/CT using a low-dose protocol (tube voltage of 25 kVp, exposure of 170 ms, 360 projections, dose < 13 mGy). The images were reconstructed multiple times varying the filter (Ram-Lak, Butterworth, Shepp-Logan, Cosine, Hamming, Hanning and Blackman), slice thickness, pixel size and filter cut-off value. Reconstructed images were exported to PMOD image analysis software for the extraction of HUs. A set volume of interest template was used on each CT image for HU extraction in PMOD. HU values for air, water, lung, muscle and cortical bone were analysed.

Results/Discussion

As expected, the data revealed slight variations across the different applied filters when other parameters remained constant. The greatest HU percent difference measured between filters was 15% in muscle HU, between the high-pass and low-pass filters (Blackman vs Ram-Lak). Otherwise the biases remained < 3% for air, water, lung and cortical bone HU across all filters. However, substantial biases in measured HU were seen within each filter when the slice thickness, pixel size and/or cut-off value was varied. The greatest HU percent difference, 68%, was measured in muscle using the Shepp-Logan filter while varying the pixel size (from a 972x972 matrix, 0.125x0.125 mm, to a 324x324 matrix, 0.376x0.376 mm). The greatest HU percent differences measured for air, water, lung and cortical bone were 0.9% (Hamming), 16% (Ram-Lak), 11% (Butterworth) and 7% (Butterworth), respectively. Measured HU results for muscle (soft tissue) were the most affected by varying the reconstruction parameters.
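The percent-difference comparison between two mean HU values can be sketched as follows; the abstract does not state which definition was used, so the symmetric form below is an assumption, and the HU values are illustrative:

```python
def percent_difference(hu_a, hu_b):
    """Symmetric percent difference between two mean HU values,
    relative to the mean of their magnitudes (an assumed definition)."""
    return 100.0 * abs(hu_a - hu_b) / ((abs(hu_a) + abs(hu_b)) / 2.0)

# Illustrative muscle HU means for two reconstruction filters
# (hypothetical numbers, not the study's measured values).
hu_blackman, hu_ramlak = 48.0, 55.8
print(f"Muscle HU percent difference: {percent_difference(hu_blackman, hu_ramlak):.1f}%")
```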

Conclusions

Nowadays, preclinical CT standardisation and lower-dose CT imaging are sought after in order to generate more reliable, reproducible results and avoid ionizing radiation overexposure. Given the biases shown, the choice of reconstruction filter warrants revisiting. Choosing suitable CT reconstruction parameters requires going beyond spatial resolution and noise trade-offs to consider potential quantitative biases in measured HU.

Measured HU results

Figure 1: Tukey box-and-whisker plots show the HU measurements for each material (air, water, lung, muscle and cortical bone). Results represent the bias generated by the different parameter variations (slice thickness, pixel size and filter cut-off value) within each filter. Densities for lung, muscle and cortical bone are 0.21, 1.06 and 1.57 g/mL, respectively. TEM phantom rods are 4 mm as reported by the manufacturer. A set template volume of interest was used on each CT image for HU extraction in PMOD.

Keywords: Preclinical computed tomography, Hounsfield Units, CT filters
909

Simultaneous acquisition with a dual camera setup for 3D behavioral studies in Neocaridina davidi

María Revuelta1, María del Carmen Prieto1, Diego Díaz1, Manuel Desco Menéndez1, 2, 3, Roberto Fernández1, 4, Jorge Ripoll1, 2

1 Universidad Carlos III de Madrid, Bioengineering and Aerospace Engineering Department, Leganés, Spain
2 Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
3 Centro Nacional de Investigaciones Cardiovasculares Carlos III (CNIC), Madrid, Spain
4 I.U. Física Aplicada a las Ciencias y las Tecnologías, Universidad de Alicante, Alicante, Spain

Introduction

The snowball shrimp is a variety of Neocaridina davidi suitable for behavioral studies due to its small size, transparency and social behavior. The aim of this study is to cluster the behaviors of this invertebrate by analyzing changes in position, velocity and orientation over several days. A system of two CMOS cameras was designed and built to obtain a 3D reconstruction of an observation aquarium. In addition, a custom-built LabVIEW code allowed simultaneous image acquisition from both cameras and subsequent processing in real time.

Methods

The cameras were placed at the front and on top of the aquarium to observe the y-z plane and the x-y plane, respectively. A NIR LED was placed at the top corner to improve image contrast and facilitate the subsequent segmentation. Through LabVIEW, both cameras were initiated together and read simultaneously, assuming identical exposure times and frame rates. Images were obtained with a resolution of 70 µm and a field of view that accommodated the whole aquarium (approx. 30 cm in width). Each image was segmented using a threshold-based method in which the shrimps were the region of interest (ROI), and a binary (BW) image containing only the animal was saved. To distinguish the images, each was tagged with a camera identifier and a time stamp.
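The threshold-based segmentation and tagging step might be sketched as follows (the threshold value, frame sizes and file-naming scheme are illustrative assumptions, not the LabVIEW implementation):

```python
import numpy as np
from datetime import datetime, timezone

def segment_shrimp(frame, threshold):
    """Threshold-based segmentation: pixels darker than `threshold` are foreground."""
    mask = frame < threshold          # shrimp appear dark against the NIR-lit background
    return mask.astype(np.uint8)      # binary (BW) image containing only the animal

def tagged_name(camera_id):
    """Tag each saved frame with a camera identifier and a time stamp."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S_%f")
    return f"cam{camera_id}_{stamp}.png"

# Illustrative 8-bit frame with one dark blob on a bright background.
frame = np.full((64, 64), 200, dtype=np.uint8)
frame[20:30, 15:25] = 30
bw = segment_shrimp(frame, threshold=100)
print(int(bw.sum()), tagged_name(0))
```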

Results/Discussion

The 3D localization of the animals using the two views from the segmented images will allow the extraction of several parameters: length, width, an estimate of the volume, and the exact coordinates of the center of mass of the shrimps at each moment. These primary features will be quantified and analyzed with MATLAB to generate a clustering of behaviors. To obtain this clustering, a multi-dimensional space composed of several parameters (e.g. position, speed, acceleration, path length, path reversal, etc.) will be created. These will allow classifying simple distinct behaviors (e.g. resting, foraging, swimming, etc.) as an initial approach. This behavioral study will take place without intervention and with appropriate water and food availability. Once the different behaviors have been classified, an ethogram will be built to show the evolution of each individual shrimp over time.
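The two-view fusion underlying the 3D localization can be sketched as follows (the axis conventions and mask sizes are assumptions for illustration; the actual camera calibration of the setup is not reproduced):

```python
import numpy as np

def centroid(mask):
    """Center of mass (row, col) of a binary mask."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def locate_3d(top_mask, front_mask):
    """Fuse the top (x-y) and front (y-z) views into one 3D position.

    Assumed conventions: top view rows -> x, cols -> y;
    front view rows -> z, cols -> y. The y coordinate seen by
    both cameras is averaged.
    """
    x, y_top = centroid(top_mask)
    z, y_front = centroid(front_mask)
    return x, (y_top + y_front) / 2.0, z

# Toy masks with a single segmented shrimp in each view.
top = np.zeros((48, 64), dtype=bool); top[10:14, 20:26] = True
front = np.zeros((48, 64), dtype=bool); front[30:34, 20:26] = True
print(locate_3d(top, front))   # (x, y, z) center of mass
```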

Conclusions

This experiment will allow extracting the variation in behaviors of Neocaridina davidi over long periods of time while simulating its natural environment. The changes in locomotion could be useful for understanding the organism and distinguishing mutants of this species. Even though this species has received little attention in research, we believe its transparency makes it an ideal invertebrate for behavioral and 3D microscopy imaging studies.

Acknowledgment

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 801347 SENSITIVE and Spanish Ministry of Economy and Competitiveness (MINECO) Grant FIS2016-77892-R. R.F acknowledges funding from Generalitat Valenciana and European Social Fund through postdoctoral grant APOSTD/2018/A/084. The CNIC is supported by the Ministerio de Ciencia, Innovación y Universidades and the Pro CNIC Foundation, and is a Severo Ochoa Center of Excellence (SEV-2015-0505).
References
[1] Yemini, E., Jucikas, T., Grundy, L. J., Brown, A. E., & Schafer, W. R. (2013). 'A database of Caenorhabditis elegans behavioral phenotypes'. Nature Methods, 10(9), 877.
[2] Robie, A. A., Hirokawa, J., Edwards, A. W., Umayam, L. A., Lee, A., Phillips, M. L., ... & Reiser, M. B. (2017). 'Mapping the neural substrates of behavior'. Cell, 170(2), 393-406.
[3] Vogelstein, J. T., Park, Y., Ohyama, T., Kerr, R. A., Truman, J. W., Priebe, C. E., & Zlatic, M. (2014). 'Discovery of brainwide neural-behavioral maps via multiscale unsupervised structure learning'. Science, 344(6182), 386-392.
Figure 1
Shrimp images acquired with LabVIEW. The images on the left were taken using a NIR LED. The top-right image was illuminated with white light from the side. The bottom-right image was acquired from the top with white illumination.
Figure 2
Flow diagram showing the whole process implemented in LabVIEW and MATLAB
Keywords: Dwarf Shrimp, Behavior, 3D live imaging, Image processing, Clustering
910

Development and validation of three different methods to improve CEST contrast calculation in presence of fat

Daisy Villano1, Annasofia A. Anemone1, Lorena Consolino2, Dario L. Longo3

1 University of Torino, Molecular Imaging Center, Department of Molecular Biotechnology and Health Sciences, Torino, Italy
2 Institute for Experimental Molecular Imaging, Department of Nanomedicines and Theranostics, RWTH Aachen University, Aachen, Germany
3 Institute of Biostructures and Bioimaging (IBB), Italian National Research Council (CNR), Torino, Italy

Introduction

CEST-MRI is an emerging imaging technique suitable for several in vivo applications, including pH imaging to characterize tumor acidosis. Conventionally, CEST contrast is calculated by applying asymmetry analysis, but the presence of a strong fat signal (especially in breast imaging) leads to erroneous contrast quantification and hence to erroneous pH measurements (1-3). In this study we propose alternative ways to calculate the CEST contrast that overcome fat signal influences and correctly measure pH without the need for complex fat saturation or water-fat separation techniques.

Methods

Three different methods to calculate CEST contrast in the presence of fat were proposed and compared with conventional asymmetry analysis (Fig. 1):

1. Removal of the offsets around the fat peak, interpolating these points with a line and calculating the contrast via asymmetry analysis as:
ST(Δω) = [S(−Δω) − S(+Δω)] / S(−Δω), where S(±Δω) is the signal intensity at ±Δω ppm
2. Lorentzian fitting of the water peak and use of the negative half of the fitted curve for the asymmetry analysis
3. Calculating the contrast considering only the positive part of the Z-spectrum as:
ST(Δω) = [S0 − S(Δω)] / S0, where S0 is the signal intensity without saturation
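A minimal numerical sketch of conventional asymmetry analysis and Method 3 on a synthetic Z-spectrum (the offsets, amplitudes and line widths below are illustrative assumptions, not the in vivo data):

```python
import numpy as np

def st_asymmetry(offsets, z, dw):
    """Conventional asymmetry analysis: ST = (S(-dw) - S(+dw)) / S(-dw)."""
    s_neg = np.interp(-dw, offsets, z)
    s_pos = np.interp(+dw, offsets, z)
    return (s_neg - s_pos) / s_neg

def st_method3(offsets, z, dw, s0=1.0):
    """Method 3: use only the positive side, ST = (S0 - S(dw)) / S0."""
    return (s0 - np.interp(dw, offsets, z)) / s0

# Synthetic Z-spectrum (normalised to S0 = 1) with a CEST peak at +4.2 ppm.
offsets = np.linspace(-10, 10, 201)
z = 1 - 0.9 * np.exp(-(offsets / 1.5) ** 2)        # direct water saturation
z -= 0.05 * np.exp(-((offsets - 4.2) / 0.5) ** 2)  # iopamidol-like CEST pool
print(st_asymmetry(offsets, z, 4.2), st_method3(offsets, z, 4.2))
```

On this fat-free synthetic spectrum the two estimates agree; the point of Method 3 is that it remains valid when a fat peak distorts the negative side.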

The efficiency of the proposed methods in measuring pH values was assessed in a breast cancer murine model characterized by low and high amounts of fat. CEST in vivo validation experiments were carried out at 7T after iopamidol injection.

Results/Discussion

Fig 2A shows pH maps and the percentage of detected pixels obtained with all the investigated methods in a tumor-bearing mouse (4T1:HER2+ murine breast cancer cells) with a low amount of fat. The application of the proposed approaches resulted in good pH measurement and tumor coverage. Compared with traditional asymmetry analysis, Method 3 produced the most similar pH maps, mean pH values and percentage of detected pixels, hence providing results comparable to the conventional analysis. In Fig 2B all approaches are applied to a murine tumor model with an evident presence of fat (arrow). Using asymmetry analysis, the presence of the fat peak in the Z-spectrum resulted in very low iopamidol detection and in erroneous pH measurement due to the "masking" effect caused by the fat peak. In contrast, all the proposed approaches yielded increased pixel detection, with Method 3 generating the best maps in terms of iopamidol detection and pH measurement despite the presence of fat.

Conclusions

Several approaches to overcome the incorrect quantification of the CEST contrast in the presence of fat were described. In vivo validation showed that these approaches can be used to calculate correct CEST contrast, and hence accurate tumor pH, without requiring additional RF power or further image acquisitions. In particular, Method 3 produced the best results in terms of pixel detection and pH measurement in tissues with low or high fat content.

Acknowledgment

We gratefully acknowledge the support of MOLIM-ONCOBRAIN (PON R&I 2014-2020 / ARS01_00144) and of Associazione Italiana Ricerca Cancro (AIRC MFAG #20153) projects.

References
[1] Sun, PZ, Zhou, J, Sun, W, Huang, J, van Zijl ,PC 2005, 'Suppression of lipid artifacts in amide proton transfer imaging.', Magn Reson Med, 54(1), 222-5.
[2] Zhang, S, Keupp, J, Wang, X, Dimitrov, I, Madhuranthakam, AJ, Lenkinski, RE, et al. 2018 'Z-spectrum appearance and interpretation in the presence of fat: Influence of acquisition parameters.', Magn Reson Med., 79(5), 2731-7.
[3] Zimmermann, F, Korzowski, A, Breitling, J, Meissner, JE, Schuenke, P, Loi, L, et al. 2020, 'A novel normalization for amide proton transfer CEST MRI to correct for fat signal-induced artifacts: application to human breast cancer imaging.', Magn Reson Med., 83(3), 920-34.
Figure 1:
In-vivo Z-spectra showing the presence of fat (arrow) and a graphical representation of the four investigated methods: conventional asymmetry analysis, Method 1, Method 2 and Method 3. In particular, for Method 1 the fat peak (blue dotted line) is replaced by a line (red solid line); for Method 2 the negative half of the original spectrum (blue dotted line) is replaced by the result of the Lorentzian fitting (red solid line).
Figure 2:
Z-spectra, pH maps, pH mean values and percentage of detected pixels obtained with all tested approaches on a tumor bearing mouse with low (A) and high (B) amount of fat.
Keywords: CEST-MRI, pH imaging, fat correction
911

KCML: A Knowledge- and Context-Driven Machine Learning for Analysing High Throughput Imaging Data

Heba Sailem1, Jens Rittscher1, Lucas Pelkmans2

1 University of Oxford, Engineering Science/ Institute of Biomedical Engineering, Oxford, United Kingdom
2 University of Zurich, Department of Molecular Life Sciences, Zurich, Switzerland

Introduction

Characterisation of gene functions in a context-dependent manner is crucial for identifying gene roles in health and disease. High throughput imaging is routinely used for imaging cells following large-scale gene perturbations using siRNA or CRISPR technologies. These imaging datasets allow the determination of gene function by monitoring phenotypic changes following genetic perturbations. However, a framework for comprehensive analysis of high throughput imaging datasets is still lacking due to the challenges in data analysis and the complexity of gene functions.

Methods

We developed a Knowledge- and Context-driven Machine-Learning framework (KCML) to automatically annotate imaging data of genetically modified cells with various biological functions. KCML utilises existing biological databases, such as gene ontologies, to train an ensemble of support vector machine classifiers that can identify any associated phenotypes. Each classifier learns to discriminate between perturbation phenotypic profiles of genes annotated with a particular gene ontology term (positive class) and a random set of remaining genes (negative class) (Fig. 1A). We apply KCML to an image-based genetic screen in a colorectal cancer cell line where cells were stained with DAPI and Viral Protein 6.
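The per-term classification scheme can be sketched as follows. This is a minimal illustration with synthetic phenotypic profiles and hypothetical GO term annotations, using scikit-learn's SVC as the support vector machine; it is not the authors' implementation:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic perturbation profiles: 200 genes x 10 image-derived features.
profiles = rng.normal(size=(200, 10))
go_terms = {"GO:0001": set(range(0, 30)), "GO:0002": set(range(30, 55))}
# Shift the annotated genes so each term has a (synthetic) detectable phenotype.
for genes in go_terms.values():
    profiles[list(genes)] += 1.0

# One classifier per gene ontology term: annotated genes (positive class)
# vs an equally sized random set of the remaining genes (negative class).
classifiers = {}
for term, positives in go_terms.items():
    negatives = rng.choice([g for g in range(200) if g not in positives],
                           size=len(positives), replace=False)
    idx = list(positives) + list(negatives)
    labels = [1] * len(positives) + [0] * len(negatives)
    classifiers[term] = SVC(kernel="rbf").fit(profiles[idx], labels)

# Each trained classifier can then score all genes for its phenotype.
scores = classifiers["GO:0001"].decision_function(profiles)
print(scores.shape)
```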

Results/Discussion

We show that KCML can identify genes associated with many biologically relevant phenotypes such as multicellular organisation. For example, KCML identified a novel role for olfactory receptors in cellular organisation which manifested in cell clumping and increased local cell density in colorectal cancer cells. We validated this finding in colorectal cancer patients. Our results illustrate the generalizability and utility of KCML as a framework for systematic gene function discovery from high content screening data across multiple biological scales. We believe that KCML can scale-up to more complex phenotypic screens utilising advanced multiplexing technologies or probing microtissues or organisms.

Conclusions

KCML is a first-of-its-kind framework for the systematic analysis of high throughput imaging datasets and the identification of context- and tissue-dependent gene functions. We envision that systematic application of KCML to the large amounts of generated image-based genetic screens will greatly accelerate and advance our understanding of gene functions at the molecular, cellular and tissue levels, which can lead to the discovery of new therapeutic target genes.

Acknowledgment

Heba Sailem is funded by a Sir Henry Wellcome Fellowship.

Fig. 1
A) KCML workflow. B) Colon cancer Imaging dataset and extracted features of 500 million cells.
Keywords: high throughput imaging, colon cancer, nuclear morphology
912

Anatomically Informed Time-of-Flight PET Image Reconstruction with STIR Toolkit

Palak Wadhwa1, 2, Daniel Deidda3, Kris Thielemans4, William Hallett2, Roger Gunn2, David Buckley1, Charalampos Tsoumpas1, 2, 5

1 University of Leeds, Biomedical Imaging Science Department, Leeds, United Kingdom
2 Invicro, London, United Kingdom
3 National Physical Laboratory, Teddington, United Kingdom
4 University College London, Institute of Nuclear Medicine, London, United Kingdom
5 Icahn School of Medicine at Mount Sinai, Biomedical Engineering & Imaging Institute, Mount Sinai, United States of America

Introduction

The PET/MR imaging modality is capable of highly sensitive and specific molecular and anatomical imaging. The combination of these modalities makes it possible to exploit MR anatomical information when reconstructing PET images, improving quantitative accuracy whilst reducing noise. The kernelised expectation maximisation (KEM) algorithm [1] has recently been implemented in the Software for Tomographic Image Reconstruction (STIR, http://stir.sf.net) library [2]. This investigation studies the performance of KEM reconstruction of TOF PET data from the GE SIGNA PET/MR scanner.

Methods

Any uncompressed TOF-PET listmode file can be extracted from the GE SIGNA PET scanner and histogrammed into TOF histograms using existing classes and utilities in STIR [3]. Normalisation and attenuation correction histograms are also calculated using classes and utilities implemented within STIR for GE SIGNA data [4]. Background events (i.e. randoms and scatter) are extracted into STIR space using custom utilities implemented during this work. A clinical dataset is reconstructed with the STIR TOF-KEM algorithm without the manufacturer's software. Gaussian post-filtering with a FWHM of 4 mm is applied to the reconstructed images. TOF-KEM and standard TOF-OSEM reconstructions are compared using the standardised uptake value ratio (SUVR) and coefficient of variation (CoV).
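The kernel method of [1] represents the image as x = Kα, with the kernel matrix K built from MR-derived feature vectors, and applies the EM update to the coefficients α. The following is a toy 1D sketch of that algorithm, not the STIR implementation; the system matrix, data and feature vectors are synthetic:

```python
import numpy as np

def gaussian_kernel(features, sigma=1.0, n_neighbors=5):
    """K_ij = exp(-||f_i - f_j||^2 / (2 sigma^2)), sparsified to nearest neighbours."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    keep = np.argsort(-K, axis=1)[:, :n_neighbors]   # strongest neighbours per voxel
    mask = np.zeros_like(K)
    np.put_along_axis(mask, keep, 1.0, axis=1)
    return K * mask

def kem_update(alpha, K, A, y, eps=1e-12):
    """One EM update of the kernel coefficients; the image estimate is x = K @ alpha."""
    ratio = y / (A @ (K @ alpha) + eps)
    return alpha * (K.T @ (A.T @ ratio)) / (K.T @ (A.T @ np.ones_like(y)) + eps)

# Toy 1D problem: 16 voxels, system matrix A = small blur, MR prior = noisy truth.
rng = np.random.default_rng(1)
truth = np.zeros(16)
truth[5:10] = 4.0
A = np.eye(16) + 0.5 * np.eye(16, k=1) + 0.5 * np.eye(16, k=-1)
y = rng.poisson(A @ truth).astype(float)               # noisy measurements
mr = truth[:, None] + 0.05 * rng.normal(size=(16, 1))  # anatomical feature vectors
K = gaussian_kernel(mr, sigma=0.5)
alpha = np.ones(16)
for _ in range(20):
    alpha = kem_update(alpha, K, A, y)
x = K @ alpha                                          # anatomically informed estimate
print(x.round(2))
```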

Results/Discussion

The kernel parameters are optimised using TOF-PET data from the GE SIGNA PET/MR scanner, selecting those that produce images with lower noise without degrading image resolution: σm = 0.5 and σdm = 3. The reconstructed images displayed visual improvement with TOF-KEM over the TOF-OSEM algorithm. SUVR comparisons were conducted by choosing regions of interest (ROIs) in the liver and the lungs. The SUVR obtained with TOF-OSEM is 18.5, and the same comparison conducted with TOF-KEM demonstrates a rise of 1.62% in SUVR. The ground truth SUVR is assumed to be the value extracted from the image reconstructed using the vendor's reconstruction software; the value calculated using TOF-OSEM with the GE toolbox is 25.5. CoV was also calculated for an ROI drawn in the liver with respect to the lung over PET images reconstructed for the first 6 iterations. TOF-KEM demonstrates improvement in uniformity over TOF-OSEM.
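The two comparison metrics reduce to short calculations; the ROI uptake values below are illustrative, not the study's data:

```python
import numpy as np

def suvr(target_roi, reference_roi):
    """Standardised uptake value ratio: mean target uptake over mean reference uptake."""
    return np.mean(target_roi) / np.mean(reference_roi)

def cov(roi):
    """Coefficient of variation of an ROI, as a percentage."""
    return 100.0 * np.std(roi) / np.mean(roi)

# Hypothetical voxel uptake values for liver (target) and lung (reference) ROIs.
liver = np.array([3.7, 3.9, 3.8, 4.0])
lung = np.array([0.2, 0.21, 0.19, 0.2])
print(suvr(liver, lung), cov(liver))
```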

Conclusions

This work demonstrates that the incorporation of the MR kernel in the reconstruction algorithm improves the uniformity and quantitative accuracy of the images over the standard algorithm using STIR toolkit. This work further broadens the capabilities of STIR and allows TOF-KEM image reconstructions for data extracted from GE SIGNA PET/MR. Future work aims at demonstrating the improved performance of TOF-KEM over TOF-OSEM for low count datasets.

Acknowledgment

This work is funded by the Medical Research Council (MR/M01746X/1). Dr Charalampos Tsoumpas is supported by a Royal Society Industry Fellowship (IF170011) and EPSRC (EP/P022200/1).

We would like to thank Floris Jansen for GE support. We would also like to thank Nikos Efthimiou, Ottavia Bertolli and Elise Emond. Furthermore, we are thankful to Kristen Wangerin, Timothy Deller and Michel Tohme for GE toolbox support and the supplied information. Finally, we are grateful to the CCP PET-MR network (EPSRC grant EP/M022587/1) for providing the necessary support and resources to implement this work.

Ethics number 17/WM/0084 with permission from a clinical study performed at Invicro.

References
[1] Wang, G. and Qi, J., 2014, 'PET image reconstruction using kernel method.', IEEE transactions on medical imaging, 34(1), pp.61-71.
[2] Deidda, D., Karakatsanis, N.A., Robson, P.M., Efthimiou, N., Fayad, Z.A., Aykroyd, R.G. and Tsoumpas, C., 2018, 'Effect of PET-MR inconsistency in the kernel image reconstruction method.', IEEE Transactions on Radiation and Plasma Medical Sciences, 3(4), pp.400-409.
[3] Efthimiou, N., Emond, E., Wadhwa, P., Cawthorne, C., Tsoumpas, C. and Thielemans, K., 2019, 'Implementation and validation of time-of-flight PET image reconstruction module for listmode and sinogram projection data in the STIR library.', Physics in Medicine & Biology, 64(3), p.035004.
[4] Wadhwa, P., Thielemans, K., Efthimiou, N., Bertolli, O., Emond, E., Thomas, B.A., Tohme, M., Wangerin, K.A., Delso, G., Hallett, W. and Gunn, R.N., Tsoumpas, C., 2018, 'Implementation of Image Reconstruction for GE SIGNA PET/MR PET Data in the STIR Library.', 2018 IEEE Nuclear Science Symposium and Medical Imaging Conference Proceedings (NSS/MIC), pp. 1-3.
Demonstration of TOF-OSEM and TOF-KEM reconstructions with STIR

This figure demonstrates TOF-OSEM and TOF-KEM reconstructions for lung fibrosis patients injected with an experimental 18F radiotracer. TOF-OSEM reconstructions are Gaussian post-filtered with a FWHM of 4 mm. TOF-KEM reconstructions use the optimised kernel parameters σm = 0.5 and σdm = 3. Both reconstructions are conducted using the STIR toolkit with 28 subsets for 2 iterations.

Keywords: TOF-PET, PET/MR, KEM, TOF-KEM
913

Biotin oligonucleotide labeling reactions: a method to assess their effectiveness and reproducibility.

Luisa Poggi1, Aldo Di Vito1, Erika Reitano1, Margherita Iaboni1

1 Bracco Imaging SpA, CRB, Colleretto Giacosa, Italy

Introduction

The molecular interaction between biotin and streptavidin is widely used in the growing field of nucleic acid nanotechnology. Several molecular biology and nanotechnology applications in the molecular imaging field rely on quantitative comparison between biotinylated samples, in particular for affinity studies of molecular vectors with the selected target. Thus, it is mandatory to have a quality control to assess the biotinylation status of different samples. Here, we present an accurate and reliable method to achieve qualitative and quantitative analysis of oligonucleotide biotinylation.

Methods

The discussed method is based on the use of highly conjugated streptavidin magnetic beads and consists of three consecutive phases. The first step allows the harvesting of biotinylated oligonucleotides from a reaction mixture in which non-biotinylated oligonucleotides are present. In the second step, using conditions similar to those reported by Tong [1], the immobilized biotinylated oligonucleotides are displaced from the beads. After the subsequent electrophoretic separation, in the third step the effectiveness of the biotinylation reaction is determined by measuring the amount of the recovered biotinylated oligonucleotide with appropriate instrumentation and software.

Results/Discussion

The method has been optimized with a 1 pmol biotinylated oligonucleotide sample input. All the experimental conditions of the three subsequent steps have been defined and validated.

Then, the protocol was applied to several biotinylated oligonucleotide samples (S1–S9) to evaluate the effectiveness of an in-house 5′-end enzymatic biotinylation reaction. Results are shown in Figure 1. As shown, the biotinylation yield lies between 20 and 70%, with very high variability between different oligonucleotides. Such variability suggests that the reaction could be sequence-dependent, or that biotin in some samples could be less exposed for the interaction with streptavidin. Most importantly, unknown variability of the biotinylation status can impair any quantitative evaluation (including dissociation constants in ligand-target interactions) obtained from those samples.
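The yield quantification in the third step reduces to a simple ratio of recovered to input oligonucleotide; the recovered amounts below are hypothetical, not the S1–S9 values shown in Figure 1:

```python
def biotinylation_yield(recovered_pmol, input_pmol):
    """Biotinylation yield (%): recovered biotinylated oligonucleotide over input."""
    return 100.0 * recovered_pmol / input_pmol

# Illustrative samples, each with the optimised 1 pmol input.
recovered = {"S1": 0.62, "S2": 0.21, "S3": 0.45}   # recovered pmol (hypothetical)
yields = {s: biotinylation_yield(r, 1.0) for s, r in recovered.items()}
print(yields)
```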

Conclusions

The method described employs instruments typically available in molecular biology laboratories. Our method is independent of kinetic equilibrium parameters and has a detection limit one order of magnitude lower than the gold standard assay [2].

References
[1] X. Tong, L.M. Smith, Solid-phase method for the purification of DNA sequencing reactions, Anal. Chem. 64 (1992) 2672–2677
[2] R.H. Batchelor, A. Sarkez, W.G. Cox, I. Johnson, Fluorometric assay for quantitation of biotin covalently attached to proteins and nucleic acids, Biotechniques 43 (2007) 503–507
Figure 1
Analysis of oligonucleotide biotinylation.
Keywords: Molecular Vectors, Aptamers, Biotin, Binding assay, Molecular Imaging
915

An in-silico study, based on preclinical data, for imaging and dosimetry assessment of GNPs in rodents’ leg muscle

Konstantinos Chatzipapas2, Sophia Sarpaki1, Irinaios Pilatis1, Maritina Rouchota1, George Loudos1, Panagiotis Papadimitroulas1

1 BIOEMTECH, Athens, Greece
2 University of Patras, Department of Medical Physics, Rion, Greece

Introduction

Cell therapy offers promising opportunities to cure several diseases that currently do not have effective therapy. The incorporation of gold nanoparticles (AuNPs) in cell therapy offers the advantage of evaluating the treatment evolution. Monte Carlo (MC) techniques play a crucial role in the investigation of imaging and dosimetry assessment, allowing the standardization of muscle injury treatment protocols. The goal is to investigate in silico different concentrations of AuNPs in rodents' leg muscles, to assess the dosimetry and to quantify the stem cells based on the Au concentration.

Methods

The GATE [1] MC toolkit and the MOBY [2] computational model were used to simulate X-ray imaging with 16 different concentrations of GNPs in the muscle. The injection of AuNPs into a mouse's leg muscle was modelled, with Au concentrations in the muscle ranging from 0 to 5% by weight. A 35 kVp X-ray source was modelled to simulate the preclinical X-ray system of our lab [3]. The segmentation of the muscle was performed in VivoQuant to accurately imitate in-vivo data acquired in our lab. The simulations were repeated to assess the enhancement of the absorbed dose in the leg muscle. Additionally, a rat model was used to investigate the imaging and dosimetry efficacy in larger rodents and to find the optimal Au concentration for the evaluation of muscle injury cell therapy.

Results/Discussion

This study standardized a methodology for assessing AuNPs as contrast agents at several concentration levels. Muscle contrast enhancement was measured against the Au concentration percentage. Even at low Au concentrations, the CNR rises, in the range of 0-11%, as depicted in fig. 1. Moreover, the energy deposited in the muscle was calculated and plotted against the Au concentration percentage; there is a linear correlation between the dose enhancement and the Au concentration percentage. 3D dose maps were exploited, and an in-house algorithm was developed to measure the deposited energy. The dose enhancement ranges from 1 up to 9 times, as shown in fig. 2. These results are explained by the use of a low-energy spectrum, which has a large interaction cross-section with high-atomic-number materials such as gold. The whole process was repeated for a rat mathematical model and similar results were obtained.
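The two reported quantities, CNR and the linear dose-enhancement correlation, can be computed as in the following sketch (all numbers are illustrative, not the simulated results):

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio of a muscle ROI against background tissue."""
    return (np.mean(roi) - np.mean(background)) / np.std(background)

# Dose enhancement vs Au concentration: check the reported linear correlation.
au_percent = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
dose_factor = np.array([1.0, 2.6, 4.2, 5.8, 7.4, 9.0])   # illustrative, 1x to 9x
slope, intercept = np.polyfit(au_percent, dose_factor, 1)
r = np.corrcoef(au_percent, dose_factor)[0, 1]
print(f"slope={slope:.2f} per %Au, r={r:.3f}")
```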

Conclusions

Simulated imaging and dosimetry datasets were created, showing that the uptake ratio of gold in the leg muscle of rodents can be measured using low-energy X-ray imaging systems. Correlation of the dosimetry measurements has been calculated. Such a procedure can lead to the mathematical estimation of the AuNPs concentration within the tissue. Future steps include the standardization of the procedure, incorporating additional in-vivo experiments.

Acknowledgment

This study is part of a project that has received funding by the European Union’s Horizon 2020 research and innovation program under the grant agreement No 761031.
References
[1] Jan S. et al., 2004, "GATE: a simulation toolkit for PET and SPECT," Phys Med Biol 49, 18.
[2] Segars W., Tsui B., 2007, "4D MOBY and NCAT phantoms for medical imaging simulation of mice and men," Journal of Nuclear Medicine 48, 203P
[3] Rouchota M., Georgiou M., Fysikopoulos E., Fragogeorgi E., Mikropoulos K., Papadimitroulas P., Kagadis G., Loudos G., 2017, ‘A prototype PET/SPECT/X-rays scanner dedicated for whole body small animal studies’, Hellenic Journal of Nuclear Medicine, 20(2), 146-153.
Figure 1.
The contrast enhancement in several Au concentration levels.
Figure 2
The dose enhancement in several Au concentration levels.
Keywords: X-ray imaging, gold nanoparticles, Dosimetry, Monte Carlo, simulations