EMIM 2018 ControlCenter

Online Program Overview Session: PW-02

To search for a specific ID please enter the hash sign followed by the ID number (e.g. #123).

Data Analysis & Methodology

Session chair: Jonathan Disselhorst - Tübingen, Germany; Markus Seeger - Munich, Germany
 
Shortcut: PW-02
Date: Thursday, 22 March, 2018, 11:30 AM
Room: Banquet Hall | level -1
Session type: Poster Session

Abstract

Click on a contribution to preview the abstract content.

# 015

Radiomics of metastatic clear-cell renal carcinoma: reproducibility and correlation for feature reduction (#205)

A. Bouchouicha1, J. Deidier1, K. Benac2, D. Balvay3, S. Oudard4, R. Pirracchio2, L. Fournier1, B. Rance5

1 European Georges Pompidou Hospital, Radiology, Paris, France
2 European Georges Pompidou Hospital, Biostatistique, Paris, France
3 Paris Descartes University, Cardiovascular Research Center, Paris, France
4 European Georges Pompidou Hospital, Oncology, Paris, France
5 European Georges Pompidou Hospital, INSERM UMR_S 1138, Paris, France

Introduction

To evaluate the robustness and redundancy of radiomics features extracted from CT images.

 

Methods

28 metastatic clear-cell renal carcinoma patients were enrolled before therapy initiation. Tumors were manually delineated by three expert physicians. Three categories of features were extracted: shape descriptors, intensity distribution, and textural features. The impact of filters and gray-level scales on the features was studied. Concordance correlation coefficients (CCC) and inter-class correlation coefficients (ICC) were calculated to assess reproducibility. Spearman correlations were performed to assess feature redundancy.

Results/Discussion

1773 radiomics features were extracted from each image. Different filters had little effect, and gray-level scales no significant effect, on the extracted feature values. 223/1773 features showed high reproducibility by ICC (ICC≥0.8), and 198/1773 by CCC (CCC≥0.9). Features with both ICC≥0.8 and CCC≥0.9 were considered the most robust, reducing the number of relevant features to 158. Among these, highly correlated features (correlation ≥ 0.9) were removed, yielding 23 features that were both robust and independent.
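
The two-stage reduction described above can be sketched in a few lines; the feature names, reproducibility scores and correlations below are illustrative placeholders, not the study's data:

```python
def reduce_features(icc, ccc, corr, icc_thr=0.8, ccc_thr=0.9, corr_thr=0.9):
    """icc, ccc: feature -> score; corr: (feat_a, feat_b) -> |correlation|."""
    # Stage 1: keep only features reproducible across the delineations
    robust = [f for f in icc if icc[f] >= icc_thr and ccc.get(f, 0.0) >= ccc_thr]
    # Stage 2: greedily drop features highly correlated with an already-kept one
    kept = []
    for f in robust:
        if all(corr.get((k, f), corr.get((f, k), 0.0)) < corr_thr for k in kept):
            kept.append(f)
    return kept

# illustrative toy values only
icc = {"volume": 0.95, "entropy": 0.85, "kurtosis": 0.40, "glcm_contrast": 0.92}
ccc = {"volume": 0.97, "entropy": 0.93, "kurtosis": 0.50, "glcm_contrast": 0.95}
corr = {("volume", "glcm_contrast"): 0.95}    # a redundant pair

selected = reduce_features(icc, ccc, corr)    # kurtosis fails stage 1; glcm_contrast is redundant
```

The greedy pass means the order in which robust features are visited determines which member of a correlated pair survives.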

Conclusions

This study clarifies feature stability and redundancy, as well as the impact of pre-processing filters and gray-level scales. These steps are mandatory before radiomics features can be used to predict therapy response and outcome in oncology.

References

Bukowski RM. Natural history and therapy of metastatic renal cell carcinoma. Cancer. 1997;80:1198–220.

Lambin P, Rios-Velazquez E, Leijenaar R, Carvalho S, van Stiphout RGPM, Granton P, et al. Radiomics: Extracting more information from medical images using advanced feature analysis. Eur J Cancer Oxf Engl 1990. 2012;48:441–6.

Zhao B, James LP, Moskowitz CS, Guo P, Ginsberg MS, Lefkowitz RA, et al. Evaluating Variability in Tumor Measurements from Same-day Repeat CT Scans of Patients with Non–Small Cell Lung Cancer. Radiology. 2009;252:263–72.

Keywords: radiomics, reproducibility, imaging features, metastatic clear-cell renal carcinoma
# 014

Can Less Be More? Single Wavelength vs Multispectral Optoacoustic Imaging - Implications For Clinical Translation In Breast Cancer Patients (#331)

O. Abeyakoon1, T. Torheim4, R. Manavaki1, I. Mendichovszky2, N. Dalhaus3, S. Morscher3, N. Burton3, P. Moyle2, M. Wallis2, J. Joseph4, I. Quiros Gonzales4, S. Bohndiek4, F. Gilbert1

1 University of Cambridge, Department of Radiology, Cambridge, United Kingdom
2 Cambridge University Hospitals NHS Foundation Trust, Radiology, Cambridge, United Kingdom
3 iThera Medical GmbH, Munich, Germany
4 Li Ka Shing Cancer CRUK Centre, CRUK, Cambridge, United Kingdom

Introduction

Optoacoustic imaging can provide surrogate measures of tumour biology (vascularity/hypoxia) associated with adverse outcomes. These can be obtained using single wavelengths based on the absorption spectra of oxy-/deoxyhaemoglobin, or a multiple-wavelength acquisition followed by spectral unmixing.

Spectral unmixing minimises the influence of other chromophores that may contribute to the overall signal. However, scanner cost is higher and images are more vulnerable to movement and depth.

Our study aimed to determine whether single wavelengths can yield similar results.

 

Methods

In this prospective study, human breast cancers were evaluated in vivo at 8 wavelengths (700-980 nm). Images were acquired using MSOT EIP and MSOT Acuity scanner systems (iThera Medical). Analysis was performed in MATLAB.

 

The single wavelengths were 800 nm (isosbestic point of oxy-/deoxyhaemoglobin) for total haemoglobin, 700 nm for deoxyhaemoglobin and 850 nm for oxyhaemoglobin. Mean whole-lesion signal intensity was calculated.

 

Using the multiple-wavelength acquisition, a spectral unmixing algorithm was applied and three metrics derived: total haemoglobin (HbT), tissue oxygen saturation (SO2) and oxyhaemoglobin concentration (HbO2).

 

Linear regression analysis and Pearson's correlation were performed to establish the relationships between HbT and 800 nm, SO2 and 700 nm, and HbO2 and 850 nm.
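
The correlation analysis above amounts to computing Pearson's r (and R² = r²) between paired measurements; a standard-library sketch with made-up sample values:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical mean whole-lesion intensities: single wavelength vs unmixed metric
signal_800nm = [1.0, 2.1, 2.9, 4.2, 5.1]
hbt_unmixed = [1.1, 2.0, 3.1, 4.0, 5.0]
r = pearson_r(signal_800nm, hbt_unmixed)
r_squared = r * r
```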

 

Results/Discussion

Forty-three invasive breast cancers were evaluated: 13 were scanned using MSOT EIP and 30 using MSOT Acuity. The maximum lesion depth was 2.8 cm; lesion size ranged from 4 to 29 mm. The cohort included 39 invasive breast cancers of no special type (NST), 3 invasive lobular cancers and 1 tubular cancer.

There was a strong positive correlation between 800nm and HbT (R = 0.97, R2 = 0.94; p<0.0001), as well as 850nm and HbO2 (R=0.98, R2 = 0.97; p<0.0001).

However, although the expected negative trend between 700nm and SO2 was present, it showed poor correlation (R= -0.21, R2 = 0.05; p = 0.1). Please see Figure 1 for a graphical illustration of these results.

Conclusions

A single-wavelength approach may provide measures comparable to spectral unmixing for HbT and HbO2; however, 700 nm correlated poorly with SO2. This suggests the potential of using single-wavelength measures such as 700 and 850 nm, reducing the cost of future clinical scanners.

 

 

Acknowledgement

Funding for this study came from CRUK and the University of Cambridge Biomedical Centre.

Figure 1: Correlation between single-wavelength measures and values obtained by spectral unmixing
# 016

Cloud-based relational database for multimodal animal data – a proof of concept (#530)

N. Pallast1, F. Wieters1, G. Fink1, M. Aswendt1

1 University Hospital Cologne, Neurology, 62, North Rhine-Westphalia, Germany

Introduction

Data management becomes prone to user errors when working with extensive multimodal and longitudinal (imaging) datasets. Although outlined in the WHO's Good Laboratory Practice (GLP) handbook1, most labs do not store their data in a standardized way and lag far behind clinical standards such as GCP (Good Clinical Practice)-compliant data management2. The underestimated importance of centralized, smart data handling interferes with basic research and translational approaches3.

Methods

We designed a cloud-based relational database with tools provided by commercial software (Ninox Software GmbH, Berlin, Germany), with the aim of recording a multitude of different experimental procedures. In our case, we collect the following in vivo and ex vivo data from mice: type of surgery, behavioral tests, MRI, electrophysiology, and histology. Each mouse is identified by an experiment-specific ID that encodes the project, the subproject, the mouse cage and the mouse number. The database includes an ex vivo part for managing tissue samples, and microscopy results can be linked to the database as well. Each data entry is defined by a timestamp and the user.
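
As a rough illustration of the relational design (not the actual Ninox schema; table and column names are hypothetical), the same structure can be expressed in a few lines of SQL:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE mouse (
    id TEXT PRIMARY KEY,          -- encodes project, subproject, cage, number
    project TEXT, subproject TEXT, cage INTEGER, number INTEGER
);
CREATE TABLE experiment (
    exp_id INTEGER PRIMARY KEY,
    mouse_id TEXT REFERENCES mouse(id),
    kind TEXT,                    -- surgery, behavior, MRI, ephys, histology
    entered_by TEXT,
    entered_at TEXT DEFAULT CURRENT_TIMESTAMP,
    file_link TEXT                -- link to the data on the central file server
);
""")
con.execute("INSERT INTO mouse VALUES ('P1_S2_C3_04', 'P1', 'S2', 3, 4)")
con.execute("INSERT INTO experiment (mouse_id, kind, entered_by, file_link) "
            "VALUES ('P1_S2_C3_04', 'MRI', 'np', '//server/mri/scan42')")
# e.g. list all MRI scans for one mouse
rows = con.execute("SELECT kind, file_link FROM experiment "
                   "WHERE mouse_id = 'P1_S2_C3_04' AND kind = 'MRI'").fetchall()
```

Because only the link is stored, the large image files themselves can live on a file server while the database stays small and searchable.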

Results/Discussion

The presented database enables efficient data collection and sorting. It is operating-system independent and accessible via a web interface or an app, also simultaneously by many users. Our approach differs from existing data management tools such as REDCap4 and electronic laboratory notebooks (ELN)5 in that the data are entered and organized in a relational way: a mouse and all related experiments are linked to each other. The database is indexed and enables filtering and free-text search; for example, it is possible to list all T2-weighted MRI scans obtained during a specified period. We also collect animal-permission-relevant data, such as the time under anesthesia and details of surgery, for documentation. Because we work with massive imaging datasets (up to 150 GB per mouse), the files themselves are stored on a central file server and the database provides the link.

Conclusions

We have had the database in operation for more than six months and conclude that it improves data accessibility, quality, efficiency, validity, and reporting. Only minimal programming skills are necessary, and our database design can easily be adapted to other labs and experimental conditions.

References

1. World Health Organization. Handbook: good laboratory practice (GLP): quality practices for regulated non-clinical research and development. (2010).

2. ICH. Integrated addendum to ICH E6(R1): Guideline for Good Clinical Practice E6(R2). Current Step (2015).

3. Lapchak, P. A. & Zhang, J. H. Data standardization and quality management. Translational Stroke Research (2017).

4. Harris, P. A. et al. Research electronic data capture (REDCap)—a metadata-driven methodology and workflow process for providing translational research informatics support. Journal of Biomedical Informatics 42, 377–381 (2009).

5. Dirnagl, U. & Przesdzing, I. A pocket guide to electronic laboratory notebooks in the academic life sciences. F1000Research 5, 2 (2016).

Keywords: data management, database, electronic laboratory notebook, cloud
# 017

Spectre - an open source tool for comprehensive Mass Spectrometry Imaging data analysis (#492)

G. Mrukwa1, 2, D. Kuchta1, 2, D. Babiak1, 2, S. Pustelnik1, 2, M. Gallus1, 2, M. Gamrat1, 2, R. Lisak1, 2, M. Wolny1, 2, W. Wilgierz1, 2, J. Polańska2

1 Silesian University of Technology, The Spectre Team, Data Mining Group, Gliwice, Poland
2 Silesian University of Technology, Data Mining Group, Gliwice, Poland

Introduction

MALDI-MSI is an emerging tool for capturing the spatial distribution of proteins and small molecules across clinical samples. The possibility of such untargeted analysis, combined with spatial correlation, supports knowledge discovery in studies of diverse medical focus.

Although MALDI-MSI is considered a low-precision MSI method, there is still a lack of software capable of efficient data storage and processing. In this work we present an open-source environment whose main focus is to provide researchers with a free implementation of cutting-edge methodology in MSI data processing.

Methods

We designed a modular system for MSI data processing. It exploits popular preprocessing algorithms such as peak alignment using the Fast Fourier Transform and baseline removal. The peak-picking module consists of a Gaussian Mixture Modelling algorithm that allows shape-aware peak modelling. Users can compress data to homogeneous-region representatives through the DiviK algorithm. To analyze molecular signature similarity, exROI analysis has been embedded, which grows a small pathologist-defined ROI by molecularly adjacent spectra. Cohen's d effect size can be used for feature-importance evaluation.
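
Cohen's d, used here for feature-importance evaluation, is the difference in group means divided by the pooled standard deviation; a minimal sketch with illustrative intensity values (not data from this study):

```python
from math import sqrt
from statistics import mean, variance

def cohens_d(group_a, group_b):
    """Cohen's d effect size with pooled sample standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * variance(group_a)
                  + (nb - 1) * variance(group_b)) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / sqrt(pooled_var)

# hypothetical peak intensities in two groups of spectra
tumor = [5.1, 5.4, 4.9, 5.6, 5.2]
normal = [3.9, 4.1, 4.0, 4.3, 3.8]
d = cohens_d(tumor, normal)   # |d| >= 0.8 is conventionally a "large" effect
```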

Results/Discussion

The environment was tested on four head and neck cancer tissue slices, measured in the lipid and peptide ranges, and separately on 15 thyroid cancer preparations. The pipeline has been run on up to 201,964 spectra at once. All computational stages in the software are coupled with appropriate visualizations. The combination of the above methods allowed more than 98% size reduction after the change of data representation, while preserving at least 80% of the information about tumor molecular heterogeneity. We validated this using Cohen's d effect-size estimation, discovering 527, 69, 23 and 17 features with at least a large effect size for the sample HNC data.

Conclusions

Project Spectre supplies a simple yet powerful environment that can easily be extended into a larger system. The current code base is a solid foundation for diverse, possibly hands-free MSI data analysis pipelines. Further development will add support for imzML and vendor-specific data formats, as well as more data transformation and peak-picking methods (t-SNE, PCA, etc.).

References

- Polanski, A., Marczyk, M., Pietrowska, M., Widlak, P., Polanska, J.: Signal partitioning algorithm for highly efficient Gaussian mixture modeling in mass spectrometry. PLoS ONE 10, e0134256 (2015)
- Pietrowska, M., Diehl, H., Mrukwa, G., Kalinowska-Herok, M., Gawin, M., Chekan, M., Elm, J., Drazek, G., Krawczyk, A., Lange, D., Meyer, H. E., Polanska, J., Henkel, C., Widlak, P.: Molecular profiles of thyroid cancer subtypes: classification based on features of tissue revealed by mass spectrometry imaging. Biochimica et Biophysica Acta - Proteins and Proteomics (2016), DOI: 10.1016/j.bbapap.2016.10.006

Acknowledgement

C++ and C# libraries are released as NuGet packages, and the whole source code is available on GitHub: https://github.com/spectre-team.

This study has been financially supported by NCN grant BITIMS no 2015/19/B/ST6/01736 (JP, GM), and PO WER NzN! 2.0 SPECTRE project (0303400-00-P009/16). Computations were carried out using the GeCONiI infrastructure (grant POIG.02.03.01 24 099).

Keywords: divik, maldi, msi, tool
# 018

Supervised clustering analysis for quantification of neuroinflammation using [11C]PK11195 PET in young and old controls, and atherosclerotic non-human primates (#52)

J. Debatisse1, 2, N. Makris3, V. Di Catalido1, K. Portier1, M. Verset4, O. Wateau4, J. - B. Langlois5, F. Lamberton5, D. Ibarolla5, F. Lavenne5, D. Le Bars5, T. Troalen2, H. Contamin4, T. - H. Cho3, 6, E. Canet-Soulas1

1 Univ Lyon, CarMeN Laboratory, INSERM, INRA, INSA Lyon, Université Claude Bernard Lyon 1, Lyon, France
2 Siemens Healthcare SAS, Saint-Denis, France
3 CREATIS, CNRS UMR 5220, INSERM U1206, Université Lyon 1, INSA Lyon, Université Jean Monnet Saint-Etienne, Lyon, France
4 Cynbiose SAS, Marcy-L'Etoile, France
5 CERMEP - Imagerie du vivant, Lyon, France
6 Department of Neurology, Hospices Civils de Lyon, Lyon, France

Introduction

Neuroinflammation in metabolic disease has not yet been well characterized, but its clinical relevance is crucial owing to cardiovascular risk. PET with [11C]PK11195, targeting activated microglia, is the modality of choice for in vivo imaging of neuroinflammation. Modelling [11C]PK11195 kinetics to compute the non-displaceable binding potential (BPND) without arterial blood sampling is challenging, and supervised clustering analysis (SVCA) can be used to derive a suitable reference tissue devoid of specific binding.

Methods

Dynamic [11C]PK11195 PET data from young control (YC, n=4), old control (OC, n=2) and atherosclerotic (ATH, n=3) Macaca fascicularis were used in this study. Reference-tissue time-activity curves (TACs) were extracted either using a region defined by the SVCA algorithm based on four kinetic classes [1], or from a manually defined low-signal brain region. Regional distribution volume ratios (DVR = BPND + 1) were computed with Logan graphical analysis in two ways: with the SVCA reference TAC and with the manual reference TAC. A 75-label Macaca fascicularis atlas [2] was used to extract regional DVR values. Statistics between animal groups were computed using the Mann-Whitney test.
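
The Logan reference-tissue analysis can be illustrated numerically: after a start time t*, DVR is estimated as the slope of ∫C_tissue dτ / C_tissue(t) versus ∫C_ref dτ / C_tissue(t). A toy sketch with synthetic curves (not the study's data), constructed so the true DVR is 2:

```python
import math

def cumtrapz(t, c):
    """Cumulative trapezoidal integral of c(t)."""
    out, acc = [0.0], 0.0
    for i in range(1, len(t)):
        acc += 0.5 * (c[i] + c[i - 1]) * (t[i] - t[i - 1])
        out.append(acc)
    return out

def logan_dvr(t, c_tissue, c_ref, t_star):
    """DVR = slope of the late linear portion of the reference-tissue Logan plot."""
    int_t, int_r = cumtrapz(t, c_tissue), cumtrapz(t, c_ref)
    late = [i for i, ti in enumerate(t) if ti >= t_star]
    xs = [int_r[i] / c_tissue[i] for i in late]
    ys = [int_t[i] / c_tissue[i] for i in late]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# synthetic mono-exponential curves: tissue activity is exactly twice the reference
t = [float(m) for m in range(0, 91, 5)]        # 0..90 min frames
c_ref = [math.exp(-0.05 * ti) for ti in t]
c_tis = [2.0 * v for v in c_ref]
dvr = logan_dvr(t, c_tis, c_ref, t_star=30.0)  # close to the true value of 2
```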

Results/Discussion

Box plots of DVR in YC, OC and ATH animals are shown in Figure 1A. Figure 1B shows the relationship between DVR estimated with the SVCA reference TAC and with the manual reference TAC, with all regions and all animals pooled together (N=675). The values correlate well (r=0.84, p<0.0001). Bland-Altman plots show a 5.2% difference in DVR estimates. SVCA yielded fewer negative BPND estimates (96 versus 180 with the manual reference, among the 675 values) and a higher dynamic range.

The atlas labels were pooled into main anatomic brain areas, and between-group comparisons were performed using SVCA reference TAC modeling (Figure 1C). ATH animals showed higher DVR in the frontal (p<0.0001), parietal (p<0.0001) and sensorimotor (p<0.005) cortices compared to YC animals, and in the frontal cortex (p<0.005) compared to OC animals.

Conclusions

This study confirms the relevance of SVCA for quantifying neuroinflammation in a preclinical metabolic model. The SVCA method is observer-independent, unlike the manual method. ATH animals showed neuroinflammation in the frontal, parietal and sensorimotor cortices compared to YC animals.

References

[1] Yaqub, M., van Berckel, B. N., Schuitemaker, A., Hinz, R., Turkheimer, F. E., Tomasi, G., … Boellaard, R. (2012). Optimization of Supervised Cluster Analysis for Extracting Reference Tissue Input Curves in ( R )-[ 11 C]PK11195 Brain PET Studies. Journal of Cerebral Blood Flow & Metabolism, 32(8), 1600–1608.

[2] Ballanger, B., Tremblay, L., Sgambato-Faure, V., Beaudoin-Gobert, M., Lavenne, F., Le Bars, D., & Costes, N. (2013). A multi-atlas based method for automated anatomical Macaca fascicularis brain MRI segmentation and PET kinetic extraction. NeuroImage, 77, 26–43.

Figure 1
Regional values of DVR in young controls, old controls and atherosclerotic animals obtained with Logan modeling using the SVCA reference tissue curve (left) and manual Reference tissue curve (right). Red cross is the mean, black dots and crosses are extreme values.
Figure 2

A: Linear regression (left) between DVR SVCA reference tissue curve and manual reference tissue curve, and Bland-Altman plots (right).

B: Between groups comparison of DVR obtained with Logan modeling using the SVCA reference tissue curve in the frontal cortex (left), parietal cortex (middle) and sensorimotor cortex (right). Mann-Whitney test was used to compare groups. ** p<0.005; *** p<0.0001

Keywords: Neuroinflammation, Atherosclerosis, Metabolic disease, PET, PK11195
# 019

Echo state network prediction of intrinsic signals' slow fluctuations of rat and human cortical vasculature (#215)

F. Sobczak1, X. Yu1

1 Max Planck Institute for Biological Cybernetics, High-Field Magnetic Resonance, Tübingen, Baden-Württemberg, Germany

Introduction

Resting-state fMRI signal has been coupled to brain-wide neuronal signal fluctuations, presenting large-scale integration1,2,3. Spontaneous signals display <0.1 Hz oscillatory patterns at varied brain states4. Recently, a single-vessel fMRI method has been developed to characterize the rs-fMRI fluctuation of individual vessels5,6. Here, we hypothesize that the spectral feature of the slow oscillatory pattern can be learnt by an artificial neural network. We used Echo State Networks7 (ESN) to encode single-vessel BOLD fluctuations and predict the <0.1 Hz slow temporal dynamics 10 seconds ahead.

Methods

Data from 6 rats were acquired with a 14.1 T magnet using the bSSFP8 method. Six adult human subjects were scanned using an EPI sequence in a 3 T scanner. In both cases the single-slice TR was 1 s and each trial lasted 15 minutes.

Using the A-V map5 and ICA9, single-vessel time courses were extracted only from veins exhibiting a strong slow fluctuation. After normalization, the signals were bandpass filtered in either the 0.01-0.05 Hz (rat) or 0.01-0.1 Hz (human) frequency range to extract the slowly changing feature.

ESN, an artificial neural network belonging to the class of reservoir computing methods, has been used to encode the slow oscillations using supervised learning (Fig 1A).

Surrogate data10 served as controls verifying the degree of encoding performed by the chosen ESNs.
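
A minimal echo state network can be sketched as follows; the reservoir size, spectral radius and the sine-wave stand-in for a slow fluctuation are illustrative choices, not the parameters used in this work:

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, horizon = 200, 10                       # reservoir size; predict 10 steps ahead

# fixed random input and reservoir weights; only the readout is trained
w_in = rng.uniform(-0.5, 0.5, n_res)
w = rng.uniform(-0.5, 0.5, (n_res, n_res))
w *= 0.9 / np.max(np.abs(np.linalg.eigvals(w)))  # spectral radius < 1 (echo state property)

def run_reservoir(u):
    """Drive the reservoir with input sequence u, collecting its states."""
    x, states = np.zeros(n_res), []
    for v in u:
        x = np.tanh(w @ x + w_in * v)
        states.append(x.copy())
    return np.array(states)

signal = np.sin(np.arange(1200) * 0.05)        # demo stand-in for a slow fluctuation
states = run_reservoir(signal)
X, y = states[100:-horizon], signal[100 + horizon:]               # drop 100-step washout
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)  # ridge readout
r = np.corrcoef(X @ w_out, y)[0, 1]            # quality of the 10-step-ahead prediction
```

Only `w_out` is learned; the random reservoir stays fixed, which is what keeps the method computationally cheap.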

Results/Discussion

To encode the slow spontaneous dynamics, ESNs were trained to predict the extracted features shifted by 10 seconds with respect to the normalized raw data. The networks were trained separately for human and rat data. The ESN hyperparameters were optimized using random search11, and performance was evaluated by computing Pearson correlation coefficients between the networks' predictive outputs and fresh input data (Fig. 1A, 2B). For both human and rat data, predictions of real data scored significantly higher than those of controls (Fig. 1CG, 2B). The optimized ESN reservoir trained on 4 trials of one rat was used to predict slow fMRI signal fluctuations of other rats (Fig. 1FH). An ESN trained on human vessels was also employed to forecast time courses extracted from V1 ROIs of 250 subjects from the Human Connectome Project12 data. The results demonstrate that it is possible to distinguish brain states based on the obtained prediction scores (Fig. 2FG).

Conclusions

Using high-resolution imaging methods allowed us to extract data from individual venules and target a specific biological mechanism. We have shown that vascular spontaneous slow oscillations are in many cases predictable, and that vessels across subjects and specimens share common oscillatory features. By predicting V1 fluctuations of HCP subjects, we demonstrated the connection between vascular and whole-brain dynamics. This paves the way for encoding the intrinsic activity of the entire brain.

The low computational demands of the method make it a good fit for real-time neurofeedback applications.

References

  1. Scholvinck ML, Maier A, Ye FQ, Duyn JH & Leopold DA. Neural basis of global resting state fMRI activity. Proceedings of the National Academy of Sciences of the United States of America 107, 10238-10243, doi:10.1073/pnas.0913110107 (2010).
  2. Ma, Y. et al. Resting-state hemodynamics are spatiotemporally coupled to synchronized and symmetric neural activity in excitatory neurons. Proceedings of the National Academy of Sciences of the United States of America 113, E8463-E8471, doi:10.1073/pnas.1525369113 (2016).
  3. Biswal, B., Zerrin Yetkin, F., Haughton, V. M. & Hyde, J. S. Functional connectivity in the motor cortex of resting human brain using echo-planar mri. Magnetic resonance in medicine 34, 537–541 (1995).
  4. Obrig, H. et al. Spontaneous low frequency oscillations of cerebral hemodynamics and metabolism in human adults. NeuroImage 12, 623-639, doi:10.1006/nimg.2000.0657 (2000).
  5. Yu, X. et al. Sensory and optogenetically driven single-vessel fMRI. Nature methods 13, 337-340, doi:10.1038/nmeth.3765 (2016).
  6. He, Y. et al. under review.
  7. Jaeger, H. The “echo state” approach to analysing and training recurrent neural networks-with an erratum note. Bonn, Germany: German National Research Center for Information Technology GMD Technical Report 148, 34 (2001).
  8. Scheffler, K. & Lehnhardt, S. Principles and applications of balanced SSFP techniques. European Radiology 13, 2409–2418 (2003).
  9. Calhoun, V. D., Liu, J. & Adalı, T. A review of group ICA for fMRI data and ICA for joint inference of imaging, genetic, and ERP data. NeuroImage 45, S163–S172 (2009).
  10. Schreiber, T. & Schmitz, A. Surrogate time series. Physica D: Nonlinear Phenomena 142, 346–382 (2000).
  11. Bergstra, J. & Bengio, Y. Random search for hyper-parameter optimization. The Journal of Machine Learning Research 13, 281–305 (2012).
  12. Van Essen, D. C. et al. The Human Connectome Project: A data acquisition perspective. NeuroImage 62, 2222–2231 (2012).

Acknowledgement

This research was supported by the Internal funding from Max Planck Society and the Graduate Training Center of neuroscience, International Max Planck Research School.

ESN system setup and prediction of the spontaneous slow fluctuation of rat vascular dynamics.
A. Setup B. All prediction scores from one rat (blue dots) ordered by trials, matched with controls (red). CG. Higher mean scores of real data compared to controls (pB=0.004, pF=0.00004). F. Mean scores for different rat trials (blue) and their controls (red). DEH. Best prediction plots of: (C) training rat; (D) its controls; (G) real veins from other rats; (black–raw, blue–target, red–net output)
Prediction of the spontaneous slow fluctuation of human vascular and V1 ROI dynamics.
A.Scores of human signals(blue), with controls(red) B. Higher scores of real data compared to controls (p=0.000153) C. Lags with highest correlation of targets and net outputs. Hist. centered on 0– prediction isn’t the filtered and shifted input DEH. Best predictions of: (D)real signals; (E)controls; (H)V1 ROI (black–raw, blue–target, red–net output) FG. Top 5% signals display distinct brain state
Keywords: prediction, slow fluctuation, single-vessel fMRI, echo state network, intrinsic signal
# 026

Data Curation for Preclinical and Clinical Multimodal Imaging Studies (#359)

G. G. Yamoah1, L. Cao2, F. Kiessling1, F. Gremse1

1 Experimental Molecular Imaging, RWTH Aachen University Clinic, Applied Medical Informatics, Aachen, North Rhine-Westphalia, Germany
2 Inviscan SAS, Preclinical Imaging, Strasbourg cedex 2, France

Introduction

Data curation standards ensure that data are stored in well-defined structures, allowing for efficient validation, reuse, and reproducibility [1]. In medical research, imaging modalities help discover pathological mechanisms and improve and evaluate novel diagnostic and theranostic approaches. While storage methods are available in the medical imaging field, curation standards for biomedical research are not yet established. Medical imaging poses special challenges due to potentially large amounts of data, multimodal images, the need for anonymization, and the use of different proprietary image formats.

Methods

We developed a secure file format and software tool for multimodal imaging studies, supporting common in vivo imaging modalities and image data of up to five dimensions. It includes cryptographic timestamps based on hashes, as suggested long ago [2]. The timestamps uniquely identify files and assure their integrity. Image data are compressed losslessly. Hashes and compression are computed in parallel to reduce writing time and enhance memory efficiency. Fields in the structure, compressed slices, hashes and timestamps are serialized for writing to and reading from files. An internet connection to a trusted timestamp server is needed when writing files. The template-based implementation consists of two C++ header files, now integrated into a preclinical image analysis software [3].
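
The core pairing of lossless compression with an identifying hash can be sketched with the standard library (the real format additionally serializes structure fields in C++ and obtains trusted timestamps, which this toy version omits):

```python
import hashlib
import zlib

def pack_slice(raw: bytes):
    """Losslessly compress a slice and record a hash of the raw data."""
    return {"data": zlib.compress(raw, 6),
            "sha256": hashlib.sha256(raw).hexdigest()}

def unpack_slice(rec):
    """Decompress a slice and verify its integrity against the stored hash."""
    raw = zlib.decompress(rec["data"])
    if hashlib.sha256(raw).hexdigest() != rec["sha256"]:
        raise ValueError("integrity check failed")
    return raw

slice_bytes = bytes(1000) + b"CT" * 500   # toy, highly compressible "image slice"
rec = pack_slice(slice_bytes)
restored = unpack_slice(rec)
```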

Results/Discussion

The format has been tested with several imaging modalities, such as CT-FMT, PET-CT, SPECT-CT and CT-PET. To assess performance, we measured the compression rate, compression ratio and time spent in compression, as well as the time and rate of writing and reading on a network drive. Our findings demonstrate close to 50% reduction in storage space for µCT data at a good writing speed, thanks to parallelism that minimizes the file-writing overhead. Parallelization speeds up the hash computations by a factor of 4. We achieve a compression rate of 137 MB/s for a file of 354 MB and a hashing speed of 822 MB/s on the network drive. Using a 100 MB raw file, the writing and reading rates on our network drive, RAID and SSD were compared with the compression rate. Our results show that compression reduces I/O time by almost 30% on almost all drives used in the experiment.

Conclusions

The development of this file format is a step toward archiving, abstracting and curating the common processes involved in preclinical and clinical multimodal imaging studies in a standardized, credible and secure way.

References

[1]. P. Lord, A. Macdonald, L. Lyon, and D. Giaretta, “From Data Deluge to Data Curation,” presented at the In Proc 3th UK e-Science All Hands Meeting, 2004.

[2]. C. Anderson, “Easy-to-alter digital images raise fears of tampering,” Science, vol. 263, no. 5145, pp. 317–318, Jan. 1994.

[3]. F. Gremse, M. Stärk, J. Ehling, J. R. Menzel, T. Lammers, and F. Kiessling, “Imalytics Preclinical: Interactive Analysis of Biomedical Volume Data,” Theranostics, vol. 6, no. 3, pp. 328–341, Jan. 2016.

Acknowledgement

This research was supported by the German Academic Exchange Service (DAAD) and the German Higher Education Ministry (BMBF) (Biophotonics/13N13355).

GFF Memory layout
Memory layout of the file format showing the data pertaining to fields in the file structure, the compressed image slices, 256-bit hash and timestamps.
Keywords: data curation, multimodal imaging, reproducibility, cryptographic timestamp, file format
# 020

[18F]UCB-H BINDING QUANTIFICATION IN RAT BRAIN: FROM MODELLING TO SUV (#377)

M. E. Serrano Navacerrada1, M. A. Bahri1, G. Becker1, C. Warnier1, F. Mievis1, F. Giacomelli1, C. Lemaire1, A. Luxen1, A. Plenevaux1

1 ULiège, GIGA CRC - In vivo imaging, Liège, Belgium

Introduction

Image quantification in positron emission tomography (PET) is usually achieved through invasive and sometimes infeasible arterial blood sampling [1, 2]. Alternative methods have been proposed, but their results require validation [3, 4].

To improve the use of [18F]UCB-H, a specific biomarker of synaptic vesicle protein 2A (SV2A) [5, 6, 7, 8], we compared the distribution volume (VT) obtained through full kinetic modelling using a population-based input function (PBIF) [9] with the standardized uptake value (SUV).

Methods

Twelve male Sprague Dawley rats were pre-treated with vehicle (saline) or 1 or 10 mg/kg of an SV2A ligand (Keppra®, IP). Thirty minutes later, [18F]UCB-H was injected (IV) and a 90 min dynamic microPET acquisition was started, followed by structural T2 MRI. Primary image analysis focused on examining the stability of the tracer measurement across 10 min time windows. Subsequently, we calculated the correlation between VT (90 minutes) and SUV values over consecutive 20-minute time frames, searching for the optimal frame for a static acquisition [10]. Finally, we performed a supplementary test-retest static acquisition, from 60 to 80 minutes, to test for group differences in SUV.
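
For reference, the SUV compared against VT here is simply the tissue activity concentration normalized by injected dose per body weight; a one-line sketch with made-up numbers (decay correction assumed already applied):

```python
def suv(tissue_kbq_per_ml, injected_dose_mbq, body_weight_g):
    """SUV = tissue activity / (injected dose / body weight); assumes 1 g ~ 1 ml."""
    return tissue_kbq_per_ml / (injected_dose_mbq * 1000.0 / body_weight_g)

# e.g. a brain ROI averaged over a 60-80 min static frame (hypothetical values)
example_suv = suv(tissue_kbq_per_ml=95.0, injected_dose_mbq=30.0, body_weight_g=350.0)
```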

Results/Discussion

Evaluation of ten-minute time windows showed more stability in VT than in SUV measures for all groups; this variation decreased in the late time frames. We also found a strong correlation (R2>0.6) between dynamic VT and twenty-minute-frame SUV, especially between 20 and 60 min. From this, we infer that an optimal frame for a static acquisition with [18F]UCB-H would be between 50 and 80 minutes. Using a static acquisition from 60 to 80 minutes, the SUV revealed statistically significant differences between the vehicle group and the other groups (p<0.01), but not between the groups pre-treated with 1 mg/kg and 10 mg/kg of Keppra®.

Conclusions

Our work shows that a strong correlation exists between the SUV and the VT parameter based on a PBIF. This opens the way to a possible simplification of SV2A in vivo imaging with [18F]UCB-H. Although the SUV is affected by many factors [11] and can overestimate results relative to VT [10], it is able to detect important differences in SV2A expression. Based on these results, the SUV could become an interesting and easy-to-obtain parameter for studying group differences in the context of several diseases.

References

1. Acton PD et al. Radiologic Clinics of North America. 2004; 42(6):1055.

2. Kinahan PE & Fletcher JW. Seminars in Ultrasound, CT and MRI 2010; 31(6): 496.

3. Heurling K et al. Brain Res. 2017; 1670:220.

4. Tomasi G et al. Molecular Imaging and Biology. 2012; 14(2):131.

5. Bretin F et al. EJNMMI res. 2013; 3(1):35.

6. Warnock GI et al. J Nucl Med. 2014; 55(8):1336.

7. Bretin F et al. Molecular Imaging and Biology. 2015; 17(4):557.

8. Salmon E et al. Alzheimer's & Dementia. 2017; 13(7):781.

9. Becker G et al. Molecular Pharmaceutics. 2017; 14(8):2719.

10. Lockhart SN et al. PLoS One. 2016; 11(6):e0158460.

11. Boellaard R. J Nucl Med. 2009; 50(Suppl 1):11S-20S.

Acknowledgement

This work was funded by University of Liège, F.R.S.-FNRS, Walloon Region and UCB Pharma. Alain Plenevaux is research director from F.R.S.-FNRS.

Keywords: Distribution Volume, SUV, Quantification, PET, PBIF
# 021

Comparison of autoencoders for tissue classification in histopathology images (#379)

P. Katiyar1, M. R. Divine1, U. Kohlhofer2, L. Quintanilla-Martinez2, B. J. Pichler1, J. A. Disselhorst1

1 Eberhard Karls University Tuebingen, Werner Siemens Imaging Center, Department of Preclinical Imaging and Radiopharmacy, Tuebingen, Baden-Württemberg, Germany
2 Eberhard Karls University Tuebingen, Institute of Pathology, Tuebingen, Baden-Württemberg, Germany

Introduction

Identifying features in histology images of tumors is a routine and important aspect of many oncology examinations. As the spatial dimensions of histology images at high magnifications can be overwhelmingly large, their manual annotation is often time consuming and expensive. Moreover, due to varying levels of image complexity, such an analysis is prone to errors and unwarranted bias. Deep convolutional autoencoders (DCAEs) are well suited to image-understanding tasks because they provide a data-driven, end-to-end architecture for extracting salient image features. In this work we therefore compare four different DCAEs and use the best-performing model to discover natural groupings among the histology patches of control and therapy tumors.

Methods

NMRI nu/nu mice (n=15) bearing subcutaneous colon cancer (COLO205) were divided into control (n=7) and therapy groups (n=8). Before being studied in separate imaging experiments, the therapy groups were injected i.v. with an apoptosis-inducing therapy, whereas the control groups received equal volumes of vehicle. After the imaging experiments, the tumors were cut into 4 slices and Caspase-3 staining from each slice was obtained. The stained slides were digitized and regions of interest (ROIs) were drawn around tumor tissue at 20× magnification. All ROIs were partitioned into 50×50 non-overlapping patches, which were pooled into a single dataset. The combined dataset was further split into training (n=227102), validation (n=97330) and test (n=81108) sets while keeping the same proportions of the control and therapy patches. Four models (architectures shown in figure 1) were trained on the training set with an encoding dimension of 144 and pixel-wise mean squared reconstruction error (mse) loss: standard DCAE, denoising DCAE, split-encoder DCAE and split-encoder-split-decoder DCAE. The validation set mse loss was used for model selection and the final evaluation was performed on the test set. The encoded test patches from the best architecture were visualized using the t-distributed stochastic neighbor embedding (t-SNE) algorithm.
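The 50×50 non-overlapping patching step can be sketched as follows; the handling of incomplete border patches is our assumption, as the abstract does not specify it:

```python
import numpy as np

def extract_patches(roi, size=50):
    """Split a 2-D ROI into non-overlapping size x size patches,
    discarding incomplete patches at the borders."""
    h, w = roi.shape[:2]
    patches = [roi[i:i + size, j:j + size]
               for i in range(0, h - size + 1, size)
               for j in range(0, w - size + 1, size)]
    return np.stack(patches)

roi = np.zeros((220, 170))          # toy ROI; real ROIs come from 20x slides
print(extract_patches(roi).shape)   # 4 rows x 3 columns of full patches
```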

Results/Discussion

Although no significant volume difference was found between the therapy and control groups, a high degree of apoptosis was present in the therapy tumors. The control tumors mainly consisted of viable tissue with focally visible apoptotic regions. The patch-wise average test mse of the split-encoder-split-decoder DCAE was the lowest (10.52), as compared to the standard, denoising and split-encoder DCAEs (11.65, 23.21 and 11.63, respectively). The model performance was also verified by visually assessing the reconstructed patches obtained from all four architectures (figure 2A). The t-SNE embedding revealed 3 distinct clusters among the encoded test patches with none, small and large amounts of apoptosis (figure 2B-E).
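The t-SNE visualization of the 144-dimensional encodings can be sketched with scikit-learn on stand-in data; the group means and sample sizes below are invented for illustration:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(1)
# stand-in for 144-dimensional DCAE encodings of test patches,
# drawn from three synthetic groups to mimic distinct clusters
codes = np.vstack([rng.normal(loc=m, scale=0.5, size=(60, 144))
                   for m in (0.0, 3.0, 6.0)])

# embed the codes into 2-D for visualization
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(codes)
print(emb.shape)
```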

Conclusions

At no additional computational cost, the split DCAEs provide an appealing unsupervised approach to extract multi-scale features for analyzing large histology images.

Figure 1. Architectures of different autoencoders.
Figure 2. Reconstructed patches and t-SNE embedding.
Keywords: Deep learning, Machine learning, Histology, Tumor heterogeneity, Convolutional autoencoders
# 022

PyNIT: Easy to use, interactive and BIDS friendly data processing framework for neuroimaging (#330)

S. Lee1, 3, M. A. Broadwater2, 3, Y. - Y. I. Shih1, 3, 4

1 University of North Carolina at Chapel Hill, Department of Neurology, Chapel Hill, North Carolina, United States of America
2 University of North Carolina at Chapel Hill, Bowles Center for Alcohol Studies, Chapel Hill, North Carolina, United States of America
3 University of North Carolina at Chapel Hill, Biomedical Research Imaging Center, Chapel Hill, North Carolina, United States of America
4 University of North Carolina at Chapel Hill, Department of Biomedical Engineering, Chapel Hill, North Carolina, United States of America

Introduction

PyNIT is a project to develop a framework that helps researchers unfamiliar with programming to easily process experimental neuroimaging datasets and reproduce analysis results with concise code in an interactive environment, the Jupyter notebook [1-2]. The main goals of the project are to provide 1) a tool that can easily perform a series of processing steps over a structured dataset, 2) a solution to simplify documentation for reproducibility, and 3) an easily customizable framework for standardized pipeline development.

Methods

Data analysis using PyNIT begins by organizing the data with the Brain Imaging Data Structure (BIDS) [3]. The brk2nifti converter provided by PyNIT helps to organize and convert raw data obtained with a Bruker scanner. The pipeline handler is initialized with the path of the dataset and a template image for spatial normalization. The interactive help guide allows the user to set up and execute pipelines with a few lines of instructions (Figure 1). An environment for developing new processing steps and pipelines in the form of plugins is also provided (Figure 2).
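For readers unfamiliar with BIDS, the naming convention this workflow relies on can be sketched as follows; this is a generic illustration of BIDS-style paths, not PyNIT's own API:

```python
from pathlib import Path

def bids_anat_path(root, sub, ses, suffix="T2w"):
    """Build a BIDS-style path for a structural scan, e.g.
    sub-01/ses-01/anat/sub-01_ses-01_T2w.nii.gz."""
    name = f"sub-{sub}_ses-{ses}_{suffix}.nii.gz"
    return Path(root) / f"sub-{sub}" / f"ses-{ses}" / "anat" / name

print(bids_anat_path("dataset", "01", "01"))
```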

Results/Discussion

This package is designed to work in a Jupyter notebook, which allows documentation to be produced simultaneously with the processing [1-2]. Figure 1 shows how the interactive help that accompanies each input command lets the user follow the processing steps without being exposed to complex programming code. The structure of intermediate data generated by the pipeline remains constant, which simplifies data management during the development of new pipelines. The most basic processing steps to be used in a new pipeline can be defined with up to 10 lines of code, as shown in Figure 2. This step method converts a template-style input string into a command that can be executed in a Linux or Unix shell, applies it across the entire dataset, and maintains a consistent output dataset structure (Figure 1, bottom).
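The template-to-shell-command idea behind a step method can be sketched as follows; this is a hypothetical illustration using a real AFNI command name, not the actual PyNIT code:

```python
import shlex

def build_command(template, **kwargs):
    """Fill a template-style string with per-scan parameters to produce
    an argument list executable in a Linux/Unix shell environment."""
    return shlex.split(template.format(**kwargs))

# AFNI's 3dTshift as an example target command
cmd = build_command("3dTshift -prefix {output} {input}",
                    input="sub-01_bold.nii.gz",
                    output="sub-01_tshift.nii.gz")
print(cmd)
```

Applying the same template to every scan in the dataset is what keeps the output structure consistent.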

Conclusions

In this study, we introduced a Python-based pipeline framework that allows researchers to easily execute, customize, and document data processing workflows. PyNIT is expected to provide an ecosystem for packaging and sharing newly developed pipelines, together with documented tutorial materials, in the form of a plug-in and Jupyter notebook. The inclusion of the BIDS standard in this project will help to apply datasets from animal model studies to translational approaches with human datasets that follow the same data structure.

References

  1. Shen H. "Interactive notebooks: Sharing the code". Nature. 2014; 515:151-152.
  2. Piccolo SR et al. "Tools and techniques for computational reproducibility". GigaScience. 2016; 5:30. https://doi.org/10.1186/s13742-016-0135-4
  3. Gorgolewski KJ et al. "The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments". Scientific Data. 2016; 3:160044. doi:10.1038/sdata.2016.44

Acknowledgement

We thank members of the Shih lab for valuable discussions concerning the studies described in this abstract. Our team is supported by NIMH R01MH111429, R41MH113252, R21MH106939, NINDS R01NS091236, NIAAA U01AA020023, R01AA025582, NICHD U54HD079124, American Heart Association 15SDG23260025, and The Brain & Behavior Research Foundation.

Figure 1. Overview of data processing workflow of PyNIT framework
Interactive semi-automated pipelines are performed on organized datasets with simple configuration in a Jupyter notebook. Interactive help facilitates pipeline documentation. In addition, the data structure of the intermediate products and the resultant data is maintained consistently, which makes it easy to manage.
Figure 2. Example for creating a custom processing step
It is possible to develop a custom plug-in that applies a processing command from a neuroimage processing package such as AFNI to the whole dataset, using concise code and an intuitive template-style string.
Keywords: BIDS, MRI data processing, Pipeline framework, Jupyter notebook, Interactive, Python
# 023

User-friendly interactive workflow for data preprocessing in sensorless retrospectively gated cardiac 4D micro-CT (#417)

D. Panetta1, L. Cao2, S. Burchielli3, P. A. Salvadori1

1 CNR Institute of Clinical Physiology, Pisa, Italy
2 Inviscan SaS, Strasbourg, France
3 CNR/Tuscany Foundation, Pisa, Italy

Introduction

Retrospectively gated (RG) cardiac 4D micro-CT has become a powerful tool for cardiac function assessment in small animal models of heart disease. It relies on the correct assignment of each X-ray projection to a given phase of the cardiac cycle, based on either concurrent ECG measurement or projection-derived motion estimation. Fully automated methods can be attractive for most users, but no reports are available on their reliability. This work aims at evaluating the usefulness of an interactive workflow for the data preprocessing step in sensorless retrospectively gated 4D cardiac micro-CT.

Methods

A workflow for sensorless RG micro-CT data rebinning with a user-friendly GUI has been implemented on the IRIS scanner (Inviscan SaS, Strasbourg, France). Upon micro-CT acquisition, the GUI allows the user to (i) select the ROI for the estimation of cardiac and respiratory motion in anterior or lateral projections, (ii) evaluate the motion curve and its FFT, (iii) select the desired FFT bands for the respiratory and cardiac motion, and (iv) select the respiratory window and the number of cardiac bins. The rebinned data are then sent to reconstruction and the resulting CT images are exported in DICOM format for 4D visualization. Several acquisition/reconstruction schemes have been tested, ranging from 800 to 20000 views/rotation and 4 to 20 bins per cardiac cycle.
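Step (iii), selecting FFT bands to separate respiratory from cardiac motion, can be sketched on a synthetic motion trace; the frame rate, band limits and component frequencies below are assumed for illustration:

```python
import numpy as np

fs = 50.0                              # assumed projection frame rate, Hz
t = np.arange(0, 20, 1 / fs)
# synthetic motion trace: ~1 Hz respiratory + ~5 Hz cardiac component
motion = 1.0 * np.sin(2 * np.pi * 1.0 * t) + 0.4 * np.sin(2 * np.pi * 5.0 * t)

freqs = np.fft.rfftfreq(t.size, d=1 / fs)
spectrum = np.abs(np.fft.rfft(motion))

# pick the dominant frequency within each user-selected band
resp_band = (freqs > 0.5) & (freqs < 2.0)
card_band = (freqs > 3.0) & (freqs < 8.0)
resp_peak = freqs[resp_band][np.argmax(spectrum[resp_band])]
card_peak = freqs[card_band][np.argmax(spectrum[card_band])]
print(resp_peak, card_peak)
```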

Results/Discussion

The RG cardiac imaging workflow has been tested on micro-CT scans of mice and rats weighing 60-500 g, injected with Iomeron (Bracco, Italy) at a concentration of 200 mgI/mL. Various acquisition protocols have been tested, with exposure time per frame in the range of 11-50 ms. The interactive workflow (Fig 1) was helpful in assessing the reliability of the identification of cardiac and respiratory events, which strongly influences the resulting image quality. A useful feature of the GUI is the possibility to quickly go backwards in the workflow if the user is not satisfied with the event identification or with the quality of the motion curve. The estimated average RR and respiration intervals also provide a quick indicator of the correctness of the estimated parameters. Reconstructed 4D images have been qualitatively evaluated in OsiriX. A strong correlation between image quality and correctness of physiological event identification has been observed.

Conclusions

Advanced cardiac imaging with multi-phase reconstruction in cardiac 4D micro-CT from highly heterogeneous acquisition protocols relies on the selection of several parameters, leading to intermediate results that are difficult to check in a fully automated processing workflow. Our interactive manual workflow for sensorless 4D micro-CT was shown to be both user-friendly and reliable for a wide range of acquisition protocols, animal sizes and cardiac bin number.

Figure 1
Conceptual scheme of the data preprocessing workflow for sensorless retrospectively gated 4D cardiac micro-CT implemented in the IRIS CT scanner. Steps 1-2 can be repeated iteratively. This user-friendly manual workflow can be advantageous compared to a fully automated scheme, allowing flexibility in the protocol choice while preserving reliability in all intermediate preprocessing steps.
Keywords: Micro-CT, cardiac imaging, retrospective gating, data preprocessing, graphical user interface
# 024

An Iterative Algorithm for Sampling Recovery in PET (#219)

P. Galve1, A. López Montes1, J. M. Udías1, J. López Herraiz1

1 Universidad Complutense de Madrid, Grupo de Física Nuclear and UPARCOS, Facultad de Ciencias Físicas, Madrid, Spain

Introduction

Iterative image reconstruction methods in PET such as the OSEM algorithm may obtain resolution recovery by using a realistic system response matrix that includes all the physical effects involved [1-3]. Nevertheless, this resolution recovery is often limited by the reduced sampling in the projection space. In this work, we propose a method to further improve resolution recovery in the PET image reconstruction process by iteratively refining the measurements with data-driven increased sampling. This method was already successfully applied in multiplexed SPECT [4].

Methods

We start with a PET acquisition with N lines of response (LORs), which we use to obtain a first reconstructed image with the standard OSEM method. After that, we increase the sampling by defining four subLORs around each initially measured LOR. A maximum-likelihood estimate of the counts in these subLORs [4, 5] is obtained by weighting the counts of their original LOR with the relative value of the projections in each subLOR with respect to the four subLORs. We then reconstruct a new image from the new 4N data elements with the standard OSEM algorithm. We call this step a superiteration, which may be repeated 2-3 times until convergence is achieved. We have evaluated the improvements in image quality obtained with this method using data acquired on the Argus PET/CT scanner [6].
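The subLOR weighting can be sketched as follows; this is a simplified stand-in where the forward projections, which in practice come from the system response matrix, are replaced by toy numbers:

```python
import numpy as np

def split_counts(lor_counts, sub_projections):
    """Redistribute each LOR's measured counts over its 4 subLORs in
    proportion to the current image's forward projections, so that the
    total counts per original LOR are preserved."""
    w = sub_projections / sub_projections.sum(axis=1, keepdims=True)
    return lor_counts[:, None] * w

counts = np.array([100.0, 40.0])           # counts in 2 measured LORs
proj = np.array([[1.0, 3.0, 3.0, 1.0],     # toy forward projections of the
                 [2.0, 2.0, 2.0, 2.0]])    # 4 subLORs for each LOR
sub = split_counts(counts, proj)
print(sub)
```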

Results/Discussion

We measured resolution and noise for different numbers of iterations and superiterations of the OSEM and MAP-OSEM [7] reconstruction algorithms, using a NEMA NU4 Image Quality (IQ) phantom [8] filled with 18F-FDG (Fig. 1).

For the OSEM algorithm, three superiterations yield a 15% improvement in resolution just before the noise saturation knee, without an increase in noise. We observe similar results when MAP-OSEM is applied: the superiterations allow a 7% lower noise level while maintaining resolution.

We also applied the proposed method to a cardiac study in rats with 18F-FDG. Here too, after the second superiteration, the reconstructed images show a 12% reduction in apparent myocardium wall size compared to the standard reconstruction (Fig. 2).

These are just two examples of all the cases tested. The proposed method is quite general and can be applied to any other scanner. The improvements in image resolution will be more significant in cases where reduced sampling is the resolution-limiting factor.

Conclusions

The image quality improvement achieved by the new superiterative algorithm has been shown (Figs. 1-2). Superiterations introduce additional degrees of freedom in the SRM that are consistently relaxed until a converged image, with projections more consistent with the data, is obtained. On the other hand, the superiterative method increases the reconstruction time severalfold, but this is a minor concern with current high-performance computers and GPUs. In this method, the number of subLORs and the way each LOR is subdivided are not fixed. Further studies on the impact on image quality are ongoing.

References

[1] Physics in Nuclear Medicine, Simon R. Cherry, James A. Sorenson and Michael E. Phelps. Elsevier, 4th edition.

[2] J.L. Herraiz, S. España, J.J. Vaquero, M. Desco, J.M. Udías. Physics in Medicine and Biology, Volume 51, Number 18, (2006).

[3] K. Gong, J. Zhou, M. Tohme, M. Judenhofer, Y. Yang, and J. Qi. IEEE Transactions on Medical Imaging, Vol. 36, No. 10, October 2017.

[4] S.C. Moore, M. Cervo, S.D. Metzler, J.M. Udías, J.L. Herraiz, "An Iterative Method for Eliminating Artifacts from Multiplexed Data in Pinhole SPECT". Fully 3D Conference 2015.

[5] E. Lage, V. Parot, S.C. Moore, A. Sitek, J.M. Udías, S.R. Dave, M.A. Park, J.J. Vaquero, J.L. Herraiz. Med. Phys. 2015 Mar;42(3):1398-410.

[6] Y. Wang, J. Seidel, B.M.W. Tsui, J.J. Vaquero, and M.G. Pomper. Journal of Nuclear Medicine, 47:1891-1900, (2006).

[7] V. Bettinardi, E. Pagani, M.C. Gilardi, S. Alenius, K. Thielemans, M. Teras, F. Fazio. Eur. J. Nucl. Med. (2002) 29:7–18.

[8] National Electrical Manufacturers Association (NEMA). 2008. Performance Measurements of Small Animal Positron Emission Tomographs. NEMA Standards Publication NU4-2008. Rosslyn, VA. National Electrical Manufacturers.

Acknowledgement

This work was supported by Comunidad de Madrid (S2013/MIT-3024 TOPUS-CM), Spanish Ministry of Science and Innovation, Spanish Government (FPA2015-65035-P, RTC-2015-3772-1). This is a contribution for the Moncloa Campus of International Excellence. Grupo de Física Nuclear-UCM, Ref.: 910059. This work acknowledges support by EU's H2020 under MediNet, a Networking Activity of ENSAR-2 (grant agreement 654002).

J. L. Herraiz is also funded by the EU Cofund Fellowship Marie Curie Actions, 7th Framework Programme.

P. Galve is supported by a Universidad Complutense de Madrid, Moncloa Campus of International Excellence and Banco Santander predoctoral grant, CT27/16-CT28/16.

Noise-resolution improvement for the superiterative method.

Fig. 1. Noise-resolution curves for an IQ phantom acquired with the preclinical Argus scanner, reconstructed using standard OSEM (10 subsets) (top) and MAP-OSEM with 10 subsets, β=0.08 (bottom). We always projected the 10th-iteration image to compute the next superiteration weights. The number of iterations of each point is indicated.

Transverse view of a rat heart injected with FDG and acquired with the Argus scanner.

Fig. 2. (A) Standard OSEM reconstruction (10 iterations, 10 subsets). (B) After 2 superiterations with the same parameters, recovering transverse information. The line profile along the blue line crossing the heart is shown below the images. A significant improvement in resolution and a reduction of the spill-over of myocardium activity into the left ventricle are seen.

Keywords: PET, iterative methods, image reconstruction
# 025

Differentiation of soft and hard mouse tissue by Laser Induced Breakdown Spectroscopy (#151)

F. Mehari2, 3, O. - M. Thoma1, 2, F. Klämpfl2, 3, M. Schmidt2, 3, M. J. Waldner1, 2

1 Department of Medicine 1, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Bavaria, Germany
2 Erlangen Graduate School in Advanced Optical Technologies (SAOT), Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Bavaria, Germany
3 Institute of Photonic Technologies, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Bavaria, Germany

Introduction

The analysis of a suspect tissue is generally performed ex vivo by a pathologist. However, the detection of cancerous samples by surgeons is subjective and can lead to false negative results. Thus, the development of new techniques for in vivo analysis of tissues is of great relevance. In this regard, laser induced breakdown spectroscopy (LIBS) is a good candidate for minimally invasive differentiation of healthy and cancerous tissues.

Methods

In this work, we aim to lay the groundwork for distinguishing cancer from healthy tissue by first investigating different healthy murine tissues. Six fat, hind limb muscle and sciatic nerve samples were collected from mice and investigated with LIBS under ex vivo conditions. A short laser pulse was used to ionize a volume of a few microns of each tissue. The generated plasma plume, which is representative of the sample, is then analyzed to determine its elemental composition. The spectra of each sample were analyzed in the 200-975 nm wavelength range using statistical methods. Discrimination among the tissues is performed using principal component analysis (PCA) followed by linear discriminant analysis (LDA). Measurements from previously acquired porcine tissues were used as training data.
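The PCA-plus-LDA classification pipeline can be sketched with scikit-learn on stand-in spectra; the class means, sample counts and channel count below are invented, whereas the study used real LIBS spectra with porcine training data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
# stand-in spectra: 3 tissue classes x 30 samples x 500 spectral channels
X = np.vstack([rng.normal(m, 1.0, (30, 500)) for m in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], 30)   # e.g. fat, muscle, nerve

# reduce to 10 principal components, then classify with LDA
clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
clf.fit(X, y)
print(clf.score(X, y))   # training accuracy on the toy data
```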

Results/Discussion

At first glance, common elements in the tissues such as carbon (C), calcium (Ca), magnesium (Mg), oxygen (O), hydrogen (H), nitrogen (N), potassium (K) and sodium (Na) were easily identifiable. However, the intensity of the elements varies across the measurements of each sample. Using PCA, each spectrum could be represented by only 10 principal components containing more than 61% (PC10 = 0.49%) of the total variance in the measurements. LDA was then performed on the 10 PCs to classify the tissues. The performance of the classifier was evaluated using receiver operating characteristic (ROC) analysis, yielding an average sensitivity of 93.03% (nerve-fat), 80.78% (nerve-muscle) and 98.67% (muscle-fat) at the cut-off point. Our LIBS experiments successfully discriminate between fat, muscle and nerve tissues from mice, showing the potential of the technique for in vivo tissue discrimination.

Conclusions

In conclusion, laser induced breakdown spectroscopy is a feasible technique for soft tissue analysis that can easily discriminate among tissue types. Further steps include ex vivo and in vivo investigation of healthy and cancerous samples.

References

1. Kanawade R, Mehari F, Klämpfl F, Rohde M, Knipfer C, Tangermann-Gerk K, Adler W, Schmidt M & Stelzle F. J Biophotonics. 2015; 8(1-2):153-161.

2. Rohde M, Mehari F, Klämpfl F, Adler W, Neukam FW, Schmidt M & Stelzle F. J Biophotonics. 2017; 10:1250-1261.

3. Mehari F, Rohde M, Knipfer C, Kanawade R, Klämpfl F, Adler W, Oetter W, Stelzle F & Schmidt M. Plasma Sci Technol. 2016; 18:654.

Acknowledgement

The authors gratefully acknowledge funding of the Erlangen Graduate School in Advanced Optical Technologies (SAOT) by the German Research Foundation (DFG) in the framework of the German Excellence Initiative.