Session: Research Data Management I: AI for Image Analysis

Session Chair: Dr. Jianxu Chen
English

Automated Cell Segmentation, Classification, and Tracking in Large-scale Microscopy Data

Dr. Dagmar Kainmueller, MDC
I will present state-of-the-art machine learning models for accurate cell segmentation, classification, and tracking in microscopy data from a range of modalities, including fluorescence microscopy, electron microscopy, and histology images. Furthermore, I will present a theory for the sound application of such models to large microscopy data. To cope with GPU memory constraints, large data, when processed with popular convolutional neural network (CNN) architectures, has to be tiled into smaller pieces that are processed individually, with the results stitched back together afterwards. In this context, inconsistencies at stitching seams have been reported, but a formal analysis of their causes has been lacking. Our work shows that the potential for such inconsistencies is intricately tied to the shift-equivariance properties of the employed CNNs. Our theoretical analysis yields simple design rules for CNNs that are necessary to avoid inconsistencies in tile-and-stitch processing of large data.
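To illustrate the tile-and-stitch setting discussed above, the following sketch (a generic example, not the speaker's code) splits a large image into tiles, runs a fully convolutional PyTorch model on each tile, and stitches the per-tile outputs back together; if the model is not shift equivariant, for instance because of zero padding or pooling misalignment, the stitched prediction can show visible seams:

    import torch

    def tile_and_stitch(image: torch.Tensor, model: torch.nn.Module,
                        tile: int = 256) -> torch.Tensor:
        """Process a large 2D image tile by tile and stitch the results.

        Assumptions (for brevity): `image` has shape (C, H, W) with H and W
        divisible by `tile`, and `model` maps a (1, C, tile, tile) input to
        an output of the same spatial size and channel count.
        """
        _, h, w = image.shape
        out = torch.zeros_like(image)
        with torch.no_grad():
            for y in range(0, h, tile):
                for x in range(0, w, tile):
                    patch = image[:, y:y + tile, x:x + tile].unsqueeze(0)
                    # If the model is not shift equivariant, this prediction
                    # depends on where the tile grid is placed, which shows
                    # up as inconsistencies along the stitching seams.
                    out[:, y:y + tile, x:x + tile] = model(patch).squeeze(0)
        return out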
23.06.2022 09:30 (30 minutes) ICM/Hall 3
English

Large-Scale Datasets for AI: How to achieve quantity and quality

Prof. Dr. Marc Aubreville, Technische Hochschule Ingolstadt
Clinical decision support systems based on artificial intelligence (AI) are expected to become a key component of the diagnostic process in the foreseeable future. The development of such systems depends on the availability of an abundance of clinical data. While the demands on dataset size for deep learning-based systems have decreased in recent years thanks to transfer learning, fine-tuning, and un-/self-supervised techniques, systems that live up to clinical standards still largely require huge amounts of high-quality training data [1,2]. Computer-aided labeling methods have been designed for various applications to reduce the labor cost of this process. One particular challenge in annotation is sample selection, i.e., covering the full variability of real-world data, for which active learning methods have been designed [3]. On the other hand, effort can be reduced by interactive labeling methods (expert-algorithm collaboration), where AI models generate a coarse label set that is later corrected by the expert [4]. Yet both methodologies introduce a potential bias into the dataset, which has to be carefully monitored during the annotation process and also depends on the annotation task itself. In the talk, we will present concepts and ideas for how labor costs can be practically reduced for a range of typical pathology tasks, while at the same time controlling for biases in the datasets.
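As a minimal illustration of one of the annotation-cost-reduction strategies mentioned above, the sketch below implements a generic uncertainty-based active learning heuristic (not the specific method of [3]): it selects the most uncertain unlabeled samples for expert annotation.

    import numpy as np

    def select_for_annotation(probs: np.ndarray, budget: int) -> np.ndarray:
        """Return the indices of the `budget` most uncertain unlabeled samples.

        probs: array of shape (n_samples, n_classes) holding model softmax
        outputs. Uncertainty is measured by predictive entropy; the samples
        with the highest entropy are forwarded to the expert for labeling.
        """
        entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
        return np.argsort(entropy)[::-1][:budget]

    # Example: 1000 unlabeled patches, 3 classes, request labels for 50 of them.
    rng = np.random.default_rng(0)
    unlabeled_probs = rng.dirichlet(np.ones(3), size=1000)
    to_label = select_for_annotation(unlabeled_probs, budget=50)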
23.06.2022 10:00 (30 minutes) ICM/Hall 3
English

Deep Learning for Congenital Heart Disease Modeling, Diagnosis and Intervention Planning

Prof. Dr. Yiyu Shi, University of Notre Dame
Congenital heart disease (CHD) is the most common type of birth defect, occurring in 1 of every 110 births in the United States. CHD usually comes with severe variations in heart structure and great-vessel connections that require highly specialized domain knowledge and time-consuming manual effort to diagnose. While various deep learning frameworks have demonstrated their power in medical image computing, the large structural variations, the significant labeling effort, and the limited dataset size render them ineffective for CHD. In addition, although cardiac surgery can effectively tackle many types of CHD and lead to a decreased mortality rate, the prognosis often depends on a number of known or unknown factors, making it a guessing game for less experienced surgeons. In this talk, I will demonstrate a series of algorithms, including graph-matching-assisted deep learning, positional contrastive learning, and the fusion of medical images with clinical data. I will show how they help in the diagnosis and treatment planning of CHD, and how they enabled the world's first surgical telemonitoring of CHD in April 2019.
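To make one of the listed building blocks concrete, the following sketch shows a generic position-aware contrastive loss in PyTorch. It only illustrates the general idea of treating slices with similar normalized positions in a volume as positive pairs; it is an assumption about, not a reproduction of, the positional contrastive learning used in the talk.

    import torch
    import torch.nn.functional as F

    def positional_contrastive_loss(embeddings: torch.Tensor,
                                    positions: torch.Tensor,
                                    temperature: float = 0.1,
                                    pos_threshold: float = 0.1) -> torch.Tensor:
        """Contrastive loss with position-based positives.

        embeddings: (N, D) slice features; positions: (N,) normalized slice
        positions in [0, 1]. Slices whose positions differ by less than
        `pos_threshold` are treated as positives, all others as negatives.
        """
        z = F.normalize(embeddings, dim=1)
        sim = z @ z.t() / temperature                    # pairwise similarities
        pos_mask = (positions.unsqueeze(0) - positions.unsqueeze(1)).abs() < pos_threshold
        pos_mask.fill_diagonal_(False)                   # exclude self-pairs
        # Mask out self-similarities before the softmax normalization.
        logits = sim - 1e9 * torch.eye(len(z), device=z.device)
        log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
        per_anchor = -(log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)
        return per_anchor[pos_mask.any(dim=1)].mean()    # anchors with >= 1 positive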
23.06.2022 10:30 (30 minutes) ICM/Hall 3
English

Self-configuring Methods for Deep Learning-based Biomedical Image Analysis

Michael Baumgartner, Deutsches Krebsforschungszentrum
Biomedical imaging is an essential part of current health care and is used on a daily basis by many medical experts. Fueled by the ever-increasing amounts of available digital data and the availability of large-scale compute resources, data-driven approaches have emerged as a cornerstone of automated image analysis. Deep learning-based methods in particular provide a generic framework for building algorithms that can be applied to vastly differing biomedical problems. Nevertheless, manual design choices, such as the correct configuration of these algorithms, remain an open problem that consumes large amounts of time and requires expert knowledge. Most importantly, this process needs to be repeated for every single application, resulting in many specialized solutions that do not generalize to other problems and datasets. In response to the high variability of biomedical image datasets, self-configuring methods such as nnU-Net [1] and nnDetection [2] have established a new paradigm for the development of deep learning-based algorithms. Each method was developed on a set of diverse semantic segmentation and object detection problems in order to derive heuristic rules for a systematic and automated configuration process. As a result, they can be applied to previously unseen problems without any user interaction. Both publicly available frameworks have demonstrated state-of-the-art results in international competitions and on public benchmarks, outperforming highly specialized, domain-specific solutions.
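To illustrate the self-configuration idea in code, the sketch below maps a simple "dataset fingerprint" to a pipeline configuration via fixed heuristic rules. The rules and constants are invented for illustration only and are not the actual rules used by nnU-Net [1] or nnDetection [2].

    from dataclasses import dataclass

    @dataclass
    class DatasetFingerprint:
        median_shape: tuple   # median image shape after resampling, e.g. (128, 256, 256)
        num_classes: int
        gpu_memory_gb: float

    def configure(fp: DatasetFingerprint) -> dict:
        """Derive patch size and batch size from the dataset fingerprint."""
        # Crude, purely illustrative memory budget in voxels per batch element.
        budget_voxels = int(fp.gpu_memory_gb * 1.5e6)
        patch = [min(s, 192) for s in fp.median_shape]
        # Halve the patch size until it fits the memory budget.
        while patch[0] * patch[1] * patch[2] > budget_voxels and max(patch) > 32:
            patch = [max(p // 2, 32) for p in patch]
        # Spend leftover memory on the batch size, but keep at least two samples.
        batch_size = max(2, budget_voxels // (patch[0] * patch[1] * patch[2]))
        return {"patch_size": tuple(patch), "batch_size": int(batch_size),
                "num_classes": fp.num_classes}

    print(configure(DatasetFingerprint((128, 256, 256), num_classes=3, gpu_memory_gb=11)))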
23.06.2022 11:00 (30 minutes) ICM/Hall 3