DeepSynth: Three-dimensional nuclear segmentation of biological images using neural networks trained with synthetic data

Kenneth W. Dunn, Chichen Fu, David Joon Ho, Soonam Lee, Shuo Han, Paul Salama, Edward J. Delp

Research output: Contribution to journal › Article

Abstract

The scale of biological microscopy has increased dramatically over the past ten years, with the development of new modalities supporting collection of high-resolution fluorescence image volumes spanning hundreds of microns, if not millimeters. The size and complexity of these volumes are such that quantitative analysis requires automated image-processing methods to identify and characterize individual cells. For many workflows, this process starts with segmentation of nuclei, which, owing to their ubiquity, ease of labeling, and relatively simple structure, are appealing targets for automated detection of individual cells. In large, three-dimensional image volumes, however, nuclei present many challenges to automated segmentation, such that conventional approaches are seldom effective or robust. Techniques based on deep learning have shown great promise, but enthusiasm for applying them is tempered by the need to generate training data, an arduous task, particularly in three dimensions. Here we present results of a new technique for nuclear segmentation using neural networks trained on synthetic data. Comparisons with results obtained using commonly used image-processing packages demonstrate that DeepSynth provides the superior results associated with deep-learning techniques without the need for manual annotation.
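The abstract's central idea is that supervised segmentation networks can be trained without manual annotation, because synthetic volumes come with their ground-truth labels for free. The following is a minimal, hypothetical sketch of what such a paired training sample might look like (noisy ellipsoidal "nuclei" plus a matching label map); it is an illustration of the general concept only, not the generation pipeline used in the DeepSynth paper.

```python
import numpy as np

def synthetic_nuclei_volume(shape=(32, 64, 64), n_nuclei=5, radius=6, seed=0):
    """Toy paired training sample: a noisy intensity volume and its label mask.

    Hypothetical illustration only -- not the DeepSynth generation method.
    """
    rng = np.random.default_rng(seed)
    labels = np.zeros(shape, dtype=np.uint16)
    zz, yy, xx = np.indices(shape)
    for i in range(1, n_nuclei + 1):
        # Random ellipsoid: center away from the borders, per-axis radii.
        c = [rng.integers(radius, s - radius) for s in shape]
        r = rng.uniform(0.6, 1.0, size=3) * radius
        mask = (((zz - c[0]) / r[0]) ** 2
                + ((yy - c[1]) / r[1]) ** 2
                + ((xx - c[2]) / r[2]) ** 2) <= 1.0
        labels[mask] = i  # ground truth comes for free
    # Bright foreground plus Gaussian noise loosely mimics a fluorescence stack.
    volume = 0.8 * (labels > 0) + rng.normal(0.0, 0.1, shape)
    return volume.astype(np.float32), labels

vol, lab = synthetic_nuclei_volume()
```

Arbitrarily many such (volume, label) pairs can be generated, which is what removes the manual-annotation bottleneck the abstract describes.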

Original language: English (US)
Article number: 18295
Journal: Scientific Reports
Volume: 9
Issue number: 1
DOI: 10.1038/s41598-019-54244-5
State: Published - Dec 1 2019


ASJC Scopus subject areas

  • General

Cite this

DeepSynth: Three-dimensional nuclear segmentation of biological images using neural networks trained with synthetic data. / Dunn, Kenneth W.; Fu, Chichen; Ho, David Joon; Lee, Soonam; Han, Shuo; Salama, Paul; Delp, Edward J.

In: Scientific Reports, Vol. 9, No. 1, 18295, 01.12.2019.

@article{fd00df45fc9d4a23b445a7e06e38d29c,
title = "DeepSynth: Three-dimensional nuclear segmentation of biological images using neural networks trained with synthetic data",
author = "Dunn, {Kenneth W.} and Chichen Fu and Ho, {David Joon} and Soonam Lee and Shuo Han and Paul Salama and Delp, {Edward J.}",
year = "2019",
month = "12",
day = "1",
doi = "10.1038/s41598-019-54244-5",
language = "English (US)",
volume = "9",
journal = "Scientific Reports",
issn = "2045-2322",
publisher = "Nature Publishing Group",
number = "1",
}

TY - JOUR
T1 - DeepSynth
T2 - Three-dimensional nuclear segmentation of biological images using neural networks trained with synthetic data
AU - Dunn, Kenneth W.
AU - Fu, Chichen
AU - Ho, David Joon
AU - Lee, Soonam
AU - Han, Shuo
AU - Salama, Paul
AU - Delp, Edward J.
PY - 2019/12/1
Y1 - 2019/12/1
UR - http://www.scopus.com/inward/record.url?scp=85076000332&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85076000332&partnerID=8YFLogxK
DO - 10.1038/s41598-019-54244-5
M3 - Article
C2 - 31797882
AN - SCOPUS:85076000332
VL - 9
JO - Scientific Reports
JF - Scientific Reports
SN - 2045-2322
IS - 1
M1 - 18295
ER -