Deep learning deconvolution

Introduction

Quantitative imaging is the foundation of fluorescence microscopy and has led to major discoveries in the biomedical sciences, improving human longevity and quality of life. However, the imaging properties and measurement imperfections of a fluorescence microscope distort the image and reduce the maximum resolution the imaging system can achieve. Researchers are therefore constrained by spatial and temporal resolution, light exposure, and signal-to-noise ratio (SNR), and must routinely trade off these factors.


Deep Learning is a type of Artificial Intelligence (AI) that is well suited to image-based problems and has been applied to image restoration tasks such as denoising and resolution enhancement, as well as to image segmentation. These AI applications have tremendous potential for microscopy experiments and could pave the way for a quantum leap forward in microscopy-based discoveries that decode biological functions and mechanisms of disease.

In this white paper we discuss practical limitations in fluorescence microscopy, deep learning-enabled image enhancement, and example datasets demonstrating deep learning deconvolution for confocal microscopy.



Challenges in Imaging Biological Samples and Microscope Point Spread Function (PSF)

The intrinsic thickness of cells and tissues poses challenges in imaging biological samples. While objective lenses with high numerical aperture have high resolving power, they have a relatively narrow depth of field, resulting in blurred, out-of-focus information that interferes with the image in the focal plane. This blurring decreases image contrast and resolution, and the problem grows with increasing sample thickness.


The point spread function (PSF) describes the blurring of a point source caused by diffraction at the objective lens [1]. The PSF can either be measured, by imaging a subresolution fluorescent bead ideally smaller than the resolution limit of the optical setup that will be used for the samples, or calculated from theoretical formulae. Measurement is a laborious process, and although the measured PSF closely matches the experimental setup, the images obtained have very poor SNR. Moreover, PSF measurements can vary substantially [2] because of sample degradation from photobleaching and problems in the optical system such as temperature drift, spherical aberration, and defective relay lenses. Theoretical PSFs calculated from imaging equations are used in an attempt to reverse the effects of convolution, such as blur and the loss of contrast in small features. Neither approach is ideal: both require time-consuming manual measurement and/or expert knowledge of the many hardware components that affect the PSF model.
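To make the image-formation model concrete, the sketch below blurs a synthetic point source with a PSF. This is a minimal illustration, not from the white paper: the Gaussian PSF shape, its width, and the image size are all assumptions standing in for a real measured or calculated PSF.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=15, sigma=2.0):
    """Build a normalized 2D Gaussian as a simple stand-in for a measured PSF."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

# Synthetic "sample": a single bright point source on a dark background.
image = np.zeros((64, 64))
image[32, 32] = 1.0

# Image formation: the recorded image is the sample convolved with the PSF.
psf = gaussian_psf()
blurred = fftconvolve(image, psf, mode="same")

# Total intensity is conserved, but the point is spread over many pixels,
# lowering peak intensity and contrast -- the blur deconvolution must undo.
print(blurred.max() < image.max())
```

Because the PSF is normalized, the convolution redistributes rather than destroys signal: the blurred image sums to the same total intensity but with a far lower peak, which is exactly the contrast loss described above.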


Classical PSF-based deconvolution is an iterative process that must be run on every image, whereas DL deconvolution is iterative only during model training; once trained, the model is applied directly. Deconvolution algorithms help to remove out-of-focus data and can be categorized into two classes, deblurring and image restoration [3]. Deblurring algorithms are applied plane by plane to each 2D plane of a 3D image stack, and an estimate of the image blur is removed from each plane.