Team:Berkeley/Project/Automation



We can accommodate libraries of over a million members through our MiCodes design scheme. With a data set this large, we needed to develop methods to make MiCodes a high-throughput screening technique. We realized that two aspects of microscopy needed to be automatable: image acquisition and image processing.

If a MiCode library is very large, it is important to find a time-efficient method of taking many pictures of yeast cells. This summer we dealt with libraries on the order of 10^3 members, and our images were taken with a standard fluorescence microscope. Larger libraries would most likely make use of automated stages to speed up image acquisition.


The segmentation of cells from an image allows for cell-by-cell analysis in downstream steps. In order to perform such analyses, we must be able to recognize an individual cell from its background. We first used edge detection with the Sobel operator and several filtering options to approximate possible cell outlines in the image. Then we performed a series of dilation and erosion steps to clear background pixels. We also added an additional filtering routine to refine the overall cell segmentation algorithm. This refining algorithm uses several geometric criteria, which we collected through testing of sample images.
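The outline below is a minimal sketch of this segmentation approach in Python with scikit-image; the page does not give the actual implementation, so the function name, structuring-element sizes, and geometric cutoffs here are illustrative assumptions rather than the team's measured criteria.

# Minimal sketch of the segmentation approach described above (assumed
# scikit-image implementation; thresholds are placeholders, not the
# team's measured geometric criteria).
import numpy as np
from scipy import ndimage as ndi
from skimage import io, filters, morphology, measure

def segment_cells(image_path, min_area=100, max_eccentricity=0.9):
    """Approximate cell outlines with Sobel edges, then clean up the mask
    with dilation/erosion and filter candidate cells on simple geometry."""
    img = io.imread(image_path, as_gray=True)

    # 1. Sobel edge detection to approximate possible cell outlines.
    edges = filters.sobel(img)
    mask = edges > filters.threshold_otsu(edges)

    # 2. Dilation, hole filling, and erosion to clear background pixels.
    mask = morphology.binary_dilation(mask, morphology.disk(2))
    mask = ndi.binary_fill_holes(mask)
    mask = morphology.binary_erosion(mask, morphology.disk(2))

    # 3. Label connected components and keep those passing geometric criteria.
    labels = measure.label(mask)
    cells = [region for region in measure.regionprops(labels)
             if region.area >= min_area and region.eccentricity <= max_eccentricity]
    return labels, cells

In practice the geometric filters (area, eccentricity, and similar measurements) would be tuned against sample images, as described above.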

Images of each individual cell were saved at the end of this process for use in downstream analysis. These saved images have several uses. For example, the cell images can be used as masks to isolate the organelles of a cell across several fluorescent channels. The number of images saved can also be used to count the number of cells identified from a given population.
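Continuing the sketch above, the per-cell labels could be used roughly as follows to mask a second fluorescence channel and to count cells; the file names and the GFP channel are hypothetical placeholders, not data from this project.

# Continuing the segmentation sketch above (uses segment_cells defined there).
# File names and the second channel are hypothetical.
import numpy as np
from skimage import io

labels, cells = segment_cells("brightfield.tif")
gfp = io.imread("gfp_channel.tif", as_gray=True)  # hypothetical fluorescent channel

for i, region in enumerate(cells):
    minr, minc, maxr, maxc = region.bbox
    cell_mask = labels[minr:maxr, minc:maxc] == region.label
    # Apply the cell mask to the same region of the fluorescent channel and save the crop.
    masked = (gfp[minr:maxr, minc:maxc] * cell_mask).astype(np.float32)
    io.imsave(f"cell_{i}.tif", masked)

# The number of saved images doubles as a cell count for this field of view.
print(f"{len(cells)} cells identified")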


How we created pipelines in CellProfiler to detect organelles.

We quickly realized that the most essential portion of automation was determining features. In the context of organelle detection, we developed pipelines to collect geometric measurements characteristic of each organelle (actin, cell periphery, vacuolar membrane, and nucleus), and defined specific measurements as our feature set for each organelle.

To achieve this, we imaged a small sample of about 100 cells with varying phenotypes and determined feature sets associated with each of the four organelles. We used CellProfiler, open-source software designed for quantitative analysis of cellular phenotypes; it is freely available from the CellProfiler website (cellprofiler.org).

We used this software to build pipelines that demonstrate the feasibility of automated identification of cellular phenotypes.

Pipeline procedure (a code sketch of these steps follows the list):

1. Separate the cell image into its constituent fluorescent channels, and convert each channel to grayscale.

2. Circle and identify blobs of high-intensity pixels. These become nonspecific "objects," which are converted to a pseudocolor image.

3. Determine the features associated with each "object": measure geometric properties as well as intensity.

4. Use these measurements to classify each "object" as a particular organelle.
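Because the pipelines themselves were built in the CellProfiler GUI and are not reproduced on this page, the following is only an approximation of the four steps in Python with scikit-image; the thresholds and the nucleus rule are placeholder assumptions, not the team's feature sets.

# Illustrative approximation of the four pipeline steps above (assumed
# scikit-image implementation; assumes a multichannel fluorescence image).
import numpy as np
from skimage import io, filters, measure, color

def classify_objects(image_path):
    # Step 1: separate the image into fluorescent channels (each channel is grayscale).
    img = io.imread(image_path)
    channels = [img[..., i].astype(float) for i in range(img.shape[-1])]

    results = []
    for chan in channels:
        # Step 2: identify blobs of high-intensity pixels as nonspecific "objects".
        mask = chan > filters.threshold_otsu(chan)
        labels = measure.label(mask)
        pseudocolor = color.label2rgb(labels, bg_label=0)  # pseudocolor image for inspection

        # Step 3: measure geometric and intensity features for each object.
        for obj in measure.regionprops(labels, intensity_image=chan):
            features = {
                "area": obj.area,
                "eccentricity": obj.eccentricity,
                "mean_intensity": obj.mean_intensity,
            }
            # Step 4: classify the object from its features.
            # Placeholder rule; the real cutoffs would come from the measured feature sets.
            label = "nucleus-like" if features["area"] < 50 and features["eccentricity"] < 0.5 else "other"
            results.append((label, features))
    return results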



Our software proved capable of correctly identifying MiCoded nuclei 95% of the time on our sample image set. In the future, with a large, already-constructed dataset (a complete MiCode library), we would likely employ machine learning techniques to improve the automated identification.
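As a rough illustration of that direction, a feature-based classifier could be trained on exported per-object measurements along these lines; the file name, column names, and model choice are assumptions, not something described on this page.

# Hedged sketch of feature-based machine learning classification, assuming a
# hypothetical table of per-object features labeled with organelle identities.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("object_features.csv")            # hypothetical exported measurements
X = df[["area", "eccentricity", "mean_intensity"]] # hypothetical feature columns
y = df["organelle"]                                # e.g. nucleus, actin, vacuole, periphery

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")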