The simplest way to use StomataCounter is to upload a jpeg using the upload button below, refresh your browser on the Dataset page after a few moments, then export the results using the Dataset operations pulldown menu.
Most users will have several hundred images to measure, and uploading zip files of jpegs is more convenient. You can add more zipfiles or individual images to a dataset by navigating to that dataset's page and following the instructions to add more images. Once StomataCounter has finished detecting and counting stomata, you should review the results for all, or a subset, of your images to determine how well the method performed. Click on an image to view its result. You should annotate 50 to 100 images (or whatever number you're comfortable with) and view the correlation of human to automatic stomata counts. To do so, click on the Dataset operations drop-down menu and select "Export correlation graph".
Stomata annotations are added to an image by clicking on an image in a dataset and then clicking the annotate button. There are two annotation modes. Enter the basic annotation mode by clicking "Annotate" in the list of actions. In this mode, you add an annotation for each stoma by clicking once to add it and twice to remove it. We provide a faster method of annotating through the "Annotate from automatic" mode. In this mode, StomataCounter's best guess of where each stoma is located is provided as an annotation, and you then choose to accept, reject, or modify the annotations. The automatic annotation mode is a great time saver, and you'll only spend a few seconds annotating each image.
When you are ready to download and analyze your output data, click "Export results as csv" in the Dataset operations drop-down menu. Along with the results visible in the dataset table, each image receives nine image quality scores. These scores are informative and can help you filter out results from low-quality images. The scores are computed with PyImq; see the original publication for a full description.
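If you want to drop low-quality images before analysis, a short awk filter over the exported csv is one option. This is a sketch under assumptions: the filter_by_quality function name is ours, and the column number and threshold in the example are placeholders, since the actual layout of the exported csv depends on your dataset.

```shell
# filter_by_quality: keep csv rows whose score in a given column exceeds a threshold.
# Usage: filter_by_quality FILE COLUMN THRESHOLD
# The header row (line 1) is always kept.
filter_by_quality() {
  csv=$1; col=$2; thr=$3
  awk -F, -v c="$col" -v t="$thr" 'NR==1 || $c+0 > t+0' "$csv"
}

# hypothetical example: keep images whose score in column 9 exceeds 0.5
# filter_by_quality results.csv 9 0.5 > filtered.csv
```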
StomataCounter was designed to minimize user interaction with the analytical components of the method. Your control over its performance is largely determined by your choice of input images. Users should run a pilot experiment before generating a large collection of images to determine which microscopy and/or tissue preparation techniques will minimize count error. We recommend that users take the following actions:
If a large collection of images is already available and re-imaging is not feasible, we recommend that users take the following actions:
If you do not wish to throw out images, please contact us and we will retrain the model and incorporate your 100 annotated images.
We have found that micrographs generated from SEM, phase contrast, and DIC have enough contrast for the method to detect stomata. Brightfield images usually do not have enough contrast.
Here is some code that is useful for 1) randomly sampling images in a directory, and 2) evenly dividing images into zipfiles for upload. This code works in a Linux environment. Insert your paths and file names where appropriate.
1) Randomly select 100 images in a directory
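A minimal sketch of this step, assuming GNU coreutils (find, shuf) are available; the sample_images function name and the directory paths are placeholders to adjust for your own data.

```shell
# sample_images: copy N randomly chosen jpegs from a source directory
# into a target directory (created if it does not exist).
# Usage: sample_images SRC_DIR DST_DIR N
sample_images() {
  src=$1; dst=$2; n=$3
  mkdir -p "$dst"
  # list jpegs in the source directory only (no subdirectories),
  # shuffle, keep n of them, and copy each into the target directory
  find "$src" -maxdepth 1 -name '*.jpg' | shuf -n "$n" | while read -r f; do
    cp "$f" "$dst"/
  done
}

# hypothetical example: pick 100 images to annotate
# sample_images ./all_images ./to_annotate 100
```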
Now, upload those images to StomataCounter and annotate them!
2) Divide a large set of images into smaller zipfiles for uploading.
Other useful code.
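One example of code that can be useful here: if your file names do not encode the experiment or treatment, a quick batch rename keeps sample names in the exported csv traceable. The prefix_samples function name and the label in the example are hypothetical.

```shell
# prefix_samples: prepend a label to every jpeg file name in a directory,
# so that sample names in the output remain identifiable after upload.
# Usage: prefix_samples DIR LABEL
prefix_samples() {
  dir=$1; label=$2
  for f in "$dir"/*.jpg; do
    mv "$f" "$dir/${label}_$(basename "$f")"
  done
}

# hypothetical example: prefix_samples ./to_upload exp1
```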
If StomataCounter is not performing well, or it is not uploading your zipfile, try taking these steps:
The output file contains information about the sample and data set names, manual stomata counts (if annotations are made by the user), automatic counts from StomataCounter, and image quality scores.
As one might expect, image quality has a large influence on detection results. Users are encouraged to try several imaging techniques to discover which microscopy or tissue preparation method works best for their material. We have found that z-stacked DIC micrographs perform the best for nail polish peels.
Image quality scores from PyImq are provided in the output file so that users can filter their own images. The scores are provided without standardization; to match Fetter et al. (2019) and Koho et al. (2016), users should standardize the values between zero and one. PyImq scores include:
See Koho et al. (2016) for complete descriptions of the image quality scores.
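Min-max standardization of a score column to the zero-to-one range can be sketched with a two-pass awk command. The standardize_column function name and the column number in the example are illustrative, not part of StomataCounter's output format.

```shell
# standardize_column: min-max rescale one numeric csv column (by number) to [0,1].
# Reads the file twice: first pass finds the min and max, second pass rescales.
# Usage: standardize_column FILE COLUMN
standardize_column() {
  csv=$1; col=$2
  awk -F, -v c="$col" '
    NR==FNR { if (FNR>1) { v=$c+0; if (FNR==2 || v<min) min=v; if (FNR==2 || v>max) max=v } next }
    FNR==1 { print; next }                      # keep the header row unchanged
    { $c = (max>min) ? ($c-min)/(max-min) : 0; print }
  ' OFS=, "$csv" "$csv"
}

# hypothetical example: standardize_column results.csv 9 > standardized.csv
```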