FAQ - Pixel Classifier


Updated: Dec 13, 2021

Pixel Classifier is one of Aivia's most popular features due to its easy-to-use interface, machine learning-driven algorithms and ability to handle a wide range of microscopy techniques.


Below are the most frequently asked questions about the pixel classifier, to help you get the most out of the tool.



Why use machine learning (ML) algorithms?


While there are multiple benefits to ML, here are the top three:

  1. An easier workflow, since users only have to provide examples of the objects of interest instead of the rules that define them (size range, intensity threshold, etc.)

  2. Complex objects can be detected, since the model captures characteristics beyond common measurements

  3. The ability to adapt as experiment or imaging conditions change, simply by adding more examples


How do I create a good pixel classifier?

  • Provide examples of the variety in your objects - small and large, bright and dim, round and funky-shaped objects. The more variations you capture, the better the algorithm will understand your objects

  • Preview, preview, preview - it's difficult to predict exactly how your examples will change the model so preview frequently to make sure the training is going in the right direction

  • Save your pixel training file - this will allow you to adjust the model if your project/imaging settings change. More on this below.



Can I train on multiple, 3D or time-lapse images?


Yes, we have upgraded the pixel classifier to make training in all three of these situations easier and faster.



I'm done painting examples, how do I see my results?


Once you have finished providing examples, click the "Teach" button to create the ML model. Then click "Apply" to process your image.


For the confidence or mask output, a new channel named 'Want' (or the name you designated) will be created in the channel settings section.


For the segmentation and smart segmentation outputs, a new object group will be created in the object set settings section. Results for 2D images appear in the main view; 3D results are shown in the 3D view. Below is an example of 3D cell detection using the smart segmentation option.





What is the best way to handle large data?


While the pixel classifier can process large datasets, we suggest initially working with a small subset to train the best ML model. Once this is accomplished, you can apply the ML model to the whole image.
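In array terms, this subset-first workflow amounts to training on a slice of the full volume and then applying the finished model to everything. A minimal NumPy sketch (the volume and crop coordinates here are hypothetical, purely for illustration; this is not Aivia's API):

```python
import numpy as np

# Hypothetical 3D image volume (z, y, x); values stand in for pixel intensities.
full_volume = np.arange(4 * 64 * 64, dtype=np.uint16).reshape(4, 64, 64)

# Train on a small, representative crop first...
subset = full_volume[1:3, 16:48, 16:48]  # 2 z-planes, 32 x 32 pixels each

# ...then apply the finished model to the whole volume.
print(subset.shape)       # (2, 32, 32)
print(full_volume.shape)  # (4, 64, 64)
```

A crop that captures the variety of your objects trains just as well as the full image, at a fraction of the processing time.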


There are many ways to create small subsets in Aivia:

  • Create a crop of the data by going to View -> ROI Tool. Left-click and draw the region you want to crop; it will be outlined in pink. Then right-click and crop the ROI, which creates a new file containing your cropped data

  • Preview or apply on pre-established ROIs. This can be done directly in the pixel classifier using the ROI Processing Options (outlined in blue below). You can create multiple ROIs for different regions of the image; just remember to check the ROIs you want to work with (outlined in yellow).


  • Apply to one z-plane and/or time frame at a time. This is especially useful when your objects vary in x, y or you have large time lapses. These options are listed in the Apply drop-down menu.



Can I add more training later?


Yes. Once you are done with your initial training session, you will need to save the training file (.pxtraining). This file contains the examples you provided and a pointer to the location of your images.


You can do this using either "Save Training Set" or "Copy all the training fields and training data to one spot". The latter option copies all the images, along with the .pxtraining file, to a new folder. This is the best option if you are concerned about your images being moved or deleted (such as on a shared computer).


When you are ready to work on the pixel classifier again, simply open the .pxtraining file in the pixel classifier. Aivia will automatically load all your training images and examples. At this time you can add more examples to the existing images or add more images to the training set.



What is the confidence output and how can I use it?


The confidence output creates a confidence map and places it as a new channel for you.


The confidence map is the direct result of the ML model, with each pixel's intensity indicating how confident the model is that the pixel belongs to your object. An intensity of 255 indicates high confidence; lower values indicate less confidence.
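To make the 0-255 scale concrete, here is a small NumPy sketch (outside Aivia; the tiny confidence array and the 50% cutoff are hypothetical) that rescales an 8-bit confidence map to a 0-1 fraction and counts the pixels the model is confident about:

```python
import numpy as np

# Hypothetical 8-bit confidence map: 255 = high confidence, 0 = none.
confidence = np.array([[255, 200, 30],
                       [120,  10,  0]], dtype=np.uint8)

# Rescale to a 0-1 confidence fraction.
fraction = confidence / 255.0

# Count pixels where the model is at least ~50% confident.
n_confident = int(np.sum(confidence >= 128))
print(n_confident)  # 2  (the pixels with values 255 and 200)
```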


There are many ways to use the confidence map, the image below captures just a few examples:




What are the other output options in the pixel classifier?


The Masked Channel output uses the confidence output to threshold the input image. Anything below your specified detection level will be suppressed in the new channel it creates; anything above the detection level will retain the source intensity of the input.
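Conceptually, the masked channel is a per-pixel threshold of the source image by the confidence map. A minimal NumPy sketch of that idea (illustrative only, not Aivia's implementation; the arrays and detection level are hypothetical):

```python
import numpy as np

# Hypothetical source channel and matching 8-bit confidence map.
source     = np.array([40, 180, 90, 220], dtype=np.uint8)
confidence = np.array([10, 240, 60, 200], dtype=np.uint8)

detection_level = 128  # user-chosen confidence threshold

# Keep the source intensity where confidence clears the threshold;
# suppress (zero out) everything below it.
masked = np.where(confidence >= detection_level, source, 0).astype(np.uint8)
print(masked)  # [  0 180   0 220]
```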


The Segmentation output generates outlines (2D) and meshes (3D) using the confidence map and the detection level you specify. The outline and mesh output will allow you to measure and count your objects of interest.
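The counting part of this output boils down to finding connected regions in the thresholded confidence map. A toy pure-Python sketch of 4-connected component counting on a binary mask (a stand-in for the real outline/mesh generation, which Aivia handles internally; the mask below is hypothetical):

```python
# Hypothetical binary mask from thresholding a confidence map:
# 1 = above the detection level, 0 = below.
mask = [
    [1, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
]

def count_objects(mask):
    """Count 4-connected components of 1s (a toy stand-in for 2D outlines)."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] == 1 and not seen[r][c]:
                count += 1
                stack = [(r, c)]  # flood-fill this object
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and mask[y][x] == 1 and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return count

print(count_objects(mask))  # 2 separate objects
```

Each connected region becomes one countable, measurable object of interest.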


The Smart Segmentation output also creates outlines and meshes. It has additional parameters - detection, partitioning, and size - to determine the final output. The size and partitioning parameters are determined based on the regions you've provided during training.



Are there hot key shortcuts?


Yes, there are quite a few.

  • Change the size of the paintbrush: Shift + Control + mouse wheel

  • Move to a different class: Control + D

  • Change drawing tool: Control + Q

  • Turn preview on and off: Control + P

  • Reduce magic wand sensitivity parameter: W

  • Increase magic wand sensitivity parameter: R

  • Paint while using the magic wand: E



Where can I find more information about the Pixel Classifier?





