Image segmentation - general superpixel segmentation & center detection & region growing
Image segmentation is widely used as an initial phase of many image processing tasks in computer vision and image analysis. Many recent segmentation methods use superpixels because they reduce the size of the segmentation problem by an order of magnitude. Also, features computed on superpixels are much more robust than features computed on pixels only. We use spatial regularisation on superpixels to make segmented regions more compact. The segmentation pipeline comprises (i) computation of superpixels; (ii) extraction of descriptors such as colour and texture; (iii) soft classification, using a standard classifier for supervised learning or a Gaussian Mixture Model for unsupervised learning; (iv) final segmentation using Graph Cut. We use this segmentation pipeline on real-world applications in medical imaging (see sample images). We also show that unsupervised segmentation is sufficient for some situations, and provides results similar to those obtained with trained (supervised) segmentation.
Sample ipython notebooks:
Illustration
Reference: Borovec J., Svihlik J., Kybic J., Habart D. (2017). Supervised and unsupervised segmentation using superpixels, model estimation, and Graph Cut. In: Journal of Electronic Imaging.
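As an illustration of steps (i)-(iii), here is a minimal sketch of the unsupervised variant using only standard libraries (SLIC superpixels, mean-colour descriptors, and a GMM); this is a simplified stand-in rather than the package's own API, and the final Graph Cut step (iv) is omitted:

import numpy as np
from skimage import data, segmentation
from sklearn.mixture import GaussianMixture

img = data.immunohistochemistry()  # any RGB image
# (i) superpixels
slic = segmentation.slic(img, n_segments=300, compactness=10, start_label=0)
# (ii) one simple descriptor per superpixel: mean colour
feats = np.array([img[slic == lb].mean(axis=0) for lb in range(slic.max() + 1)])
# (iii) unsupervised soft classification with a Gaussian Mixture Model
gmm = GaussianMixture(n_components=3, random_state=0).fit(feats)
segm = gmm.predict(feats)[slic]  # map superpixel labels back to pixels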
An image processing pipeline to detect and localize Drosophila egg chambers, consisting of the following steps: (i) superpixel-based image segmentation into relevant tissue classes (see above); (ii) detection of egg center candidates using label histograms and ray features; (iii) clustering of center candidates; and (iv) area-based maximum likelihood ellipse model fitting. See our Poster related to this work.
Sample ipython notebooks:
Illustration
Reference: Borovec J., Kybic J., Nava R. (2017) Detection and Localization of Drosophila Egg Chambers in Microscopy Images. In: Machine Learning in Medical Imaging.
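To give an intuition for the ray features used in step (ii), here is a rough, simplified sketch (our own illustration, not the paper's exact formulation): rays are cast from a candidate center at regular angles, and the distance at which each ray leaves the object mask is recorded.

import numpy as np

def ray_features(mask, center, n_rays=16, max_dist=200):
    # distance from `center` to the mask boundary along `n_rays` directions
    cy, cx = center
    dists = np.full(n_rays, float(max_dist))
    for i, ang in enumerate(np.linspace(0, 2 * np.pi, n_rays, endpoint=False)):
        dy, dx = np.sin(ang), np.cos(ang)
        for r in range(1, max_dist):
            y, x = int(round(cy + r * dy)), int(round(cx + r * dx))
            if not (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]) or not mask[y, x]:
                dists[i] = r
                break
    return dists

# toy check: a filled circle of radius 20 yields distances close to 20
yy, xx = np.mgrid[:100, :100]
print(ray_features((yy - 50) ** 2 + (xx - 50) ** 2 <= 20 ** 2, (50, 50), n_rays=8))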
Region growing is a classical image segmentation method based on hierarchical region aggregation using local similarity rules. Our approach differs from standard region growing in three essential aspects. First, it works on the level of superpixels instead of pixels, which leads to a substantial speedup. Second, it uses learned statistical shape properties that bias the growing towards plausible shapes; in particular, we use ray features to describe the object boundary. Third, it can segment multiple objects and ensure that the segmentations do not overlap. The problem is represented as energy minimisation and is solved either greedily or iteratively using Graph Cut.
Sample ipython notebooks:
Illustration
Reference: Borovec J., Kybic J., Sugimoto, A. (2017). Region growing using superpixels with learned shape prior. In: Journal of Electronic Imaging.
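To make the growing mechanism concrete, here is a minimal greedy sketch on superpixels (a simplified illustration using plain colour similarity with an arbitrary tolerance, without the learned shape prior or the energy formulation of the paper):

import numpy as np
from skimage import data, segmentation

img = data.coffee()
slic = segmentation.slic(img, n_segments=200, compactness=10, start_label=0)
n_sp = slic.max() + 1
feats = np.array([img[slic == lb].mean(axis=0) for lb in range(n_sp)])

# superpixel adjacency from horizontally/vertically neighbouring pixels
pairs = np.vstack([np.c_[slic[:, :-1].ravel(), slic[:, 1:].ravel()],
                   np.c_[slic[:-1, :].ravel(), slic[1:, :].ravel()]])
pairs = pairs[pairs[:, 0] != pairs[:, 1]]
adj = [set() for _ in range(n_sp)]
for u, v in np.unique(pairs, axis=0):
    adj[u].add(v)
    adj[v].add(u)

# grow a region from seed superpixel 0: absorb the most similar neighbour
# until the colour distance exceeds an (arbitrary) tolerance
seed, region, frontier = 0, {0}, set(adj[0])
while frontier:
    best = min(frontier, key=lambda s: np.linalg.norm(feats[s] - feats[seed]))
    if np.linalg.norm(feats[best] - feats[seed]) > 30:
        break
    region.add(best)
    frontier |= adj[best] - region
    frontier.discard(best)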
Configure local environment
Create your own local environment (for more details see the User Guide) and install the dependencies; requirements.txt contains the list of required packages, which can be installed as follows:
@duda:~$ cd pyImSegm
@duda:~/pyImSegm$ virtualenv env
@duda:~/pyImSegm$ source env/bin/activate
(env)@duda:~/pyImSegm$ pip install -r requirements.txt
(env)@duda:~/pyImSegm$ python ...
and, when finished, deactivate it:
(env)@duda:~/pyImSegm$ deactivate
Compilation
We have implemented a Cython version of some functions, especially for computing descriptors, which need to be compiled before use:
python setup.py build_ext --inplace
If loading the compiled Cython descriptors fails, the package automatically falls back to the numpy implementation, which gives the same results but is significantly slower.
Installation
The package can be installed via pip
pip install git+https://github.com/Borda/pyImSegm.git
or using setuptools
from a local folder
python setup.py install
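You can then verify that the package imports correctly (assuming the package name imsegm):
python -c "import imsegm"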
A short description of our three sets of experiments, which together compose a single image processing pipeline, in this order:
We introduce some useful tools for working with image annotations and segmentations.
# quantize the colours in segmentation images to a small palette
python handling_annotations/run_image_color_quantization.py \
-imgs "./data-images/drosophila_ovary_slice/segm_rgb/*.png" \
-m position -thr 0.01 --nb_workers 2
# convert label images to colour images
python handling_annotations/run_image_convert_label_color.py \
-imgs "./data-images/drosophila_ovary_slice/segm/*.png" \
-out ./data-images/drosophila_ovary_slice/segm_rgb
# overlay input images with their segmentations
python handling_annotations/run_overlap_images_segms.py \
-imgs "./data-images/drosophila_ovary_slice/image/*.jpg" \
-segs ./data-images/drosophila_ovary_slice/segm \
-out ./results/overlap_ovary_segment
# inpaint (remove) the given label in the annotations
python handling_annotations/run_segm_annot_inpaint.py \
-imgs "./data-images/drosophila_ovary_slice/segm/*.png" \
--label 4
# replace labels in annotations: here labels 2 and 3 become 1
# (the input path below is an assumption; the original command lacked -imgs)
python handling_annotations/run_segm_annot_relabel.py \
-imgs "./data-images/drosophila_ovary_slice/center_levels/*.png" \
-out ./results/relabel_center_levels \
--label_old 2 3 --label_new 1 1
We perform supervised or unsupervised segmentation, using given training examples or prior expectations, respectively.
# evaluate superpixel quality against the egg annotations
python experiments_segmentation/run_eval_superpixels.py \
-imgs "./data-images/drosophila_ovary_slice/image/*.jpg" \
-segm "./data-images/drosophila_ovary_slice/annot_eggs/*.png" \
--img_type 2d_split \
--slic_size 20 --slic_regul 0.25 --slico
# unsupervised segmentation (model estimation) with inputs given by a CSV list
python experiments_segmentation/run_segm_slic_model_graphcut.py \
-l ./data-images/langerhans_islets/list_lang-isl_imgs-annot.csv -i "" \
-cfg experiments_segmentation/sample_config.yml \
-o ./results -n langIsl --nb_classes 3 --visual --nb_workers 2
or with the input images specified by a path instead of a list:
python experiments_segmentation/run_segm_slic_model_graphcut.py \
-l "" -i "./data-images/langerhans_islets/image/*.jpg" \
-cfg ./experiments_segmentation/sample_config.yml \
-o ./results -n langIsl --nb_classes 3 --visual --nb_workers 2
# supervised segmentation with training annotations given by a CSV list
python experiments_segmentation/run_segm_slic_classif_graphcut.py \
-l ./data-images/drosophila_ovary_slice/list_imgs-annot-struct.csv \
-i "./data-images/drosophila_ovary_slice/image/*.jpg" \
--path_config ./experiments_segmentation/sample_config.yml \
-o ./results -n Ovary --img_type 2d_split --visual --nb_workers 2
# compute statistics comparing annotations with the resulting segmentations
python experiments_segmentation/run_compute-stat_annot-segm.py \
-a "./data-images/drosophila_ovary_slice/annot_struct/*.png" \
-s "./results/experiment_segm-supervise_ovary/*.png" \
-i "./data-images/drosophila_ovary_slice/image/*.jpg" \
-o ./results/evaluation --visual
The previous two segmentation experiments accept a configuration file (YAML) via the -cfg parameter, containing extra parameters that are not passed as command-line arguments, for instance:
slic_size: 35
slic_regul: 0.2
features:
  color_hsv: ['mean', 'std', 'eng']
classif: 'SVM'
nb_classif_search: 150
gc_edge_type: 'model'
gc_regul: 3.0
run_LOO: false
run_LPO: true
cross_val: 0.1
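Since the configuration is plain YAML, it can also be inspected or edited programmatically; a minimal sketch (assuming PyYAML is installed, with keys taken from the example above):

import yaml

with open('experiments_segmentation/sample_config.yml') as fp:
    config = yaml.safe_load(fp)
print(config['slic_size'], config['features'])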
In general, the input is a formatted list (CSV file) of input images and annotations. Another option is to set -list none, in which case the list is composed by pairing the given paths to images and annotations.
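A rough sketch of such pairing (our own illustration, not the package's internal code), matching images and annotations by file stem:

import glob, os

imgs = sorted(glob.glob('./data-images/drosophila_ovary_slice/image/*.jpg'))
annot_dir = './data-images/drosophila_ovary_slice/annot_struct'
pairs = [(img, os.path.join(annot_dir, os.path.splitext(os.path.basename(img))[0] + '.png'))
         for img in imgs]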
The experiment sequence is the following:
python experiments_ovary_centres/run_create_annotation.py
# train a classifier for egg-center candidate detection
python experiments_ovary_centres/run_center_candidate_training.py -list none \
-segs "./data-images/drosophila_ovary_slice/segm/*.png" \
-imgs "./data-images/drosophila_ovary_slice/image/*.jpg" \
-centers "./data-images/drosophila_ovary_slice/center_levels/*.png" \
-out ./results -n ovary
# predict egg centers using the trained classifier
python experiments_ovary_centres/run_center_prediction.py -list none \
-segs "./data-images/drosophila_ovary_slice/segm/*.png" \
-imgs "./data-images/drosophila_ovary_slice/image/*.jpg" \
-centers ./results/detect-centers-train_ovary/classifier_RandForest.pkl \
-out ./results -n ovary
python experiments_ovary_centres/run_center_evaluation.py
# cluster the detected center candidates
python experiments_ovary_centres/run_center_clustering.py \
-segs "./data-images/drosophila_ovary_slice/segm/*.png" \
-imgs "./data-images/drosophila_ovary_slice/image/*.jpg" \
-centers "./results/detect-centers-train_ovary/candidates/*.csv" \
-out ./results
python experiments_ovary_detect/run_ellipse_annot_match.py \
-info "~/Medical-drosophila/all_ovary_image_info_for_prague.txt" \
-ells "~/Medical-drosophila/RESULTS/3_ellipse_ransac_crit_params/*.csv" \
-out ~/Medical-drosophila/RESULTS
python experiments_ovary_detect/run_ellipse_cut_scale.py \
-info ~/Medical-drosophila/RESULTS/info_ovary_images_ellipses.csv \
-imgs "~/Medical-drosophila/RESULTS/0_input_images_png/*.png" \
-out ~/Medical-drosophila/RESULTS/images_cut_ellipse_stages
python experiments_ovary_detect/run_egg_swap_orientation.py \
-imgs "~/Medical-drosophila/RESULTS/atlas_datasets/ovary_images/stage_3/*.png" \
-out ~/Medical-drosophila/RESULTS/atlas_datasets/ovary_images/stage_3
In case you do not have estimated object centres, you can use plugins for landmarks import/export for Fiji.
Note: install the multi-snake package, which is used in the multi-method segmentation experiment:
pip install --user git+https://github.com/Borda/morph-snakes.git
The experiment sequence is the following:
# estimate the shape models from annotated egg masks
python experiments_ovary_detect/run_RG2Sp_estim_shape-models.py \
-annot "~/Medical-drosophila/egg_segmentation/mask_2d_slice_complete_ind_egg/*.png" \
-out ./data-images -nb 15
# run egg segmentation with the multiple methods listed via -m
python experiments_ovary_detect/run_ovary_egg-segmentation.py \
-list ./data-images/drosophila_ovary_slice/list_imgs-segm-center-points.csv \
-out ./results -n ovary_image --nb_workers 1 \
-m ellipse_moments \
ellipse_ransac_mmt \
ellipse_ransac_crit \
GC_pixels-large \
GC_pixels-shape \
GC_slic-large \
GC_slic-shape \
rg2sp_greedy-mixture \
rg2sp_GC-mixture \
watershed_morph
python experiments_ovary_detect/run_ovary_segm_evaluation.py --visual
# cut out the segmented objects with some padding
python experiments_ovary_detect/run_cut_segmented_objects.py \
-annot "./data-images/drosophila_ovary_slice/annot_eggs/*.png" \
-img "./data-images/drosophila_ovary_slice/segm/*.png" \
-out ./results/cut_images --padding 50
python experiments_ovary_detect/run_export_user-annot-segm.py
For complete references, see the BibTeX file.