A Guide to Unsupervised Image Segmentation using Normalized Cuts (NCut) in Python

Introduction

Image segmentation plays a vital role in understanding and analyzing visual data, and Normalized Cuts (NCut) is a widely used method for graph-based segmentation. In this article, we will explore how to apply NCut for unsupervised image segmentation in Python using a dataset from Microsoft Research, with a focus on improving segmentation quality using superpixels.
Dataset Overview
The dataset used for this task can be downloaded from the following link: MSRC Object Category Image Database. This dataset contains original images as well as their semantic segmentation into nine object classes (indicated by image files ending with "_GT"). These images are grouped into thematic subsets, where the first number in the file name refers to a class subset. This dataset is perfect for experimenting with segmentation tasks.
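After extracting the archive (see the commands in the next step), a quick way to get oriented is to pair each image with its "_GT" counterpart. The snippet below is only a small sketch that assumes the flat MSRC_ObjCategImageDatabase_v1/ folder layout used in the rest of this guide.

import glob
import os

# Pair each original image with its "_GT" ground-truth counterpart (assumed flat layout)
root = 'MSRC_ObjCategImageDatabase_v1'
image_paths = sorted(p for p in glob.glob(os.path.join(root, '*.bmp')) if '_GT' not in p)
pairs = [(p, p.replace('.bmp', '_GT.bmp')) for p in image_paths]
pairs = [(img, gt) for img, gt in pairs if os.path.exists(gt)]
print(f"Found {len(pairs)} image/ground-truth pairs")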

Problem Statement

We perform image segmentation on an image from the dataset using the NCut algorithm. Segmentation at the pixel level is computationally expensive and often noisy. To overcome this, we use SLIC (Simple Linear Iterative Clustering) to generate superpixels, which group similar pixels and reduce the problem size. To evaluate the accuracy of the segmentation, different metrics (e.g., Intersection over Union, SSIM, Rand Index) can be used.

Implementation

1. Install Required Libraries
We use skimage for image processing, numpy for numerical computations, matplotlib for visualization, and scikit-learn for the evaluation metrics used later.

pip install numpy matplotlib scikit-learn
pip install scikit-image==0.24.0
2. Load and Preprocess the Dataset

After downloading and extracting the dataset, load the images and ground truth segmentation:

wget http://download.microsoft.com/download/A/1/1/A116CD80-5B79-407E-B5CE-3D5C6ED8B0D5/msrc_objcategimagedatabase_v1.zip -O msrc_objcategimagedatabase_v1.zip
unzip msrc_objcategimagedatabase_v1.zip
rm msrc_objcategimagedatabase_v1.zip

Now we are ready to start coding.

from skimage import io, segmentation, color, measure
from skimage import graph
import numpy as np
import matplotlib.pyplot as plt

# Load the image and its ground truth
image = io.imread('/content/MSRC_ObjCategImageDatabase_v1/1_16_s.bmp')
ground_truth = io.imread('/content/MSRC_ObjCategImageDatabase_v1/1_16_s_GT.bmp')

# show images side by side
fig, ax = plt.subplots(1, 2, figsize=(10, 5))
ax[0].imshow(image)
ax[0].set_title('Image')
ax[1].imshow(ground_truth)
ax[1].set_title('Ground Truth')
plt.show()

3. Generate Superpixels using SLIC and create a Region Adjacency Graph

We use the SLIC algorithm to compute superpixels before applying NCut. From the generated superpixels we then construct a Region Adjacency Graph (RAG) based on mean color similarity in the next step; first, we compute the superpixels:

from skimage.util import img_as_ubyte

compactness = 30
n_segments = 100

# Compute SLIC superpixels and visualize their boundaries and mean-color labels
labels = segmentation.slic(image, compactness=compactness, n_segments=n_segments, enforce_connectivity=True)
image_with_boundaries = segmentation.mark_boundaries(image, labels, color=(0, 0, 0))
image_with_boundaries = img_as_ubyte(image_with_boundaries)
pixel_labels = color.label2rgb(labels, image_with_boundaries, kind='avg', bg_label=0)

compactness controls the balance between the color similarity and spatial proximity of pixels when forming superpixels. It determines how much emphasis is placed on keeping the superpixels compact (closer in spatial terms) versus ensuring that they are more homogeneously grouped by color.
Higher Values: A higher compactness value causes the algorithm to prioritize creating superpixels that are spatially tight and uniform in size, with less attention to color similarity. This might result in superpixels that are less sensitive to edges or color gradients.
Lower Values: A lower compactness value allows the superpixels to vary more in spatial size in order to respect the color differences more accurately. This typically results in superpixels that follow the boundaries of objects in the image more closely.

n_segments controls the number of superpixels (or segments) that the SLIC algorithm attempts to generate in the image. Essentially, it sets the resolution of the segmentation.
Higher Values: A higher n_segments value creates more superpixels, which means each superpixel will be smaller and the segmentation will be more fine-grained. This can be useful when the image has complex textures or small objects.
Lower Values: A lower n_segments value produces fewer, larger superpixels. This is useful when you want a coarse segmentation of the image, grouping larger areas into single superpixels.
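To make these effects concrete, one can run SLIC with a few different settings and compare the resulting boundaries side by side. The sketch below simply reuses the image, segmentation, and plt objects loaded above; the specific parameter values are arbitrary choices for illustration.

# Compare a few (compactness, n_segments) settings on the same image
fig, axes = plt.subplots(1, 3, figsize=(15, 5))
for ax, (comp, n_seg) in zip(axes, [(10, 100), (30, 100), (30, 400)]):
    trial_labels = segmentation.slic(image, compactness=comp, n_segments=n_seg,
                                     enforce_connectivity=True)
    ax.imshow(segmentation.mark_boundaries(image, trial_labels))
    ax.set_title(f"compactness={comp}, n_segments={n_seg}")
    ax.axis('off')
plt.show()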

4. Apply Normalized Cuts (NCut) and Visualize the Result

# using the labels found with the superpixeled image
# compute the Region Adjacency Graph using mean colors
g = graph.rag_mean_color(image, labels, mode='similarity')

# perform Normalized Graph cut on the Region Adjacency Graph
labels2 = graph.cut_normalized(labels, g)
segmented_image = color.label2rgb(labels2, image, kind='avg')
f, axarr = plt.subplots(nrows=1, ncols=4, figsize=(25, 20))

axarr[0].imshow(image)
axarr[0].set_title("Original")

#plot boundaries
axarr[1].imshow(image_with_boundaries)
axarr[1].set_title("Superpixels Boundaries")

#plot labels
axarr[2].imshow(pixel_labels)
axarr[2].set_title('Superpixel Labels')

#compute segmentation
axarr[3].imshow(segmented_image)
axarr[3].set_title('Segmented image (normalized cut)')

5. Evaluation Metrics
The key challenge in unsupervised segmentation is that NCut doesn't know the exact number of classes in the image. The number of segments found by NCut may exceed the actual number of ground truth regions. As a result, we need robust metrics to assess segmentation quality.
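As a quick check of this, assuming the labels2 and ground_truth arrays computed above are still in memory, one can simply count the distinct regions on each side:

# Count NCut regions vs. distinct colors in the ground-truth annotation
n_pred_regions = len(np.unique(labels2))
n_gt_regions = len(np.unique(ground_truth.reshape(-1, ground_truth.shape[-1]), axis=0))
print(f"NCut regions: {n_pred_regions}, ground-truth regions: {n_gt_regions}")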

Intersection over Union (IoU) is a widely used metric for evaluating segmentation tasks, particularly in computer vision. It measures the overlap between the predicted segmented regions and the ground truth regions. Specifically, IoU calculates the ratio of the area of overlap between the predicted segmentation and the ground truth to the area of their union.
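As a small illustration of the definition, here is IoU for a single class on two toy boolean masks (the arrays below are made up for the example, not taken from the dataset):

# IoU = |A intersection B| / |A union B| for one class
pred = np.array([[1, 1, 0],
                 [0, 1, 0]], dtype=bool)
gt = np.array([[1, 0, 0],
               [0, 1, 1]], dtype=bool)
intersection = np.logical_and(pred, gt).sum()
union = np.logical_or(pred, gt).sum()
print(f"IoU: {intersection / union:.2f}")  # 2 overlapping pixels / 4 pixels in the union = 0.50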

Structural Similarity Index (SSIM) is a metric used to assess the perceived quality of an image by comparing two images in terms of luminance, contrast, and structure.
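For intuition, here is a tiny synthetic example (random arrays, unrelated to the dataset): an image compared with itself scores 1.0, and the score drops as noise is added.

from skimage.metrics import structural_similarity as ssim

# SSIM of a toy grayscale image vs. a noisy copy of it
rng = np.random.default_rng(0)
clean = rng.random((64, 64))
noisy = np.clip(clean + rng.normal(scale=0.1, size=clean.shape), 0, 1)
print(ssim(clean, clean, data_range=1.0))  # 1.0
print(ssim(clean, noisy, data_range=1.0))  # noticeably lower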

To apply these metrics, the prediction and the ground truth image must use the same kind of labels. To compute them, we build a mask for both the ground truth and the prediction by assigning an integer ID to each distinct color found in the image. Note, however, that NCut may find more regions than the ground truth contains, which lowers the accuracy.

def compute_mask(image):
    # Map each distinct color in the image to an integer label
    color_dict = {}

    # Get the shape of the image
    height, width, _ = image.shape

    # Create an empty array for labels
    labels = np.zeros((height, width), dtype=int)
    next_id = 0

    # Loop over each pixel
    for i in range(height):
        for j in range(width):
            # Get the color of the pixel
            pixel_color = tuple(image[i, j])
            # Assign the existing label, or create a new one for unseen colors
            if pixel_color in color_dict:
                labels[i, j] = color_dict[pixel_color]
            else:
                color_dict[pixel_color] = next_id
                labels[i, j] = next_id
                next_id += 1

    return labels


def show_img(prediction, groundtruth):
    # Show ground truth and prediction side by side
    f, axarr = plt.subplots(nrows=1, ncols=2, figsize=(15, 10))

    axarr[0].imshow(groundtruth)
    axarr[0].set_title("groundtruth")
    axarr[1].imshow(prediction)
    axarr[1].set_title("prediction")
    plt.show()


prediction_mask = compute_mask(segmented_image)
groundtruth_mask = compute_mask(ground_truth)

# Using the original image as baseline to convert from labels to colors
prediction_img = color.label2rgb(prediction_mask, image, kind='avg', bg_label=0)
groundtruth_img = color.label2rgb(groundtruth_mask, image, kind='avg', bg_label=0)

show_img(prediction_img, groundtruth_img)
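As a side note, the same color-to-label mapping can be computed without the explicit pixel loop. The sketch below is an equivalent vectorized version using np.unique; the label IDs it assigns may come out in a different order than the loop above, which does not matter here since the IDs are arbitrary.

def compute_mask_vectorized(image):
    # Treat each pixel as a row of channel values and give identical rows the same label
    flat = image.reshape(-1, image.shape[-1])
    _, inverse = np.unique(flat, axis=0, return_inverse=True)
    return inverse.reshape(image.shape[:2])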

Now we compute the accuracy scores:

from sklearn.metrics import jaccard_score
from skimage.metrics import structural_similarity as ssim

# label2rgb(kind='avg') returns float arrays in the 0-255 range, so pass data_range explicitly
ssim_score = ssim(prediction_img, groundtruth_img, channel_axis=2, data_range=255)
print(f"SSIM SCORE: {ssim_score}")

jac = jaccard_score(y_true=np.asarray(groundtruth_mask).flatten(),
                    y_pred=np.asarray(prediction_mask).flatten(),
                    average=None)

# compute mean IoU score across all classes
mean_iou = np.mean(jac)
print(f"Mean IoU: {mean_iou}")

Conclusion

Normalized Cuts is a powerful method for unsupervised image segmentation, but it comes with challenges such as over-segmentation and parameter tuning. By incorporating superpixels and evaluating the performance using appropriate metrics, NCut can effectively segment complex images. The IoU and SSIM metrics provide meaningful insights into the quality of the segmentation, though further refinement is needed to handle multi-class scenarios effectively.
Finally, a complete example is available in my notebook here.
