Date of Award

2008

Degree Type

Thesis

Degree Name

Master of Science

Program

Computer Science

Supervisor

Dr. Olga Veksler

Abstract

In recent years, the graph cut algorithm has been successfully applied to image segmentation because it offers a numerically robust global minimum. In the graph cut framework, a parameter is often used to weight the importance of the different terms of the energy function. Usually, a fixed setting of parameters is given by the developers of the segmentation algorithm, and it is expected to give satisfactory segmentations for images similar to those that were used to tune the parameters. For a different class of images, however, the results may be unsatisfactory. In fact, there is no fixed choice of parameters that will work for all images. For each particular image, parameters must be tuned to achieve the best results. The goal of this thesis is to develop a measure of segmentation quality based on different features of segmentation. Then we can run the graph cut algorithm for different values of the parameter and choose the one that gives the segmentation of the highest quality. Segmentation evaluation is closely tied to the question of what constitutes a good segmentation. While evaluating segmentation results is an important task in itself, in this thesis segmentation evaluation is crucial because it forms an integral part of the proposed parameter selection method. We investigate several measures of segmentation quality; our measure is based on intensity, gradient, contour continuity, and texture features. We approach the problem of segmentation quality as a binary classification problem (good segmentation vs. bad segmentation), and train a classifier using the AdaBoost algorithm. AdaBoost, in addition to the class label, provides confidence estimates. A high positive value indicates that the classifier is very confident that the segmentation is in the positive class (i.e. a good segmentation).
Thus, instead of just a binary decision, namely a good or a bad segmentation, we take the confidence value as the final measure of segmentation goodness. A new way to normalize feature weights for the AdaBoost-based classifier is developed, which is particularly suitable for our framework. Our approach to feature normalization is uniquely appropriate for the parameter selection problem, and leads to a substantial improvement in performance. The leave-one-out cross-validation error rate is 4.4%, meaning the top-quality segmentation chosen for an image is a bad segmentation in only 4.4% of cases.
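The selection loop described in the abstract can be sketched as follows. This is a minimal illustration, not the thesis implementation: graph_cut_segment and adaboost_confidence are hypothetical stand-ins (the actual method segments with graph cuts and scores candidates with a trained AdaBoost classifier over intensity, gradient, contour-continuity, and texture features; here a toy 1-D "image" and a toy intensity-separation score are used instead).

```python
def graph_cut_segment(image, lam):
    """Stand-in for graph-cut segmentation with smoothness weight lam.
    Toy version: a threshold-like labeling whose result depends on lam."""
    return [1 if px > lam else 0 for px in image]

def adaboost_confidence(image, segmentation):
    """Stand-in for the AdaBoost confidence score of a segmentation.
    Toy version: rewards a clean intensity separation between the
    foreground (label 1) and background (label 0) regions."""
    fg = [px for px, s in zip(image, segmentation) if s == 1]
    bg = [px for px, s in zip(image, segmentation) if s == 0]
    if not fg or not bg:
        return float("-inf")  # degenerate: everything got one label
    return sum(fg) / len(fg) - sum(bg) / len(bg)

def select_best_segmentation(image, lambdas):
    """Run the segmenter for each parameter value and keep the
    candidate with the highest classifier confidence."""
    return max(
        ((lam, graph_cut_segment(image, lam)) for lam in lambdas),
        key=lambda pair: adaboost_confidence(image, pair[1]),
    )

# Toy usage: a 1-D "image" of pixel intensities and three candidate
# parameter values; extreme lambdas give degenerate segmentations.
image = [0.1, 0.2, 0.8, 0.9, 0.15, 0.85]
lam, seg = select_best_segmentation(image, [0.05, 0.5, 0.95])
```

The key point is that parameter selection reduces to scoring each candidate segmentation; the thesis uses the signed AdaBoost confidence for that score rather than the binary good/bad label.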
