Carlos López Molina

Ambiguity and Hesitancy in Quality Assessment: The Case of Image Segmentation


Abstract: Automatic information processing regularly deals with many sources of uncertainty. Some are born in the data gathering itself, while others stem from imprecise computations or algorithmic needs. In general, they can all be addressed with advanced machinery, dedicated mathematical models or extra computational resources. In this effort, Fuzzy Set theory has played a relevant role over the past 40 years. However, there is one source of ambiguity and hesitancy which cannot be removed from information processing: that due to the ambiguous, human-like definition of information processing problems.
Human beings generally make variable interpretations of the goals and needs of an information processing task, whichever context it is carried out in. Hence, the perceived quality of a single result will be heterogeneous, depending on the human expert evaluating it. This poses significant problems at various stages of information processing. For example, it is damaging for algorithmic tuning or optimization, since an improvement as perceived by one human might be a quality loss as perceived by another. It also becomes damaging when scientists intend to select the best-performing algorithm for a given task, as the opinions of different human experts might differ. In general, we find that the perceived quality of a single result is a conglomerate of opinions, often hesitant or contradictory.


The problem of ambiguous data for quality assessment is common in image processing. One clear example is the task of image segmentation, which lacks a formal mathematical definition; hence, automatic methods are bound to be evaluated according to how similar their results are to human-made solutions. Unfortunately, different humans routinely produce different interpretations of an image. As a consequence, the evaluation of a segmented image becomes some sort of comparison with a list of human-made, spatially imprecise segmentations. This requires a significant mathematical apparatus able to cope with multivariate data involving hesitancy, ambiguity and contradiction.
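As a minimal illustration of the scenario described above, the sketch below (hypothetical names and data, not tied to any specific benchmark or to the methods discussed in the talk) scores one machine-made binary segmentation against several human ground truths independently; the spread between the per-expert scores is precisely the ambiguity that a single aggregated number would hide.

```python
# Hypothetical sketch: one candidate segmentation, several human ground truths.
# Label maps are flattened lists of 0/1 values; F1 is used only as a stand-in
# for whatever boundary/region similarity measure one might prefer.

def f1_score(candidate, reference):
    """F1 (harmonic mean of precision and recall) between two binary maps."""
    tp = sum(1 for c, r in zip(candidate, reference) if c == 1 and r == 1)
    fp = sum(1 for c, r in zip(candidate, reference) if c == 1 and r == 0)
    fn = sum(1 for c, r in zip(candidate, reference) if c == 0 and r == 1)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def evaluate_against_experts(candidate, ground_truths):
    """One score per human expert; their spread reflects the disagreement."""
    scores = [f1_score(candidate, gt) for gt in ground_truths]
    return min(scores), sum(scores) / len(scores), max(scores)

machine = [1, 1, 0, 0, 1, 0]
experts = [
    [1, 1, 0, 0, 1, 0],  # expert A agrees fully with the machine
    [1, 0, 0, 0, 1, 1],  # expert B drew the boundary differently
    [0, 1, 1, 0, 1, 0],  # expert C produced yet another interpretation
]
worst, mean, best = evaluate_against_experts(machine, experts)
```

Reporting only the mean would discard the fact that the same result is perfect for one expert and noticeably imperfect for the others; how to aggregate (or deliberately not aggregate) such conglomerates of opinions is part of the problem discussed in the talk.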
   
In this talk we analyze, from a historical perspective, the problem of quality evaluation for image segmentation. Specifically, we focus on how to handle the variable interpretations produced by different humans. This leads to an analysis of the general quality evaluation problem in the presence of multivariate ground truth, including its semantics, the technical challenges it poses and its relationship with some of the mathematical disciplines involved in its solution.

C. Lopez-Molina received his Ph.D. from the Universidad Publica de Navarra in 2012, where he is currently an Assistant Professor. His research interests include low-level feature extraction and treatment for computer vision, as well as automated bioimagery processing. He has developed most of his work around edge and boundary detection, and around soft computing techniques for computer vision.