Challenging vision:
At the edge of image processing in the sciences
11 March 2022 - 12:15-14:00
Uni Mail, room MR 060, and online via Zoom
Registration mandatory - via this link
The Data Science Competence Center (CCSD) of the University of Geneva is pleased to invite you to the seventh edition of the Data Science Seminars, exploring image processing in the sciences.
Our society produces an unprecedented and growing mass of images, which constitute a fascinating source of scientific information. In fields as diverse as astronomy, medicine, and the humanities, the processing and analysis of these images open up new avenues of research. Computational approaches and data science solutions make it possible to navigate this deluge of data efficiently, providing innovative and fast methods. Yet images are not data like any other, and they raise methodological challenges that should not be underestimated.
Through concrete examples drawn from their research, the speakers at this seminar will present their use of image processing techniques, emphasizing the opportunities these techniques offer as well as the challenges they raise. They will show how the arrival of digital images has revolutionized archaeological practices, particularly through photogrammetry. They will also show how adapting a series of algorithms primarily designed for image segmentation makes it possible to characterise chemical zonation in 2D chemical maps, permitting statistical comparison and correlation of zonation both within and between multiple geological samples. Finally, they will address problems of modern imaging systems based on machine learning methods, advocating a new framework built on information-theoretic principles.
Program
Image processing to study and preserve the archaeological heritage
Florian Cousseau, Laboratory of Prehistoric Archaeology and Anthropology.
Images have been at the heart of archaeological research since the creation of the discipline in the 19th century. Because archaeological excavation destroys its data and cannot be replicated, photography has always served as a means of archiving. The arrival of digital images has revolutionized archaeological practices, particularly with the use of photogrammetry (Structure from Motion): building a 3D model of an object or structure from 2D images of it, following a well-defined protocol. The archaeological community has widely adopted this form of documentation, thanks to the ease of acquiring, archiving, and consulting data that are destroyed during excavation. This digitization of archaeological remains also plays an important role in heritage preservation, particularly in conflict areas. However, the use of these newly acquired data has so far remained confined to the heritage sphere, whereas further computational development could bring out their full potential.
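To give a flavour of what Structure from Motion involves, here is a minimal two-view sketch using OpenCV: it matches features between two photographs, recovers the relative camera pose, and triangulates a sparse 3D point cloud. The image files, the camera matrix K, and all parameters are illustrative assumptions; production photogrammetry tools handle many views, bundle adjustment, and dense reconstruction.

```python
import cv2
import numpy as np

# Two overlapping photographs of the same object (hypothetical files).
img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
# Assumed camera intrinsics (focal length and principal point).
K = np.array([[1000.0, 0, 640], [0, 1000.0, 480], [0, 0, 1]])

# 1. Detect and match SIFT features between the two views.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # Lowe ratio test
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# 2. Estimate the relative camera pose from the essential matrix.
E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

# 3. Triangulate the matched points into a sparse 3D point cloud.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T  # (N, 3) points, up to scale
```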
Characterising 2D chemical maps using image segmentation methods
Tom Sheldrake, Department of Earth Sciences.
In nature, many crystals exhibit chemical zonation that reflects changes in the state of the local environment from which they grew. To understand the genesis of an individual crystal, it is therefore important to characterise each of these different chemical zones and their relationship to each other. Whilst the human eye is adept at identifying this zonation, automation of this process is complicated by chemical variability within distinct zones. By adapting a series of algorithms primarily designed for image segmentation, we have been able to characterise chemical zonation in 2D chemical maps containing hundreds of crystals, which permits statistical comparison and correlation of zonation both within and between multiple geological samples.
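As an illustration of this kind of approach, the sketch below segments a synthetic 2D chemical map with a graph-based image-segmentation algorithm from scikit-image. The zone geometry, noise level, and parameters are invented for the example; this is not the speaker's actual method, only the general idea of labelling chemically coherent zones and comparing their statistics.

```python
import numpy as np
from skimage import segmentation

# Synthetic 2D "chemical map" of one zoned crystal: a high-concentration
# core, a lower-concentration rim, plus noise mimicking within-zone
# chemical variability (all values hypothetical).
rng = np.random.default_rng(0)
yy, xx = np.mgrid[:200, :200]
r = np.hypot(yy - 100, xx - 100)
conc = np.where(r < 40, 1.0, np.where(r < 80, 0.5, 0.1))
conc = np.clip(conc + rng.normal(scale=0.05, size=conc.shape), 0.0, 1.0)

# Graph-based segmentation groups pixels into coherent zones despite
# the within-zone noise; each label is a candidate chemical zone.
labels = segmentation.felzenszwalb(conc, scale=100, sigma=2, min_size=100)

# Per-zone statistics enable comparison within and between samples.
for lab in np.unique(labels):
    zone = conc[labels == lab]
    print(f"zone {lab}: mean concentration {zone.mean():.2f}, {zone.size} px")
```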
Information-theoretic imaging and machine learning
Slava Voloshynovskiy, Stochastic Information Processing group, Department of Computer Science.
This presentation addresses a problem of modern imaging systems based on machine learning methods. In the first stage, traditional imaging pipelines, as deployed in astronomy and medical imaging applications, use sensors that attempt to collect as much data as possible; in many cases, these measurements are performed in the Fourier domain. In the second stage, an image formation algorithm converts the observed data from the Fourier domain into intermediate images known as "dirty images", which can have several thousand dimensions across multiple channels or slices. An image reconstruction algorithm then enhances these dirty images, trying to remove the image formation artifacts and degradations and to produce interpretable images. Finally, image analysis tools from computer vision and machine learning are applied to these enormous volumes of data for object detection, localization, and classification. Given the resolution of modern imaging instruments and their multi-channel, multi-band, and temporal nature, the complexity of such image reconstruction and machine learning algorithms represents a great practical challenge. For example, the new astronomical radio interferometer SKA will collect about 1 petabyte of data per day.
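The toy example below mimics the first two stages of such a pipeline: an object is observed through a sparse subset of its Fourier coefficients, and directly inverting the incomplete measurements yields an artifact-ridden "dirty image". The sky model and random sampling mask are invented for illustration; real interferometers sample non-uniform (u, v) tracks.

```python
import numpy as np

rng = np.random.default_rng(1)

# A synthetic "sky" made of a few point sources.
sky = np.zeros((256, 256))
sky[rng.integers(0, 256, 20), rng.integers(0, 256, 20)] = 1.0

# Stage 1: the sensor observes only ~10% of the Fourier coefficients.
visibilities = np.fft.fft2(sky)
mask = rng.random(sky.shape) < 0.10

# Stage 2: inverting the incomplete measurements yields the
# artifact-ridden "dirty image" that later stages must clean up.
dirty = np.fft.ifft2(visibilities * mask).real
```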
In contrast to these traditional pipelines, we advocate a new framework based on information-theoretic principles. In this framework, the collected data are mapped from the Fourier domain directly to the reconstructed image, skipping the so-called "dirty image" stage, or even directly to the targeted downstream computer vision tasks such as object localization or classification. The framework also allows sensor planning to be optimized, considerably reducing the number of samples needed to produce results of the same accuracy and quality as traditional pipeline methods. Our first experiments show very encouraging results, which will be demonstrated during the presentation.
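A minimal sketch of the general idea, assuming a toy classification task: a small network maps masked Fourier measurements directly to class scores, with no dirty image or explicit reconstruction in between. The architecture, sizes, and random data are placeholders, not the speakers' actual model.

```python
import torch
import torch.nn as nn

class FourierToLabel(nn.Module):
    """Map complex Fourier measurements straight to class scores."""

    def __init__(self, side=64, n_classes=10):
        super().__init__()
        # Real and imaginary parts of the Fourier samples form the input.
        self.net = nn.Sequential(
            nn.Linear(2 * side * side, 512),
            nn.ReLU(),
            nn.Linear(512, n_classes),
        )

    def forward(self, vis):  # vis: complex tensor of shape (batch, H, W)
        x = torch.cat([vis.real.flatten(1), vis.imag.flatten(1)], dim=1)
        return self.net(x)

model = FourierToLabel()
# Simulated measurements: Fourier transforms of random "images",
# with ~10% of the coefficients retained by a sampling mask.
mask = (torch.rand(64, 64) < 0.10).to(torch.complex64)
vis = torch.fft.fft2(torch.randn(4, 64, 64)) * mask
logits = model(vis)  # (4, 10) class scores, straight from Fourier data
```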