Microtomography is an X-ray imaging technique based on the same principle as the medical CT scanner. By using synchrotron radiation instead of a conventional X-ray source, it can capture volumetric scans at much higher resolution and quality. The technique provides three-dimensional (3D) images that reveal the internal structures of objects in a non-invasive and non-destructive way. A distinguishing feature is that it can image soft tissues even in the absence of contrast agents, thanks to the phase-contrast effect. At the ID19 microtomography beamline of the European Synchrotron Radiation Facility (ESRF), scientists use this technique for a variety of applications, including cultural heritage, palaeontology, materials research, life sciences and biomedical research. One of the most recent applications is in Egyptology, through the investigation of animal mummies. In this application, voxel intensities and the 3D structure of the textures are used to manually segment the volume into textiles, organic tissues, balm resin, ceramics and bones. Depending on the complexity and size of the data set, this process can be very time-consuming, taking several weeks for a small animal mummy. In the near future, the same process is expected to be applied to human mummies, which would take considerably longer.
One possible solution is to use existing 3D image segmentation algorithms. Unfortunately, most of these algorithms are designed to segment indoor or street scenes. Moreover, computed microtomography of mummies presents unique challenges for image segmentation: while bones and soft tissues have very different X-ray absorption and are generally easy to separate, organic tissues and textiles have very similar absorption and are therefore much harder to distinguish.
The main objective of this work is to develop and apply artificial intelligence techniques to automatically segment volumetric microtomography images, labelling each voxel as textile, organic tissue or bone. The method will use engineered and/or learned features that combine voxel intensity, 3D texture and shape to determine the class of every voxel while enforcing continuity across slices. It will be designed to work well under any operational setting, such as spatial resolution, slice depth and volume size, as long as the texture of the different materials remains resolvable. The breakthrough character of this project is the use of deep learning to extract a descriptive feature vector for each point to be segmented, where similar features indicate the same material. Statistical methods will then refine the classification, again enforcing continuity across slices. The developed algorithm will be validated on a large set of computed microtomography images, and the resulting data and results will be made available to the general public.
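To make the pipeline concrete, the sketch below illustrates the three stages on a toy scale: a per-voxel feature vector, a voxel-wise classification, and a continuity-enforcing refinement across slices. This is only an illustrative assumption, not the project's actual method: the hand-engineered features (local mean and standard deviation as a texture proxy) stand in for the deep-learned descriptors, and a nearest-centroid rule with a majority vote over adjacent slices stands in for the statistical refinement. All function names are hypothetical.

```python
import numpy as np

def local_stats(vol, r=1):
    """Per-voxel mean and std over a (2r+1)^3 window: a crude,
    hand-engineered stand-in for learned 3D texture features."""
    p = np.pad(vol, r, mode="edge")
    z, y, x = vol.shape
    windows = np.stack([
        p[dz:dz + z, dy:dy + y, dx:dx + x]
        for dz in range(2 * r + 1)
        for dy in range(2 * r + 1)
        for dx in range(2 * r + 1)
    ])
    return windows.mean(axis=0), windows.std(axis=0)

def voxel_features(vol):
    """Stack intensity, local mean and local std into one feature vector per voxel."""
    mean, std = local_stats(vol)
    return np.stack([vol, mean, std], axis=-1)  # shape (Z, Y, X, 3)

def classify(features, centroids):
    """Assign each voxel the label of the nearest class centroid (Euclidean)."""
    d = np.linalg.norm(features[..., None, :] - centroids, axis=-1)
    return d.argmin(axis=-1)  # shape (Z, Y, X)

def smooth_across_slices(labels, n_classes):
    """Enforce continuity across slices: majority vote over slices z-1, z, z+1."""
    onehot = np.eye(n_classes)[labels]                       # (Z, Y, X, K)
    p = np.pad(onehot, ((1, 1), (0, 0), (0, 0), (0, 0)), mode="edge")
    votes = p[:-2] + p[1:-1] + p[2:]
    return votes.argmax(axis=-1)

# Toy volume: two materials with distinct intensities (e.g. "textile" vs "bone").
vol = np.zeros((4, 6, 6))
vol[:, :, :3] = 0.2
vol[:, :, 3:] = 0.9
feats = voxel_features(vol)
centroids = np.array([[0.2, 0.2, 0.0],   # class 0 exemplar feature vector
                      [0.9, 0.9, 0.0]])  # class 1 exemplar feature vector
labels = smooth_across_slices(classify(feats, centroids), n_classes=2)
```

In the proposed method, the learned feature extractor would replace `voxel_features`, and the statistical refinement would replace the simple majority vote, but the data flow (features, then per-voxel labels, then cross-slice regularisation) is the same.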