Matching, Archiving and Visualizing Cultural Heritage Artifacts Using Multi-Channel Images (thesis)
Report ID: TR-895-11
Author: Toler-Franklin, Corey
Date: May 2011
Pages: 125
Download Formats: PDF
Abstract:
Recent advancements in low-cost acquisition technologies have made it more practical to acquire real-world datasets
on a large scale. This has led to a number of computer-based solutions for reassembling, archiving and visualizing
cultural heritage artifacts. In this thesis, we combine aspects of these technologies in novel ways and introduce
algorithms to improve upon their overall efficiency and robustness. First, we introduce a 2-D acquisition
system that addresses the challenge of acquiring color and normal maps for large datasets at higher resolutions
than those available from 3-D scanning devices. Next, we incorporate our normal maps into a novel multi-cue matching system for reassembling small
fragments of artifacts. We then present a non-photorealistic rendering pipeline for illustrating geometrically
complex objects using images with multiple channels of information.
State-of-the-art 3-D acquisition systems capture 3-D geometry at archeological sites using affordable,
off-the-shelf scanners. Although multiple scans at varying viewpoints are required to assemble a complete model,
robust registration and alignment algorithms, as well as new work-flow methodologies, significantly reduce the
post-processing time. However, the color and normal maps obtained from these systems lack the subtle sub-millimeter
details necessary for careful analysis and high-fidelity documentation. We introduce an algorithm that generates
higher resolution normal maps and diffuse reflectance (true color texture), while minimizing acquisition time.
Using shape from shading, we compute our normal maps from high resolution color scans of the object taken at
four orientations on a 2-D flatbed scanner. A key contribution of our work is a novel calibration
process to measure the observed brightness as a function of the surface normal. This calibration is important
because the scanner's light source is linear (rather than a point source), so we cannot solve for the surface normal using
the traditional formulation of the Lambertian lighting law. High resolution digital SLR cameras provide alternative
solutions when objects are too large or fragile to place on a scanner. However, they require more control over the
ambient light in the environment and additional manual effort to continually re-position a hand-held flash,
and they cannot match the high resolutions we obtain from the scanner.
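To make the calibrated shape-from-shading step concrete, the sketch below illustrates one way a per-pixel normal could be recovered from four scan intensities using a calibrated brightness model. It is a minimal illustration, not the thesis implementation: the function calibrated_brightness, the scan directions, and the brute-force search over candidate normals are assumptions standing in for the measured calibration table and the actual solver.

import numpy as np

def calibrated_brightness(normal, scan_dir):
    # Hypothetical stand-in for the measured response of the scanner's linear
    # light source: the brightness observed for a unit surface normal when the
    # object is scanned at orientation `scan_dir` (radians). A real system
    # would interpolate a measured calibration table instead.
    light = np.array([np.cos(scan_dir), np.sin(scan_dir), 1.0])
    light /= np.linalg.norm(light)
    return max(float(normal @ light), 0.0)

def estimate_normal(intensities, scan_dirs, n_samples=64):
    # Brute-force search over candidate unit normals for the one whose
    # predicted brightnesses best match the observed intensities.
    best, best_err = None, np.inf
    for theta in np.linspace(0.0, 0.5 * np.pi, n_samples):  # tilt from the z axis
        for phi in np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False):
            n = np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)])
            err = sum((calibrated_brightness(n, d) - i) ** 2
                      for d, i in zip(scan_dirs, intensities))
            if err < best_err:
                best, best_err = n, err
    return best

# One pixel observed in four scans with the object rotated by 90 degrees each time.
dirs = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
obs = [0.71, 0.55, 0.40, 0.52]  # example observed intensities
print(estimate_normal(obs, dirs))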
Several projects have explored leveraging these newly acquired datasets for digital reassembly,
and have proven successful in some domains. However, current matching algorithms do not perform well when artifacts
have deteriorated over many years. One limitation is their reliance on previous acquisition methods that do not
capture fine surface details. These details are often important matching cues when features such as color, 2-D contours or
3-D geometry are no longer reliable. We introduce a set of feature descriptors that are based not only on
color and shape, but also on high-quality normal maps. Rather than rely exclusively on one form of data,
we use machine-learning techniques to combine descriptors in a multi-cue matching framework. We have tested our
system on three datasets of fresco fragments: Theran frescoes from the site of Akrotiri, Greece; Roman frescoes
from Kerkrade in the Netherlands; and a synthetic fresco created by conservators in a style similar to the Akrotiri
frescoes. We demonstrate that multi-cue matching using different subsets of features leads to different tradeoffs
between efficiency and effectiveness. We observe that individual feature performance varies from dataset to dataset
and discuss the implications of feature importance for matching in this domain. Our results show good retrieval
performance, significantly improving upon the match prediction rate of state-of-the-art 3-D matching algorithms.
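The multi-cue combination can be sketched, under simplifying assumptions, as a supervised classifier over per-cue distances between candidate fragment pairs. The cue names, the toy training data, and the choice of a random forest below are illustrative assumptions only; they are not the specific learning method or feature descriptors used in the thesis.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row holds distances between a pair of fragment edges under different cues,
# e.g. [color_distance, contour_distance, normal_map_distance]; label 1 = true match.
X_train = np.array([
    [0.12, 0.30, 0.08],
    [0.75, 0.90, 0.66],
    [0.20, 0.25, 0.15],
    [0.80, 0.70, 0.90],
])
y_train = np.array([1, 0, 1, 0])

# Learn how to weight the cues jointly rather than relying on any single one.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score new candidate pairs and rank them by predicted match probability.
candidates = np.array([[0.18, 0.28, 0.10],
                       [0.60, 0.85, 0.70]])
scores = clf.predict_proba(candidates)[:, 1]
ranking = np.argsort(-scores)
print(scores, ranking)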
The illustrative depictions found in biology or medical textbooks are one possible method of archiving and distributing
historic information. Using RGBN images, a datatype that stores both color and normals, we develop 2-D analogs to
3-D NPR rendering equations. Our approach extends signal processing tools such as scale-space analysis and segmentation
to this new data type. We investigate stylized depiction techniques such as toon shading, line drawing and exaggerated shading.
By incorporating some 3-D information, we reveal fine details while maintaining the simplicity of a 2-D
implementation. Our results achieve levels of detail that are impractical to create with more conventional methods like manual
3-D modeling or 3-D scanning.
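As one illustration of a 2-D analog to a 3-D NPR technique, the sketch below applies toon shading directly to an RGBN image by quantizing the per-pixel Lambertian term computed from the stored normals. The light direction, band count, and array layout are assumptions for illustration; the thesis pipeline also includes scale-space analysis and segmentation steps not shown here.

import numpy as np

def toon_shade(rgb, normals, light_dir, bands=4):
    # rgb: H x W x 3 colors in [0, 1]; normals: H x W x 3 unit normals.
    # Quantizes the Lambertian term into discrete bands for a cartoon look.
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    ndotl = np.clip(np.einsum('ijk,k->ij', normals, l), 0.0, 1.0)
    # Snap the shading to a small number of flat tones.
    quantized = np.clip(np.floor(ndotl * bands) / (bands - 1), 0.0, 1.0)
    return rgb * quantized[..., None]

# Tiny 1x2 RGBN example: one pixel facing the light, one tilted away from it.
rgb = np.array([[[0.8, 0.6, 0.4], [0.8, 0.6, 0.4]]])
normals = np.array([[[0.0, 0.0, 1.0], [0.8, 0.0, 0.6]]])
print(toon_shade(rgb, normals, light_dir=[0.0, 0.0, 1.0]))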