Optimizing the use of digital airborne images for 2.5D visualizations
Aleksandra Sima and Simon Kay
Abstract
Monoscopic virtual representations of 3D geometries are rapidly becoming important products
of many databases and software applications. Many GIS tools - even freeware, such as Google Earth - permit the visualization of
city planning models as well as landscapes derived from 3D geometries (digital surface models draped with imagery, called 2.5D visualization).
These applications are also steadily becoming less qualitative and more metric as they are integrated into GIS environments. Up until now,
such image rendering has usually relied on imagery from non-photogrammetric sensors rather than on state-of-the-art aerial survey systems.
In the photogrammetry domain, the orthogonally projected image remains the paradigm. This approach, however, neglects imagery that may better
represent the surfaces of objects such as building facades. We propose that the off-nadir parts of vertical imagery - typically discarded after
orthorectification - systematically provide additional data that can be used to optimize the 2.5D rendering process.
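As a rough illustration of this idea (a minimal sketch, not the authors' method), one simple criterion for choosing which overlapping image should texture a given facade is the angle between the facade's outward normal and the direction towards each camera station: the smaller the angle, the more frontally that image views the facade. All coordinates, image names, and the selection rule below are invented for the example.

```python
import numpy as np

def facade_view_angle(camera_xyz, facade_center_xyz, facade_normal):
    """Angle (degrees) between the facade's outward normal and the
    direction from the facade centre towards the camera. Smaller angles
    mean the image views the facade more frontally and is therefore a
    better texture source for 2.5D rendering."""
    to_camera = np.asarray(camera_xyz, float) - np.asarray(facade_center_xyz, float)
    to_camera /= np.linalg.norm(to_camera)
    n = np.asarray(facade_normal, float)
    n /= np.linalg.norm(n)
    cos_angle = np.clip(np.dot(to_camera, n), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

# Hypothetical example: two overlapping vertical images covering a
# south-facing facade (units in metres, flying height ~1200 m).
facade_center = [100.0, 50.0, 10.0]
facade_normal = [0.0, -1.0, 0.0]                  # facade faces south
cameras = {
    "img_041": [100.0, -400.0, 1200.0],           # facade near the image edge (off-nadir view)
    "img_042": [100.0, 60.0, 1200.0],             # facade almost at nadir
}
best = min(cameras, key=lambda k: facade_view_angle(cameras[k], facade_center, facade_normal))
print(best)  # prints "img_041": the off-nadir view sees the facade more frontally
```

In this toy configuration the near-nadir image views the facade almost edge-on, while the off-nadir portion of the neighbouring image captures it at a much shallower angle, which is exactly the kind of otherwise-discarded data the abstract refers to.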