
Tuesday, 22 April 2014

Arc-Team tries Large Scale Reflectance Transformation Imaging (RTI)


With the data collected during our mission, recently presented in the post „@MAP – the Arc-Team Mobile Mapping Platform“, we tried for the first time to apply a method called Reflectance Transformation Imaging (RTI) to a landscape:

Aerial photo of the project area taken from Arc-Team's drone

„RTI is a computational photographic method that captures a subject’s surface shape and color and enables the interactive re-lighting of the subject from any direction. RTI also permits the mathematical enhancement of the subject’s surface shape and color attributes. The enhancement functions of RTI reveal surface information that is not disclosed under direct empirical examination of the physical object. (...) RTI images are created from information derived from multiple digital photographs of a subject shot from a stationary camera position. In each photograph, light is projected from a different known, or knowable, direction. This process produces a series of images of the same subject with varying highlights and shadows. Lighting information from the images is mathematically synthesized to generate a mathematical model of the surface, enabling a user to re-light the RTI image interactively and examine its surface on a screen.“ (http://culturalheritageimaging.org/Technologies/RTI/)

We used the processing software and viewer from Cultural Heritage Imaging; their RTIBuilder software is made available under the GNU General Public License version 3.


RTI is usually applied to objects of small or medium size, because it is difficult or impossible to illuminate whole structures, let alone areas or landscapes.


At this point GIS comes to our aid:

Starting from a DTM, it is easy to create shaded reliefs with GRASS GIS' module r.shaded.relief.
The key feature of the module for our purposes is the ability to set both the altitude of the sun in degrees above the horizon and the azimuth of the sun in degrees east of north.
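For intuition, the shading that such a module derives from these two angles can be sketched with a generic textbook hillshade formula in NumPy (this is not the module's actual source code, just the standard Lambertian model):

```python
import numpy as np

def hillshade(dem, azimuth_deg, altitude_deg, cellsize=1.0):
    """Lambertian shaded relief from a DEM array for a given sun
    position -- the same two inputs r.shaded.relief takes
    (generic textbook formula, not the module's implementation)."""
    az = np.radians(360.0 - azimuth_deg + 90.0)  # compass azimuth -> math angle
    alt = np.radians(altitude_deg)
    dy, dx = np.gradient(dem.astype(float), cellsize)  # terrain gradients
    slope = np.arctan(np.hypot(dx, dy))
    aspect = np.arctan2(dy, -dx)
    shade = (np.sin(alt) * np.cos(slope)
             + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shade, 0.0, 1.0)

# with the sun at the zenith, a flat DTM is uniformly lit
flat = hillshade(np.zeros((5, 5)), azimuth_deg=315, altitude_deg=90)
```

Lowering the altitude lengthens the shadows, while changing the azimuth rotates them, which is exactly what RTI needs as input.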



In this way we could artificially produce the data needed for our RTI landscape attempt.
The next step was to export from GRASS a set of 60 images with different lighting positions, creating an imaginary light dome around the object:
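Such a set of sun positions can be enumerated programmatically; a minimal sketch (the ring and step values here are illustrative, not necessarily the ones we used, and the r.shaded.relief parameter names should be checked against your GRASS version's manual):

```python
# Enumerate 60 sun positions forming an imaginary light dome:
# four altitude rings of 15 azimuths each (illustrative values).
positions = [(azimuth, altitude)
             for altitude in (20, 40, 60, 80)
             for azimuth in range(0, 360, 24)]

# One shaded-relief export per position (parameter names as in the
# GRASS 6.x r.shaded.relief manual; in GRASS 7+ the module is r.relief).
commands = ["r.shaded.relief map=dtm shadedmap=shade_%02d azimuth=%d altitude=%d"
            % (i, az, alt)
            for i, (az, alt) in enumerate(positions)]
```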


At this point we reached the first bottleneck of our approach:

Usually, you include at least one reflective sphere in each shot. 

The reflection of the light source on the spheres enables the processing software to calculate the light direction for that image. 

So we had to create a fake sphere and composite it into every image, with a reflection matching the sunlight direction chosen in GRASS.
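The position of the fake highlight on such a sphere follows from simple mirror geometry: for a camera looking straight down, the highlight lies where the sphere's normal bisects the view and light directions. A sketch of that calculation (our own illustrative helper, not part of RTIBuilder):

```python
import math

def highlight_offset(azimuth_deg, altitude_deg, radius_px):
    """Pixel offset of the specular highlight from the centre of a
    reflective sphere, for a camera looking straight down
    (orthographic approximation). Azimuth is measured clockwise
    from north, as in r.shaded.relief."""
    az = math.radians(azimuth_deg)
    alt = math.radians(altitude_deg)
    # unit vector pointing toward the light: x = east, y = north, z = up
    lx = math.cos(alt) * math.sin(az)
    ly = math.cos(alt) * math.cos(az)
    lz = math.sin(alt)
    # the mirror highlight sits where the sphere normal equals the
    # half-way vector between the light and the view direction (0, 0, 1)
    hx, hy, hz = lx, ly, lz + 1.0
    norm = math.sqrt(hx * hx + hy * hy + hz * hz)
    # image coordinates: x grows to the east, y grows to the south
    return (hx / norm * radius_px, -hy / norm * radius_px)
```

With the sun at the zenith the highlight sits at the sphere's centre; as the sun drops toward the horizon the highlight slides toward the sphere's rim on the sun's side.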

It was a tough piece of work!

In the end everything was ready for processing the images in RTIBuilder. The individual steps in the software are very easy to execute and well described in the Processing Guide.

We only had some problems with the size of our images (8200×6500 pixels), which the software couldn't process; maybe that was due to the age of our hardware...

After reducing the image size, everything worked fine...



In the end, after also installing RTIViewer, we held in our hands an interactive scene of an archaeological site of nearly 10,000 m² that is almost invisible from the ground.


Thursday, 5 December 2013

From drone-aerial pictures to DEM and ORTHOPHOTO: the case of Caldonazzo's castle

Hi all,
I would like to present the results we obtained in the Caldonazzo castle project. Caldonazzo is a tourist village in Trentino (northern Italy), famous for its lake and its mountains. Few people know about the medieval castle (XII-XIII century), whose tower actually appears in the town's coat of arms. Since 2006 the ruins have been the subject of a valorization project by the Soprintendenza Archeologica di Trento (dott.ssa Nicoletta Pisu). As Arc-Team, we participated in the project with archaeological fieldwork, historical study, digital documentation (SfM/IBM) and 3D modeling.
In this first post I will talk about the 3D documentation, the aerial photography campaign and the data elaboration.



1) The 3D documentation 

One of the final aims of the project will be the virtual reconstruction of the castle. To achieve that goal we need, as a starting point, an accurate 3D model of the ruins and a DEM of the hill. The first model was realized in just two days of fieldwork and four days of computer work (most of it without direct intervention by a human operator). The castle's walls were documented using Computer Vision (Structure from Motion and Image-Based Modeling); we used Python Photogrammetry Toolbox to elaborate 350 pictures (Nikon D5000) divided into 12 groups (external walls, tower inside, tower outside, palace walls, fireplace, ...).


The different point clouds were rectified thanks to some ground control points. Using a Trimble 5700 GPS, the GCPs were connected to the Universal Transverse Mercator coordinate system. The rectification process was carried out in GRASS GIS using the Ply Importer add-on.


To avoid some problems we encountered using the universal coordinate system in mesh editing software, in this first step we preferred to work with a local system, keeping only three digits before the decimal point.
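In practice this just means subtracting a constant offset from the UTM coordinates and adding it back when georeferenced output is needed. A minimal sketch (the offset values below are illustrative, not the project's real ones):

```python
# Strip a constant offset from the UTM coordinates so that easting and
# northing stay below 1000 m (three digits before the dot).
# Hypothetical offsets, chosen for illustration only:
E_OFFSET = 666000.0   # metres east
N_OFFSET = 5100000.0  # metres north

def utm_to_local(e, n, z):
    """UTM -> small local coordinates, friendlier for mesh editors."""
    return (e - E_OFFSET, n - N_OFFSET, z)

def local_to_utm(x, y, z):
    """Local coordinates back to UTM for the GIS."""
    return (x + E_OFFSET, y + N_OFFSET, z)
```

Large coordinates are a real issue in mesh editors that work in single-precision floats, which is why the offset trick is common practice.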



2) The aerial photography campaign 

After the wall documentation we started a new campaign to acquire the data needed to model the surface of the hill (DEM) where the ruins lie. The best solution for taking zenithal pictures was to pilot an electric drone equipped with a video platform. Thanks to Walter Gilli, an expert pilot and builder of aerial vehicles, we had the possibility to use two DIY drones (a hexacopter and an xcopter) mounting Naza DJI technology (Naza-M V2 control platform).


Both drones carried a video platform. The hexacopter mounted a Sony NEX-7; the xcopter a GoPro HD Hero3. The table below shows the differences between the two cameras.


As you can see, the Sony NEX-7 was the best choice: it has a big sensor, a high image resolution and a perfect focal length (16 mm on its sensor ≈ 24 mm compared to a 35 mm film camera). Its only disadvantage is its greater weight and size compared to the GoPro, which is why we mounted the Sony on the hexacopter (more propellers = more lifting capability). The main problem of the GoPro is the ultra-wide-angle lens, which distorts the image toward the borders of the pictures.
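The 35 mm-equivalent figure follows directly from the sensor width (the NEX-7's APS-C sensor is roughly 23.5 mm wide, against 36 mm for full-frame 35 mm film):

```python
# 35mm-equivalent focal length from the sensor width
# (APS-C width of ~23.5 mm is an approximate, commonly quoted value)
def equiv_35mm(focal_mm, sensor_width_mm):
    return focal_mm * 36.0 / sensor_width_mm

print(equiv_35mm(16, 23.5))  # ~24.5 mm, matching the "16 mm = 24 mm" figure
```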
The flight plan (image below) allowed us to take zenithal pictures of the entire surface of the hill (one day of fieldwork).


The best 48 images were processed with Python Photogrammetry Toolbox (one day of computer work). The image below shows the camera positions in the upper part and the point cloud, mesh and texture in the lower part.


First, the point cloud of the hill was rectified to the same local coordinate system as the walls' point clouds. The gaps in the zenithal view were filled with the point clouds acquired from the ground (image below).


After the data acquisition and data elaboration phases, we sent the final 3D model to Cicero Moraes to start the virtual reconstruction phase.


3) The Orthophoto

The orthophoto was realized using the texture of the SfM 3D model. We exported from MeshLab a high-quality orthogonal image of the top view, which we then rectified using the Georeferencer plugin of QuantumGIS.
As an experiment, we also tried to rectify an original picture using the same method and the same GCPs. The image below shows the difference between the two images. As you can see, the orthophoto matches the GPS data very well (red lines and red crosses), while the original picture shows some discrepancies in the left part (the area farthest from the drone position, which was zenithal over the tower's ruins).
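The first-order (affine) rectification that a georeferencer performs can be sketched as a least-squares fit over the GCPs; the point values below are made up for illustration:

```python
import numpy as np

def fit_affine(pixel_pts, map_pts):
    """Least-squares 2D affine transform mapping pixel GCPs to map
    GCPs -- the generic first-order model used in georeferencing."""
    P = np.column_stack([np.asarray(pixel_pts, float),
                         np.ones(len(pixel_pts))])        # rows [x, y, 1]
    coef, *_ = np.linalg.lstsq(P, np.asarray(map_pts, float), rcond=None)
    return coef                                           # 3x2 matrix

def apply_affine(coef, pts):
    P = np.column_stack([np.asarray(pts, float), np.ones(len(pts))])
    return P @ coef

# made-up GCPs: a 1000x800 px image covering a 100 m x 80 m area
pixel_gcps = [(0, 0), (1000, 0), (0, 800), (1000, 800)]
map_gcps = [(100.0, 900.0), (200.0, 900.0), (100.0, 820.0), (200.0, 820.0)]
coef = fit_affine(pixel_gcps, map_gcps)
```

An affine model can absorb scale, rotation and shear, but not the perspective distortion of an oblique original picture, which is why the true orthophoto matched the GCPs better.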



4) The DEM

The DEM was realized by importing (and rectifying) the point cloud of the hill in GRASS 7.0svn using the Ply Importer add-on. The text file containing the transformation info was built using the relative coordinates extracted from CloudCompare (Point list picking tool) and the UTM coordinates of the GPS GCPs.




After importing the data, we used the v.surf.rst command (regularized spline with tension) to transform the point cloud into a surface (DEM). The images below show the final result in 2D and 3D visualization.
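As a rough illustration of what such a point-to-surface interpolation does (v.surf.rst itself fits a regularized spline with tension; the simple inverse-distance weighting below is only a stand-in), scattered points can be gridded like this:

```python
import numpy as np

def idw_grid(points, values, xi, yi, power=2.0):
    """Grid scattered (x, y) points by inverse-distance weighting.
    A crude stand-in for GRASS' v.surf.rst, which instead fits a
    regularized spline with tension."""
    pts = np.asarray(points, float)
    vals = np.asarray(values, float)
    gx, gy = np.meshgrid(xi, yi)
    # squared distance from every grid node to every data point
    d2 = ((gx[..., None] - pts[:, 0]) ** 2
          + (gy[..., None] - pts[:, 1]) ** 2)
    w = 1.0 / np.maximum(d2, 1e-12) ** (power / 2.0)
    return (w * vals).sum(axis=-1) / w.sum(axis=-1)

# four points sampled from the tilted plane z = x
pts = [(0, 0), (10, 0), (0, 10), (10, 10)]
z = [0.0, 10.0, 0.0, 10.0]
dem = idw_grid(pts, z, np.linspace(0, 10, 5), np.linspace(0, 10, 5))
```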



Finally we imported the orthophoto into GRASS.



That's all.
This work is licensed under a Creative Commons Attribution 4.0 International License.