Over the last decade, high-resolution elevation data from LiDAR surveys has led to a much better understanding of archaeological features. 4D Research Lab coordinator and ACASA researcher Jitte Waagen has been experimenting with a number of visualization techniques to study the site of Muro Tenente in Apulia, Southern Italy. Muro Tenente is a vast defensive circuit dating to protohistoric (pre-Roman conquest) times that has been under investigation by archaeologists of the Vrije Universiteit Amsterdam.
GIS visualization of terrain models
The LiDAR point data was used to generate a Digital Terrain Model (DTM). A DTM is a raster-based representation of elevation and is most often used as a backdrop for archaeological maps. By transforming LiDAR data into a raster, a whole set of computational GIS techniques becomes available for analysis of the terrain. These techniques can be thought of as image filters, enhancing certain aspects of the image and thereby improving our perception of the data. In his study Jitte used various techniques available in the GIS packages QGIS and SAGA, with very satisfying results. You can read (and see) all about them in an article he recently wrote for the newsletter of the Aerial Archaeology Research Group.
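Many of these "image filter" techniques boil down to simulating illumination of the elevation grid. As a rough illustration of the idea (a minimal sketch, not the exact algorithms implemented in QGIS or SAGA), a basic hillshade can be computed from a DTM array with NumPy:

```python
import numpy as np

def hillshade(dtm, azimuth_deg=315.0, altitude_deg=45.0, cellsize=1.0):
    """Shade a DTM with a distant light source (Lambertian, clamped)."""
    # Surface normals from the elevation gradients; rows assumed north->south.
    dzdy, dzdx = np.gradient(dtm, cellsize)
    normals = np.dstack([-dzdx, dzdy, np.ones_like(dtm)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    # Unit vector pointing towards the light (compass azimuth, 0 = north).
    az, alt = np.radians(azimuth_deg), np.radians(altitude_deg)
    light = np.array([np.sin(az) * np.cos(alt),
                      np.cos(az) * np.cos(alt),
                      np.sin(alt)])
    # Brightness = cosine of the angle between surface normal and light.
    return np.clip(normals @ light, 0.0, 1.0)

# Synthetic "mound" standing in for a real DTM raster.
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
mound = np.exp(-(x**2 + y**2) * 4.0) * 10.0
shade = hillshade(mound)
```

Changing the azimuth or altitude parameters mimics re-running the filter with a different simulated sun position, which is exactly the knob the GIS visualizations turn.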
Real time rendering of a moving light source
The 4D Research Lab (or in fact, me, the author of this blog post 😉 ) also made a small contribution to this study. The reason was that although the GIS techniques did give excellent results, they are not particularly interactive. Most of the analytical techniques that Jitte used involve changing the shading and lighting of the model to enhance certain details, but they take some time to calculate. In 3D modelling software, however, light can be simulated in real time, which allows for a more playful approach to visualization. For the next step we visualized the DTM in Blender 2.80.
I’ll give some technical info about the procedure in case you would like to reproduce this. It is not that hard, and most archaeologists should be able to do this themselves. In order to transform a DTM (a raster image) into a 3D mesh, you need two ‘modifiers’ in Blender: the subdivision surface modifier and the displace modifier. The first subdivides a simple mesh into many (millions of) faces. The second reads the raster cells and uses their values to displace the mesh to the corresponding elevations. The modifier workflow is part of Blender’s basic functionality, but the process is automated by the Blender GIS plugin. The plugin also offers some terrain analysis commonly available in GIS, such as elevation, aspect, and slope, which can likewise be used as a visual tool. The workflow for getting DTM grids into Blender is explained in more detail here.

A tip for those wanting to reproduce this: I noticed that the single subdivision surface modifier that is automatically applied by the GIS plugin does not give sufficient detail. To increase the detail, simply add another subdivision surface modifier on top of the first one. Also, don’t forget to ramp up the number of ‘subdivisions’ for each modifier. The result: a dense, detailed mesh with millions of faces representing a terrain.
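What the two modifiers do can also be sketched outside Blender. The snippet below is a minimal NumPy stand-in for the subdivide-then-displace logic, using a tiny synthetic raster rather than a real DTM (all names and values here are illustrative, not the plugin’s actual code):

```python
import numpy as np

# Stand-in DTM raster: a 4x4 grid of elevations in metres.
dtm = np.random.default_rng(0).random((4, 4)) * 5.0

# "Subdivision surface": build a denser grid of vertices than the raster.
subdiv = 4                                # each raster cell becomes 4x4 faces
ny, nx = dtm.shape
ys = np.linspace(0, ny - 1, (ny - 1) * subdiv + 1)
xs = np.linspace(0, nx - 1, (nx - 1) * subdiv + 1)
grid_y, grid_x = np.meshgrid(ys, xs, indexing="ij")

# "Displace": sample the raster (bilinear) and lift each vertex to that height.
x0, y0 = np.floor(grid_x).astype(int), np.floor(grid_y).astype(int)
x1, y1 = np.clip(x0 + 1, 0, nx - 1), np.clip(y0 + 1, 0, ny - 1)
fx, fy = grid_x - x0, grid_y - y0
z = ((1 - fx) * (1 - fy) * dtm[y0, x0] + fx * (1 - fy) * dtm[y0, x1]
     + (1 - fx) * fy * dtm[y1, x0] + fx * fy * dtm[y1, x1])

vertices = np.stack([grid_x, grid_y, z], axis=-1)   # (rows, cols, xyz)
```

This is why the number of subdivisions matters: the raster can only express as much detail as there are vertices to displace, so too coarse a mesh flattens out exactly the subtle features you are looking for.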
As mentioned, in 3D modelling programs the user has full control over the lighting, and its effects are shown in the viewport in real time. Settings such as the strength, direction and colour of the light, and the crispness of the shadows, can be adjusted. The great thing about this workflow is that moving the light source, and the shadows with it, further enhances the perception of details in the surface model. A simple animation, also easily made in Blender, shows the results of this type of visualization.
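To get a feel for why a moving light helps, here is a small illustrative sketch (not Blender code, and the numbers are made up for the example): a directional light is swept around a single sloped surface, and the Lambertian shading is recorded per frame. The slope lights up most when the light faces it, which is why features with different orientations pop out at different moments of the sweep:

```python
import numpy as np

# Unit normal of an east-facing slope (x = east, y = north, z = up).
normal = np.array([0.3, 0.0, 1.0])
normal /= np.linalg.norm(normal)

frames = []
for azimuth in range(0, 360, 15):          # one frame per 15 degrees
    az, alt = np.radians(azimuth), np.radians(45.0)
    light = np.array([np.sin(az) * np.cos(alt),   # unit vector towards light
                      np.cos(az) * np.cos(alt),
                      np.sin(alt)])
    frames.append(max(float(normal @ light), 0.0))  # clamped cosine shading

# The frame where this slope is brightest: light coming from the east (90°).
brightest = 15 * int(np.argmax(frames))
```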
For the archaeological interpretation of the results I refer you to Jitte’s article.