Combining Aerial & Ground-Based Photogrammetry Is F@#$%en EXCITING!

The title says it all, really. Combining the two methodologies is massive! Yet the potential applications are still somewhat unknown, and that alone causes me to lose far too much sleep. It is a truly exciting industry we are entering!

Currently there is a flood of interest in aerial topographic surveying and mapping using unmanned aerial vehicles (UAVs), which was almost non-existent a year ago when we did our first feasibility study. Using small, portable, prosumer-grade drones makes a huge amount of sense compared with the archaic methods of the past. Even the photogrammetry methods of a few years ago seem somewhat dated: large fixed-wing aircraft with their pilots, ridiculously expensive 1000 MP cameras, and fuel bills on top. That approach was certainly not practical for smaller applications and was financially out of reach for many small businesses and farmers. Here we are, a fresh startup, and we can now do this: we have waypoint navigation, reliable GPS data, the technical know-how, and I'm a pro at Pix4D and PhotoScan.

However, in all honesty, it is not a love of maps that catapulted me into this industry; maps are just a little bit boring, yawn! I jumped into the field of photogrammetry for far more personal reasons, which I will elaborate on another time, perhaps. Photogrammetry feels far more suited to capturing assets and environments for gaming, virtual reality, VFX applications and even PTSD treatment. Still, maps are important (prospective clients are waiting).

So the challenge was this. When doing ground-level capture of environments, be it a derelict inner-city alleyway, a brick relic of yesteryear for historical preservation, or the unearthly Mars-like environments in Auckland's own backyard, we always ran into the problem of not being able to capture from every angle; the awnings and roofs of buildings, for example, were always an issue. We tried elongated selfie sticks with remote triggers, scaling trees like a possum, and jumping onto the roofs of adjacent buildings, all with limited success and a fairly high chance of injury. Then one day it just dawned on me: "hold on! Holy shit, I have a drone sitting right there!" As luck would have it, drones became a passion around the same time I got into 3D environmental capture, so what would happen if you put the two together? One must carry the notion of ideas having sex at all times; it is the combination of technologies usually seen in juxtaposition, unfamiliar bedfellows coming together. That was that!

Let's elaborate on the benefits, because they are substantial. Combining aerial and ground-based photogrammetry datasets lets you capture the bigger picture from aerial photography while focusing on the tricky, intricate details of interest at ground level. This works well for VR applications, as most of the detail in the game engine should be concentrated where it is actually seen, i.e. from the first-person (FPV) perspective; the aerial perspective mostly just fills in the gaps. The amazing thing is that it all just works: two completely different datasets, two different camera sensors, and two very different lenses. A certain amount of lens and colour correction, calibration, trial and error and workflow refinement is obviously required, and that is all part of research and development, but the fact is it mostly works. There are some artifacts, anomalies and imperfections, and to a trained eye they are quite noticeable from certain angles. For the current workflow and proof of concept, however, it is more than sufficient, and it can be resolved with better datasets. It is a minor, temporary annoyance until we order our new UAV with its higher payload capacity.
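To make "combining the datasets" a little more concrete, here is a minimal sketch of the idea using the PhotoScan (now Agisoft Metashape) Python scripting interface: both photo sets are loaded into a single chunk so the tie-point matching and bundle adjustment are solved jointly, rather than stitching two separate reconstructions together afterwards. The folder paths are placeholders, and method names and arguments differ between versions, so treat this as an outline of the workflow rather than our exact pipeline.

```python
# Rough sketch: feed UAV and ground-level photos into ONE chunk so the
# sparse reconstruction is solved jointly across both datasets.
# Paths are placeholders; API details vary between PhotoScan and Metashape.
import glob
import PhotoScan  # "import Metashape" in current releases

doc = PhotoScan.app.document
chunk = doc.addChunk()

# Both datasets in the same chunk; the software estimates a separate lens
# calibration per camera model, so the drone sensor and the ground camera
# keep their own intrinsics while sharing one coordinate frame.
aerial = sorted(glob.glob("datasets/aerial/*.JPG"))   # placeholder path
ground = sorted(glob.glob("datasets/ground/*.JPG"))   # placeholder path
chunk.addPhotos(aerial + ground)

# Joint feature matching and camera alignment across the combined set.
chunk.matchPhotos()    # defaults shown; accuracy/preselection worth tuning
chunk.alignCameras()

# Dense surface and texture for export to a game engine (newer versions
# split this into buildDepthMaps()/buildPointCloud() before buildModel()).
chunk.buildDenseCloud()
chunk.buildModel()
chunk.buildUV()
chunk.buildTexture()

doc.save("combined_aerial_ground.psx")
```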

This is the beauty of having time to just play, research, whatever you want to call it. I love it, so it's playtime for me! We happened to be playing around with drones, happened to be dabbling in photogrammetry, hacked Kinect scanners, structure from motion, Photosynth; even my Oculus Rift was feeling a little left out. All this, in conjunction with a background in UE4, 3D modelling and all things Adobe post-production, made it a natural transition; this road had been laid down years ahead.

So yeah: gaming, historical preservation, virtual reality tours of beautiful cathedrals inside and out with photorealism, all captured in the course of a day! How exciting is that! One can only imagine how much we could have preserved before, let's say, the Christchurch earthquakes. I have also found a lot of pushback, and quite frankly negative feedback, regarding these methods. A big chunk of the CG gaming artist community are truly shitting themselves; trust me, I've had the beautifully worded abusive forum confrontations and the conflicts at gaming meetups! However, I've also had some very influential players at Autodesk and elsewhere encourage my endeavour. Interesting times! What is even more exciting is being able to reproduce these environments with camera angles and lighting techniques unachievable in the real world, while still making them look real, as they essentially are. We are walking inside our photographs! It is what we have dreamed about!