The making of the 'Photogrammetry to VR' teaser

Photogrammetry is a huge deal right now! One could argue that it should have been a huge deal a long time ago. When I first started toying with the 3D reconstruction of assets from photos, it seemed like pure magic! The true potential of photogrammetry combined with VR applies to so many avenues: virtual tours, digital cultural & archaeological preservation, gaming, the 'experience' market & VR cinematography, to name a few.

Back in 2008, TEDTalks & Microsoft politely introduced me to Photosynth. Not truly realising back then what I was witnessing, I remember clearly thinking to myself, "man, I would love to recreate our home using this". How those thoughts would echo into the future. The driving force behind going down this rabbit-hole is a very personal story, with life events of burning buildings, fiery escapes & great personal loss. One could truly say that necessity was the greatest driving force for innovation. Perhaps I shall elaborate another time.

The efficiency gained from perfecting & refining a photogrammetry-to-VR workflow cannot be overstated: the technology genuinely allows for stunningly rapid 'user experience' development, an order of magnitude faster than traditional game asset & environment creation methodologies. It is a process that will only improve over time with autonomous technologies (think automated UAVs for capturing locations; we're working on it). In my opinion, photogrammetry also achieves the holy grail that artists have been striving toward for many years: leaping over the uncanny valley!

The uncanny valley is difficult. It boils down to the fact that the ‘perfection of imperfection’ is notoriously hard to create via traditional means. An artist can be skilled and articulate in every way, yet no matter how realistic an asset appears, when experiencing that content in VR the mind is not so kind. Our conscious mind may not even be aware of the intricate details, but our unconscious screams out that something is simply not right. Think about it for a moment: the way rust has formed on faucets & pipes; how leaves, dust and rubbish might have accumulated in the corner of an abandoned warehouse over time due to the elements; the degradation of brickwork & masonry from years of abuse. These are all subtle cues to a prolonged environment of deterioration, something not so easily faked. Because nobody is perfect, our artistic endeavours can't compete with nature.

This ‘Photogrammetry to VR’ teaser is actually a remarkably simple, rather rushed demonstration of the process (we had never intended the fanfare). The whole process for this particular scene, from the initial photography to the virtual-reality-ready experience, took less than a day (only an hour if you exclude the processing dead-time), thanks to our semi-automated processes.

A number of different applications are used, in this order:

1. image importing & image processing
2. point-cloud alignment
3. dense point cloud creation
4. point cloud noise, particle & outlier removal
5. mesh formation & hole closure
6. decimation
7. retopology
8. UV unwrapping
9. 1B+ dense point cloud to low-poly bake (diffuse / normal / AO / roughness / displacement maps)
10. eventual importation of the assets into UE4 or Unity
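As a small illustration of the noise & outlier removal step, here is a minimal sketch using the open-source Open3D library, assuming the dense cloud has been exported as a PLY file. Open3D is not necessarily part of our own pipeline, and the file names and parameters are illustrative:

```python
import open3d as o3d

# Load the dense point cloud exported from the alignment stage
# (file name is hypothetical).
pcd = o3d.io.read_point_cloud("dense_cloud.ply")

# Light voxel down-sampling so duplicated points don't skew the
# outlier statistics.
pcd = pcd.voxel_down_sample(voxel_size=0.005)

# Statistical outlier removal: points whose mean distance to their
# 20 nearest neighbours sits more than 2 standard deviations from
# the global average are treated as noise / floating particles.
clean, kept_indices = pcd.remove_statistical_outlier(
    nb_neighbors=20, std_ratio=2.0)

o3d.io.write_point_cloud("dense_cloud_clean.ply", clean)
```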

We can’t list all of our specific packages, as the process is constantly evolving and personally took me thousands of hours over a good 18 months of consistent research, adaptation & communication. Aspects of this pipeline/workflow were not even discovered, or even doable, a few months ago. It’s the new software introduced by friends of mine in the industry, and early alpha access to many of these packages, that have allowed us to do what we do. The learning process was complex; I can honestly tell you that I did not come from a game development or photography background whatsoever. I learnt it all from scratch, as prior to this adventure I was really just making music and editing videos in Adobe Premiere.

The equipment used was hardly high-end: a simple prosumer Canon 600D & a Phantom 2 with a GoPro. In this demo we didn't even use the Phantom. The lighting was very poor in the late afternoon; I was just on my way home and took a few quick photos, about 200 I believe. Our far more ambitious projects contain about 2000 photos and we are using slightly better equipment, but in all honesty, if you know a thing or two about signal processing and cleaning up noise, you can get a hell of a lot out of prosumer equipment.
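To give a flavour of what 'cleaning up noise' can mean in practice, here is a minimal pre-processing sketch that runs OpenCV's non-local means denoiser over the source photos before reconstruction. The paths and parameters are illustrative assumptions, not our exact settings:

```python
import glob
import os

import cv2

os.makedirs("photos_clean", exist_ok=True)

# Denoise each source photo before feeding it to the photogrammetry
# solver; sensor noise otherwise ends up as floating points in the
# dense cloud.
for path in glob.glob("photos/*.jpg"):
    img = cv2.imread(path)

    # Non-local means denoising: h sets the luminance filtering
    # strength, hColor the chroma strength. Push them too high and
    # the fine surface detail photogrammetry relies on is lost.
    clean = cv2.fastNlMeansDenoisingColored(
        img, None, h=5, hColor=5,
        templateWindowSize=7, searchWindowSize=21)

    cv2.imwrite(os.path.join("photos_clean", os.path.basename(path)), clean)
```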

The scene is just one big slab of texture-streaming quads & coloured vertices, no segmentation whatsoever. Lighting is interesting; with this scene in particular, I’d best describe it as a hybrid approach. Basically, we flatten the highlights and lift the shadows in the baked textures, then reintroduce them by mixing emissive baked lighting with dynamic lighting placed in the same positions; this dynamic lighting reintroduces the highlights & shadows.
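A minimal sketch of the 'flatten highlights / lift shadows' texture pass, modelled here as a simple contrast compression of the lightness channel. Our actual pass is more involved; the file name and flattening factor are illustrative assumptions:

```python
import cv2
import numpy as np

# Compress the lightness channel of a baked diffuse texture toward
# mid-grey, flattening highlights and lifting shadows so dynamic
# lights in the engine can reintroduce them.
img = cv2.imread("baked_diffuse.png")  # hypothetical file name
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB).astype(np.float32)

FLATTEN = 0.6  # 1.0 = untouched, 0.0 = fully flat (illustrative)
lab[..., 0] = 128 + (lab[..., 0] - 128) * FLATTEN

flat = cv2.cvtColor(np.clip(lab, 0, 255).astype(np.uint8),
                    cv2.COLOR_LAB2BGR)
cv2.imwrite("baked_diffuse_flat.png", flat)
```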

We try to shoot on overcast, cloudy days, using the clouds as a big diffuser in the sky; however, we have been experimenting with full-blown daylight. Capturing the elements at a point in time, with genuine lighting, to embrace absolute realism has its advantages, and we'll soon experiment with HDR as well. Basically, there are many ways to skin a cat: this scene was overcast, some future scenes will not be, and others are completely de-lit using proprietary techniques. De-lighting is easy to do with assets and near impossible to do with environments, but we are finding ways around this.
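For readers unfamiliar with de-lighting, here is a deliberately naive sketch of the idea: estimate the low-frequency illumination baked into a texture with a heavy blur, then divide it out to approximate the underlying albedo. This is a toy illustration of the concept only, not our proprietary technique, and the file names and blur radius are assumptions:

```python
import cv2
import numpy as np

# Naive de-lighting: treat a heavily blurred copy of the texture as
# an estimate of the baked-in illumination, then divide it out to
# recover an approximate albedo. Real de-lighting is far more
# sophisticated; this only illustrates the concept.
img = cv2.imread("asset_diffuse.png").astype(np.float32)  # hypothetical

illum = cv2.GaussianBlur(img, (0, 0), sigmaX=51)  # low-frequency light
albedo = img / np.maximum(illum, 1.0) * illum.mean()

cv2.imwrite("asset_albedo.png", np.clip(albedo, 0, 255).astype(np.uint8))
```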

UE4 is by far the best engine to work with for photogrammetry, simply because of its cinematic roots, Matinee, its amazing lighting engine and, for someone with NO prior experience, its ease of use. Many of the applications we use are currently not on the market, obscure, under alpha development, or make for strange bedfellows. Others that I can mention, however, such as RealityCapture, Adobe Lightroom, ZBrush, MeshLab, xNormal & Maya, were all used in the pipeline.

Photogrammetry for game development is awesome and we see it as a great tool to aid immersive story-telling. Every project is different and emphasis must be placed on how you go about it, factoring in both the advantages & limitations. When tackling large-scale environments, you must keep in mind that these objects are not going to have a certain level of interactivity; they don't retain physics or individual properties. However, for asset capture of smaller objects using de-lighting techniques, the process & outcome is priceless! Combining asset, environmental & now volumetric VR content (the capture of 4D human performances), we are well on our way to a very new unknown: a complete reinvention of story-telling, educational experiences & other unique media consumption opportunities. That truly excites me.