realityvirtual.co
We encapsulate experience through Photogrammetry & Artificial Intelligence for Real Time Delivery
 

“ML to the rescue: these mundane, pixel-pushing tasks are a thing of the past. We have data, and deep learning loves data.”

SIMON CHE DE BOER  |  CREATIVE VISIONARY OFFICER

 
 

3.5D RGBD Hybrid Volumetric Video

Our 3.5D Hybrid Volumetric capture is a lightweight technology capable of transferring hybrid volumetric video content to a wide ecosystem of devices. It is platform agnostic, allows full relighting and a PBR-friendly workflow, and utilizes core technologies for noise reduction and surface normal extrapolation. More importantly, the technique is extremely light on both bandwidth and processing power. No longer do we need to download volumetric video; we can stream it in all its fully relightable glory. More information in this 2018 technology document. Many more advances are coming to this platform as we combine these techniques with the latest Microsoft Azure Kinect sensors and deepPBR.
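
To give a feel for why this is so light on bandwidth, here is a minimal sketch of the delivery model: colour and depth packed side by side in an ordinary 2D video stream, unprojected to relightable geometry on the client. The packing layout, 8-bit depth, stream name and camera intrinsics below are illustrative assumptions, not our production format.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("volumetric_stream.mp4")  # any ordinary codec/transport
fx = fy = 525.0            # assumed pinhole intrinsics
cx, cy = 319.5, 239.5

while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[0], frame.shape[1] // 2
    rgb = frame[:, :w]                                        # left half: colour
    # right half: depth (8-bit here only to keep the sketch simple;
    # a real pipeline would use a higher bit depth), mapped to 0-4 m
    depth = frame[:, w:, 0].astype(np.float32) / 255.0 * 4.0

    # Unproject every pixel to a 3D point
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.dstack([x, y, z])

    # Surface normals from depth gradients: these are what let a standard
    # PBR shader relight the footage downstream
    dzdx = cv2.Sobel(z, cv2.CV_32F, 1, 0, ksize=3)
    dzdy = cv2.Sobel(z, cv2.CV_32F, 0, 1, ksize=3)
    normals = np.dstack([-dzdx, -dzdy, np.ones_like(z)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
```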

Covid-19 Major update: This pre-existing technology will be retrofitted for the purpose of real-time immersive tele-conferencing. This technique is based on the prior research and applications developed for volumetric video recording and playback for the purposes of VR. The technology is being repackaged to allow a ‘one-on-one’ tele-presence experience that is not too dissimilar as to an experience of talking to another person through a sheet of glass. Codename: White MIrror. Full posture / face / eye spacial tracking, 3D parallax and dynamic scene relighting of both parties in a one-on-one experience using nothing more than a standard Windows Kinect and / or Webcam in conjunction with a standard desktop monitor or television. Creating an experience that makes Skype & Zoom look like they are from the dark ages. Direct eye contact is vital for intimacy and a sense of a shared physical space among two people is critical for a ‘being there’ sensation. Truly immersive tele-conferencing able to be delivered on standard telecommunications infrastructure. An early BETA is aimed to be available as early as June.
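
The 3D-parallax half of the trick is well established and worth sketching: given the viewer's tracked head position, the scene is rendered through an asymmetric view frustum anchored to the physical screen (Kooima's generalized perspective projection). The NumPy sketch below assumes metre units and known screen corner positions; it is an illustration of the principle, not our implementation.

```python
import numpy as np

def off_axis_projection(eye, pa, pb, pc, near, far):
    """Asymmetric frustum for a tracked eye position.
    pa, pb, pc: screen lower-left, lower-right, upper-left corners (metres).
    """
    vr = pb - pa; vr /= np.linalg.norm(vr)          # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)          # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn) # screen normal
    va, vb, vc = pa - eye, pb - eye, pc - eye       # eye -> corner vectors
    d = -np.dot(va, vn)                             # eye-to-screen distance
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    # Standard glFrustum-style projection matrix; a full implementation
    # also rotates into screen space and translates by -eye in the view
    # matrix (see Kooima, "Generalized Perspective Projection", 2008)
    return np.array([
        [2*near/(r-l), 0,            (r+l)/(r-l),            0],
        [0,            2*near/(t-b), (t+b)/(t-b),            0],
        [0,            0,           -(far+near)/(far-near), -2*far*near/(far-near)],
        [0,            0,           -1,                      0]])
```

Re-deriving this frustum every frame from the Kinect's head position is what yields the ‘sheet of glass’ parallax on a flat monitor.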

 
 
 

deepPBR

deepPBR (deep learning driven physically based rendering image extrapolation) is an intelligent image processing tool that aids in the creation of textures for use in the VFX industry. Our philosophy is guided by the standardized PBR workflow.

With deepPBR, an individual armed with nothing more than a reference image from a smartphone or digital camera can create all the texture maps required by modern game engines and 3D packages from a single photograph. This essentially allows one person to build a vast and original texture library in days, an undertaking that would take a team of texture artists months.

Our deep learning algorithm extrapolates all the textures required for a PBR workflow: albedo, roughness, normals and displacement, each contextually aware and consistent across the board. We are adding metallic, vector displacement, cavity and ambient occlusion maps in the near future.
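
The shape of the workflow is simple enough to sketch: one photograph in, a full PBR texture set out. The generator checkpoints, filenames and resolution below are illustrative assumptions; deepPBR itself is proprietary.

```python
import torch
from torchvision import transforms
from PIL import Image

MAPS = ["albedo", "roughness", "normal", "displacement"]

to_tensor = transforms.Compose([
    transforms.Resize((1024, 1024)),
    transforms.ToTensor(),
])

# A single reference photograph is the only input
photo = to_tensor(Image.open("brick_wall.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    for name in MAPS:
        # one trained generator per output map (hypothetical checkpoints)
        generator = torch.jit.load(f"deeppbr_{name}.pt").eval()
        out = generator(photo).squeeze(0).clamp(0, 1)
        transforms.ToPILImage()(out).save(f"brick_wall_{name}.png")
```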

Our current four GANs have been trained to cater for most natural lighting environments, camera resolutions, and chromatic, lens and noise profiles. In a nutshell, the system has a good threshold for natural variation and can remove baked-in lighting and other unwanted artefacts from some very challenging use cases. It can also produce contextually aware normal maps and the other required outputs, simply because we have already shown the system hundreds of thousands of examples. Our next-generation GANs will be trained on millions. Much more information can be found here.
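
To give a flavour of how that tolerance for natural variation is built, training photographs can be perturbed with randomized lighting, lens and sensor effects. The specific transforms and parameter ranges here are our illustrative guesses, not the production recipe.

```python
import torch
from torchvision import transforms

# Simulated camera/lighting variation applied to each training photograph
camera_variation = transforms.Compose([
    transforms.ColorJitter(brightness=0.4, contrast=0.3,
                           saturation=0.3, hue=0.05),        # lighting & chromatic shifts
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)), # lens softness
    transforms.ToTensor(),
    transforms.Lambda(lambda t: (t + 0.02 * torch.randn_like(t)).clamp(0, 1)),  # sensor noise
])
```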

 
 

Single Pass Photogrammetry

It is all about utilizing the latest advancements in photogrammetry acquisition, processing and post-processing alongside cutting-edge machine learning principles. We are expanding our vast tool-set: an arsenal of independent technologies to aid in rapid, successive data acquisition and content creation. These tools, created in-house out of necessity, have allowed our results to be an order of magnitude higher in detail and versatility than those of any competitor. Our ability to operate on our environments with full post-production functionality while retaining full resolution is our edge: the benefit of being first and foremost a VFX R&D collective. Our 2017 technology outline documentation can be found here.
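
Our in-house tools are not public, but the canonical stages they accelerate can be sketched with the open-source COLMAP CLI as a stand-in: feature extraction, matching, sparse reconstruction and dense reconstruction, driven end to end from a script. Paths are placeholders.

```python
import subprocess

def run(*args):
    # Invoke one COLMAP stage; abort the pipeline on failure
    subprocess.run(["colmap", *args], check=True)

IMAGES, WORK = "shoot/images", "shoot/work"

run("feature_extractor", "--database_path", f"{WORK}/db.db",
    "--image_path", IMAGES)
run("exhaustive_matcher", "--database_path", f"{WORK}/db.db")
run("mapper", "--database_path", f"{WORK}/db.db",
    "--image_path", IMAGES, "--output_path", f"{WORK}/sparse")
run("image_undistorter", "--image_path", IMAGES,
    "--input_path", f"{WORK}/sparse/0", "--output_path", f"{WORK}/dense")
run("patch_match_stereo", "--workspace_path", f"{WORK}/dense")
run("stereo_fusion", "--workspace_path", f"{WORK}/dense",
    "--output_path", f"{WORK}/dense/fused.ply")
```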

 
 

nDisplay for Museology - powered by Unreal Engine

Interactive content isn't limited to being displayed on a single screen, or even a single dual-screen device like a VR headset. An increasing number of visualization systems aim to immerse the viewer more effectively in the game environment by rendering real-time content through multiple simultaneous displays. These systems may be made up of multiple adjacent physical screens, such as a Powerwall display; or they may use multiple projectors to project the 3D environment onto physical surfaces like domes, tilted walls, or curved screens, such as in a Cave virtual environment.

Our current Unreal Engine build, combined with Granite, allows truly immersive and extremely detailed experiences to be delivered to the museology and entertainment sector. Unreal Engine supports these usage scenarios through a system called nDisplay, which addresses some of the most important challenges in rendering 3D content simultaneously to multiple displays. Our presentation below truly showcases why this is such a game changer when combined with our photogrammetry pipeline.
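
For flavour, a minimal two-node nDisplay cluster in the UE4-era .cfg syntax looks roughly like the following: two PCs, each driving one panel of a two-screen wall, with the screens positioned in physical space relative to a shared tracked camera. All IDs, addresses and dimensions are placeholders, not our installation's values.

```
[info] version="23"
[cluster_node] id="node_L" addr="192.168.1.101" window="wnd_L" master="true"
[cluster_node] id="node_R" addr="192.168.1.102" window="wnd_R"
[window] id="wnd_L" viewports="vp_L" fullscreen="true"
[window] id="wnd_R" viewports="vp_R" fullscreen="true"
[viewport] id="vp_L" x="0" y="0" width="1920" height="1080" projection="proj_L"
[viewport] id="vp_R" x="0" y="0" width="1920" height="1080" projection="proj_R"
[projection] id="proj_L" type="simple" screen="scr_L"
[projection] id="proj_R" type="simple" screen="scr_R"
[screen] id="scr_L" loc="X=1,Y=-0.8,Z=0" rot="P=0,Y=0,R=0" size="X=1.6,Y=0.9"
[screen] id="scr_R" loc="X=1,Y=0.8,Z=0"  rot="P=0,Y=0,R=0" size="X=1.6,Y=0.9"
[camera] id="camera_static" loc="X=0,Y=0,Z=0"
```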

 
 

deepRETOPOLOGY: the holy grail of fully automated real-world encapsulation

With the advent of advanced automated deep-learning techniques in conjunction with rapid GPU acceleration, we foresee a dramatic reduction in the human hours required for experience creation. realityvirtual.co has demonstrated many automation techniques that still seem to be at arm’s length from the greater photogrammetry community. It is our intention to commercialize and democratize many of these techniques, providing a means for expeditious environmental encapsulation with minimal human input. We are shifting towards a far greater initiative. One could simply say, we are directing our efforts towards backing up the planet.