DeepView is a research paper presented at SIGGRAPH 2020. The paper presents a method for capturing, processing, compressing, and rendering light field video on consumer-grade hardware.
A light field is a capture of all the rays of light within a volume. It allows a virtual camera to be moved anywhere within the captured volume. One use case for light fields is viewing a space in VR with translational motion, usually referred to as 6 degrees of freedom, or 6DoF. The benefits of light fields are most apparent when viewing shiny or semi-transparent objects, where reflections react accurately to the movement of the virtual camera.
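As a rough illustration of the idea (not the paper's actual layered-mesh method), a light field can be treated as a 4D function over camera position and pixel position, and a virtual camera view can be synthesized by interpolating between nearby captured views. The minimal sketch below assumes a toy two-plane parameterization with a synthetic grid of views; all names and data here are hypothetical.

```python
import numpy as np

# Toy light field L[u, v, s, t] in a two-plane parameterization:
# (u, v) indexes the camera position on one plane, (s, t) the pixel
# on the image plane. Synthetic data stands in for a real capture.
U, V, S, T = 4, 4, 8, 8
rng = np.random.default_rng(0)
light_field = rng.random((U, V, S, T))

def sample(lf, u, v, s, t):
    """Bilinearly interpolate between the four nearest captured views
    at fractional camera position (u, v), for pixel (s, t)."""
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    u1 = min(u0 + 1, lf.shape[0] - 1)
    v1 = min(v0 + 1, lf.shape[1] - 1)
    fu, fv = u - u0, v - v0
    top = (1 - fu) * lf[u0, v0, s, t] + fu * lf[u1, v0, s, t]
    bot = (1 - fu) * lf[u0, v1, s, t] + fu * lf[u1, v1, s, t]
    return (1 - fv) * top + fv * bot

# Translating the virtual camera corresponds to sweeping (u, v)
# continuously between the captured positions.
value = sample(light_field, 1.5, 2.25, 3, 4)
```

This is only the core interpolation idea; real systems like the one in the paper reproject views through scene geometry rather than blending raw rays, which is what makes view-dependent reflections hold up under camera motion.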
My contributions to the team were varied: shooting light fields with the camera rig, writing pipeline ingest scripts, prototyping viewing experiences in Unity on desktop and mobile devices, capturing and integrating audio, and building the project web page.
Technical Artist
2020