Spatio-Temporal Processes for Volumetric Video Content Creation
Citation:
Moynihan, Matthew. Spatio-Temporal Processes for Volumetric Video Content Creation. Trinity College Dublin, School of Computer Science & Statistics, Computer Science, 2022.
Download Item:
PhD_Thesis_Matt.pdf (PDF) 51.09 MB
Abstract:
Volumetric video is an emerging media platform that has recently undergone many captivating developments. Its recent uptake in consumer media suggests that the platform is approaching maturity. Nevertheless, a very large barrier to entry remains for content creators, as the technological requirements far exceed the means of budget-constrained creators. Even well-resourced creators find it difficult to manage the large data footprint of volumetric video. Hence, there is strong demand from these communities for new systems that improve the quality and accessibility of this medium. Techniques that enforce spatio-temporal coherence have yielded great success with traditional 2D video, from quality improvements to reduced data compression overheads. In this dissertation we investigate how spatio-temporal processes may be applied to volumetric video content creation, with the ultimate goal of improving quality and accessibility by means of editing and compression. Specifically, we investigate three applications: upsampling and filtering of point cloud sequences, autonomous tracking and registration of mesh sequences, and frameworks for learnable registration of mesh sequences. Improvements to point cloud sequences allow volumetric video content pipelines to establish spatio-temporal coherence from early-stage reconstructions, propagating these qualities through to the final volumetric mesh outputs. Tracking and registration of meshes further improve the quality of volumetric video while also adding temporal redundancy that can be exploited for compression. Finally, deep learning offers faster processing times and provides a framework for more spatio-temporally aware network architectures.
Across these three applications, this dissertation seeks to answer the question: how can spatio-temporal processes be applied to improve volumetric video content creation?
Sponsor:
Science Foundation Ireland (SFI)
Author's Homepage:
https://tcdlocalportal.tcd.ie/pls/EnterApex/f?p=800:71:0::::P71_USERNAME:MAMOYNIH
Description:
APPROVED
Author: Moynihan, Matthew
Advisor:
Smolic, Aljosa
Publisher:
Trinity College Dublin. School of Computer Science & Statistics. Discipline of Computer Science
Type of material:
Thesis
Availability:
Full text available
Keywords:
Volumetric Video, Spatio-temporal, Free Viewpoint Video, VR, XR