Virtual reality applications often display video data close to the eyes, where a high resolution increases immersion and realism. However, the resulting bandwidth requirements exceed what current hardware can sustain, limiting the choice of video resolutions. To reduce bandwidth and computation requirements, the amount of data transferred for high-resolution video streams must be restricted. This thesis develops an approach to high-resolution video streaming that utilizes foveated imaging. To load raw video data at dynamic resolution levels simulating visual acuity, a suitable data format is developed that can load video regions at different quality settings. Furthermore, an algorithm is designed to efficiently incorporate visual acuity and select the set of video data best suited to the user's field of view. This reduces the amount of raw data streamed to the graphics unit, as only a small region is displayed at maximum resolution. The approach is implemented and tested for wide-screen and panoramic video material. The implementations are evaluated in terms of performance, robustness, and user perception. A discussion of the results and limitations, together with an analysis of open problems for future work, concludes the thesis.
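The core idea of foveated streaming, selecting a quality level per video region based on its angular distance from the gaze point, can be illustrated with a minimal sketch. All names, thresholds, and the acuity falloff below are assumptions for illustration, not details taken from the thesis:

```python
import math

def select_tile_quality(tile_center_deg, gaze_deg):
    """Pick a quality level for a video tile from its angular distance
    to the gaze point. Level 0 is full resolution; higher levels are
    coarser. The thresholds model a hypothetical acuity falloff."""
    dx = tile_center_deg[0] - gaze_deg[0]
    dy = tile_center_deg[1] - gaze_deg[1]
    eccentricity = math.hypot(dx, dy)  # degrees of visual angle
    # Assumed bands: full quality within 5 degrees, then coarser steps.
    for level, limit in enumerate((5.0, 15.0, 30.0)):
        if eccentricity <= limit:
            return level
    return 3  # lowest quality in the far periphery

# A streamer would call this per tile each frame and fetch only the
# selected quality level, so full-resolution data covers a small region.
print(select_tile_quality((0.0, 0.0), (0.0, 0.0)))   # foveal tile
print(select_tile_quality((40.0, 0.0), (0.0, 0.0)))  # peripheral tile
```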
Type: Master's thesis
Publication date: February 2016