Cohere For AI - Guest Speaker: Dr. Matthias Treder, ML Engineer


Date: Jun 04, 2024

Time: 4:30 PM - 5:30 PM

Location: Online

Abstract: In cloud-based gaming and virtual reality (G&VR), scene content is rendered in a cloud server and streamed as low-latency encoded video to the client device. In this context, distributed rendering aims to offload parts of the rendering to the client. An adaptive approach is proposed, which dynamically assigns assets to client-side vs. server-side rendering according to varying rendering time and bitrate targets.
This is achieved by streaming perceptually-optimized scene control weights to the client, which are compressed with a composable autoencoder in conjunction with select video segments. This creates an adaptive render-video (REVI) streaming framework, which allows for substantial tradeoffs between client rendering time and the bitrate required to stream visually-lossless video from the server to the client. To estimate and control the client rendering time and the required bitrate of each subset of each scene, a random-forest-based regressor is proposed, in conjunction with AIMD (additive-increase/multiplicative-decrease) control to ensure that predetermined average bitrate or render-time targets are met. Experiments are presented based on typical sets of G&VR scenes rendered in Blender and HEVC low-latency encoding. A key result is that, when the client provides 50% of the rendering time needed to render the whole scene, up to 60% average bitrate savings are achieved versus streaming the entire scene to the client as video.
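The AIMD control described in the abstract can be sketched as a minimal update rule. This is an illustrative sketch only, not the speaker's implementation; the parameter names, step sizes, and the choice to control the client-side asset share are assumptions:

```python
def aimd_step(assigned_share, measured, target, alpha=0.05, beta=0.5):
    """One AIMD control step (illustrative sketch, not the talk's code).

    assigned_share: fraction of scene assets currently rendered client-side.
    measured/target: the controlled quantity per segment, e.g. client render
    time or stream bitrate.
    """
    if measured <= target:
        # Additive increase: offload a little more while under the target.
        return min(1.0, assigned_share + alpha)
    # Multiplicative decrease: back off sharply once the target is exceeded.
    return assigned_share * beta


# Hypothetical control loop driving client render time toward a target.
share = 0.1
for measured_time in [0.2, 0.3, 0.4, 0.55, 0.45]:  # assumed measurements
    share = aimd_step(share, measured_time, target=0.5)
```

The asymmetry (small additive steps up, large multiplicative steps down) is what lets AIMD hold an average near the target while reacting quickly to overshoot, which matches its use here for meeting predetermined bitrate or render-time budgets.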
