A Benchmarking Framework for Interactive 3D Applications in the Cloud

Authors: Tianyi Liu, Sen He, Sunzhou Huang, Danny Tsang, Lingjia Tang, Jason Mars, Wei Wang

Abstract: With the growing popularity of cloud gaming and cloud virtual reality (VR), interactive 3D applications have become a major class of workloads for the cloud. However, despite their growing importance, there is limited public research on how to design cloud systems that efficiently support these applications, owing to the lack of an open and reliable research infrastructure, including benchmarks and performance analysis tools. The challenges of generating human-like inputs under various sources of system and application randomness, and of dissecting the performance of complex graphics systems, make such an infrastructure very difficult to design. In this paper, we present the design of a novel cloud graphics rendering research infrastructure, Pictor. Pictor employs AI to mimic human interactions with complex 3D applications. It can also provide in-depth performance measurements of the complex software and hardware stack used for cloud 3D graphics rendering. With Pictor, we designed a benchmark suite with six interactive 3D applications. Performance analyses were conducted with these benchmarks to characterize 3D applications in the cloud and reveal new performance bottlenecks. To demonstrate the effectiveness of Pictor, we also implemented two optimizations that address two performance bottlenecks discovered in a state-of-the-art cloud 3D graphics rendering system, improving the frame rate by 57.7% on average.

Submitted to arXiv on 23 Jun. 2020
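To make the abstract's idea of AI-driven interaction concrete, below is a minimal Python sketch of how a benchmark loop might feed machine-generated, human-like inputs to a 3D application while recording per-frame latency. This is an illustration under stated assumptions, not Pictor's actual implementation: the `app.render()` and `app.send_input()` hooks, the action set, and the random stand-in policy are all hypothetical.

```python
import random
import time

# Hypothetical action vocabulary for a generic interactive 3D application.
ACTIONS = ["move_forward", "move_back", "turn_left", "turn_right", "click"]

def choose_action(frame):
    """Stand-in for a learned policy mapping a rendered frame to an input.
    A real agent (as the paper describes) would decide based on the frame;
    here we simply sample uniformly at random."""
    return random.choice(ACTIONS)

def run_benchmark(app, num_frames=1000):
    """Drive the application for num_frames frames and measure frame times.

    `app` is an assumed wrapper exposing render() and send_input();
    these hooks are illustrative, not part of any published API.
    """
    frame_times = []
    for _ in range(num_frames):
        start = time.perf_counter()
        frame = app.render()                   # assumed: render one frame
        app.send_input(choose_action(frame))   # assumed: inject next input
        frame_times.append(time.perf_counter() - start)
    fps = len(frame_times) / sum(frame_times)  # average frames per second
    return fps, frame_times
```

In a full system, the random policy would be replaced by a trained agent so that input sequences remain human-like yet reproducible despite system and application randomness, which is the core difficulty the abstract highlights.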
