Sony Interactive Entertainment (PlayStation) · Meta · Meta Reality Labs · Netflix · Würzburg CVLab

Overview

Welcome to the 1st Workshop on AI for Streaming at CVPR! This workshop aims to unify new streaming technologies, computer graphics, and computer vision from a modern deep learning point of view. Streaming is a $50.11 billion industry in which hundreds of millions of users demand high-quality 4K content every day across different platforms. Computer vision and deep learning have emerged as revolutionary forces for content rendering, image and video compression, enhancement, and quality assessment. From neural codecs for efficient compression to deep learning-based video enhancement and quality assessment, these techniques are setting new standards for streaming quality and efficiency. Moreover, novel neural representations pose new challenges and opportunities for rendering streamable content, allowing us to redefine computer graphics pipelines and visual content.

Call for Papers

We welcome papers addressing topics related to VR, streaming, efficient image/video (pre- & post-)processing, and neural compression. Topics include:

  • Efficient Deep Learning
  • Model Optimization and Quantization
  • Image/Video Quality Assessment
  • Image/Video Super-Resolution and Enhancement
  • Compressed Input Enhancement
  • Generative Models (Image & Video)
  • Neural Codecs
  • Real-Time Rendering
  • Neural Compression
  • Video Pre-/Post-Processing


Instructions and Policies

A paper submission must be in English, in PDF format, and at most 8 pages (excluding references) in double-column format. The paper format must follow the same guidelines as all CVPR submissions. Dual submission is not allowed, and the review process is double blind. Submission site: https://cmt3.research.microsoft.com/AIS2024. Accepted and presented papers will be published after the conference in the CVPR 2024 Workshops proceedings, together with the CVPR 2024 main conference papers.



Important Dates (TBU)

Regular Paper submission deadline (& CVPR resubmissions)    March 22, 2024
Challenges Announcement & Registration starts (*)    Feb 11, 2024
Challenges Final Submission (code & factsheet)    March 27, 2024
Preliminary Challenges Results/Ranking    March 28, 2024
Late & Challenge Paper submission deadline    April 7, 2024
Paper decision notification    April 9, 2024
Camera ready deadline    April 12, 2024

Late Paper submissions apply only to previously reviewed papers. In this fast track for previously reviewed papers, the authors must provide the prior reviews in the supplementary material.
Challenge-related Papers describe challenge solutions and/or challenge-related problems using other datasets.
Paper decision notification: papers can be rejected, accepted without changes, or conditionally accepted (authors must address the changes and feedback from the reviews).

Challenges 🚀

We are happy to host the following grand challenges focused on realistic image/video applications.
Register for the challenges now to receive email updates and news about new challenges.
The workshop challenges prize pool will be over $10,000 🚀 plus cool stuff like PS5s.


The top-ranked participants will receive awards and be invited to present their solutions at the AIS workshop at CVPR 2024.
The challenge reports (if applicable) will be published at the AIS 2024 workshop and in the CVPR 2024 Workshops proceedings.
The participants can submit papers describing their solution to the challenges and/or related problems (more info below).

We also invite you to check the challenges at the New Trends in Image Restoration and Enhancement (NTIRE) workshop.


Keynote Speaker


Professor Alan Bovik (HonFRPS) holds the Cockrell Family Endowed Regents Chair in Engineering in the Chandra Family Department of Electrical and Computer Engineering in the Cockrell School of Engineering at The University of Texas at Austin, where he is Director of the Laboratory for Image and Video Engineering (LIVE). He is also a faculty member of the Wireless Networking and Communication Group (WNCG) and the Institute for Neuroscience. His research interests include digital television, digital photography, visual perception, social media, and image and video processing.



Invited Speakers

Lucas Theis (Google DeepMind)
Saman Zadtootaghaj (Sony PlayStation)
Ryan Lei (Meta)
Christos Bampis (Netflix)

Schedule Details (TBD) - June 17, 2024

  • 09:00 - 09:30: Opening
  • 09:30 - 10:00: Talk 1
  • 10:00 - 10:30: Talk 2
  • 10:30 - 12:00: Challenge Presentations
  • 12:00 - 13:30: Lunch & Poster Session
  • 13:30 - 14:00: Talk 3
  • 14:00 - 14:30: Talk 4
  • 14:30 - 17:30: Challenge Presentations
  • 17:30 - 18:00: Closing Remarks & Award Ceremony

Organizers

Marcos V. Conde ✉️ (University of Würzburg & Sony PlayStation)
Radu Timofte ✉️ (University of Würzburg)
Daniel Motilla (Sony PlayStation)
Ioannis Katsavounidis (Meta)
Christos Bampis (Netflix)
Rakesh Ranjan (Meta Reality Labs)
Saman Zadtootaghaj (Sony PlayStation)
Ryan Lei (Meta)

Program Committee

Marcos V. Conde (University of Würzburg & Sony PlayStation)
Radu Timofte (University of Würzburg)
Florin Vasluianu (University of Würzburg)
Zongwei Wu (University of Würzburg)
Ioannis Katsavounidis (Meta)
Ryan Lei (Meta)
Wen Li (Meta)
Cosmin Stejerean (Meta)
Shiranchal Taneja (Meta)
Christos Bampis (Netflix)
Zhi Li (Netflix)
Rakesh Ranjan (Meta Reality Labs)
Andy Bigos (Sony PlayStation)
Michael Stopa (Sony PlayStation)
Daniel Motilla (Sony PlayStation)
Saman Zadtootaghaj (Sony PlayStation)
Chang Gao (Delft University of Technology)
Qinyu Chen (University of Zurich and ETHZ & Leiden University)
Zuowen Wang (University of Zurich and ETHZ)
Shih-Chii Liu (University of Zurich and ETHZ)