Scope and topics of the workshop
Nowadays, people spend a significant amount of time consuming various types of streaming video, such as video on demand (VoD) for movies, dramas, or variety shows through Netflix, User-Generated Content (UGC) and AI-Generated Content (AIGC) through Facebook or TikTok, and live streaming for social, gaming, or shopping, benefiting from the popularity of high-speed networks and intelligent terminals. Moreover, with the evolution of hardware and the growing popularity of concepts related to the metaverse, people have far more opportunities and interest in experiencing immersive and interactive multimedia content. Users therefore place increasing demands on the Quality of Experience (QoE) of this visual multimedia, which reflects how well a service or application fulfills the user's enjoyment or expectations. Enhancing the QoE of end users has become the ultimate goal for multimedia service providers. The scope of this workshop covers the QoE assessment of visual multimedia applications, both subjectively and objectively.
The topics include:
- QoE assessment of different visual multimedia applications, including VoD for movies, dramas, and variety shows, UGC on social networks, live streaming videos for gaming/shopping/social, AIGC images or videos, etc.
- QoE assessment for different video formats in multimedia services, including 2D, stereoscopic 3D, High Dynamic Range (HDR), Augmented Reality (AR), Virtual Reality (VR), 360° video, Free-Viewpoint Video (FVV), Point Cloud, Computer-Generated Imagery (CGI), etc.
- Key performance indicator (KPI) analysis for QoE.
Organizers
Dr. Jing Li
Alibaba Group, China
Prof. Xinbo Gao
Xidian University, China
Prof. Patrick Le Callet
University of Nantes, France
Prof. Lucjan Janowski
AGH University of Science and Technology, Poland
Prof. Wen Lu
Xidian University, China
Prof. Jiachen Yang
Tianjin University, China
Dr. Junle Wang
Tencent, China
Program Committee
Prof. Leida Li
Xidian University, China
Prof. Mai Xu
Beihang University, China
Dr. Giuseppe Valenzise
CNRS - CentraleSupelec, France
Prof. Hantao Liu
Cardiff University, U.K.
Prof. Lu Zhang
INSA de Rennes, France
Prof. Guangtao Zhai
Shanghai Jiaotong University, China
Prof. Yuming Fang
Jiangxi University of Finance and Economics, China
Dr. Jiabin Zhang
Alibaba Group, China
Dr. Zhi Li
Netflix Inc., U.S.
Best Paper Award Committee
Patrick Le Callet
University of Nantes, France
Xinbo Gao
Xidian University, China
Ce Zhu
University of Electronic Science and Technology of China, China
Call for Papers
Nowadays, people spend a significant amount of time consuming various types of streaming video, benefiting from the popularity of high-speed networks and intelligent terminals. Moreover, with the growing popularity of the metaverse, advances in hardware, and the diversification of content types, users place increasing demands on the Quality of Experience (QoE) of visual multimedia.
The QoEVMA2024 workshop focuses on the QoE assessment of visual multimedia applications, including key performance indicator (KPI) analysis for different video formats.
The topics of interest of this workshop include, but are not limited to:
  • QoE for traditional and stereoscopic image/video: new research on the evaluation of traditional visual multimedia.
  • QoE assessment for AIGC images/videos.
  • QoE for emerging immersive multimedia and QoE-driven image/video processing: quality in immersive environments (virtual/augmented/mixed reality, 360° videos, free-viewpoint videos).
  • QoE methods and QoE-driven processing for point cloud, light field, and volumetric content.
  • QoE methods for other application scenarios: any application scenario in which QoE is relevant, such as screen content images/videos.
  • QoE methods for visual multimedia based on machine learning: research on QoE methods for any kind of visual information based on new technologies; deep learning approaches are encouraged.
  • QoE-driven mobile visual multimedia processing: QoE applications in mobile scenarios and new research on QoE-based mobile visual multimedia processing.
  • One paper will receive the Best Paper Award, selected by the Best Paper Award Committee.
    Submission

    Submissions follow exactly the same policies as ACM Multimedia regular papers. Please refer to the submission site (https://2024.acmmm.org/regular-papers) for the submission policies.

    Submitted papers (.pdf format) must use the ACM Article Template: https://www.acm.org/publications/proceedings-template. Please remember to add Concepts and Keywords. Please use the template in the traditional double-column format to prepare your submissions. For example, Word users may use the Word Interim Template, and LaTeX users may use the sample-sigconf template.
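
    For LaTeX users, a minimal sigconf skeleton based on the acmart class might look like the sketch below. It is illustrative only: the title, author details, keywords, and CCS concept are placeholders, the CCS concepts are normally generated with the ACM CCS tool, and the official template remains the authoritative reference.

        \documentclass[sigconf]{acmart}
        % Minimal, illustrative skeleton for an ACM double-column (sigconf) submission.

        \begin{document}

        \title{Your Paper Title}

        \author{First Author}
        \affiliation{%
          \institution{Your Institution}
          \city{Your City}
          \country{Your Country}}
        \email{author@example.com}

        \begin{abstract}
        A short abstract of the submission.
        \end{abstract}

        % Concepts: generate the entries with the ACM CCS tool; a placeholder example is shown here.
        \ccsdesc[500]{Computing methodologies~Computer vision}

        % Keywords are required for ACM submissions.
        \keywords{Quality of Experience, visual multimedia, quality assessment}

        \maketitle

        \section{Introduction}
        Body text goes here.

        % Replace 'references' with the name of your .bib file.
        \bibliographystyle{ACM-Reference-Format}
        \bibliography{references}

        \end{document}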

    Submitted papers may consist of up to 8 pages. Up to two additional pages may be added for references. The reference pages must only contain references.

    The review process is single-blind.

    Submission system: https://openreview.net/group?id=acmmm.org/ACMMM/2024/Workshop/QoEVMA

    Workshop Paper Submission Deadline: July 24, 2024 (extended from July 19, 2024)
    Paper Acceptance Notification: August 9, 2024 (extended from August 5, 2024)
    Camera-Ready Version: August 19, 2024 (firm deadline)
    Workshop Date: November 1, 2024
    Have questions?

    Please feel free to send an email to Dr. Jing Li (jing.li.univ@gmail.com, lj225205@alibaba-inc.com) and Dr. Jiabin Zhang (luocheng.zjb@alibaba-inc.com) if you have any questions relating to the workshop.

    Program
    Time slot           Session
    2:00 pm - 2:15 pm   Chair's Welcome
    2:15 pm - 3:00 pm   Keynote session: Towards Real-world Image Quality Assessment, Leida Li, Xidian University
    3:00 pm - 3:45 pm   Session 1: Quality Assessment on 2D Images (15 mins/presentation)
                        - No-Reference Image Quality Assessment Using Local Binary Patterns: A Comprehensive Performance Evaluation
                        - A Metric for Evaluating Image Quality Difference Perception Ability in Blind Image Quality Assessment Models
                        - No-Reference Image Quality Assessment via Local and Global Multi-Scale Feature Integration
    3:45 pm - 4:00 pm   Coffee Break
    4:00 pm - 4:45 pm   Session 2: QoE on Immersive Multimedia (15 mins/presentation)
                        - MT-VQA: A Multi-task Approach for Quality Assessment of Short-form Videos
                        - Visual-Saliency Guided Multi-modal Learning for No Reference Point Cloud Quality Assessment
                        - Banding Detection via Adaptive Global Frequency Domain Analysis
    4:45 pm - 5:00 pm   Best Paper Announcement
    Speakers


    Leida Li, Xidian University

    Title: Towards Real-world Image Quality Assessment

    Leida Li received the B.Sc. and Ph.D. degrees from Xidian University in 2004 and 2009, respectively. From 2014 to 2015, he was a Research Fellow with the Rapid-Rich Object Search (ROSE) Lab, Nanyang Technological University (NTU), Singapore, where he was a Senior Research Fellow from 2016 to 2017. From 2009 to 2019, he worked as a Lecturer, Associate Professor, and Professor at China University of Mining and Technology. Currently, he is a Full Professor with the School of Artificial Intelligence, Xidian University, China. His research interests include image/video quality evaluation, computational aesthetics, and visual emotion analysis. He has published more than 100 papers in these areas, with more than 7000 citations. His research is funded by NSFC, Huawei, Tencent, OPPO, etc. He was awarded the “OPPO Excellent Partner Award for Industry-University-Research”, and some of the image aesthetics assessment models he proposed have been deployed in OPPO ColorOS 14. He is on the editorial boards of the Journal of Visual Communication and Image Representation (Best Associate Editor Award 2021) and the EURASIP Journal on Image and Video Processing.

    Abstract: Image quality assessment (IQA) is a fundamental task in low-level vision, with widespread applications in image/video processing, smart photography, etc. A large number of IQA models have been reported, with notable achievements. However, state-of-the-art IQA models are still subject to the generalization challenge when facing real-world scenarios. In this talk, the latest advances in generalizable image quality assessment will be reviewed, with a focus on the distortion diversity and content/theme/scene variation dilemmas encountered in real-world problems. Recent advances in multi-modal IQA will also be discussed.