The fourth MEDIATE workshop will be held on June 5, as part of the International AAAI Conference on Web and Social Media (ICWSM). The main goal of the workshop is to bring together media practitioners and technologists to discuss new opportunities and obstacles that arise in the modern era of information diffusion. This year's theme is Misinformation: automated journalism, explainable and multi-modal verification, and content moderation. You can also check the program and recorded talks of the 2020, 2021 and 2022 editions of MEDIATE.

Invited Speakers

Jiebo Luo Professor, Computer Science, University of Rochester
Misinformation versus Facts: Understanding the Influence of News Regarding COVID-19 Vaccines on Vaccine Uptake

Online discussions about the COVID-19 vaccines contained both fact-based information and misinformation. Using a sample of nearly four million geotagged English tweets and data from the CDC COVID Data Tracker, we conducted a Fama-MacBeth regression with the Newey-West adjustment to understand the influence of both misinformation and fact-based news on Twitter on COVID-19 vaccine uptake in the U.S. from April 2021, when all U.S. adults became vaccine eligible, to June 2021, after controlling for state-level factors such as demographics, education, and pandemic severity. The negative association between the percentage of fact-related users and the vaccination rate might be due to a combination of a larger user-level influence and the negative impact of online social endorsement on vaccination intent.

Tanu Mitra Assistant Professor, Information School, University of Washington
Human and Technological Infrastructures of Fact-checking: Designing and Building with (not just for) Fact-Checkers

With the increase in scale and diffusion of online misinformation, efforts to develop scalable technological systems for fact-checking online information have also increased. However, such systems are limited in practice because their design often does not take into account how fact-checking is done in the real world, and they ignore the insights and needs of the various stakeholder groups core to the fact-checking process. In this talk, I will unpack this fact-checking process by revealing the infrastructures, both human and technological, that support and shape fact-checking work; the primary stakeholders involved in this process; the collaborative effort among them and the associated technological and informational infrastructures; and the key social and technical needs and challenges faced by each stakeholder group. Finally, I will close by previewing a system (YouCred), designed and built through 1.5 years of collaboration with key stakeholders of Africa's largest indigenous fact-checking organization (Pesacheck), that assists fact-checkers with misinformation discovery and credibility assessments on one of the largest video search platforms, YouTube.

Guillaume Bouchard CEO and Co-Founder, CheckStep
Online Safety on a Limited Budget: Protecting Platforms from Misleading Content without Big Tech Profits

Misinformation, hate speech, and polarization are not limited to large online platforms. The rise of alternative websites and community apps means that a growing number of users encounter the same issues found on the so-called Big Tech platforms. Fortunately, a range of tools is available, including AI flagging systems, shared resources, and free APIs, that can be used effectively to protect online platforms. In this presentation, we will shed light on the challenges and opportunities presented by the growing number of online platforms. We will also delve into best practices, identify gaps, and explore ongoing initiatives aimed at enabling small and medium-sized organizations to ensure the safety of their users. Specifically, we will focus on problems related to misinformation, hate speech, and polarization.


Program:
  • 5 min Intro
  • Guillaume Bouchard. Invited Speaker Session.
  • Contributed Session 1 (10 min talk + 5 min Q&A)
    • Martin Wessel and Timo Spinde. Trends in Automatic Media Bias Detection.
  • Contributed Session 2 (10/6 min presentation for long/short papers + 3 min Q&A)
    • Wenjia Zhang, Lin Gui, Rob Procter and Yulan He. NewsQuote: A Dataset Built on Quote Extraction and Attribution for Expert Recommendation in Fact-Checking. [pdf]
    • Farhan Ahmad Jafri, Mohammad Aman Siddiqui, Surendrabikram Thapa, Kritesh Rauniyar, Usman Naseem and Imran Razzak. Uncovering Political Hate Speech During Indian Election Campaign: A New Low-Resource Dataset and Baselines.
    • Lin Ai, Zizhou Liu and Julia Hirschberg. Combating the COVID-19 Infodemic: Untrustworthy Tweet Classification using Heterogeneous Graph Transformer.
    • Jinsheng Pan, Weihong Qi, Zichen Wang, Hanjia Lyu and Jiebo Luo. Bias or Diversity? Unraveling Fine-Grained Thematic Discrepancy in U.S. News Headlines. [pdf]
  • Jiebo Luo. Invited Speaker Session.
  • 30 min Coffee break
  • Contributed Session 3 (10/6 min presentation for long/short papers + 3 min Q&A)
    • Tommaso Fornaciari, Luca Luceri, Emilio Ferrara and Dirk Hovy. Leveraging Social Interactions to Detect Misinformation on Social Media. [pdf]
    • Enes Altuncu, Jason R.C. Nurse, Meryem Bagriacik, Sophie Kaleba, Haiyue Yuan, Lisa Bonheme and Shujun Li. aedFaCT: Scientific Fact-Checking Made Easier via Semi-Automatic Discovery of Relevant Expert Opinions. [pdf]
    • Rob Procter, Miguel Arana Catania, Yulan He, Maria Liakata, Arkaitz Zubiaga, Elena Kochkina and Runcong Zhao. Some Observations on Fact-Checking Work with Implications for Computational Support. [pdf]
  • Tanu Mitra. Invited Speaker Session.

Call for Papers

Topics of interest include, but are not limited to:

  • Automated journalism: novel automated and human-in-the-loop solutions for rumour detection/verification, fact-checking, stance classification, evaluation of existing solutions, and novel relevant applications. Submitted papers should describe how their advantages (e.g. improved generalisability, the ability to provide explanations, reduced bias) would support adoption in practice by journalists and the public, and should address ethical considerations.
  • Explainable and multi-modal verification: explainable rumour verification systems, evidence-based solutions, uncertainty estimation and prediction explainability, and interpretable and transparent AI systems in general, as well as multi-modal rumour verification/fact-checking models, sources and data, and non-textual and multi-modal features.
  • Content moderation: novel content moderation systems for inhibiting the spread of misinformation, domain-specific content moderation solutions, and content moderation systems that are generalisable and interpretable.

We invite submissions of technical papers and talk proposals:

  • Technical papers must be up to 4 pages (short papers) or up to 8 pages (long papers). Technical papers must contain novel, previously unpublished material related to the topics of the workshop. Accepted papers will be presented orally and will appear in the workshop proceedings.
  • Talk proposals must be up to 2 pages describing the content of a short talk (the actual length will be determined based on program constraints).

Papers must adhere to the ICWSM guidelines and be submitted through EasyChair. You can contact the organizers (details below) for questions related to the submission or participation.

Program Committee:

  • Arkaitz Zubiaga
  • Antonela Tommasel
  • Ce Guo
  • Damiano Spina
  • Dennis Assenmacher
  • Dina Pisarevskaya
  • Elena Kochkina
  • Iman Munire Bilal
  • Maurício Gruppi
  • Marya Bazzi
  • Maria Liakata
  • Mohsen Mosleh
  • Sérgio Nunes
  • Sibel Adali
  • Talia Tseriotou

Important Dates (all deadlines are 23:59, AoE):

  • April 3, 2023: Submission deadline
  • April 21, 2023: Acceptance notification
  • May 6, 2023: Camera ready deadline


Organizers' affiliations: Queen Mary University, Alan Turing Institute, University of Warwick.