The second Mediate workshop will be held virtually on June 7, as part of the International AAAI Conference on Web and Social Media (ICWSM). The main goal of the workshop is to bring together media practitioners and technologists to discuss new opportunities and obstacles that arise in the modern era of information diffusion. This year's theme is “Misinformation: automation, uptake, and digital governance”. You can also check last year's program and recorded talks.
The contributed papers of the workshop were published in the Workshop Proceedings of ICWSM, and all the talks are publicly available online.
A look into what it would take to fact-check at scale: the systems, processes, and checks and balances we would need. How can we make better use of the information and skills we already have? And how far along are we already?
Several countries, such as India, Turkey, and Pakistan, have proposed laws to regulate online content. The legislation includes penalties for tech companies that fail to take down contentious posts when requested by the relevant authorities. Silicon Valley, on the other hand, has adopted its own regulatory interventions, such as de-platforming Donald Trump. With the growing role of government regulation and platform intervention, what is the future of internet freedoms?
Users’ perspectives on what “fake news” and misinformation are and are not, who drives them, and where people say they see them are important for understanding the scale and scope of public concern, how that concern corresponds with research insights and aligns with proposed responses, whether automated or regulatory, and the credibility and ultimately the effect of those responses. In this presentation, I use survey data and focus group material from Reuters Institute research to give an overview of user perspectives on “fake news” and misinformation more broadly, and to identify some commonalities and differences in how the public, researchers, and policymakers talk about these problems and how that might inform responses.
Successful response to societal challenges requires sustained behavioral change. However, divergent responses to the Covid-19 pandemic in the US showed that partisanship and mistrust of institutions, including science, can increase resistance to Covid-19 mitigation measures and vaccine hesitancy. To better understand these behaviors, we explore attitudes toward science using social media posts (tweets) that have been linked through their locations to places within the US. The data allow us to study how attitudes toward science relate to the socioeconomic characteristics of the places from which people tweet. Our analysis reveals three types of places with distinct behaviors: large metropolitan counties, smaller metropolitan and suburban counties, and rural counties. While partisanship and race are strongly associated with the share of anti-science users across all regions, income is negatively associated with anti-science attitudes in suburban regions and positively associated in rural regions. Surprisingly, we find that emotions in tweets, specifically negative affect and high arousal, are expressed in suburban and rural places with many anti-science users, but not in large metropolitan areas. Importantly, these trends are not apparent when the data are aggregated across all places. Our analysis demonstrates the feasibility of using geospatially resolved social media data to monitor public attitudes on issues of social importance.
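The aggregation caveat in this abstract, where associations hold within types of places but disappear when all places are pooled, is essentially a Simpson's-paradox effect. The following is a minimal Python sketch with synthetic data and invented column names (not the study's actual dataset or pipeline) showing how pooled and per-group correlations can disagree:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200

# Hypothetical county-level records: income (in $1000s) relates to the
# share of anti-science users in opposite directions for suburban vs.
# rural counties. All numbers here are synthetic.
county_type = rng.choice(["suburban", "rural"], size=n)
income = np.where(county_type == "suburban",
                  rng.normal(70, 10, n),   # wealthier suburban counties
                  rng.normal(45, 10, n))   # poorer rural counties
anti_share = np.where(county_type == "suburban",
                      0.50 - 0.004 * income,   # negative within suburban
                      0.10 + 0.004 * income)   # positive within rural
anti_share = anti_share + rng.normal(0, 0.02, n)

df = pd.DataFrame({"county_type": county_type,
                   "median_income_k": income,
                   "anti_science_share": anti_share})

# Pooled over all counties, the correlation can hide (or even reverse)
# the within-group trends -- the aggregation effect the abstract notes.
print("pooled:", round(df["median_income_k"].corr(df["anti_science_share"]), 2))
for name, group in df.groupby("county_type"):
    print(name, round(group["median_income_k"].corr(group["anti_science_share"]), 2))
```

Running this prints a single pooled correlation alongside per-group correlations of opposite sign, which is why the analysis above stratifies places by type rather than pooling them.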
Over the past few years, research breakthroughs in photorealistic synthesis have opened up new possibilities for societally beneficial uses, but have also created concerns about misuse, especially in disinformation campaigns; so-called DeepFakes and CheapFakes are prominent examples. This talk covers some of Google’s efforts to mitigate these new threats and also surveys the landscape more broadly, including detecting manipulated media via AI systems versus human-based strategies. The talk also discusses very simple but effective manipulations that require no sophisticated AI to generate yet are hard to detect automatically, such as context retargeting, where images from one domain are falsely reused in another. False or misleading context, without any use of DeepFakes or CheapFakes, is the most prevalent type of disinformation currently spreading on social media, yet it has attracted much less attention from AI researchers.
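One common building block for catching context retargeting is a perceptual hash, which can flag an image that reappears, possibly resized or recompressed, attached to an unrelated story. Below is a minimal sketch assuming the Pillow and imagehash Python packages; the threshold and file names are hypothetical, and this is an illustration of the general technique, not a description of Google's actual systems:

```python
from PIL import Image
import imagehash

# Hamming-distance threshold between perceptual hashes; a small distance
# suggests the same underlying image. (Illustrative value, not tuned.)
MAX_DISTANCE = 8

def likely_same_image(path_a: str, path_b: str) -> bool:
    """Flag two files as likely the same underlying image.

    Perceptual hashes are robust to resizing and recompression, so a
    match against an archived image that resurfaces attached to an
    unrelated story is one signal of context retargeting. The context
    itself (captions, dates, sources) still has to be checked by other
    means.
    """
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return hash_a - hash_b <= MAX_DISTANCE  # subtraction = Hamming distance

# Hypothetical usage: compare a newly posted image against a known archive.
# print(likely_same_image("new_post.jpg", "archive/2015_flood.jpg"))
```

Note that hashing only establishes that an image has been reused; deciding whether the new context is false or misleading remains the hard, largely human, part of the problem.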
Topics of interest include, but are not limited to:
We invite submissions of technical papers and talk proposals:
Papers must adhere to the ICWSM guidelines and be submitted through EasyChair. You can contact the organizers (details below) with questions about submission or participation.
Important Dates (all deadlines are 23:59, AoE):