Agenda

The theme of this year’s EMEA summit is The Power of Collaboration, with a focus on the Middle East and Africa, future-proofing, and all digital technologies. Through a mix of presentations, panels, workshops, and round tables, speakers will cover some of the most critical topics facing the trust and safety community in EMEA today.

Speakers and sessions will be finalised over the coming week, so check back soon for a final agenda! 

8:00 AM - 9:00 AM
 

Join us for tea, coffee and a light breakfast while you check in! 

9:00 AM - 9:40 AM
Presentation
Pembroke

Join the opening session for remarks from TSPA and a special keynote.

9:40 AM - 10:10 AM
 

Enjoy a longer break to network, refill your water bottle, and make sure you remembered to turn on your out-of-office! 

10:10 AM - 10:50 AM
Pembroke

The Digital Services Act (‘DSA’) has opened up new avenues for users to appeal platforms’ content decisions. Platforms must not only ensure they have the appropriate internal mechanisms in place to enable successful appeals but also engage with external appeal bodies, which will conduct their own independent reviews. As a platform, how do you navigate what’s required to facilitate these new appeal avenues? What policy, product, and operational challenges will you face? How do you best partner with an external provider who now has oversight of your platform’s decisions? How can you ensure the changes you make for the DSA tie into your platform’s broader strategic objectives? In this panel, we will hear from a mix of representatives, ranging from platforms to external content decision review providers, who will share their perspectives on how best to navigate this new world.

Herbert

Discussions on the ethical use of data within Trust & Safety frameworks have become paramount, not merely as a moral obligation but also through the lens of legal compliance. This panel will explore the delicate balance between transparency and privacy, diving into the challenges of data management and the evolving role of AI in Trust & Safety, specifically in the context of recent developments in EMEA. Our discussion aims to: (1) outline best practices for ethical data usage; (2) examine approaches to balancing user privacy and transparency in Trust & Safety operations in keeping with incoming laws (e.g., the Digital Services Act); (3) explore the hurdles and strategies in managing sensitive data; (4) discuss AI's emerging role in shaping the future of regional Trust & Safety, with a focus on the diverse ethical implications and decision-making processes across EMEA. Attendees will gain a nuanced understanding of how ethical data practices can promote Trust & Safety.

Lansdowne

Much user-generated content (UGC) is in video format, which can carry risk in many forms (audio, on-screen text, visuals, …) and is notoriously expensive to moderate at scale, with both scanning technologies and moderator reviews carrying prohibitive costs. Different strategies and technologies exist to make video moderation more affordable, from cutting-edge AI to heuristics and the layering of cheaper technologies, as sketched below.
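One way to picture the layering strategy this session describes: run near-free checks first and reserve the costly full scan for the uncertain middle band. A minimal Python sketch, in which every component, name, and threshold is hypothetical rather than any speaker's actual system:

```python
from dataclasses import dataclass

KNOWN_BAD_HASHES: set[str] = set()  # e.g. industry-shared hash lists


@dataclass
class Verdict:
    label: str   # "allow", "remove", or "human_review"
    stage: str   # which layer produced the decision


def moderate_video(video, cheap_model, expensive_model,
                   low: float = 0.05, high: float = 0.95) -> Verdict:
    """Cascade: hash/heuristic checks first, a lightweight classifier
    next, and the costly frame-by-frame model only for uncertain cases."""
    # Layer 1: near-free checks (known-bad hash lists, metadata heuristics).
    if video.hash in KNOWN_BAD_HASHES:
        return Verdict("remove", "hash_match")

    # Layer 2: cheap classifier on a handful of sampled frames.
    score = cheap_model.score(video.sample_frames(n=8))
    if score < low:
        return Verdict("allow", "cheap_model")    # confidently benign
    if score > high:
        return Verdict("remove", "cheap_model")   # confidently violating

    # Layer 3: expensive full-video scan, reserved for the uncertain band.
    score = expensive_model.score(video.all_frames())
    if low <= score <= high:
        return Verdict("human_review", "expensive_model")
    return Verdict("remove" if score > high else "allow", "expensive_model")
```

The design point is economic: if the cheap layers confidently resolve most videos, the expensive model and human reviewers only ever see the small uncertain slice.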

Ulster

Created in 2023, the Coalition for Trusted Reviews is a group of dedicated, specialized fraud fighters from some of the biggest brands on the Internet. Though each has been successful individually, Coalition members have joined forces to take their fraud-fighting abilities to the next level and rid the internet of fake, deceptive content that misleads consumers.

10:50 AM - 11:00 AM
 

Head to the next session!

11:00 AM - 11:25 AM
Pembroke

In the face of a sharp increase in financially motivated online sextortion schemes targeting minors, our analysis of 27,000 victim posts highlights a significant rise in incidents, with cases tripling between 2021 and 2023 and a 340% surge noted last year. Our detailed review of 300 cases involving minors points to a disturbing pivot towards financial exploitation, predominantly driven by organized crime networks within Africa, notably in Ivory Coast, Nigeria, and Mali, with a worrying trend towards young male targets. This escalation, evident across diverse platforms and offenses, poses a critical challenge for trust and safety teams tasked with mitigating this evolving menace. Our presentation will unveil fresh insights into the sextortion landscape, focusing on minors. Attendees will explore an innovative detection strategy that merges intelligence-led data gathering with algorithmic profiling, aiming to proactively protect potential victims and counteract this growing threat.

Ulster

Traditional metrics for measuring wellbeing often fall short in capturing the nuanced and multifaceted experiences of content moderators. While traditional metrics focus on utilization and participation rates in services, we believe that psychological health encompasses so much more. This presentation will highlight the existing gaps in measuring content moderator wellbeing, explore how a more comprehensive and innovative measurement framework can lead to better support systems and interventions, discuss the relevance and practicality of incorporating metrics that go beyond the traditional in assessing content moderator wellbeing, and share insights from our research collaborative project where moderators actively contribute to the design and evaluation of our metrics.

Lansdowne

When OpenAI’s ChatGPT entered the mainstream last year, it catalyzed conversations around the threats of AI, particularly GenAI. Regulation, diversity, and the reduction of biases have been thrown about as mitigation strategies against these threats. But barely anyone has asked: regulation for whom? What does that diversity look like? And what are the biases we are trying to reduce? This session will explore how GenAI’s risks are disproportionately distributed across regions, and why countries outside of the Global North-West have much more to lose when GenAI goes wrong.

Herbert

2024 is a historic year for civic engagement, with several key elections taking place across Europe. Maintaining the integrity of elections in online spaces requires removing harmful misinformation; however, this alone does not fully address the behavior behind the development and spread of misleading claims. To do so, platforms must also invest in educational and media literacy initiatives that empower users both to seek out trusted sources of information and to think critically about the content they create and consume. In this session we'll explore how TikTok collaborated with electoral commissions and fact-checking organizations to develop in-app election centres and media literacy campaigns to combat harmful misinformation. The presentation will detail the rationale behind our approach, the importance of external partnerships in building interventions, and key learnings and opportunities for future online media literacy initiatives.

11:25 AM - 11:35 AM
 

Head to your next session!

11:35 AM - 12:00 PM
Ulster

Networked harassment disproportionately targets women and historically marginalized communities (Krook, 2017; Vitak et al., 2017; Blackwell et al., 2017; Data and Society, 2018; EVAW, 2020a; EVAW, 2020b; Glitch, 2023). Gamergate was dangerous back in 2014, but in an election year with at least 70 countries headed to the polls, involving two to three billion citizens (O’Carroll and Milmo, 2023), the continued lack of language and legal conceptualization around networked harassment has serious consequences for platforms’ accountability and our democracies. Our work advances the discussion by examining an as-yet little-understood form of networked harassment that we term “indirect swarming”: a sudden rise in the volume of posts and engagement over a short period, catalyzed by an amplifier, a highly networked account that covertly signals to its followers to harass a target. Our research questions are: What is indirect swarming, and what methods can discern its empirical patterns? How do we distinguish between direct and indirect swarming? Methodologically, we used a critical feminist lens to foreground the lived experiences of the targets. We analyzed case studies of networked harassment to expand upon Alice Marwick’s 2021 model, employing a mixed-method approach with datasets from X and Facebook collected from the target’s standpoint (Haraway, 1988; Harding, 2004). Through descriptive statistics and quantitative and qualitative analysis, we compared these cases to illustrate their similarities and differences and to establish the thresholds that led us to propose the new categories of direct and indirect swarming. The focal events in both case studies are the targets’ resignation announcements posted on X. Focusing on the novel risks and harms linked to indirect swarming, this paper offers a solution by sketching out what we call a “protective correlate”: a method that could help platforms mitigate the risks of indirect swarming. This method focuses on signals (similar to those utilized in recommender systems) rather than content. For social media platforms, the protective correlate gives users the agency to report indirect swarming while protecting platforms’ editorial rights vis-à-vis the First Amendment. This work thus has important ramifications for democracy and public discourse.
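The “protective correlate” itself is the presenters’ proposal and is not specified here. Purely as an illustration of what signal-based (rather than content-based) detection can look like, a platform could flag sudden spikes in the volume of posts mentioning a target relative to that target’s trailing baseline. A hypothetical sketch, not the paper’s method:

```python
import statistics


def spike_flags(hourly_mentions: list[int], window: int = 24,
                z_threshold: float = 3.0) -> list[int]:
    """Flag hours where mention volume for a target jumps well above the
    trailing baseline: a pure volume signal of possible swarming.
    Illustrative only; real systems would add amplifier/network signals."""
    flagged = []
    for t in range(window, len(hourly_mentions)):
        baseline = hourly_mentions[t - window:t]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1.0  # avoid division by zero
        z = (hourly_mentions[t] - mean) / stdev
        if z >= z_threshold:
            flagged.append(t)
    return flagged


# Example: a quiet baseline followed by a sudden pile-on in the final hours.
counts = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2, 2, 3,
          2, 4, 1, 2, 3, 2, 2, 3, 1, 2, 3, 2,
          4, 60, 180]
print(spike_flags(counts))  # -> [25, 26]
```

Because the detector only reads aggregate counts, it never inspects what the posts say, which is the sense in which a signal-first approach can sidestep content-level editorial judgments.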

Lansdowne

This presentation will dive into effective crisis management for real-world events and viral trends that require trust and safety policy changes and real-time collaboration between cross-functional teams. Drawing from practical experience, the speaker will examine how diverse teams come together to tackle emerging challenges effectively. From communication tactics to policy adjustments and technical solutions, the presentation will outline a cohesive strategy that empowers teams to respond swiftly and accurately at critical junctures. Attendees will learn practical strategies for fostering cross-functional collaboration, improving communication channels, and leveraging diverse expertise to maximize impact during critical moments.

Herbert

This session looks into the complex world of cybersecurity and online safety across the MENA region. It will discuss the unique challenges faced by different countries and explore critical issues such as protecting vulnerable communities and safeguarding online human rights and freedom of expression. The session will then share practical recommendations for fostering stronger collaboration through public-private partnerships and building a safer online environment for everyone in the MENA region. Finally, it will showcase an example of how a successful partnership can boost online safety and stop real-world harm.

Pembroke

The presentation will look at the challenges and opportunities of regional policy development. Most major platforms apply global policies despite serving a huge international audience, which can result in over- or under-enforcement of content and at times even alienate users. Developing regional policies is not just a scaling problem; it is also about striking a delicate balance between the local and the global. The session will be structured as follows:

  • Cultural nuances in the perception of Trust and Safety in MEA.
  • Challenges in policy development: how to strike the delicate balance between local and global, with a high-level look at some examples of Trust and Safety localization.
  • Case study: Turkey, a country between Europe and the Middle East, and how that reflects on user behavior, regulatory compliance, and a nuanced approach to policy development and content moderation.
  • Recommendations: user advocacy, the importance of regional partnerships and stakeholder engagement, and crisis management in the region.

12:00 PM - 1:00 PM
 
 
1:00 PM - 1:40 PM
Lansdowne

As digital platforms seek to expand in the diverse and multilingual EMEA (Europe, Middle East, and Africa) region, the adoption of auto-translation technologies has become a pivotal approach to Trust & Safety (T&S). During this session, our panelists will explore how online platforms can leverage auto-translation to serve a multilingual audience such as EMEA’s. While language-technology experts will present the advancements that have enabled platforms to bridge language barriers and discuss the capabilities that have emerged in this domain, platform representatives will draw on their global and regional experience to critically examine the pros and cons of relying on auto-translation. Through real-world examples and insights, we’ll highlight how important managing the multi-language challenge is to optimizing local user experience and supporting growth, while also addressing the limitations and challenges that come with interpreting the nuances of language and culture through technology.
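One pattern the panel is likely to weigh is “detect language, translate, then classify”, with low-confidence translations routed to human language specialists rather than decided automatically. A hypothetical sketch; the `detector`, `translator`, and `classifier` components are stand-ins, not any panelist’s actual stack:

```python
from dataclasses import dataclass


@dataclass
class Decision:
    violating: bool
    route: str  # "auto" or "specialist_queue"


def classify_multilingual(text: str, detector, translator, classifier,
                          min_translation_conf: float = 0.8) -> Decision:
    """Translate-then-classify with a fallback to language specialists."""
    lang = detector.detect(text)
    if lang == "en":
        # English content goes straight to the safety classifier.
        return Decision(classifier.is_violating(text), "auto")

    translation = translator.translate(text, source=lang, target="en")
    # Nuance (slang, sarcasm, code-switching) is exactly where machine
    # translation degrades, so low-confidence output is escalated rather
    # than auto-decided.
    if translation.confidence < min_translation_conf:
        return Decision(False, "specialist_queue")
    return Decision(classifier.is_violating(translation.text), "auto")
```

The trade-off the panel describes lives in that confidence threshold: set it high and specialist queues swell; set it low and culturally specific harms slip past an English-only classifier.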

Herbert

Collaboration is key to fighting online child sexual abuse and exploitation because these are pervasive threats that cross platforms and services. Tech Coalition member companies work together to accelerate the adoption of existing technologies and invest in the development of new ones, such as Lantern; come together to share knowledge, upskill fellow members, and strengthen all links in the chain; and unite with leading child safety organizations to protect children online through research, tech innovation, and multi-stakeholder forums. Come learn from some of our member companies about the ways in which industry collaborates to protect children.

1:00 PM - 2:30 PM
Ulster

Please Note: This round table requires sign-up. You can sign up for workshops and round tables by logging into registration. 

The target audience for this round table includes:

  • Child Safety Specialists
  • Trust and Safety professionals
  • Policy and safety by design specialists
  • Domain academics working in Child Safety

In future-proofing the evolving Trust & Safety (T&S) field, this workshop will consider how to formalise greater collaboration between academia and industry to respond to the distinct and emerging learning needs of T&S professionals in the EMEA context. Modelled on the recent Trust & Safety Teaching Consortium within the Stanford Internet Observatory, the workshop will consider the specific contextual needs of EMEA-focused academic and industry stakeholders, with a view to achieving the following aims:

1. Identifying core knowledge gaps and opportunities where industry and academia can collaborate to improve training, Learning and Development (L&D), and Continuous Professional Development (CPD) offerings for EMEA-focused T&S professionals, with critical consideration of existing and emergent T&S learning needs in the region. These learning needs relate, for example, to evolving GenAI threats, socio-legal and regional regulatory developments such as the EU Digital Services Act and the impending Artificial Intelligence Act, country-level online safety legislation (e.g. the UK Online Safety Act), and regional conflicts (Ukraine, Gaza, etc.).
2. Prioritising the identified training, L&D, and CPD needs of T&S professionals in EMEA contexts.
3. Envisioning optimal pathways to academia-industry collaboration to future-proof the training, L&D, and CPD needs of these professionals.
4. Distilling the outcomes of aims 1-3 in an open-access, co-authored paper for submission to the Journal of Online Trust and Safety, sharing EMEA-specific stakeholder needs, collaboration opportunities, and best practices with the wider T&S field.

We envisage that the workshop will surface a set of high-level themes across these three spheres for further development in a submission to the Journal of Online Trust and Safety. We would be happy to share a written summary of these themes and workshop findings with participants after the session for information (and for review if desired) before undertaking further work to build upon them in a journal submission. We will not name individual contributors or organisations in any written output without appropriate permission being granted by the contributor. That said, we will happily acknowledge the support of the TSPA and the workshop contributors in any written output linked to this session.

Pembroke

Please Note: This round table will be first come, first served. 

The target audience for this round table: the session is designed to be understood by those who are not technical experts, though more technically focused T&S professionals are also welcome.

This round table is an opportunity for T&S professionals to share their experiences of deepfake and AI-based attacks on the measures they have put in place to protect users. It will feature a case study from Project DefAI, a project jointly funded by the UK and Swiss governments to identify and defend against such attacks on age assurance technologies, and then to test and certify those defences.

There is a cat-and-mouse race to improve protective mechanisms faster than hackers can find ways to circumvent them. T&S professionals need to be aware of the risks, and the sector needs to demonstrate to the public, policymakers, and other stakeholders that it is mitigating these risks, prioritising the largest attack vectors.

1:40 PM - 1:50 PM
 

Head to your next session!

1:50 PM - 2:30 PM
Lansdowne

Moderators are on the front line of world events as they unfold online. Tools are in place to minimize harm, but wars are often shocking events that one cannot prepare for. This panel will address how to deal with the initial shock, support people through the mental-health challenges, and move forward as an individual and as a team. It brings together professionals from online platforms, moderator wellbeing research, and the front lines to examine the issue at the intersection of tech, policy, outsourcing, and individual experience.

Herbert

The focus of the panel will be on how the different mechanisms of transparency are unlocking new forms of interaction in this new regulatory era. As of May 17th, 2024, transparency reports, the statements of reasons database, and researcher access to data are creating a significant increase in publicly available data, and this influx has raised important questions about its use. This panel aims to capture that shift and to understand the role transparency will now play in the broader ecosystem. Takeaways will include an industry perspective on how transparency has changed and will continue to change internal operations, researcher opinions on harnessing these new forms of data, and the Commission’s expectations for how this transparency will reform the ecosystem.

2:30 PM - 3:00 PM
 

Take a moment to step outside for some fresh air, connect with other attendees, or retreat to a quiet corner for some recharge time.

2:45 PM - 4:15 PM
Pembroke

Please Note: This workshop requires sign-up. You can sign up for workshops and round tables by logging into registration. 

Mandatory transparency reporting has become a recurring theme in online safety regulation, but meaningful transparency reporting is harder to identify and rarer to find. In this workshop, participants will try their hand at drafting a transparency notice from the perspective of a regulator, considering which metrics tell a meaningful story about online safety efforts on different types of services. They will then swap roles and respond to the notices from the perspective of a platform, considering what information is feasible to collect and track, and whether they agree with the suggested metrics. This is a unique opportunity to gain insight into the challenges and trade-offs involved in transparency reporting, tackling complex issues through scenario planning, metrics mapping, and group discussion.

Ulster

Please Note: This round table requires sign-up. You can sign up for workshops and round tables by logging into registration. 

The target audience for this round table includes:

  • Attendees working on policies, operations, content moderation, or integrity for gaming platforms or platforms that include gaming components.
  • Functional experts (e.g. in Child Safety, TVEC and other harm areas) from other types of platforms who could share learnings from other sub-sectors.
  • Regulators, civil society, non-profits, and academia who have a perspective on the gaming vertical.

This round table will advance the discussion on best practices for moderating gaming environments and delve into the effective creation and operationalization of content moderation policies in games. It will explore the latest research on user trends in gaming to discuss new ways bad actors are trying to circumvent policies, the challenges posed by AI-generated harms in gaming environments, and how the evolution of communication (e.g. voice vs. chat) on gaming platforms necessitates new moderation tactics. Questions will include:

1. With the rise of AI-generated voice, image, text, and video, how should gaming platforms adapt their policies to mitigate the new harms that are surfacing?
2. In what ways can gaming platforms be more proactive about player safety, particularly as it relates to user authentication and age verification?
3. How will policies, and their operationalization, need to be adapted for platforms focused on child players, especially given new regulations around child safety?
4. Depending on the type of game (multiplayer, single-player, etc.) and the main channel of communication (voice vs. chat), how should content moderation practices be adapted to optimize effectiveness?
5. What learnings from social media, e-commerce, dating, or other industries can improve the moderation of games? How should cultural nuances inform moderation practices for players in different markets?

3:00 PM - 3:40 PM
Herbert

In an era where social media transcends borders, the imposition of global content policies without consideration for specific regional contexts poses significant challenges. Africa, with its rich tapestry of over 3,000 languages and diverse cultural landscapes, exemplifies this complexity. From liberal societies to conservative ones shaped by religious practice, navigating the nuances of content moderation becomes paramount. This panel seeks to illuminate the gap in policy development for regions that diverge from global community standards. Should exceptions be made, or should tailored policies be drafted? Delving into specific issues such as nudity, sexual orientation, hate speech, and misinformation, we aim to identify key considerations in crafting policies that resonate with Africa's diverse societies. One illustrative example is Google's restrictions on adult nudity content in Southern Africa, contrasted with cultural practices that permit nudity during certain festivals in the region. The discussion aims to elucidate the alignment, or lack thereof, between global content policies and regional cultural norms, and to propose recommendations for policy harmonization or contextual exceptions where appropriate. Furthermore, recognizing parallels in cultural sensitivities, the discussion will extend its focus to include insights from the Middle East and North Africa (MENA) region, offering a comparative lens on shared challenges. This panel seeks to contribute essential insights to the ongoing discourse on the intersection of digital rights, cultural diversity, and effective content moderation strategies, fostering a path towards contextually sensitive and globally relevant policy frameworks.

Lansdowne

This panel brings together innovators in content moderation, including technology leaders from established start-ups, a mid-sized company, and a large enterprise. It will discuss opportunities for the field of content moderation arising from recent advances in AI and large language models (LLMs), particularly within the diverse landscape of the EMEA region: technological innovation, reskilling opportunities for trust & safety professionals, and takeaways applicable to both small and large trust & safety teams. The panel will close with the key takeaways the audience should leave with regarding the future of AI and content moderation tools and teams. It is applicable to practitioners and business leaders involved in content moderation, including policy experts, moderators, data scientists, and technologists.

3:40 PM - 3:50 PM
 

Head to your next session!

3:50 PM - 4:15 PM
Lansdowne

This presentation will let attendees experience firsthand three innovative wellness activities rooted in neuroscience. As the mental health field evolves, we are learning to use the inherent power of the brain to optimize and bolster brain health without the unpleasant experience of repeated review of, or exposure to, negative or disturbing memories or events. Attendees will leave the presentation feeling refreshed, clear, and focused, and will gain a new perspective on ensuring employee wellness in the trust and safety field.

Herbert

Counterfeits aren't just a retail problem. From fake apps to manipulated streaming content, they permeate diverse digital platforms, posing significant trust and safety challenges for users, businesses, and society. This session delves beyond the "unseen dangers" of knockoffs to explore the broader impact of counterfeiting across various digital technologies.

4:15 PM - 4:25 PM
 

Head to your next session!

4:25 PM - 4:50 PM
Herbert

Trust and Safety is an inherently dynamic, high-risk, and rapidly changing space that relies on a mix of tools, policies, and human judgment. Experimentation (A/B testing) is necessary to parse out the causal impact of changes, allowing teams to isolate what truly drives improvement. Rigorous experimentation in this domain presents unique challenges, including ethical considerations, privacy concerns, the complexity of multi-platform environments, and cross-functional dependencies. This talk will explore why experimentation is essential for Trust and Safety teams and discuss strategies to standardize and scale the practice despite these inherent challenges.

This presentation will share a case study and framework for how Google T&S launched its Experimentation Platform, which enables a continuous-improvement model for the tools used to moderate content across all Google core products, from Shopping to Maps to Search. These learnings have been adapted and stress-tested to apply across T&S digital technology spaces and at companies ranging from small to enterprise.
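As a concrete illustration of the causal question such experiments answer (not a description of Google’s platform), consider testing whether a new moderation tool lowers the measured rate at which violating content slips through. A standard two-proportion z-test over control and treatment arms, with invented numbers:

```python
from math import sqrt
from statistics import NormalDist


def two_proportion_ztest(x_ctrl: int, n_ctrl: int,
                         x_treat: int, n_treat: int) -> tuple[float, float]:
    """Test whether the violation (miss) rate differs between arms.
    x_* = sampled items later found violating; n_* = sample size."""
    p_ctrl, p_treat = x_ctrl / n_ctrl, x_treat / n_treat
    p_pool = (x_ctrl + x_treat) / (n_ctrl + n_treat)      # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_ctrl + 1 / n_treat))
    z = (p_treat - p_ctrl) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))          # two-sided
    return z, p_value


# Hypothetical experiment: 10,000 sampled items per arm; the new tool
# appears to cut the measured miss rate from 1.2% to 0.9%.
z, p = two_proportion_ztest(x_ctrl=120, n_ctrl=10_000,
                            x_treat=90, n_treat=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # z ≈ -2.08, p ≈ 0.04
```

Without the randomized control arm, that 0.3-point drop could just as easily reflect a quieter news cycle as a better tool, which is the talk’s point about isolating causal impact.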

Lansdowne

Intelligence and analysis play a crucial role in today’s Trust & Safety operations. However, they risk being siloed as a reactive service for investigations. Even integrated, proactive teams are typically limited to making optional recommendations that can slip down development priority lists. This presentation uses a series of real-world case studies to present a new vision for intelligence-led Trust & Safety: an alternative strategic model for platforms and services featuring a role reversal in which intelligence guides operations. In demonstrating how law enforcement introduced and evolved the approach, the presentation shows the value of combining strong mitigation with a focus on unforeseen harms, leveraging intelligence to push risk further away. Covering violent extremism, child safety, and wider harms, it will illustrate how to implement intelligence-led Trust & Safety today, including practical tips for how policy, enforcement, and other teams can maximise intelligence capabilities and focus resources for greater return on investment.

4:25 PM - 5:25 PM
Ulster

Please Note: This workshop requires sign-up. You can sign up for workshops and round tables by logging into registration.

 

The goal of this workshop is to allow professionals from different teams within a platform to brainstorm together on building safety into the platform’s functionality from the get-go. Organisers will introduce a hypothetical platform with four to five features on the roadmap that leadership would like to launch: infinite scrolling, group recommendations, multimedia sharing, direct messages, and visible engagement metrics. Teams will be assigned a feature and given resources to conduct a mini assessment mapping out the potential risks stemming from the feature and the mitigations and governance structures that could be implemented to reduce them. To conclude the exercise, each team will briefly present its conclusions on whether the feature is worth introducing to the platform and what safety mechanisms it would bake into the feature pre-launch.

Pembroke

Please Note: This workshop requires sign-up. You can sign up for workshops and round tables by logging into registration.

The workshop will cover the following topics:

1. Introduction to synthetic media (deepfake) technology principles and concepts
2. Applications of deepfake technology
3. Challenges posed by deepfakes to Trust & Safety teams
4. Overview of existing commercial deepfake detection technologies
5. Hands-on exercises in detecting deepfaked audio, video, and images
6. Limitations of deepfake detection technologies and the future roadmap

5:00 PM - 6:30 PM
 

Join us for post-event drinks, refreshments, and networking!