Brands’ Ads Found Next to Harmful Content: Ad Safety Tools Fail

September 17, 2024

Recent revelations have highlighted a stark and pressing concern for global brands: the ineffectiveness of current ad safety tools in shielding their advertisements from appearing alongside harmful online content. A new study by ad quality firm Adalytics revealed that advertisements from major global brands appeared on webpages containing explicit sexual content, racial slurs, and violent imagery. The finding has significant implications for brand integrity and consumer trust, shedding light on vulnerabilities in the digital advertising ecosystem.

Brands’ Ads on Offensive Content

The Adalytics study found that even major global brands are not immune to having their ads appear on objectionable websites. Ads for brands such as Meta, Microsoft, Procter & Gamble, Amazon, Disney, Nestle, Mercedes, Walmart, and Marriott were identified on pages whose titles contained explicit pornographic terms, racial slurs, and violent content, including 'gag penis,' 'N*****,' 'horse cock,' and 'decapitation.' The appearance of these offensive terms next to advertisements raises serious questions about the extent of oversight involved and the effectiveness of current brand safety measures.

Fandom.com, an entertainment-focused wiki platform, was notably cited as the host of the user-generated content (UGC) on which these problematic ads appeared, even though the content violated the platform's own guidelines. The episode exposes vulnerabilities in content management systems and highlights the challenges of monitoring and regulating UGC: even with guidelines in place to keep such content off the site, lapses still occurred, showcasing the limits of the safety measures platforms currently employ.

Brand Safety Measures and Their Shortcomings

Despite implementing advanced ad safety measures, including pre-bid and post-bid brand safety technology and keyword blocking, major brands are still finding their ads placed on inappropriate sites. This raises concerns about the effectiveness of the technologies and mechanisms offered by vendors such as Integral Ad Science (IAS), DoubleVerify, and Oracle Moat. These brand safety technologies, designed to prevent exactly such occurrences, appear to be falling short, demanding a re-evaluation of their efficacy.
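To make the mechanism concrete, here is a minimal Python sketch of the kind of naive keyword blocking described above. The blocklist, page titles, and function are hypothetical illustrations, not any vendor's actual implementation; the example also shows one way an obfuscated title can slip through.

```python
# Minimal sketch of naive pre-bid keyword blocking. The blocklist and
# page titles are hypothetical; real vendors combine far larger lists
# with ML-based classification.

BLOCKLIST = {"decapitation", "gore"}  # illustrative terms only

def is_blocked(page_title: str) -> bool:
    """Block the bid if any blocklist term appears verbatim in the title."""
    title = page_title.lower()
    return any(term in title for term in BLOCKLIST)

pages = [
    "Decapitation scene discussion",    # caught: exact term present
    "D3capitation scene discussion",    # missed: obfuscated spelling
    "Guide to treating neck injuries",  # allowed: benign, as intended
]

for title in pages:
    print(("BLOCK" if is_blocked(title) else "ALLOW"), "-", title)
```

Exact-match checks like this are cheap to run at bid time, which is one reason keyword blocking remains common, but as the second title shows, a trivial misspelling or user-generated phrasing can evade it entirely.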

The gaps highlighted in the study suggest that current technologies may be flawed or that the problem lies in how advertisers set and adhere to parameters. The revelations from Adalytics emphasize the need for more robust measures and raise questions about whether ad safety technologies are sufficiently advanced to address these challenging situations. The inadequacies in current tools point to a broader issue within the industry, where technological capabilities and implementation practices are not meeting the necessary standards to ensure brand safety effectively.

Implications of the Adalytics Report

The findings of the Adalytics report are particularly concerning given the current political climate in the United States, where the risks posed by advertising alongside political misinformation already present a major issue. The report underscores a widespread industry problem, suggesting that ad verification partners may either be using inadequate technology or failing to meet clients’ parameters and standards set by industry groups like the Interactive Advertising Bureau (IAB) and the Global Alliance for Responsible Media (GARM).

These revelations call into question the reliability of the systems currently in place and highlight the critical need for more stringent measures to address brand safety effectively. The necessity for heightened vigilance is underscored by the potential damage to brand integrity that can result from ads being displayed next to harmful or offensive content. This situation illustrates the broader risks inherent in the digital advertising landscape, particularly during periods of heightened political activity.

Reactions and Responses from Stakeholders

In response to the findings, a spokesperson from Fandom acknowledged the lapse, stating that the offensive content originated from less-trafficked wikis that their moderation systems had failed to detect. Following the report, Fandom took additional measures to enhance site safety, demonstrating a proactive approach to mitigating the issue. This response underscores the importance of continual improvement and adaptation in content moderation practices to address emerging challenges effectively.

A representative from a Fortune 500 company expressed serious alarm over the findings: despite the safeguards the company employed, harmful exposures still occurred, a risk amplified by the intense election cycle in the US. The episode points to inherent weaknesses in current brand safety tools that require urgent attention. The reactions from stakeholders reflect a growing awareness of the critical nature of these issues and the urgent need for more reliable solutions.

DoubleVerify, on the other hand, disputed Adalytics’ findings, claiming the data had been misrepresented and manipulated. They argued that without understanding the exact brand safety tactics or setup employed by advertisers, Adalytics’ conclusions could be flawed. This defense indicates a deeper complexity in managing ad safety effectively, where even established methods may not fully capture the intricacies of various advertising environments.

The Role of Adtech Providers

The investigation suggested that ad verification vendors IAS and DoubleVerify may not be performing their brand safety functions adequately. These firms, which use machine learning and artificial intelligence for content classification and blocking, have come under scrutiny over the accuracy of their classification mechanisms. Despite boasting high accuracy rates, the observed misclassifications leading to inappropriate ad placements indicate a need for more precise and accountable systems.
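To see why a high headline accuracy rate can still coexist with widespread failures, consider the back-of-the-envelope arithmetic below. Both figures are assumptions for illustration, not numbers from the report or from any vendor.

```python
# Illustrative arithmetic only: both figures below are assumptions,
# not data from Adalytics, IAS, or DoubleVerify.

daily_impressions = 1_000_000_000  # assumed daily impression volume
claimed_accuracy = 0.99            # assumed classifier accuracy

misclassified = daily_impressions * (1 - claimed_accuracy)
print(f"Misclassified impressions per day: {misclassified:,.0f}")
# Even a 1% error rate at this scale means roughly 10,000,000
# wrongly classified placements every single day.
```

At ad-serving scale, in other words, accuracy percentages that sound impressive can still translate into millions of unsafe placements.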

The implementation of AI and natural language processing needs a critical reevaluation to meet the demands of today’s complex digital advertising landscape. While technological advancements promise improved solutions, their current application appears insufficient to mitigate the risks effectively. This situation highlights the inherent challenges in developing and deploying sophisticated tools capable of accurately distinguishing between safe and harmful content on a large scale.

Recommendations for Enhanced Transparency and Accountability

Adalytics suggests that the industry requires greater transparency around AI models and URL-level data sharing from demand-side platforms (DSPs), media agencies, and verification providers. Such an approach would allow independent evaluations of the effectiveness of brand safety solutions, ensuring that they meet the standards necessary to protect brands from harmful exposure. This increased transparency is essential for identifying and addressing deficiencies within the current systems.

Brands must undertake more rigorous audits and demand clarity from their verification services. By maintaining tighter checks and balances on ad placements and safety mechanisms, advertisers can ensure a higher degree of brand safety. Regular audits are crucial to uphold the standards and avoid such oversights in the future. The responsibility for ensuring ad safety should not be entirely offloaded onto third-party providers but must also involve active participation from the brands themselves.
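As one concrete form such an audit could take, a brand with access to URL-level placement logs might run a periodic check like the sketch below. The CSV layout (url and page_title columns), the file name, and the flagged terms are all assumptions for illustration, not a standard log format.

```python
import csv
from pathlib import Path

# Hypothetical placement-log audit. Assumes a CSV with "url" and
# "page_title" columns; the flagged terms are illustrative only.

FLAGGED_TERMS = ["decapitation", "gore"]

def audit_placements(log_path: Path) -> float:
    """Print flagged placements and return the share that were flagged."""
    total = flagged = 0
    with log_path.open(newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            total += 1
            title = row.get("page_title", "").lower()
            if any(term in title for term in FLAGGED_TERMS):
                flagged += 1
                print(f"FLAGGED: {row['url']} ({row['page_title']})")
    return flagged / total if total else 0.0

if __name__ == "__main__":
    rate = audit_placements(Path("placement_log.csv"))
    print(f"Flag rate: {rate:.2%}")
```

Even a simple recurring check like this gives a brand its own evidence trail rather than relying solely on vendor-reported metrics.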

The Broader Industry Context

The issue touches upon larger systemic problems within the online advertising ecosystem. The drive for performance and impressions often eclipses the associated risks, suggesting a need to reassess incentive structures where ad verification vendors benefit from more impressions, even if they appear alongside harmful content. This dynamic implies a fundamental rethinking of how success is measured and incentivized within the industry.

Collaboration between advertisers, agencies, adtech vendors, and publishers is crucial for upholding standards of safety and suitability in ad placements. Greater transparency and rigorous independent audits of brand safety mechanisms can help restore trust and efficacy to the digital advertising industry. The overarching goal is to develop a more trustworthy and reliable ecosystem that effectively mitigates risks and addresses the challenges of the evolving digital landscape.

Conclusion

The Adalytics study crystallizes a significant and urgent concern for global brands: current ad safety tools are failing to keep their advertisements away from harmful online content. Ads from major global brands were displayed on web pages containing explicit sexual material, racial slurs, and violent images, posing serious risks to brand safety.

The findings have profound implications for the integrity of brands and the trust of consumers, revealing major vulnerabilities within the digital advertising ecosystem. Essentially, the systems brands rely on to ensure their ads appear in a safe and appropriate context are not functioning as effectively as needed. This exposes brands to potential reputational damage, as their ads might unintentionally support or be associated with inappropriate content.

Furthermore, these revelations open up a broader discussion about the accountability of digital platforms and the critical need for improved and more stringent ad safety measures. Brands must now re-evaluate their advertising strategies and the tools they employ to ensure that their messages reach consumers without being compromised by inappropriate placements. In an age where online presence significantly impacts a brand’s image, enhancing the reliability of ad safety tools is more crucial than ever.
