Advertisers slam Twitter over ads displayed next to child pornography accounts, suspend campaigns


Some major advertisers, including Dyson, Mazda and chemical company Ecolab, have suspended their marketing campaigns or removed their ads from parts of Twitter because their ads appeared alongside tweets soliciting child pornography, the companies told Reuters.

Advertisements for at least 30 brands, from Walt Disney, Comcast’s NBCUniversal and Coca-Cola to a children’s hospital, have appeared on the profile pages of Twitter accounts linked to exploitative content, according to a Reuters review of accounts identified in new online research into child sexual abuse from cyber security group Ghost Data.

Some of the tweets included key words related to “rape” and “teenage” and appeared alongside tweets promoted by corporate advertisers, a Reuters review found. In one instance, a promoted tweet for shoe and accessories brand Cole Haan appeared next to a tweet in which a user said it was trading “teen/kid” content.

“We’re horrified,” David Maddox, Cole Haan’s brand president, told Reuters after being notified that the company’s ads appeared alongside such tweets. “Either Twitter is going to fix this, or we’ll fix it any way we can, including not buying Twitter ads.”

In another example, a user tweeted seeking content of “Only Young Girls, No Boys,” a post that was immediately followed by a promoted tweet for Texas-based Scottish Rite Children’s Hospital. Scottish Rite did not return multiple requests for comment.

In a statement, Twitter spokeswoman Celeste Carswell said the company has “zero tolerance for child sexual abuse” and is investing more resources dedicated to child safety, including hiring new positions to write policy and implement solutions.

She added that Twitter is working closely with its advertising clients and partners to investigate and take steps to prevent the situation from happening again.

Twitter’s challenges in identifying child abuse content were first reported in late August in an investigation by tech news site The Verge. Reuters is reporting for the first time the emerging pushback from advertisers, who are critical to Twitter’s revenue stream.

Like all social media platforms, Twitter bans depictions of child sexual abuse, which is illegal in most countries. But it generally allows mature content and a flourishing exchange of pornographic imagery, which comprises about 13 percent of all content on Twitter, according to internal company documents seen by Reuters.

Twitter declined to comment on the volume of adult content on the platform.

Ghost Data identified more than 500 accounts that openly shared or solicited child sexual abuse content during a 20-day period this month. Twitter failed to remove more than 70 percent of the accounts during the study period, according to the group, which shared its findings exclusively with Reuters. Reuters could not independently confirm the accuracy of Ghost Data’s findings.

After Reuters shared a sample of 20 accounts with Twitter, the company removed about 300 additional accounts from the network, but more than 100 others still remained on the site the next day, according to a review by Ghost Data and Reuters.

Reuters then shared the full list of more than 500 accounts after it was compiled by Ghost Data; Twitter reviewed the accounts and permanently suspended them for violating its rules, Twitter’s Carswell said.

Andrea Stroppa, founder of Ghost Data, said the study was an attempt to assess Twitter’s ability to remove such content. He said he personally funded the research after receiving a tip about the topic.

Twitter suspended more than 1 million accounts last year for child abuse content, according to the company’s transparency report.

“There is no place online for this type of content,” a spokesman for carmaker Mazda USA said in a statement to Reuters, adding that the company is now banning its ads from appearing on Twitter profile pages in response.

A Disney spokesperson called the content “reprehensible” and said the company has “redoubled our efforts to ensure that the digital platforms we advertise on, and the media buyers we use, strengthen their efforts to prevent such errors from recurring.”

A spokesperson for Coca-Cola, whose tweets appeared to be promoted on an account tracked by researchers, said it does not condone content associated with its brand and said “any violation of these standards is unacceptable and is taken very seriously.”

NBCUniversal said it has asked Twitter to remove ads related to inappropriate content.

Code words

Twitter is hardly alone in failing to police child safety online. Child welfare advocates say the number of known images of child sexual abuse has grown from thousands to millions in recent years, as predators use social networks including Facebook and Instagram to target victims and share explicit images.

Among the accounts identified by Ghost Data, nearly all of the traffickers of child sexual abuse content marketed the material on Twitter, then instructed buyers to reach them on messaging apps such as Discord and Telegram to complete payment and receive archived files, which were stored on hosting platforms such as New Zealand-based Mega and US-based Dropbox, the group reported.

A Discord spokesperson said the company has banned one server and one user for violating its rules against sharing links or content that sexualizes children.

Mega said the link referenced in the Ghost Data report was created in early August and deleted shortly afterward by the user, whom it declined to identify. Mega said that two days later the user’s account was permanently closed.

Dropbox and Telegram said they use various tools to moderate content but did not provide additional details on how they would respond to the report.

Yet the backlash from advertisers threatens Twitter’s business, which makes more than 90 percent of its revenue by selling digital ad placements to brands looking to market products to the service’s 237 million daily active users.

Twitter is also battling in court with Tesla CEO and billionaire Elon Musk, who is trying to back out of a $44 billion (roughly Rs. 3,60,300 crore) deal to buy the social media company over complaints about spam accounts and their effect on the business.

A February 2021 report by a team of Twitter employees concluded that the company needed to invest more in identifying and removing widespread child abuse content, with the company having a backlog of cases to review for possible reporting to law enforcement.

“While the amount of (child sexual abuse content) has grown rapidly, Twitter has not invested in technology to detect and manage the increase,” according to the report, which was prepared by an internal team to provide an overview of the state of child exploitation content on Twitter and to obtain legal advice on proposed policies.

“Recent reports about Twitter provide an outdated, fleeting look at just one aspect of our work in this space and are not an accurate reflection of where we are today,” Carswell said.

Traffickers often use code words like “cp” for child pornography and are “deliberately as vague as possible,” according to the internal documents. The more Twitter cracks down on certain keywords, the more users resort to ambiguous text, which “is difficult (for Twitter) to automate against,” the documents said.

Ghost Data’s Stroppa said such tactics complicate efforts to find the content, but noted that his small team of five researchers, without access to Twitter’s internal resources, was still able to find hundreds of accounts within 20 days.

Twitter did not respond to a request for further comment.

© Thomson Reuters 2022


