Social Media’s ‘Frictionless Experience’ for Terrorists

  

Category: Other

Via: hallux • 7 months ago • 10 comments

By: Lora Kelley - The Atlantic

These platforms were already imperfect. Now extremist groups are making sophisticated use of their vulnerabilities.

S E E D E D   C O N T E N T


The incentives of social media have long been perverse. But in recent weeks, platforms have become virtually unusable for people seeking accurate information.

Dangerous Incentives

“For following the war in real-time,” Elon Musk declared to his 150 million followers on X (formerly Twitter) the day after Israel declared war on Hamas, two accounts were worth checking out. He tagged them in his post, which racked up some 11 million views. Three hours later, he deleted the post; both accounts were known spreaders of disinformation, including the claim this spring that there was an explosion near the Pentagon. Musk, in his capacity as the owner of X, has personally sped up the deterioration of social media as a place to get credible information. Misinformation and violent rhetoric run rampant on X, but other platforms have also quietly rolled back their already lacking attempts at content moderation and leaned into virality, in many cases at the cost of reliability.

Social media has long encouraged the sharing of outrageous content. Posts that stoke strong reactions are rewarded with reach and amplification. But, my colleague Charlie Warzel told me, the Israel-Hamas war is also “an awful conflict that has deep roots … I am not sure that anything that’s happened in the last two weeks requires an algorithm to boost outrage.” He reminded me that social-media platforms have never been the best places to look if one’s goal is genuine understanding: “Over the past 15 years, certain people (myself included) have grown addicted to getting news live from the feed, but it’s a remarkably inefficient process if your end goal is to make sure you have a balanced and comprehensive understanding of a specific event.”

Where social media shines, Charlie said, is in showing users firsthand perspectives and real-time updates. But the design and structure of the platforms are starting to weaken even those capabilities. “In recent years, all the major social-media platforms have evolved further into algorithmically driven TikTok-style recommendation engines,” John Herrman wrote last week in New York Magazine. Now a toxic brew of bad actors and users merely trying to juice engagement has seeded social media with dubious, and at times dangerous, material that’s designed to go viral.

Musk has also introduced financial incentives for posting content that provokes massive engagement: Users who pay for a Twitter Blue subscription (in the U.S., it costs $8 a month) can in turn get paid for posting content that generates a lot of views from other subscribers, be it outrageous lies, old clips repackaged as wartime footage, or something else that might grab eyeballs. The accounts of those Twitter Blue subscribers now display a blue check mark—once an authenticator of a person’s real identity, now a symbol of fealty to Musk.

If some of the changes making social-media platforms less hospitable to accurate information are obvious to users, others are happening more quietly inside companies. Musk slashed the company’s trust-and-safety team, which handled content moderation, soon after he took over last year. Caitlin Chin-Rothmann, a fellow at the Center for Strategic and International Studies, told me in an email that Meta and YouTube have also made cuts to their trust-and-safety teams as part of broader layoffs in the past year. The reduction in moderators on social-media sites, she said, leaves the platforms with “fewer employees who have the language, cultural, and geopolitical understanding to make the tough calls in a crisis.” Even before the layoffs, she added, technology platforms struggled to moderate content that was not in English. After making widely publicized investments in content moderation under intense public pressure after the 2016 presidential election, platforms have quietly dialed back their capacities. At the same time, these platforms have deprioritized the surfacing of legitimate news from reputable sources in their algorithms (see also: Musk’s decision to strip out the headlines that were previously displayed on X when a user shared a link to another website).

Content moderation is not a panacea. And violent videos and propaganda have been spreading beyond major platforms, on Hamas-linked Telegram channels, which are private groups that are effectively unmoderated. On mainstream sites, some of the less-than-credible posts have come directly from politicians and government officials. But experts told me that efforts to ramp up moderation—especially investments in moderators with language and cultural competencies—would improve the situation.

The extent of inaccurate information on social media in recent weeks has attracted attention from regulators, particularly in Europe, where there are different standards—both cultural and legal—regarding free speech compared with the United States. The European Union opened an inquiry into X earlier this month regarding “indications received by the Commission services of the alleged spreading of illegal content and disinformation, in particular the spreading of terrorist and violent content and hate speech.” In an earlier letter in response to questions from the EU, Linda Yaccarino, the CEO of X, wrote that X had labeled or removed “tens of thousands of pieces of content”; removed hundreds of Hamas-affiliated accounts; and was relying, in part, on “community notes,” written by eligible users who sign up as contributors, to add context to content on the site. Today, the European Commission sent letters to Meta and TikTok requesting information about how they are handling disinformation and illegal content. (X responded to my request for comment with “busy now, check back later.” A spokesperson for YouTube told me that the company had removed tens of thousands of harmful videos, adding, “Our teams are working around the clock to monitor for harmful footage and remain vigilant.” A spokesperson for TikTok directed me to a statement about how it is ramping up safety and integrity efforts, adding that the company had heard from the European Commission today and would publish its first transparency report under the European Digital Services Act next week. And a spokesperson for Meta told me, “After the terrorist attacks by Hamas on Israel, we quickly established a special operations center staffed with experts, including fluent Hebrew and Arabic speakers, to closely monitor and respond to this rapidly evolving situation.” The spokesperson added that the company will respond to the European Commission.)

Social-media platforms were already imperfect, and during this conflict, extremist groups are making sophisticated use of their vulnerabilities. The New York Times reported that Hamas, taking advantage of X’s weak content moderation, has seeded the site with violent content such as audio of a civilian being kidnapped. Social-media platforms are providing “a near-frictionless experience for these terrorist groups,” Imran Ahmed, the CEO of the Center for Countering Digital Hate, which is currently facing a lawsuit from Twitter over its research investigating hate speech on the platform, told me. By paying Musk $8 a month, he added, “you’re able to get algorithmic privilege and amplify your content faster than the truth can put on its pajamas and try to combat it.”


Red Box Rules

Don't be a tool or you'll end up in the box



 

Hallux
PhD Principal
1 seeder Hallux 7 months ago

Freedom of speech is great for the writer; it tends to suck for the reader.

 
 
 
mocowgirl
Professor Quiet
1.1 mocowgirl replied to Hallux @1 7 months ago
Freedom of speech is great for the writer; it tends to suck for the reader.

Why?

 
 
 
Hallux
PhD Principal
1.1.1 seeder Hallux replied to mocowgirl @1.1 7 months ago

It should not take long to figure that out.

 
 
 
mocowgirl
Professor Quiet
1.1.2 mocowgirl replied to Hallux @1.1.1 6 months ago
It should not take long to figure that out.

Are you in favor of blasphemy laws?  

 
 
 
Right Down the Center
Senior Guide
1.2 Right Down the Center replied to Hallux @1 6 months ago

People decide to read it. People decide if they believe it.

 
 
 
Drakkonis
Professor Guide
2 Drakkonis 7 months ago

Don't really see a solution here. Social media would have to be heavily regulated, and no one is going to agree on who or what would or should do the regulating. Children being able to make their own gender-transitioning choices, for example. Fact or fiction? Truth or misinformation? How are we as a society going to solve this problem when terms such as "my truth" are an everyday part of what we've become? What if my belief that Hamas lies about as often as they breathe is labeled "misinformation" because my view doesn't hold the same "historical context" that the person or organization designated to regulate the conversation holds to be true? The solution could very well be worse than the problem. You know, like it is in Russia, China, North Korea and the rest.

 
 
 
cjcold
Professor Quiet
2.1 cjcold replied to Drakkonis @2 7 months ago
in Russia, China, North Korea

Social media is not the government.

Still looking for a site that doesn't allow far right wing liars.

It certainly isn't this one.

 
 
 
GregTx
PhD Guide
2.1.1 GregTx replied to cjcold @2.1 7 months ago

And yet here you are...

 
 
 
Drakkonis
Professor Guide
2.1.3 Drakkonis replied to cjcold @2.1 7 months ago
Social media is not the government.

Thanks for the clarification. 

Still looking for a site that doesn't allow far right wing liars. It certainly isn't this one.

I've noticed that what often gets far-left or far-right positions branded the work of "liars" is the political and philosophical biases and assumptions the accusers begin from. That doesn't explain it all, of course, but it seems to explain a lot. Take what goes on, and the accusations traded back and forth, concerning the US/Mexican border as an example.

In any case, you seem to have missed my point, even though you quoted the relevant examples I gave. How do we get rid of the misinformation without becoming like Russia, China, etc.?

 
 
 
Drinker of the Wry
Junior Expert
3 Drinker of the Wry 7 months ago
Social media is not the government.

No shit?

Still looking for a site that doesn't allow far right wing liars.

Let me know if you find one and I'll let you know if I find a site that doesn't allow far left wing liars.

 
 
