How TikTok and Instagram hook Gen Z

Short-form video is the medium of our time. The average teenager spends hours a day on TikTok and Instagram Reels, which are the main sources of news and entertainment for the 18-24 demographic.

Adam Mosseri, head of Instagram, administers the most important algorithm in the English-speaking world. He is undoubtedly as consequential as David Zaslav or Rupert Murdoch – respectively the CEO of Warner Bros Discovery and the owner of News Corp – but his name rarely makes headlines.

On Mosseri’s Instagram, videos proven to hold users’ attention will be shown to more people. “Watch-time” is the basic currency of his algorithm. This has forced creators, many of whom earn a living on the platform, to abide by a golden formula: hook, secondary hook, payoff.

The algorithm has spawned an entire coaching industry in which aspiring influencers pay veteran creators for crash courses in perfecting the formula. Each course teaches more or less the same thing: promise the viewer an answer to a question, keep promising, then answer at the end. Better yet, don’t answer it – promise to answer it in the next video.

Where watch-time is the quantitative component of these algorithms, “trends” are the qualitative. If a particular word, image or sound appears to be trending among a certain data demographic, unrelated content will be algorithmically choked out of that demographic’s feed. This forces creators to cluster their content around proven trends.

A trend is never a story. It is always a concept or feeling that can be immediately communicated within three seconds, because it is generally understood that creators have only three seconds to hook users before they scroll away. As influential creator coach Dominik Rieger will often remark: “The viewer must immediately know ‘This is for me’.”

When Sean “Diddy” Combs trends, as he often does, it is never regarding a piece of evidence or a development in one of his trials. What is, on paper, a story about sexual coercion and exploitation of power is translated by the algorithm into a static portrait: a shame-faced Puff Daddy slick with baby oil. Searching “Diddy” will take you to a trove of baby oil-related brainrot and barely a single piece of factual reporting. Diddy’s actions did not create a story to be followed but a crude vignette to be gawked at.

When US president Donald Trump’s shocking birthday letter to Jeffrey Epstein was published by The Wall Street Journal, it did not become a major trend on TikTok or Instagram because the only way to parse the story was by reading the letter itself, which takes more than three seconds. As far as the algorithm is concerned, if an event’s essence cannot be compressed into a three-second span, it may as well have never happened. The proliferation of short-form video has created a media environment structurally hostile to sequential reasoning.

Young people’s attention is guided by an ever-narrowing algorithmic spotlight. Stories that are too big to be rendered by the spotlight are left in pitch darkness, and the people we allow to control the algorithms are not interested in changing that. In fact, Mosseri has been open about his efforts to speed up trends: “I want us to be better at trends. It takes still too long for things to pop on Instagram.”

The political implications of this media environment are clear. If short-form video platforms continue to transmute real-world events into less-than-superficial spectacles, the rich and powerful need not manually censor anything. If all chains of cause and effect have found their terminus in the platform algorithms, and if public consciousness is held inert by the same three-second hooks, what will be worth censoring?

The dichotomy of Turkey

On 1 August, a significant prisoner swap between the USA and Russia took place in Turkey’s capital, Ankara, and 26 prisoners were freed, including the peerless American reporter Evan Gershkovich. In playing a central role in the most extensive prisoner exchange since the end of the Cold War, Turkey’s National Intelligence Organization (MIT) won accolades. The operation reminded the world that NATO membership has been the cornerstone of Turkey’s defence and security policy since the country joined the bloc in 1952.

Yet over the next 24 hours, Turkey’s Information and Communication Technologies Authority barred access to Instagram without providing a specific reason. Reports suggested the ban was a response to Instagram removing posts related to the death of Hamas leader Ismail Haniyeh, a close ally of Turkey’s strongman president.

During his 21-year reign, Recep Tayyip Erdoğan has established himself as the most relentless implementer of censorship in Turkish history. Twitter, Wikipedia, OnlyFans, YouTube, Google Sites, Blogger, Blogspot, Google Docs, SoundCloud, WordPress, Facebook, Reddit, Google Drive, Dropbox, WhatsApp, Voice of America, Deutsche Welle, and Roblox have been among the victims of Erdoğan’s censorship.

Erdoğan has always oppressed free voices by tagging them as fascists. He has attacked and imprisoned all sectors of Turkish society under that accusation – except for Turkey’s actual fascistic groups, which form part of his far-right governing coalition.

On 5 August, Erdoğan accused Mark Zuckerberg’s Meta of “digital fascism.” But five days later, Turkey restored access to Instagram. The nine-day block reminded people of the arbitrary nature of Erdoğan’s regime, which is built on macho posturing to audiences at home and bullying “foreign powers” in the name of the Turkish nation.

Turkish users regained access to Instagram after the country’s minister of transport and infrastructure claimed Instagram had accepted that “our demands… will be met”. Yet Instagram continues to remove posts mourning the death of Haniyeh: nothing has changed.

Three days after Instagram was reinstated, a woman who criticised Erdoğan’s ban in a YouTube interview was arrested for “insulting Turkey’s President”. She was sent to prison, where she remains at the time of writing.

For some, Erdoğan’s Instagram ban was but a pointless act. I see it as part of a more ominous tactic. Banning Instagram solidifies the idea that censorship in Turkey is all about Erdoğan’s whims. The strongman can cut access to Google, Amazon, Netflix, iCloud, and other vital internet services if and when he feels like it. He’s all-powerful: no legal entity can stop him from doing whatever he wants.

An insidious and unlegislated form of policing?

On a housing estate somewhere in north-west London, a dispute said to be between rival groups of young men apparently rages on. From this quagmire of social deprivation emerges Chinx (OS) who, released from an eight-year custodial sentence at the four-year mark, starts dropping bars as if his very life depended on it. And, in a way, it does. Because for boys like Chinx – young, black and poor – there is only one way out, and that is to become the next Stormzy. Only, two behemoths stand in his way: the Metropolitan Police and their apparent “side man” Meta, parent company of Facebook and Instagram.

In January 2022, Chinx posted a video clip of a drill music track called Secrets Not Safe. Following a request by the Metropolitan Police arguing that the post could lead to retaliatory gang-based violence, Meta removed the post and deleted Chinx’s Instagram account.

Meta’s decision has now been reviewed by the Oversight Board, a quasi-independent adjudicator conceived to police the online giant’s application of its own policies but funded by the company.

The Board recently condemned the company’s decision to remove Chinx’s post and delete his account as not complying with Meta’s own stated values and with wider human rights considerations.

As part of its review of Meta’s decision, the Board made a Freedom of Information Act request to the Met about its requests to remove content from various online platforms. While much of the Met’s response was unhelpful, bordering on obstructive, what it did disclose was troubling.

In the year to the end of May 2022, the Met asked online platforms, including Meta, to remove 286 pieces of content. Every single one of those requests related to drill music. No other music genre was represented. Some 255 of the Met’s requests resulted in the removal of content, a success rate of nearly 90%.

The decision makes for illuminating, if worrying, reading when one considers the potential chilling impact Meta’s actions may have on the freedom of expression of an already suppressed, marginalised and some would argue, over-policed section of our community. Four areas of concern emerge.

Law enforcement access to online platforms

Instagram, in common with other applications, has reporting tools available to all users to make complaints. While law enforcement organisations may use such tools, they also have at their disposal what amounts to direct access to these online platforms’ internal complaints procedures. When law enforcement makes a request to take content down, Meta deals with it “at escalation”. This triggers an investigation by Meta’s internal specialist teams, which includes analysis of the content to determine whether it contains a “veiled threat”.

This case demonstrates, in my view, a worrying pattern: the level of privileged access that law enforcement has to Meta’s internal enforcement teams, as evidenced by correspondence the Board saw in this case.

Lack of evidence

What became clear during the Board’s exposition of the facts was that, despite the apparent need for a causal link between the impugned content and any alleged “veiled threat” or “threat of violence”, law enforcement advanced no evidence in support of its complaint. Given that, as all parties appeared to accept, the content itself was not unlawful, this is shocking.

On the face of it then, Meta has a system allowing for fast-tracked, direct access to their complaints procedure which may result in the removal of content, without any cogent evidence to support a claim that the content would lead to real life violence or the threat thereof.

This omission is particularly stark because, in this case, the violence alluded to in the lyrics took place approximately five years before the clip was uploaded. This five-year gap, as the Board commented, made it all the more important for real and cogent evidence to be cited in support of removing the content. We ought to remind ourselves that the Board found no evidence in this case of a threat, veiled or otherwise, of real-life violence.

Lack of appeal

Meta’s internal systems dictate that if a complaint is taken “at escalation” – as all government requests to take down content are, including those made by the Met Police – there is no internal right of appeal for the user. Chinx (OS) and the other accounts affected by this decision had no right to appeal it, either with Meta or with the Oversight Board. The result is that a decision that may, in some cases, mean the loss of an income stream as well as an erosion of the right to express oneself freely can go unchallenged by the user. In fact, as Chinx (OS) revealed during an interview with BBC Radio 4’s World at One programme, he was never told why his account had been deleted and the content removed.

The Board itself commented that: “The way this relationship works for escalation-only policies, as in this case, brings into question Meta’s ability to independently assess government actors’ conclusions that lack detailed evidence.”

Disproportionality

Each of the three shortcomings in Meta’s procedures revealed by the Board is worrying enough on its own; but, coupled with the disproportionate impact this system has upon black males (the main authors and consumers of this content), it veers dangerously close to systemic racism.

The findings of the Oversight Board’s FOI request on the Met’s activities in relation to online platforms clearly back this up.

The Digital Rights Foundation argues that while some portray drill music as a rallying call for gang violence, it in fact serves as a medium for youth, in particular black and brown youth, to express their discontent with a system that perpetuates discrimination and exclusion.

An insidious and backdoor form of policing

The cumulative effect of Meta’s actions arguably amounts to an insidious and unlegislated form of policing. Without the glare of public scrutiny, with no transparency and no tribunal to test or comment on the lack of evidence, the Met have succeeded in securing punishment through the back door (removal of content could be argued to be a punishment, given that it may lead to loss of income) against content that was not, in and of itself, unlawful.

As the Board pointed out in their decision, for individuals in minority or marginalised groups, the risk of cultural bias against their content is especially acute. Art, the Board noted, is a particularly important and powerful expression of “voice”, especially for people from marginalised groups creating art informed by their experiences. Drill music offers young people, and particularly young black people, a means of creative expression. As the UN Special Rapporteur in the field of cultural rights has stated, “…representations of the real must not be confused with the real… Hence, artists should be able to explore the darker side of humanity, and to represent crimes… without being accused of promoting these.”

The right to express yourself freely, even when what you say may offend sections of our community, is one of the areas that truly test our commitment to human rights.

Six sites blocked by China’s Great Firewall


The New York Times is blocked in China.

Last month, China’s Ministry of Industry and Information Technology unveiled a new 14-month campaign to tighten control over the internet. The Chinese government is specifically concerned about virtual private networks (VPNs), which punch holes through the country’s so-called Great Firewall. Without VPNs, China’s internet users are unable to browse some of the world’s largest websites. So the campaign made big news around the world.

But Charlie Smith of the 2016 Index on Censorship Digital Activism Award-winning GreatFire, an anonymous collective fighting Chinese internet censorship, told us that the VPN campaign is “actually kind of being mis-reported by the press, in general. It’s not as big a deal as it is being made out to be. We’d make a lot of noise if it was a big deal.”

Here are just six sites that are regularly blocked by China’s Great Firewall:

  1. YouTube

YouTube was first blocked in March 2008 during riots in Tibet and has been blocked several times since, including on the 25th anniversary of the Tiananmen Square protests in 2014. At the time of the Tibetan riots, many in China speculated that the YouTube ban was an attempt by the government to filter access to footage that a Tibetan exile group had released.

  2. Instagram

It’s typical for China’s internet censors to go into overdrive during politically sensitive events, which is why it came as no surprise that Instagram was blocked in 2014 after pro-democracy protests in Hong Kong. To some, the block on Instagram during the protests exposed Beijing’s fears that people in the mainland might be inspired by the events taking place in Hong Kong. While parts of the site may occasionally be accessible, it is still listed as 92 percent blocked.

  3. The New York Times

In late December 2016, the Chinese government made waves by ordering Apple to remove the New York Times app from the Chinese app store. According to the newspaper, the app was removed on 23 December under regulations prohibiting apps from engaging in activities that endanger national security or disrupt social order. The New York Times website as a whole has been blocked in China since 2012, after the newspaper published an article on the wealth of then prime minister Wen Jiabao and his family. People turned to the NYT app after the blockage in order to maintain access to the paper’s stories. Now that the app is gone as well, the New York Times is only available to those who downloaded the app before its removal from the store.

  4. Bloomberg

In June 2012, the popular business and financial news site published a story on the multimillion-dollar wealth of then vice president Xi Jinping’s extended family. Considering the story too invasive, the Chinese government blocked Bloomberg and has yet to reopen the site to the public. At the time, China was going through a period of transition, as power shifted from then president Hu Jintao to Xi.

  5. Twitter

Censors in China blocked access to Twitter in June 2009 in anticipation of the 20th anniversary of the pro-democracy protests in Tiananmen Square. The move seems to reflect the government’s anxiety about the anniversary and the sensitive memories that come with it. The blocking of Twitter has also allowed for the rise of Weibo, a censored Chinese Twitter clone, which quickly became one of the country’s most popular apps.

  6. Reuters

One of the more recent bans by the Chinese government targeted the international news agency Reuters. In March 2015, the organisation announced that both its English and Chinese sites were no longer reachable in the country. China has blocked media outlets like Reuters in the past, but those moves have always come after the release of a controversial story. In the case of Reuters, the ban seemed to come out of nowhere, and the reason behind it remains unclear.
