In 2025, social networks are expected to "change their skin" and return to their original purpose: platforms for the common good of society, or "prosocial media." Technology news sites predict this trend, and it also seems to be the wish of many users fed up with the chaos of today's Internet.
In terms of content, today's mainstream social media platforms are little more than mirrors of one another: they offer the same features and sometimes even the same content. This lack of innovation has drained social media of its novelty.
As for impact, anyone who uses social media is by now familiar with its harmful effects on mental health. A growing body of psychological and sociological research shows that time spent on platforms like Instagram and TikTok increases the risk of anxiety, depression, negative body image, and low self-esteem. These findings circulate so widely online that they read like common knowledge. The good news is that many optimists believe social media users will learn to save themselves. Many young people are already wondering what their lives, health, emotions, and mental well-being would look like without social media. In 2023, technology research firm Gartner predicted that 50% of users would abandon or significantly reduce their social media use by 2025. Looking at popular trends online, Jessica Byrne of thred.com believes this prediction is likely to come true. Of course, millions of users will not delete their accounts overnight; the change will begin with users no longer interacting continuously on the platforms. Byrne believes Generation Z (born between 1996 and 2012) will lead this change.
Longing for a "pre-Internet" world they never actually experienced, Gen Z is reviving hobbies that faded as people moved their lives online. Young people are using social media to join running groups and book clubs, finding new ways to connect with their peers beyond liking and sharing posts. They are searching for meaning in life, and their curiosity gives them an intrinsic drive to seek out new experiences.
Writing in Wired, Audrey Tang calls this shift prosocial media: media that not only captures users' attention but also promotes mutual understanding among them, empowers all voices, and fosters the ability to listen across differences. One of the first steps social networks themselves have taken in recent years is Community Notes, a feature that lets people collectively add context to potentially misleading posts.
In Taiwan, Cofacts, a crowdsourced fact-checking platform, takes this concept further by empowering users to add context to information shared in private groups. Launched in 2017 by the civic tech community g0v, the platform was successfully rolled out in Thailand in 2019. Research from Cornell University found that Cofacts processed misinformation queries faster and more accurately than professional fact-checking sites.

Prosocial media also addresses the concentration of control in the hands of a few tech giants by using decentralized social media protocols that let content flow seamlessly between different platforms. Last year, for example, Meta's Threads joined the Fediverse, a group of interoperable platforms that includes Mastodon and WordPress; Threads users can follow accounts and post across other networks in the group. In February 2024, Bluesky, another decentralized platform that began as a Twitter-funded project under founder Jack Dorsey, opened to the public. Decentralization promises a more democratic online space in which people have more control over their data and their experiences, a factor that matters more and more to users: a University of Cincinnati study found it is a major reason people decide to join a decentralized network like Mastodon. All of this is still speculation. Everyone has a million reasons to stay on social media. But it is entirely possible that these changes will take hold by 2025 and persist at least until the next big thing comes along.
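The interoperability that Threads gains from joining the Fediverse comes from the ActivityPub protocol, in which servers exchange posts as JSON "activities" using the ActivityStreams vocabulary. A minimal sketch of the shape of such a message (the account URL and post text below are hypothetical examples, not real accounts):

```python
import json

def make_note_activity(actor: str, content: str) -> dict:
    """Build a minimal ActivityStreams "Create" activity wrapping a "Note",
    the kind of object federated servers (Mastodon, Threads, etc.) exchange."""
    note = {
        "type": "Note",
        "attributedTo": actor,  # URL identifying the author's account
        "content": content,
        # Addressing the special "Public" collection makes the post public.
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
    }
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "actor": actor,
        "object": note,
    }

# Hypothetical account on a hypothetical Fediverse server.
activity = make_note_activity(
    "https://example.social/users/alice",
    "Hello from a federated post!",
)
print(json.dumps(activity, indent=2))
```

Because every participating server understands this common format, a post created on one platform can be delivered to and rendered by followers on an entirely different one.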
Meta's "AI users" will also have profile pictures, introduce themselves, and post and share AI-generated content on Facebook and Instagram, which together count some 5 billion users worldwide. A future in which humans interact with algorithms wearing human faces is emerging. Meta's move is reportedly aimed at boosting engagement and retaining young users. Connor Hayes, vice president of generative AI products at Meta, said the company's top priority over the next two years is to make its apps "more fun and engaging," including by making interactions with AI more social. Meta's reliance on AI is no secret, but in an age when AI-generated content is so pervasive that real and fake are hard to tell apart, Mark Zuckerberg's wish for humans to socialize with AI only deepens people's worries.
"Without strong safeguards, platforms risk amplifying false narratives through AI-driven accounts," Becky Owen, global head of marketing and innovation at creative agency Billion Dollar Boy, told the Financial Times. Owen, who previously headed creator innovation at Meta, stressed that while AI characters could become a "new creative entertainment format," they also risk flooding platforms with low-quality content, undermining the creative value of human creators and eroding user trust. "Unlike human creators, AI characters do not have human life experiences, emotions, or the capacity for empathy," she added. Indeed, over the past few years the internet has been flooded with low-quality AI-generated content posted everywhere to attract engagement. Analysts have their own word for it: slop. Slop is low-quality content, both text and images, generated by AI primarily to attract advertising revenue and improve search engine rankings. AI may help build a better future, but first we have to worry about filtering out the junk it creates. The "social media for society" movement will not go far if AI bots with no humanity or emotion are everywhere. What the world needs now, as CNET tech reporter Katelyn Chedraoui puts it, is a better AI labeling system. Some flagging and warning measures have been implemented, such as "AI content" tags or watermarks on photos, but they are not enough.
In the age of AI content explosion, everyone needs to learn how to protect themselves and hone their skills in spotting AI-generated content. But as AI continues to improve, even experts will have a hard time accurately assessing images. What’s worrying, Chedraoui says, is that improving the visibility of labels is at the bottom of many AI companies’ priority list. “2025 should be the year we develop a better system for recognizing and labeling AI images,” she urges.
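One direction such a labeling system could take is a machine-readable provenance field attached to each post, rather than a watermark baked into pixels. The sketch below borrows the "trained algorithmic media" term from the IPTC digital source type vocabulary, which is used in practice to mark synthetic media; the post structure and helper functions here are illustrative assumptions, not any platform's real API:

```python
# IPTC NewsCodes "digital source type" term for fully AI-generated media.
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def label_post(post: dict, ai_generated: bool) -> dict:
    """Return a copy of the post with a provenance field set when the
    content is AI-generated. The post schema here is hypothetical."""
    labeled = dict(post)
    if ai_generated:
        labeled["digitalSourceType"] = TRAINED_ALGORITHMIC_MEDIA
        labeled["disclosure"] = "AI content"  # human-readable badge text
    return labeled

def needs_ai_badge(post: dict) -> bool:
    # A client can render a visible warning whenever the field is present,
    # regardless of how the image or text itself looks.
    return post.get("digitalSourceType") == TRAINED_ALGORITHMIC_MEDIA

post = label_post({"id": "12345", "text": "A sunset over the ocean"},
                  ai_generated=True)
print(needs_ai_badge(post))  # True
```

The point of a structured label like this is that detection no longer depends on a human (or even an expert) judging the content by eye: any client that receives the post can check the field and surface the disclosure consistently.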