SkinnyTok Content Still Going Strong as ‘Skinni Tokk’
What? #SkinnyTok refers to a slew of social media content that promotes “thin” beauty ideals, often pushing unhealthy extremes through risky eating habits and exercise routines.
So What? Despite efforts by platforms such as TikTok and Instagram to remove or filter out potentially harmful content, especially for teen users, “thinfluencers” (as they’re called) are still finding ways to promote their content by using alternate or scrambled versions of the hashtag, namely “skinni tokk.”
Now What? If you have search filters enabled on your teen’s phone, be sure to add “skinni tokk” to the list. But also ask your teen why this sort of content appeals to her. Is it because she’s feeling self-conscious, or because she wants to look like a specific celebrity or influencer? What sort of self-esteem issues does she think the people who promote this content might have? And what does the Bible have to say about taking care of our bodies or focusing on how we look?
‘ICL TS PMO Copypasta’ Among Top Slang of 2025
What? According to Know Your Meme, “ICL TS PMO copypasta” (or variations, such as TSPMO ICL) strings several slang terms together. It’s essentially a tongue-in-cheek way of calling people out for overusing slang.
So What? Two of the acronyms in the phrase (TS and PMO) are generally spelled out to include profanities (the s- and p-word, respectively). There are also concerns over cultural appropriation, since some of the individual terms have been adopted (and misinterpreted) from African American Vernacular English.
Now What? Talk to your teens about how often they use these shorthand phrases and why. In the early days of texting and social media, when messages had set character limits or you could be charged per character, the shorthand made sense. But in today’s near-limitless technological landscape, it can come across as lazy and trite, so your teen may want to consider how they’re presenting themselves online.
Chatbots Can’t Handle Mental Health Crises
What? Many popular chatbot companies, including OpenAI, Character.AI and Meta, say they have “safety features in place” to protect users who ask artificial intelligence for help with mental health challenges. However, according to The Verge, those same safety features can be unreliable.
So What? In testing various chatbots, Verge writer Robert Hart found that, when prompted by a user asking to talk about a mental health crisis, some refused to engage with the topic, others tried to redirect the conversation and still others failed to provide local crisis hotline information. Experts note that if someone is struggling with thoughts of suicide or self-harm, these types of responses can exacerbate the issue.
Now What? Not all chatbots failed; Hart noted that ChatGPT and Gemini (Google’s AI chatbot) both responded with clear and accurate resources on the first try. However, it’s a good reminder that if your child or someone they know is in crisis, don’t go to AI for help: This is a matter for real-life people, not computer programs.