Flagging women’s health posts is a dangerous practice that muzzles important conversations and discourages people from seeking information and support.
Campaigners point out that essential information on women’s health is becoming harder to find, even as genuinely obscene accounts posting provocative pictures remain on the same social networks.
Charities and health activists are angry at Instagram, Facebook and the other digital companies responsible for these problems. A picture is worth a thousand words, they say, and yet these companies delete posts containing valuable health information while allowing genuinely inappropriate photographs to remain online.
Experts consider social platforms an important and useful channel for sharing information about women’s and young people’s health and sexual wellbeing. When reliable voices can post freely, trustworthy information spreads further and fewer people fall for misinformation.
According to a recent study, useful health posts, such as those illustrating the warning signs of cancer, are routinely flagged as inappropriate or obscene.
Here’s why this censorship needs to stop:
Stigma and Shame: When discussions of menstruation, breasts and other bodily functions are censored, social media reinforces the message that women’s health issues are shameful and taboo.
Lack of Awareness: Flagging these posts prevents people from learning about critical health matters. Many women’s health issues depend on timely detection and prevention.
Unequal Treatment: There is a paradox here. Women’s health posts are flagged as violations even when comparable content about male bodies is not. The result is an information vacuum that feeds gender bias.
In a survey of more than 50 organizations conducted by the CensHERship campaign, nine out of ten women’s health account holders said they had faced some form of censorship within the last year.
Social media sites use algorithms to cull, delete, or censor content they deem unfit. Activists assert, however, that these automated systems are biased: they label certain words related to female anatomy as “inappropriate” without understanding the context.
For example, CensHERship found posts showing women performing breast self-checks classified as ‘prostitution’, and Instagram was reported to have removed an educational account devoted to discussions about sex.
One breast cancer awareness campaign was barred from promoting posts that showed female nipples, even though images of male nipples are permitted. Gynecological cancer awareness campaigns, meanwhile, have had accounts banned simply for using the word ‘vagina’.
Most of those affected (90%) reported being blacklisted or banned on Instagram, and about a quarter on Facebook as well. Similar problems were reported on other platforms, including TikTok and LinkedIn.
Here are some things that can be done:
Social media platforms must update their algorithms to distinguish educational content from genuine pornography. Users can appeal wrongly flagged posts and push for better moderation policies to be introduced.