Social media platform X (formerly Twitter) has quietly introduced a feature in recent days that publicly displays key background information about user accounts. The tool adds a small button to profile pages that reveals the country or region where an account is based, how many times the user has changed their handle, when the account was created and where the app was originally installed. Previously hidden information is now accessible to all users.
A platform long known for allowing people to craft alternate identities and adopt personas from across the world abruptly lifted the curtain — and a sprawling ecosystem of fabricated accounts was exposed.
Fake ‘Gaza influencers’ uncovered
Nikita Bier, X’s head of product, signaled in October that the company planned to roll out the feature. At the time, it appeared to be a routine anti-spam update. Once the feature went live and users began clicking the “About this account” button, however, the scope of the fraud became clear.
Users discovered a network of accounts posing as Palestinians in Gaza who claimed to be reporting under bombardment and sharing emotional personal stories. Many were not based in Gaza at all. Some accounts shut down almost immediately after their listed locations were exposed.
One account that described its owner as a witness in Rafah “living under airstrikes” was shown to be posting from Afghanistan. A supposed nurse in Khan Younis turned out to be based in Pakistan. A man claiming to be a father of six in a displacement camp was based in Bangladesh. A “poet from Deir al-Balah writing by candlelight” was located in Russia.
The revelations went far beyond a few isolated cases. Entire bot farms appeared to have been operating for months. Users posing as “North Gaza survivors” were actually in Pakistan. Self-described “Rafah residents” were in Indonesia. Accounts claiming to be members of Hamas’s Nukhba unit uploaded videos from Malaysia. Even fake profiles presenting themselves as IDF soldiers — “officers,” “snipers” and “reservists” supposedly operating in Gaza — were traced to London.
Some users continued to insist they were in Gaza despite the contradiction with their displayed location. In one prominent case, a user named Moatasem Al-Daloul posted a video of himself walking through what he said were destroyed homes in the Gaza Strip. It was not immediately possible to verify whether the video was authentic or had been filmed against an artificial backdrop. Grok, X’s built-in artificial intelligence assistant, indicated that the platform’s displayed geographic data was accurate.
A push for transparency
The new feature allows users to choose whether to show their country or a more general region, similar to an option long available on Instagram. On X, however, the information is more prominent and cannot be hidden once enabled.
According to foreign media reports, code analysts have found evidence that X is preparing another tool that would alert users when an account attempts to disguise its true location with a VPN. If implemented, it could make the remaining forms of manipulation on the platform far more difficult.
The changes raise broader questions about the future of online discourse. What happens when anonymity erodes and accounts that positioned themselves as eyewitnesses to conflict are revealed to be young people around the world with no connection to the events they describe? And what does this mean for social networks and their influence on political and social narratives shaping the lives of hundreds of millions of people?
