News Round-Up: EU Fines Musk's X, US Visa Checks Target Censorship, and Meta Bans Under-16 Australians
Every week, the editorial team of Freedom Research compiles a round-up of news that caught our eye or highlights under-reported aspects of stories that deserve more attention.
Over the past week, the following topics attracted our attention:
The EU Fines Elon Musk’s X €120 Million
US Tightens Work Visa Checks over Censorship Concerns
Privacy Risks of AI Chatbot Conversations
Instagram, Facebook Expel Underage Australians
US Congress Goes Full Parent Mode on Social Media and Privacy
The EU Fines Elon Musk’s X €120 Million
On Friday, the European Commission fined Elon Musk’s company X €120 million, the first fine the EU has imposed under its content moderation law. The decision is expected to deepen tensions with the US over EU digital rules and has drawn criticism from US Vice President JD Vance, who condemned it as a fine for “not engaging in censorship,” Politico writes.
The European Commission finds that social media platform X has violated its transparency obligations as a very large online platform under the EU Digital Services Act (DSA). The Commission considers that X’s “blue tick” design is misleading, as it was changed from a user verification feature to a paid feature. The Commission also finds that X’s advertising library is not transparent and does not allow researchers access to public data.

Compared to previous fines imposed by Brussels on technology companies, the current fine on X is considered moderate, but it is only part of the EU’s investigation into X launched two years ago under the DSA. Other parts of the investigation are still ongoing, such as X’s efforts to prevent the spread of illegal content and combat information manipulation.
Henna Virkkunen, The European Commission’s Executive Vice President for Tech Sovereignty, compared the EU’s decision on X with the decision on TikTok, which was also published on Friday. The TikTok ad library investigation was closed without a fine because the company promised to change the design of its service. Virkkunen believes that the goal is not to impose fines, but to ensure the enforcement of digital law, and if a company complies with EU rules, there is no need for a fine.
The DSA allows companies to be fined up to 6% of their global annual turnover. Although X’s global turnover is estimated at a few billion dollars, the combined turnover of Musk’s companies is much higher. In any case, Virkkunen considers the fine imposed on X to be “proportionate.” It was calculated taking into account “the nature of these infringements, their gravity in terms of affected EU users, and their duration.”
US officials have repeatedly criticized the DSA, calling it a censorship law and threatening countermeasures in the form of trade tariffs. Vice President Vance wrote on X: “Rumors swirling that the EU commission will fine X hundreds of millions of dollars for not engaging in censorship. The EU should be supporting free speech not attacking American companies over garbage.”
Secretary of State Marco Rubio also expressed his opinion on X, writing: “The European Commission’s $140 million fine isn’t just an attack on @X, it’s an attack on all American tech platforms and the American people by foreign governments. The days of censoring Americans online are over.”
Henna Virkkunen remained unmoved by the US criticism, stating that “The DSA has nothing to do with censorship, this decision is about the transparency of X.” Commission spokeswoman Paula Pinho added that no one in the EU agrees with how some Americans view European laws. “It’s not about censorship, and we have repeated several times from this podium, so on this we really agree to disagree on how it is perceived,” Pinho said at a press conference.
US Tightens Work Visa Checks over Censorship Concerns
On December 2, the administration of US President Donald Trump announced that it would tighten the conditions for H-1B visas. Diplomats have been asked to review the CVs or LinkedIn profiles of H-1B visa applicants and their accompanying family members to determine whether they have been involved in censorship, writes Reuters. Those who have worked in fields involving misinformation, disinformation, content moderation, fact-checking, or online security may be denied visas.
A letter sent to consulates states: “If you uncover evidence that an applicant was responsible for, or complicit in, censorship or attempted censorship of protected expression in the United States, you should pursue a finding that the applicant is ineligible.” In the letter, the administration confirmed that while all visa applicants are now subject to the rule, H-1B applicants must undergo additional screening. This is because H-1B visa holders often work in the technology sector, including social media or financial services, which are associated with restrictions on freedom of speech.

For technology companies, which were major supporters of Trump and recruit many employees from countries such as India and China, H-1B visas are extremely important. These visas are only granted to highly qualified employees. Now, however, consular officials must thoroughly investigate the work experience of all applicants and ensure that none have been involved in activities that restrict freedom of speech. The rules apply to both new and repeat applicants.
According to a spokesperson for the State Department, the country no longer supports foreigners coming to the US to work as censors and restrict Americans’ freedom of speech. The spokesperson emphasized that the president himself had once been a victim of such abuse, when social media platforms locked his accounts. The president does not want his fellow citizens to suffer the same fate, and allowing foreigners to engage in censorship would insult and harm Americans.
The Trump administration has made freedom of speech a key part of its foreign policy, especially when it comes to the silencing of conservative voices. US officials have repeatedly condemned the silencing of right-wing politicians in Europe, for example in countries such as Romania, Germany, and France. US officials have also argued that European authorities censor opinions, for example on immigration, in the name of combating disinformation. Just this spring, Secretary of State Marco Rubio threatened to deny visas to people who censor Americans’ speech, with officials seeking to regulate American technology companies singled out in particular.
Privacy Risks of AI Chatbot Conversations
According to a new study by Stanford University, chatbots (e.g., ChatGPT, Gemini) pose a significant privacy risk: artificial intelligence companies train their models on user conversations, corporate privacy policies are often unclear, and users do not understand their rights. In other words, the six largest artificial intelligence companies in the United States use user input to improve their models, enhance their capabilities, and gain market share. Some companies offer the option to opt out, while others do not.
More specifically, researchers compared the privacy policies, sub-policies, FAQs, and other guidelines of six US artificial intelligence companies: Amazon (Nova), Anthropic (Claude), Google (Gemini), Meta (Meta AI), Microsoft (Copilot), and OpenAI (ChatGPT). The documents were evaluated using the methodology of the California Consumer Privacy Act, as it is the most comprehensive privacy law in the US and all of the developers examined are required to comply with it. The researchers analyzed the language of the documents to determine whether user inputs to chatbots are used to train or improve LLMs; what personal data is collected, how it is stored and processed; and whether users can consent or refuse to have their conversations used in this way.

According to the author of the study, Jennifer King, a privacy and data protection policy researcher at Stanford’s Human-Centered AI Institute, all six artificial intelligence companies examined may use any conversations a user has with a chatbot to train their models, including information shared in uploaded files. Some developers retained information in their systems for an indefinite period of time, with only a few claiming that personal data was anonymized before being used to train AI. Some developers also allow humans to review conversation transcripts in order to improve AI training. It also emerged that companies with multiple products, such as Google, Meta, Microsoft, and Amazon, link user communications with other products from the same company that the consumer uses, such as search, sales/purchases, social media, etc.
According to researchers, privacy policies are very inadequate, usually written in complex legal language that is difficult for consumers to read and understand. Nevertheless, users must agree to these rules if they want to visit websites, use search engines, and communicate with language models. In addition, artificial intelligence developers have collected a huge amount of data from the public internet to train their models. However, this has also resulted in a large amount of personal data ending up in the datasets.
Researchers at Stanford University consider the situation to be particularly serious when users share personal biometric and health data with artificial intelligence. The authors gave an example: if you ask a chatbot for dinner ideas, specifying conditions such as low sugar content or heart-healthy options, the chatbot may infer that you have health problems and the algorithm may classify you accordingly. “This determination drips its way through the developer’s ecosystem. You start seeing ads for medications, and it’s easy to see how this information could end up in the hands of an insurance company. The effects cascade over time,” King described.
Another risk the researchers identified concerns children’s privacy. Most companies do not implement measures to prevent children’s input from entering the model’s training data. Google has announced that it only uses data from teenagers who have consented to it. Anthropic, however, claims that it does not collect data from children and does not allow users under the age of 18 to create accounts, although it does not require age verification. According to Microsoft, the company does collect data from minors, but does not use it in the development of language models.
According to Stanford researchers, there is not enough information about all of these practices in privacy policies. The authors therefore recommend that policymakers and developers establish federal privacy regulations and require users to explicitly consent to the use of their data for training artificial intelligence. The researchers also recommend that developers be required to filter personal data by default. “As a society, we need to weigh whether the potential gains in AI capabilities from training on chat data are worth the considerable loss of consumer privacy. And we need to promote innovation in privacy-preserving AI, so that user privacy isn’t an afterthought,” King said. For users, the researchers recommend that they carefully consider what information they share in AI conversations and, if possible, refuse to allow their data to be used for AI training purposes.
Instagram, Facebook Expel Underage Australians
A week before Australia’s social media ban for minors comes into full effect, Meta has begun “cleaning up” its platforms Instagram, Facebook, and Threads by removing accounts belonging to users younger than 16. The decision is expected to affect approximately 150,000 Facebook users and 350,000 Instagram (including Threads) accounts, according to the BBC.
Australia is the first country in the world to implement a social media ban for those under 16, a ban which will come into full effect on December 10. If a platform fails to take reasonable measures to comply, it could face a fine of up to 49.5 million Australian dollars (33 million US dollars).

According to a Meta spokesperson, the company is complying with the law and implementing it in several stages. At the same time, the company believes that a more effective, standardized, and privacy-preserving approach is needed. Meta believes that the government should require app stores to verify the age of users when they download an app and also ask for parental consent in the case of young people. This would mean that young people would not be required to verify their age on every app.
Meta began closing youngsters’ accounts on December 4, but promised that users would be able to download their posts, messages, and videos before their accounts are deactivated. If a user discovers they have been mistakenly identified as under 16, they can request a review and verify their age with a video selfie. Users can also submit a driver’s license or other official identification document.
The Australian government has consistently argued that the ban is intended to protect children from the harmful effects of social media. According to Communications Minister Anika Wells, some problems are to be expected at first, but the ban is still necessary to protect Generation Alpha (all children under the age of 15) and future generations. Wells believes that the law protects “Generation Alpha from being sucked into purgatory by the predatory algorithms described by the man who created the feature as behavioral cocaine”. She characterized youngsters as dopamine-addicted from the moment they acquire a smartphone and create a social media account.
In addition to Meta’s platforms, other social media platforms such as YouTube, X, TikTok, Snapchat, Reddit, Kick, and Twitch must also start verifying age. Some platforms, such as the video and photo sharing apps Lemon8 and Yope, are still being monitored to see if children are migrating to them. Yope has clarified that it is not a social media platform but a private messaging app similar to WhatsApp. Lemon8 has reportedly said it will prohibit users under the age of 16 from using its platform, even though the ban does not appear to apply to it.
Critics say that banning minors from social media could completely isolate certain groups that depend on the platforms for communication. The ban may also force children to move to less regulated corners of the internet.
The Australian social media ban is also being closely monitored by senior European Union officials, who are citing it as a potential example.
US Congress Goes Full Parent Mode on Social Media and Privacy
In Washington, the two parties – Democrats and Republicans – have finally found a sufficiently large common enemy, the attention economy. In a rare show of cooperation, the parties have presented two censorship bills and a tax scheme collectively known as the UnAnxious Generation, writes Reclaim The Net.
The UnAnxious Generation is a telling reference to Jonathan Haidt’s psychology bestseller The Anxious Generation, which describes how social media has changed and degraded American childhood. The name of the package therefore seems to reflect Congress’s desire to save American children from Silicon Valley through web regulation and control of freedom of speech. One of the proponents of the proposals, Massachusetts Democrat Jake Auchincloss, promises to go after tech companies by linking legal immunity to content “moderation,” taxing advertising revenue, and ensuring that children cannot access apps without an “Age Signal.”

The first bill, the Deepfake Liability Act, amends Section 230 of the Communications Decency Act, which currently gives platforms legal immunity for user-generated content. In other words, political statements, memes, and conspiracy theories can be published without anyone being sued for them. Under the new proposal, this exemption becomes conditional, imposing a vague duty of care to prevent deepfake pornography, cyberstalking, and digital forgeries. However, the latter term is not clearly defined in the draft bill, and critics say it could cover not only genuine deepfake videos and AI-generated memes but also parodies and satire. In other words, the broad wording could provide grounds for lawsuits over humor and political cartoons in the future. The plan is for social media companies to be proactive rather than reactive, and under the proposal, prohibited content would become the responsibility of the company’s CEO.
The second bill is the Parents Over Platforms Act, which aims to fill the gaps that allowed children to circumvent social media age restrictions. Currently, many apps, such as Instagram and TikTok, ask users to provide their age when registering, although it is not possible to verify this information on the platforms. The new law would require parents to report their child’s age to the app store when setting up their phone. The app store would then forward this information to apps based on the child’s age group, ensuring that children under the age of 13 cannot access restricted platforms. According to Indiana Republican Erin Houchin, co-author of the bill, the bill stems from her personal experience, in which her 13-year-old daughter “hacked our parental controls” and started chatting with strangers. However, Houchin was unable to close her daughter’s account in any way. For this reason, the co-author believes that the bill gives parents back control and closes dangerous opportunities for children. Critics argue that such a system is inherently prone to data errors and confusion between applications, but Congress does not seem to be bothered by this possibility.
The third bill is the Education Not Endless Scrolling Act, which would impose a 50% tax on digital advertising revenue exceeding $2.5 billion and would apply to large social media companies. The tax revenue would be distributed to mentoring programs, local journalism, and technical education. Auchincloss explained, “These social media corporations have made hundreds of billions of dollars making us angrier, lonelier, and sadder, and they have no accountability to the American public.” Critics have said that the tax sounds like a moral tax, where the government collects penance for every click.
It is worth noting that the timing of the bills is not coincidental. In the name of child safety, Congress has been flooded with bills, and an impressive 19 more are up for debate soon. In addition, Houchin and Auchincloss are launching the Kids Online Safety Caucus to develop cross-party solutions for protecting children online.