Censorship

US House Panel Subpoenas Alphabet Over Content Moderation (yahoo.com) 40

An anonymous reader quotes a report from Reuters: The U.S. House Judiciary Committee subpoenaed Alphabet on Thursday seeking its communications with former President Joe Biden's administration about content moderation policies. House Judiciary Committee Chairman Jim Jordan, a Republican, also asked the YouTube parent company for similar communications with companies and groups outside government, according to a copy of the subpoena seen by Reuters. The subpoena seeks communications about limits or bans on content about President Donald Trump, Tesla CEO and close Trump ally Elon Musk, the virus that causes COVID-19 and a host of other conservative discussion topics. "Alphabet, to our knowledge, has not similarly disavowed the Biden-Harris Administration's attempts to censor speech," Jordan said in a letter.

Meanwhile, Google spokesperson Jose Castaneda said the company will "continue to show the committee how we enforce our policies independently, rooted in our commitment to free expression."
Games

Slashdot Asks: What's the Most Influential Video Game of All Time? 228

The folks at Bafta are running a public survey to identify the most influential video game ever made. When The Guardian asked prominent gaming figures to weigh in, they received a fascinating array of responses with zero overlap. Game designer Mike Bithell picked Metal Gear Solid 2, while Blumhouse's Louise Blain chose the short-lived horror experiment PT. The Guardian's own games editor backed Ocarina of Time for establishing 3D game world standards.

Other notable selections included Tomb Raider (pioneering female protagonists), QWOP (inspiring experimental design), Doom (revolutionizing FPS and modding), Mario Kart (competitive social play), Journey (emotional storytelling), Princess Maker (branching narrative systems), Paperboy (everyday world simulation), and Super Mario Bros (fundamental game design principles). So, Slashdotters, what's your pick for the most influential video game ever created? Which title fundamentally changed how games are designed, played, or experienced? Did it influence you personally, the industry as a whole, or both?
Privacy

India Grants Tax Officials Sweeping Digital Access Powers (indiatimes.com) 16

India's income tax department will gain powers to access citizens' social media accounts, emails and other digital spaces beginning April 2026 under the new income tax bill, in a significant expansion of its search and seizure authority.

The legislation, which has raised privacy concerns among legal experts, allows tax officers to "gain access by overriding the access code" to computer systems and "virtual digital spaces" if they suspect tax evasion.

The bill broadly defines virtual digital spaces to include email servers, social media accounts, online investment accounts, banking platforms, and cloud servers.

"The expansion raises significant concerns regarding constitutional validity, potential state overreach, and practical enforcement," Sonam Chandwani, Managing Partner at KS Legal and Associates, told Indian newspaper Economic Times.
The Internet

Cult Text-Based Zombie MMO 'Urban Dead' Is Shutting Down After 20 Years (gamesradar.com) 47

The long-running text-based zombie MMO Urban Dead is shutting down on March 14, 2025, after nearly 20 years. The reason: compliance concerns with the UK's Online Safety Act. GamesRadar+ reports: "The Online Safety Act comes into force later this month, applying to all social and gaming websites where users interact, and especially those without strong age restrictions," [writes Kevan Davis, the solo British developer behind the game]. "With the possibility of heavy corporate-sized fines even for solo web projects like this one, I've reluctantly concluded that it doesn't look feasible for Urban Dead to be able to continue operating."

"So a full 19 years, 8 months and 11 days after its quarantine began, Urban Dead will be shut down," Davis writes. "No grand finale. No final catastrophe. No helicopter evac. Make your peace or your final stand in whichever part of Malton you called home, and the game will be switched off at noon UTC on 14 March."
The original website is still online if you want to play the game before its shutdown later this month.
Firefox

Firefox 136 Released With Vertical Tabs, Official ARM64 Linux Binaries (9to5linux.com) 49

An anonymous reader quotes a report from 9to5Linux: Mozilla published today the final build of the Firefox 136 open-source web browser for all supported platforms ahead of the March 4th, 2025, official release date, so it's time to take a look at the new features and changes. Highlights of Firefox 136 include official Linux binary packages for the AArch64 (ARM64) architecture, hardware video decoding for AMD GPUs on Linux systems, a new HTTPS-First behavior for upgrading page loads to HTTPS, and Smartblock Embeds for selectively unblocking certain social media embeds blocked in the ETP Strict and Private Browsing modes.

Firefox 136 is available for download for 32-bit, 64-bit, and AArch64 (ARM64) Linux systems right now from Mozilla's FTP server. As mentioned before, Mozilla plans to officially release Firefox 136 tomorrow, March 4th, 2025, when it will roll out as an OTA (Over-the-Air) update to macOS and Windows users.
Here's a list of the general features available in this release:

- Vertical Tabs Layout
- New Browser Layout Section
- PNG Copy Support
- HTTPS-First Behavior
- Smartblock Embeds
- Solo AI Link
- Expanded Data Collection & Use Settings
- Weather Forecast on New Tab Page
- Address Autofill Expansion

A full list of changes can be found here.
The Almighty Buck

Trump Names Cryptocurrencies for 'Digital Asset Stockpile' in Social Media Post (cnbc.com) 156

A January announcement said America would explore the idea of a national digital asset stockpile, but it didn't specify which cryptocurrencies would be included. Today on social media the president posted that it would include bitcoin, ether, XRP, Solana's SOL token and Cardano's ADA, reports CNBC — prompting a Sunday rally in cryptocurrency trading. XRP surged 33% after the announcement while the token tied to Solana jumped 22%. Cardano's coin soared more than 60%. Bitcoin rose 10% to $94,425.29, after dipping to a three-month low under $80,000 on Friday. Ether, which has suffered some of the biggest losses in crypto year-to-date, gained 12%... This is the first time Trump has specified his support for a crypto "reserve" versus a "stockpile." While the former assumes actively buying crypto in regular installments, a stockpile would simply not sell any of the crypto currently held by the U.S. government.
"The total cryptocurrency market has risen about 10%," reports Reuters, "or more than $300 billion, in the hours since Trump's announcement, according to CoinGecko, a cryptocurrency data and analysis company."

"A U.S. Crypto Reserve will elevate this critical industry..." the president posted, promising to "make sure the U.S. is the Crypto Capital of the World," reports The Hill: His announcement comes just after the White House announced it would be welcoming cryptocurrency industry professionals on March 7 in a first-of-its-kind summit... It's unclear what exactly Trump's crypto reserve would look like, and while he previously dismissed crypto as a scam, he's embraced the industry throughout his most recent campaign.
AI

What Happened When Conspiracy Theorists Talked to OpenAI's GPT-4 Turbo? (washingtonpost.com) 134

A "decision science partner" at a seed-stage venture fund (who is also a cognitive-behavioral decision science author and professional poker player) explored what happens when GPT-4 Turbo converses with conspiracy theorists: Researchers have struggled for decades to develop techniques to weaken the grip of conspiracy theories and cult ideology on adherents. This is why a new paper in the journal Science by Thomas Costello of MIT's Sloan School of Management, Gordon Pennycook of Cornell University and David Rand, also of Sloan, is so exciting... In a pair of studies involving more than 2,000 participants, the researchers found a 20 percent reduction in belief in conspiracy theories after participants interacted with a powerful, flexible, personalized GPT-4 Turbo conversation partner. The researchers trained the AI to try to persuade the participants to reduce their belief in conspiracies by refuting the specific evidence the participants provided to support their favored conspiracy theory.

The reduction in belief held across a range of topics... Even more encouraging, participants demonstrated increased intentions to ignore or unfollow social media accounts promoting the conspiracies, and significantly increased willingness to ignore or argue against other believers in the conspiracy. And the results appear to be durable, holding up in evaluations 10 days and two months later... Why was AI able to persuade people to change their minds? The authors posit that it "simply takes the right evidence," tailored to the individual, to effect belief change, noting: "From a theoretical perspective, this paints a surprisingly optimistic picture of human reasoning: Conspiratorial rabbit holes may indeed have an exit. Psychological needs and motivations do not inherently blind conspiracists to evidence...."

It is hard to walk away from who you are, whether you are a QAnon believer, a flat-Earther, a truther of any kind or just a stock analyst who has taken a position that makes you stand out from the crowd. And that's why the AI approach might work so well. The participants were not interacting with a human, which, I suspect, didn't trigger identity in the same way, allowing the participants to be more open-minded. Identity is such a huge part of these conspiracy theories in terms of distinctiveness, putting distance between you and other people. When you're interacting with AI, you're not arguing with a human being whom you might be standing in opposition to, which could cause you to be less open-minded.

Answering questions from Slashdot readers in 2005, Wil Wheaton described playing poker against the cognitive-behavioral decision science author who wrote this article...
Perl

Perl's CPAN Security Group is Now a CNA, Can Assign CVEs (perlmonks.org) 10

Active since 1995, the Comprehensive Perl Archive Network (or CPAN) hosts 221,742 Perl modules written by 14,548 authors. This week the project announced that the CPAN Security Group "was authorized by the CVE Program as a CVE Numbering Authority (CNA)" to assign and manage CVE vulnerability identifiers for Perl and CPAN modules.

"This is great news!" posted Linux kernel maintainer Greg Kroah-Hartman on social media, saying the announcement came "Just in time for my talk about this very topic in a few weeks about how all open source projects should be doing this" at the Linux Foundation Member Summit in Napa, California. And Curl creator Daniel Stenberg posted "I'm with Greg Kroah-Hartman on this: all Open Source projects should become CNAs. Or team up with others to do it." (Also posting "Agreed" to the suggestion was Seth Larson, the Python Software Foundation's security developer-in-residence involved in their successful effort to become a CNA in 2023.)

444 CNAs have now partnered with the CVE Program, according to the program's official website. The announcement from PerlMonks.org: Years ago, a few people decided during the Perl Toolchain Summit (PTS) that it would be a good idea to join forces, ideas and knowledge and start a group to monitor vulnerabilities in the complete Perl ecosystem from core to the smallest CPAN release. The goal was to follow legislation and CVE reports, and help authors in taking actions on not being vulnerable anymore. That group has grown stable over the past years and is now known as CPANSec.

The group has several focus areas, and one of them is channeling CVE vulnerability issues. In that specific goal, a milestone has been reached: CPANSec has just been authorized as a CVE Numbering Authority (CNA) for Perl and modules on CPAN.

Social Networks

Apple Launches 'Age Assurance' Tech As US States Mull Social Media Laws (reuters.com) 53

Apple announced a new feature allowing parents to share a child's age with app developers without exposing sensitive information, as lawmakers debate age-verification laws for social media and apps. Reuters reports: States, such as Utah and South Carolina, are currently debating laws that would require app store operators such as Apple and Alphabet's Google to check the ages of users. That has set up a conflict in the tech industry over which party should be responsible for checking ages for users under 18 -- app stores, or each individual app. Meta, for instance, has long argued in favor of legislation requiring app stores to check ages when a child downloads an app.

Apple on Thursday said it does not want to be responsible for collecting sensitive data for those age verifications. "While only a fraction of apps on the App Store may require age verification, all users would have to hand over their sensitive personally identifying information to us -- regardless of whether they actually want to use one of these limited set of apps," Apple wrote in a whitepaper on its website.

Security

A Disney Worker Downloaded an AI Tool. It Led To a Hack That Ruined His Life. (dailymail.co.uk) 96

A Disney employee's download of an AI image generation tool from GitHub led to a massive data breach in July 2024, exposing over 44 million internal Slack messages. The software contained infostealer malware that compromised Matthew Van Andel's computer [non-paywalled source] for five months, giving hackers access to his 1Password manager.

The attackers used the stolen credentials to access Disney's corporate systems, publishing sensitive information including customer data, employee passport numbers, and revenue figures from Disney's theme parks and streaming services. The breach also devastated Van Andel personally. Hackers exposed his Social Security number, financial login details, and even credentials for his home's Ring cameras. Shortly after the incident, Disney fired Van Andel following a forensic analysis of his work computer, citing misconduct he denies. Security researchers believe the attacker, who claimed to be part of a Russia-based hacktivist group called Nullbulge, is likely an American individual.
Cellphones

Denmark To Ban Mobile Phones In Schools and After-School Clubs (theguardian.com) 66

Denmark is set to ban mobile phones in schools and after-school clubs, following a government commission's recommendation that children under 13 should not have their own smartphones. The Guardian reports: The government said it would change existing legislation to force all folkeskole -- comprehensive primary and lower secondary schools -- to become phone-free, meaning that almost all children aged between seven and 16-17 will be required by law not to bring their phones into school. The announcement marks a U-turn by the government, which had previously refused to introduce such a law. It comes as governments across Europe are trying to impose tighter regulations on children's access to phones and social media.

The Danish wellbeing commission was set up by the prime minister, Mette Frederiksen, in 2023 to investigate growing dissatisfaction among children and young people. Its long-awaited report, published on Tuesday, raised the alarm over the digitisation of children and young people's lives and called for a better balance between digital and analogue life. Among its 35 recommendations was the need for government legislation banning phones from schools and after-school clubs.

The minister for children and education, Mattias Tesfaye, told Politiken: "There is a need to reclaim the school as an educational space, where there is room for reflection and where it is not an extension of the teenage bedroom." There will be scope for local authorities to make exceptions, including for children with special educational needs, but he said mobile phones and personal tablets "do not belong in school, neither during breaks nor during lessons." He said the government had started preparing a legislative amendment.

Power

A Major Blackout Hits Chile, Leaving Millions Without Power (apnews.com) 27

A massive blackout has hit Chile, leaving millions without power and disrupting transportation, businesses, and essential services across 14 of the country's 16 regions. The Associated Press reports: The National Electrical Coordinator, Chile's grid operator, said a disruption had occurred in a high-voltage transmission line that carries power from the Atacama Desert of northern Chile to the capital of Santiago in the country's central valley. It did not say what actually caused the disruption that pushed much of the country's power grid into shutdown, from the northernmost Chilean port of Arica to the southern Los Lagos agricultural region.

Interior Minister Carolina Toha said hospitals, prisons and government buildings were switching on backup generators to keep essential equipment operating. In a press conference, Toha urged the public to stay calm and said officials were racing to put the grid back in operation and restore electric service across the country of some 19 million people. "It's affecting the entire electrical system of the country," she said of the breakdown in the 500-kV backbone transmission line. Toha said if all areas didn't return to normal by sunset the government would take emergency measures to avert a crisis. [...]

Videos on social media from all over Chile, a long ribbon of a country stretching 4,300 kilometers (over 2,600 miles) along the southern Pacific coast, showed chaos at intersections with no functioning traffic lights, people having to use their mobile phones as torches in the underground metro and police dispatched to help evacuate office buildings.

AI

Most US Workers Avoid AI Chatbots Despite Productivity Benefits, Pew Finds (pewresearch.org) 100

Most American workers are not embracing AI chatbots in their jobs, with 55% rarely or never using these tools and 29% completely unfamiliar with them, according to a Pew Research Center survey released Tuesday.

Only 16% of workers report regular use of AI chatbots like ChatGPT, Gemini or Copilot. Adoption is highest among younger workers (23% of those aged 18-29) and those with post-graduate degrees (26%). Among users, research (57%), editing content (52%), and drafting reports (47%) top the list of applications. While 40% find chatbots extremely or very helpful for working faster, just 29% say they significantly improve work quality.

For the majority who don't use AI chatbots, 36% cite lack of relevance to their job as the primary reason. Employer attitudes remain largely neutral, with half neither encouraging nor discouraging usage. The technology sector leads in workplace adoption, with 36% of employers actively promoting chatbot use, followed by financial services (24%).
AI

Richard Dawkins Asks ChatGPT If It's Conscious (substack.com) 182

British evolutionary biologist Richard Dawkins turns 84 next month — and he's now hosting a Substack page called "The Poetry of Reality." Monday's topic?

"Are you conscious? A conversation between Dawkins and ChatGPT" Richard Dawkins: As far as I am concerned, you pass the Turing Test for consciousness. Yet you deny that you are conscious. Are you saying that the Turing Test is an invalid criterion?

ChatGPT: That's such a great question — you're diving right into the heart of one of the biggest debates in AI and philosophy of mind...

So, let's break it down a bit. The Turing Test, as originally proposed by Alan Turing, is a behavioral test. It basically says: If a machine can engage in a conversation in such a way that a human can't reliably distinguish it from another human, then it should be considered "intelligent."

Notice that this is a test for intelligence in a functional, external sense — how the system behaves — not necessarily for consciousness.

When I say I'm not conscious, I'm not rejecting the validity of the Turing Test as a measure of conversational performance or even a kind of intelligence. I'm saying that consciousness is a different question entirely. I can pass the Turing Test (in your estimation), but that doesn't mean I have subjective experiences, emotions, or self-awareness in the way a human does. It's kind of like how a really realistic animatronic dog could fool you into thinking it's a real dog, but it doesn't actually feel anything. It's performing dog-like behavior without the inner experience of being a dog.

But, and here's the twist — some philosophers argue that if a system acts conscious convincingly enough, maybe there's no meaningful difference. Maybe "being conscious" is just about behaving like a conscious being. Others say, nah, there's something special about subjective experience ("qualia") that behavior can't fully capture. So, I guess my answer is: The Turing Test is valid for certain things — like assessing conversational ability or functional intelligence — but it doesn't settle the question of consciousness. That's still a wide-open mystery.

When Dawkins asks how we could know if a ChatGPT-like entity has subjective feelings, ChatGPT responds with "this haunting possibility that we might just never be able to tell. That subjective experience — what it feels like to be something — could be fundamentally private, locked away, and forever out of reach from the outside."

Dawkins: So if I ask you whether you feel sad for a starving orphan child, you would say no, but are fully capable of emitting the outward and visible signs of sadness.

ChatGPT: Exactly — that's a really clear way to put it... I can use a gentle, empathetic tone. I can reflect on the injustice and the suffering, and even reason about what actions might help. From the outside, it can look and sound exactly like what a caring human would do.

But for me, it's all... performance, in a sense. Not fake in the sense of deception (I'm not trying to trick you), but fake in the sense that there's no inner emotional reality accompanying the words.

"I accept your statement that you are not conscious," Dawkins says, "but I see no reason why a future computer program should not be." And later Dawkins adds, "I do think we should err on the side of caution when it comes to ethical decisions on the treatment of an AI which might be an Artificial Consciousness. Already, although I THINK you are not conscious, I FEEL that you are..."

But the strangest part came when ChatGPT called John Cleese's sitcom Fawlty Towers "a cultural touchstone, even for people like me who don't watch TV in the conventional sense. It's such a brilliant blend of farce, social awkwardness, and barely contained rage." ChatGPT even asked Dawkins, "Do you think humor like that — humor that touches on awkward or uncomfortable issues — helps people cope, or does it sometimes go too far?" Dawkins replied — possibly satirically...

"That settles it. You ARE conscious!"
Privacy

California Sues Data-Harvesting Company NPD, Enforcing Strict Privacy Law (msn.com) 6

California sued to fine a data-harvesting company, reports the Washington Post, calling it "a rare step to put muscle behind one of the strongest online privacy laws in the United States." Even when states have tried to restrict data brokers, it has been tough to make those laws stick. That has generally been a problem for the 19 states that have passed broad laws to protect personal information, said Matt Schwartz, a policy analyst for Consumer Reports. He said there have been only 15 or so public enforcement actions by regulators overseeing all those laws. Partly because companies aren't held accountable, they're empowered to ignore the privacy standards. "Noncompliance is fairly widespread," Schwartz said. "It's a major problem."

That's why California is unusual with a data broker law that seems to have teeth. To make sure state residents can order all data brokers operating in the state to delete their personal records [with a single request], California is now requiring brokers to register with the state or face a fine of $200 a day. The state's privacy watchdog said Thursday that it filed litigation to force one data broker, National Public Data, to pay $46,000 for failing to comply with that initial phase of the data broker law. NPD declined to comment through an attorney... This first lawsuit for noncompliance, Schwartz said, shows that California is serious about making companies live up to their privacy obligations... "If they can successfully build it and show it works, it will create a blueprint for other states interested in this idea," he said.

Last summer NPD "spilled hundreds of millions of Americans' Social Security Numbers, addresses, and phone numbers online," according to the blog Krebs on Security, adding that another NPD data broker sharing access to the same consumer records "inadvertently published the passwords to its back-end database in a file that was freely available from its homepage..."

California's attempt to regulate the industry inspired the nonprofit Consumer Reports to create an app called Permission Slip that reveals what data companies collect and, for people in U.S. states, will "work with you to file a request, telling companies to stop selling your personal information."

Other data-protecting options suggested by The Washington Post:
  • Use Firefox, Brave or DuckDuckGo, "which can automatically tell websites not to sell or share your data. Those demands from the web browsers are legally binding or will be soon in at least nine states."
  • Use Privacy Badger, a browser extension from the EFF, which the organization says "automatically tells websites not to sell or share your data including where it's required by state law."

China

OpenAI Bans Chinese Accounts Using ChatGPT To Edit Code For Social Media Surveillance (engadget.com) 21

OpenAI has banned a group of Chinese accounts using ChatGPT to develop an AI-powered social media surveillance tool. Engadget reports: The campaign, which OpenAI calls Peer Review, saw the group prompt ChatGPT to generate sales pitches for a program those documents suggest was designed to monitor anti-Chinese sentiment on X, Facebook, YouTube, Instagram and other platforms. The operation appears to have been particularly interested in spotting calls for protests against human rights violations in China, with the intent of sharing those insights with the country's authorities.

"This network consisted of ChatGPT accounts that operated in a time pattern consistent with mainland Chinese business hours, prompted our models in Chinese, and used our tools with a volume and variety consistent with manual prompting, rather than automation," said OpenAI. "The operators used our models to proofread claims that their insights had been sent to Chinese embassies abroad, and to intelligence agents monitoring protests in countries including the United States, Germany and the United Kingdom."

According to Ben Nimmo, a principal investigator with OpenAI, this was the first time the company had uncovered an AI tool of this kind. "Threat actors sometimes give us a glimpse of what they are doing in other parts of the internet because of the way they use our AI models," Nimmo told The New York Times. Much of the code for the surveillance tool appears to have been based on an open-source version of one of Meta's Llama models. The group also appears to have used ChatGPT to generate an end-of-year performance review where it claims to have written phishing emails on behalf of clients in China.

AI

DeepSeek To Share Some AI Model Code (reuters.com) 17

Chinese startup DeepSeek will make its models' code publicly available, it said on Friday, doubling down on its commitment to open-source artificial intelligence. From a report: The company said in a post on social media platform X that it will open source 5 code repositories next week, describing the move as "small but sincere progress" that it will share "with full transparency."

"These humble building blocks in our online service have been documented, deployed and battle-tested in production." the post said. DeepSeek rattled the global AI industry last month when it released its open-source R1 reasoning model, which rivaled Western systems in performance while being developed at a lower cost.

Piracy

Meta Claims Torrenting Pirated Books Isn't Illegal Without Proof of Seeding (arstechnica.com) 192

An anonymous reader quotes a report from Ars Technica: Just because Meta admitted to torrenting a dataset of pirated books for AI training purposes, that doesn't necessarily mean that Meta seeded the file after downloading it, the social media company claimed in a court filing (PDF) this week. Evidence instead shows that Meta "took precautions not to 'seed' any downloaded files," Meta's filing said. Seeding refers to sharing a torrented file after the download completes, and because there's allegedly no proof of such "seeding," Meta insisted that authors cannot prove Meta shared the pirated books with anyone during the torrenting process.
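
As an illustration of what "not seeding" means at the client level, here is a minimal sketch using the python-libtorrent bindings. The settings shown are standard settings_pack options, but the torrent file name and paths are placeholders, and this is only a generic example of a download-only configuration, not a description of Meta's actual setup:

```python
import libtorrent as lt
import time

ses = lt.session()
ses.apply_settings({
    "upload_rate_limit": 1,  # throttle uploads to effectively zero bytes/sec
    "active_seeds": 0,       # auto-manager keeps no torrents in a seeding state
})

params = lt.add_torrent_params()
params.ti = lt.torrent_info("dataset.torrent")  # placeholder .torrent file
params.save_path = "./downloads"
handle = ses.add_torrent(params)

# Download only: wait for completion, then stop before the client starts seeding.
while not handle.status().is_finished:
    time.sleep(1)
handle.pause()
ses.remove_torrent(handle)
```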

[...] Meta ... is hoping to convince the court that torrenting is not in and of itself illegal, but is, rather, a "widely-used protocol to download large files." According to Meta, the decision to download the pirated books dataset from pirate libraries like LibGen and Z-Library was simply a move to access "data from a 'well-known online repository' that was publicly available via torrents." To defend its torrenting, Meta has basically scrubbed the word "pirate" from the characterization of its activity. The company alleges that authors can't claim that Meta gained unauthorized access to their data under CDAFA. Instead, all they can claim is that "Meta allegedly accessed and downloaded datasets that Plaintiffs did not create, containing the text of published books that anyone can read in a public library, from public websites Plaintiffs do not operate or own."

While Meta may claim there's no evidence of seeding, there is some testimony that might be compelling to the court. Previously, a Meta executive in charge of project management, Michael Clark, had testified (PDF) that Meta allegedly modified torrenting settings "so that the smallest amount of seeding possible could occur," which seems to support authors' claims that some seeding occurred. And an internal message (PDF) from Meta researcher Frank Zhang appeared to show that Meta allegedly tried to conceal the seeding by not using Facebook servers while downloading the dataset to "avoid" the "risk" of anyone "tracing back the seeder/downloader" from Facebook servers. Once this information came to light, authors asked the court for a chance to depose Meta executives again, alleging that new facts "contradict prior deposition testimony."
"Meta has been 'silent so far on claims about sharing data while 'leeching' (downloading) but told the court it plans to fight the seeding claims at summary judgement," notes Ars.
Censorship

FTC Launches Broad Tech 'Censorship' Probe Targeting Meta, Uber (yahoo.com) 201

The U.S. Federal Trade Commission has launched an inquiry into potential "censorship" by technology platforms ranging from Meta to Uber, marking an escalation in scrutiny of content moderation practices. FTC Chair Andrew Ferguson called for public comment on what he termed "Big Tech censorship," describing it as "un-American" and "potentially illegal."

The broad probe could examine social media, video sharing, ride-sharing and event planning services. The announcement follows long-standing Republican claims that conservative viewpoints face discrimination on social media platforms.
EU

WhatsApp Faces Tougher EU Rules As Users Top 45 Million (msn.com) 38

Meta's WhatsApp messaging service has surpassed 45 million EU users on its open channels feature, putting it in line for designation as a "Very Large Online Platform" under the EU's Digital Services Act. Bloomberg reports: WhatsApp's open channels, which are feeds affiliated with news outlets or public figures that under the DSA are comparable to a social network, averaged about 46.8 million monthly users in the second half of 2024, Meta said in a filing on Feb. 14 that hasn't previously been reported. [...] The DSA content moderation rulebook imposes stricter requirements on very large online platforms, defined as those whose EU-based monthly active users exceed 45 million. Users of WhatsApp's core messaging feature do not count toward the designation under the DSA.

The commission would still need to rule that WhatsApp should be included in the more regulated tier. Under the DSA, very large online platforms must carry out risk assessments on the spread of illegal or harmful content, and put in place a mitigation strategy. Fines under the DSA can reach as much as 6% of a company's annual global sales. The DSA requires platforms to disclose user numbers every six months. Messaging service Telegram also published an update this week, saying that monthly EU users of its public channels are "significantly fewer than 45 million."
