Security

Cyberattack Knocks Mobile Guardian MDM Offline, Wipes Thousands of Student Devices (techcrunch.com)

Zack Whittaker reports via TechCrunch: A cyberattack on Mobile Guardian, a U.K.-based provider of educational device management software, has sparked outages at schools across the world and has left thousands of students unable to access their files. Mobile Guardian acknowledged the cyberattack in a statement on its website, saying it identified "unauthorized access to the iOS and ChromeOS devices enrolled to the Mobile Guardian platform." The company said the cyberattack "affected users globally," including in North America, Europe and Singapore, and that the incident resulted in an unspecified portion of its userbase having their devices unenrolled from the platform and "wiped remotely." "Users are not currently able to log in to the Mobile Guardian Platform and students will experience restricted access on their devices," the company said.

Mobile device management (MDM) software allows businesses and schools to remotely monitor and manage entire fleets of devices used by employees or students. Singapore's Ministry of Education, touted as a significant customer of Mobile Guardian on the company's website since 2020, said in a statement overnight that thousands of its students had devices remotely wiped during the cyberattack. "Based on preliminary checks, about 13,000 students in Singapore from 26 secondary schools had their devices wiped remotely by the perpetrator," the Singaporean education ministry said in a statement. The ministry said it was removing the Mobile Guardian software from its fleet of student devices, including affected iPads and Chromebooks.

Google

Google and Cloudflare Summoned To Explain Their Plans To Defeat Pirate IPTV (torrentfreak.com)

Italy's telecoms regulator AGCOM has summoned Google and Cloudflare to a September meeting to discuss strategies for combating online piracy, six months after launching its Piracy Shield blocking system. The move comes as IPTV piracy remains resilient despite new anti-piracy legislation passed in the country last year. The law introduced harsher penalties for providers and consumers of pirated content, including fines for watching pirate streams. It also granted more aggressive site-blocking powers.

Major stream suppliers appear minimally affected by overseas laws, however. AGCOM chief Massimiliano Capitanio seeks commitments from Google to limit pirate services in search results, according to TorrentFreak. The regulator also wants Cloudflare to address IPTV providers using its services to evade blocking.
The Internet

Indonesia Bans Search Engine DuckDuckGo On Gambling, Pornography Concerns (reuters.com)

An anonymous reader quotes a report from Reuters: Indonesia has banned the privacy-oriented search engine DuckDuckGo, citing concerns that it could be used to access pornography and online gambling websites, which are illegal in the country, the communications ministry said on Friday. Indonesia, home to the world's biggest Muslim population, has strict rules that ban the online sharing of content deemed obscene. Social media platform Reddit and video-hosting platform Vimeo are already blocked.

Usman Kansong, a communications ministry official, told Reuters that DuckDuckGo had been blocked "because of the many complaints made to us about the rampant online gambling and pornography content in its search results." The ministry did not say how DuckDuckGo differs from other search engines such as Alphabet's Google but on its website, DuckDuckGo said it offered several products intended to "help people protect their online privacy" including the search engine, which it said has been praised by privacy advocates.

Privacy

Illinois Governor Approves Business-Friendly Overhaul of Biometric Privacy Law (reuters.com)

Illinois Governor J.B. Pritzker has signed a bill into law that will significantly curb the penalties companies could face for improperly collecting and using fingerprints and other biometric data from workers and consumers. From a report: The bill passed by the legislature in May and signed by Pritzker, a Democrat, on Friday amends the state's Biometric Information Privacy Act (BIPA) so that companies can be held liable only for a single violation per person, rather than for each time biometric data is allegedly misused.

The amendments will dramatically limit companies' exposure in BIPA cases and could discourage plaintiffs' lawyers from filing many lawsuits in the first place, management-side lawyers said. "By limiting statutory damages to a single recovery per individual ... companies in most instances will no longer face the prospect of potentially annihilative damages awards that greatly outpace any privacy harms," David Oberly, of counsel at Baker Donelson in Washington, D.C., said before the bill was signed. BIPA, a 2008 law, requires companies to obtain permission before collecting fingerprints, retinal scans and other biometric information from workers and consumers. The law imposes penalties of $1,000 per violation and $5,000 for reckless or intentional violations.
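The gap between per-violation and per-person damages is easy to see with a back-of-the-envelope calculation. The sketch below uses hypothetical numbers (an employee fingerprint-scanned twice a day over 250 workdays); the `exposure` helper is purely illustrative, not drawn from the statute or any actual case:

```rust
/// Illustrative statutory exposure for one person under BIPA's $1,000
/// negligent-violation penalty. `per_person = true` models the 2024
/// amendment capping recovery at a single violation per individual.
fn exposure(violations: u64, penalty: u64, per_person: bool) -> u64 {
    if per_person { penalty } else { violations * penalty }
}

fn main() {
    // Hypothetical: an employee clocking in and out daily for a year.
    let scans: u64 = 2 * 250;
    println!("pre-amendment exposure:  ${}", exposure(scans, 1_000, false)); // $500,000
    println!("post-amendment exposure: ${}", exposure(scans, 1_000, true)); // $1,000
}
```

Multiplied across a large workforce, that per-scan multiplier is what produced the "annihilative" awards the amendment eliminates.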

AI

Elon Musk Revives Lawsuit Against OpenAI and Sam Altman

Elon Musk has reignited his legal battle against OpenAI, the creators of ChatGPT, by filing a new lawsuit in a California federal court. The suit, which revives a six-year-old dispute, accuses OpenAI founders Sam Altman and Greg Brockman of breaching the company's founding principles by prioritizing commercial interests over public benefit.

Musk's complaint alleges that OpenAI's multibillion-dollar partnership with Microsoft contradicts the original mission to develop AI responsibly for humanity's benefit. The lawsuit describes the alleged betrayal in dramatic terms, claiming "perfidy and deceit... of Shakespearean proportions." OpenAI has not yet commented on the new filing. In response to Musk's previous lawsuit, which was withdrawn seven weeks ago, the company stated its commitment to building safe artificial general intelligence for the benefit of humanity.
Social Networks

Founder of Collapsed Social Media Site 'IRL' Charged With Fraud Over Faked Users (bbc.com)

This week, America's Securities and Exchange Commission filed fraud charges against the former CEO of the startup social media site "IRL."

The BBC reports: IRL — which was once considered a potential rival to Facebook — took its name from its intention to get its online users to meet up in real life. However, the initial optimism evaporated after it emerged most of IRL's users were bots, with the platform shutting in 2023...

The SEC says it believes [CEO Abraham] Shafi raised about $170m by portraying IRL as the new success story in the social media world. It alleges he told investors that IRL had attracted the vast majority of its supposed 12 million users through organic growth. In reality, it argues, IRL was spending millions of dollars on advertisements which offered incentives to prospective users to download the IRL app. That expenditure, it is alleged, was subsequently hidden in the company's books.

IRL received multiple rounds of venture capital financing, eventually reaching "unicorn status" with a $1.17 billion valuation, according to TechCrunch. But it shut down in 2023 "after an internal investigation by the company's board found that 95% of the app's users were 'automated or from bots'."

TechCrunch notes it's the second time in the same week — and at least the fourth time in the past several months — that the SEC has charged a venture-backed founder on allegations of fraud... Earlier this week, the SEC charged BitClout founder Nader Al-Naji with fraud and unregistered offering of securities, claiming he used his pseudonymous online identity "DiamondHands" to avoid regulatory scrutiny while he raised over $257 million in cryptocurrency. BitClout, a buzzy crypto startup, was backed by high-profile VCs such as a16z, Sequoia, Chamath Palihapitiya's Social Capital, Coinbase Ventures and Winklevoss Capital.

In June, the SEC charged Ilit Raz, CEO and founder of the now-shuttered AI recruitment startup Joonko, with defrauding investors of at least $21 million. The agency alleged Raz made false and misleading statements about the quantity and quality of Joonko's customers, the number of candidates on its platform and the startup's revenue.

The agency has also gone after venture firms in recent months. In May, the SEC charged Robert Scott Murray and his firm Trillium Capital LLC with a fraudulent scheme to manipulate the stock price of Getty Images Holdings Inc. by announcing a phony offer by Trillium to purchase Getty Images.

Programming

DARPA Wants to Automatically Transpile C Code Into Rust - Using AI (theregister.com)

America's Defense Department has launched a project "that aims to develop machine-learning tools that can automate the conversion of legacy C code into Rust," reports the Register — with an online event already scheduled for later this month for those planning to submit proposals: The reason to do so is memory safety. Memory safety bugs, such as buffer overflows, account for the majority of major vulnerabilities in large codebases. And the hope at DARPA [the Defense Department's R&D agency] is that AI models can help with the programming language translation, in order to make software more secure. "You can go to any of the LLM websites, start chatting with one of the AI chatbots, and all you need to say is 'here's some C code, please translate it to safe idiomatic Rust code,' cut, paste, and something comes out, and it's often very good, but not always," said Dan Wallach, DARPA program manager for TRACTOR, in a statement. "The research challenge is to dramatically improve the automated translation from C to Rust, particularly for program constructs with the most relevance...."
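To make the memory-safety argument concrete, here is a hand-written sketch (not TRACTOR output) of the kind of rewrite such a translator must produce: a C-style unchecked array read becomes a Rust function whose type forces the caller to handle the out-of-bounds case:

```rust
// The C original reads past the end of a buffer with no check at all:
//
//     int get(int *buf, int len, int i) { return buf[i]; }  /* undefined behavior if i >= len */
//
// Idiomatic Rust surfaces the failure mode in the return type instead:
fn get(buf: &[i32], i: usize) -> Option<i32> {
    buf.get(i).copied() // None for an out-of-bounds index, never a wild read
}

fn main() {
    let buf = [10, 20, 30];
    assert_eq!(get(&buf, 1), Some(20));
    assert_eq!(get(&buf, 99), None); // the C version would silently read garbage
}
```

The hard part of the research challenge is doing this kind of translation at scale while keeping the result idiomatic, not merely C transliterated into `unsafe` Rust.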

DARPA's characterization of the situation suggests the verdict on C and C++ has already been rendered. "After more than two decades of grappling with memory safety issues in C and C++, the software engineering community has reached a consensus," the research agency said, pointing to the Office of the National Cyber Director's call to do more to make software more secure. "Relying on bug-finding tools is not enough...."

Peter Morales, CEO of Code Metal, a company that just raised $16.5 million to focus on transpiling code for edge hardware, told The Register the DARPA project is promising and well-timed. "I think [TRACTOR] is very sound in terms of the viability of getting there and I think it will have a pretty big impact in the cybersecurity space where memory safety is already a pretty big conversation," he said.

DARPA's statement had an ambitious headline: "Eliminating Memory Safety Vulnerabilities Once and For All."

"Rust forces the programmer to get things right," said DARPA project manager Wallach. "It can feel constraining to deal with all the rules it forces, but when you acclimate to them, the rules give you freedom. They're like guardrails; once you realize they're there to protect you, you'll become free to focus on more important things."

Code Metal's Morales called the project "a DARPA-hard problem," noting the daunting number of edge cases that might come up. And even DARPA's program manager conceded to the Register that "some things like the Linux kernel are explicitly out of scope, because they've got technical issues where Rust wouldn't fit."

Thanks to long-time Slashdot reader RoccamOccam for sharing the news.
Government

Is the 'Kids Online Safety Act' Losing Momentum? (theguardian.com)

America's Senate "overwhelmingly passed major online safety reforms to protect children on social media," reports the Guardian.

"But with ongoing pushback from the tech industry and freedom of speech organizations, the legislation faces an uncertain future in the House." "It's a terrible idea to let politicians and bureaucrats decide what people should read and view online," freedom of speech group the Electronic Frontier Foundation said of the Senate's passage of KOSA... Advocates of KOSA reject these critiques, noting the bill has been revised to address many of those concerns — including shifting enforcement from attorneys general to the Federal Trade Commission and focusing the "duty of care" provisions on product design features of the site or app rather than content specifically. A number of major LGBTQ+ groups dropped their opposition to the legislation following these changes, including the Human Rights Campaign, GLAAD and the Trevor Project.

After passing the Senate this week, the bill has now moved on to the House, which is on a six-week summer recess until September. Proponents are now directing their efforts towards House legislators to turn the bill into law. Joe Biden has indicated he would sign it if it passes. In a statement Tuesday encouraging the House to pass the legislation, the US president said: "We need action by Congress to protect our kids online and hold big tech accountable for the national experiment they are running on our children for profit...."

House speaker Mike Johnson of Louisiana has expressed support for moving forward on KOSA and passing legislation this Congress, but it's unclear if he will bring the bill up in the House immediately. Some experts say the bill is unlikely to be passed in the House in the form passed by the Senate. "Given the concerns about potential censorship and the possibility of minors' lacking access to vital information, pausing KOSA makes eminent sense," said Gautam Hans, associate clinical professor of law and associate director of the First Amendment Clinic at Cornell Law School. He added that the House may put forward its own similar legislation instead, or modify KOSA to further address some of these concerns.

The political news site Punchbowl News also noted this potentially significant quote: A House GOP leadership aide told us this about KOSA: "We've heard concerns across our Conference and the Senate bill cannot be brought up in its current form."
Techdirt argues that "Senator Rand Paul's really excellent letter laying out the reasons he couldn't support the bill may have had an impact."

Thanks to long-time Slashdot reader SonicSpike for sharing the news.
AI

NIST Releases an Open-Source Platform for AI Safety Testing (scmagazine.com)

America's National Institute of Standards and Technology (NIST) has released a new open-source software tool called Dioptra for testing the resilience of machine learning models to various types of attacks.

"Key features that are new from the alpha release include a new web-based front end, user authentication, and provenance tracking of all the elements of an experiment, which enables reproducibility and verification of results," a NIST spokesperson told SC Media: Previous NIST research identified three main categories of attacks against machine learning algorithms: evasion, poisoning and oracle. Evasion attacks aim to trigger an inaccurate model response by manipulating the data input (for example, by adding noise), poisoning attacks aim to impede the model's accuracy by altering its training data, leading to incorrect associations, and oracle attacks aim to "reverse engineer" the model to gain information about its training dataset or parameters, according to NIST.

The free platform enables users to determine to what degree attacks in the three categories mentioned will affect model performance and can also be used to gauge the use of various defenses such as data sanitization or more robust training methods.
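As a rough illustration of the "evasion" category, the sketch below shows how a small, targeted perturbation of an input can flip a model's prediction. The linear classifier and all its numbers are invented for illustration; Dioptra runs real models against real attack suites:

```rust
// Toy linear classifier: predicts "positive" when w·x + b > 0.
fn predict(w: &[f64], b: f64, x: &[f64]) -> bool {
    let score: f64 = w.iter().zip(x).map(|(wi, xi)| wi * xi).sum::<f64>() + b;
    score > 0.0
}

// FGSM-style evasion: step each feature a small amount against the
// weight's sign, pushing the score toward the decision boundary.
fn evade(w: &[f64], x: &[f64], eps: f64) -> Vec<f64> {
    w.iter().zip(x).map(|(wi, xi)| xi - eps * wi.signum()).collect()
}

fn main() {
    let (w, b) = (vec![1.0, -2.0], 0.5);
    let x = vec![1.0, 0.2]; // score = 1.0 - 0.4 + 0.5 = 1.1 → positive
    assert!(predict(&w, b, &x));

    // A bounded nudge (at most 0.6 per feature) flips the same model's answer.
    let x_adv = evade(&w, &x, 0.6); // [0.4, 0.8] → score = -0.7 → negative
    assert!(!predict(&w, b, &x_adv));
}
```

A testbed like Dioptra essentially automates this loop at scale: run an attack family against a model, measure how far accuracy degrades, then repeat with a defense (data sanitization, robust training) in place.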

The open-source testbed has a modular design to support experimentation with different combinations of factors such as different models, training datasets, attack tactics and defenses. The newly released 1.0.0 version of Dioptra comes with a number of features to maximize its accessibility to first-party model developers, second-party model users or purchasers, third-party model testers or auditors, and researchers in the ML field alike. Along with its modular architecture design and user-friendly web interface, Dioptra 1.0.0 is also extensible and interoperable with Python plugins that add functionality... Dioptra tracks experiment histories, including inputs and resource snapshots that support traceable and reproducible testing, which can unveil insights that lead to more effective model development and defenses.

NIST also published final versions of three "guidance" documents, according to the article. "The first tackles 12 unique risks of generative AI along with more than 200 recommended actions to help manage these risks. The second outlines Secure Software Development Practices for Generative AI and Dual-Use Foundation Models, and the third provides a plan for global cooperation in the development of AI standards."

Thanks to Slashdot reader spatwei for sharing the news.
Government

Artist and Musician Sue SEC Over Its NFT Regulatory Jurisdiction (decrypt.co)

"Five years ago, Brian Frye set an elaborate trap," writes Decrypt.co. "Now the law professor is teaming up with a singer-songwriter to finally spring it" on America's Securities and Exchange Commission "in a novel lawsuit — and in the process, prevent the regulator from ever coming after NFT art projects again." Over and again, the SEC has sued cherry-picked NFT projects it says qualify as unregistered securities — but never once has the regulator defined what types of NFT projects are legal and which are not, casting a chill over the nascent industry... [In 2019] Frye, an expert in securities law and a fan of novel technologies, minted an NFT of a letter he sent to the SEC in which he declared his art project to constitute an illegal, unregistered security. If the conceptual art project wasn't a security, Frye challenged the agency, then it needed to say so. The SEC never responded to Frye — not then, and not after several more self-incriminating correspondences from the professor. But in due time, the agency began vigorously pursuing, and suing, NFT projects.
So 10 months ago, Jonathan Mann — who writes a new song every day and shares it online — crafted a song titled "This Song is A Security." As a seller of NFTs himself, Mann wrote the song "to fight back against the SEC, and defend his right — plus the rights of other artists like him — to earn revenue," according to the article: Frye, who'd practically been salivating for such an opportunity for half a decade, was a natural fit.... In the lawsuit filed against the SEC in Louisiana earlier this week, they challenged the SEC's standing to regulate their NFT-backed artworks as securities, and demanded the agency declare that their respective art projects do not constitute illegal, unregistered securities offerings.
More from the International Business Times: The complaint asked the court to clarify whether the SEC should regulate art and whether artists were supposed to "register" their artworks before selling the pieces to the general public. The complaint also asked whether artists should be "forced to make public disclosures about the 'risks' of buying their art," and whether artists should be "required to comply" with federal securities laws...

The Blockchain Association, a collective crypto group that includes some of the biggest digital asset firms, asserted that the SEC has no authority over NFT art. "We support the plaintiffs in their quest for legal clarity," the group said.

In an interview with Slashdot, Mann says he started his "Song a Day" project almost 17 years ago (when he was 26 years old) — and his interest in NFTs is sincere: "Over the years, I've always sought a way to make Song A Day sustainable financially, through video contests, conference gigs, ad revenue, royalties, Patreon and more.

"When I came across NFTs in 2017, they didn't have a name. We just called them 'digital collectibles'. For the last 2+ years, NFTs have become that self-sustaining model for my work.

"I know most people believe NFTs are a joke at best and actively harmful at worst. Even most people in the crypto community have given up on them. Despite all that, I still believe they're worth pursuing.

"Collecting an NFT from an artist you love is the most direct way to support them. There's no multinational corporation, no payment processor, and no venture capitalists between you and the artist you want to support."

Slashdot also tracked down the SEC's Office of Public Affairs, and got an official response from SEC public affairs specialist Ryan White.

Slashdot: The suit argues that the SEC's approach "threatens the livelihoods of artists and creators that are simply experimenting with a novel, fast-growing technology," and seeks guidance in the face of a "credible threat of enforcement". Is the SEC going to respond to this lawsuit? And if you don't have an answer at this time, can you give me a general comment on the issues and concerns being raised?

SEC Public Affairs Specialist Ryan White: We would decline comment.

Decrypt.co points out that the lawsuit "has no guarantee of offering some conclusive end to the NFT regulation question... That may only come with concrete legislation or a judgment by the Supreme Court."

But Mann's song still makes a very public show out of their concerns — with Mann even releasing a follow-up song titled "I'm Suing the SEC." (Its music video mixes together wacky clips of Mila Kunis's Stoner Cats and Fonzie jumping a shark with footage of NFT critics like Elizabeth Warren and SEC chairman Gary Gensler.)

And an earlier song also used auto-tune to transform Gensler's remarks about cryptocurrencies into the chorus of a song titled "Hucksters, Fraudsters, Scam Artists, Ponzi Schemes".

Mann later auctioned an NFT of the song — for over $3,000 in Ethereum.
Portables (Apple)

Apple Is Finally Sending Out Payments For Its Defective Macbook Butterfly Keyboards (9to5mac.com)

An anonymous Slashdot reader shared this report from the blog 9to5Mac: In 2022, Apple agreed to pay a $50 million settlement for certain eligible 2015-2019 MacBook owners who experienced problems with their butterfly keyboards. The claims process opened in late 2022, and the settlement got final approval last May. Starting today, eligible MacBook owners are finally receiving their payouts...

Apple finally moved away from the butterfly keyboard on the 16-inch MacBook Pro in late 2019. By mid 2020, the 13-inch MacBook Pro and MacBook Air also moved to the new Magic Keyboard. However, that wouldn't be the end of the story for Apple... In mid 2022, Apple was required to pay a $50 million settlement. The claims process started later that year, although there were some caveats. For one, you could only claim this settlement if you lived in California, Florida, Illinois, Michigan, New Jersey, New York, or Washington. This excludes 43 US states, so a good number of people with butterfly keyboards weren't even covered. Additionally, the estimated payout amount varied depending on the severity of your keyboard problems:

- Up to $395 for 2 or more top case replacements
- Up to $125 for 1 top case replacement
- Up to $50 for keycap replacements

Obviously, this wasn't the most ideal outcome for customers, but if you were eligible and filed a claim (or multiple), you're in luck!

The original goal "was to make the keyboards thinner and the laptops slimmer," remembers ZDNet. This backfired spectacularly as MacBook owners started complaining that the keys would easily stick or get jammed by dust, crumbs, or other tiny objects. Noted tech blogger John Gruber even called the new keyboards "the worst products in Apple's history."
Gruber's headline? "Appl Still Hasn't Fixd Its MacBook Kyboad Problm"
Government

Why DARPA is Funding an AI-Powered Bug-Spotting Challenge (msn.com)

Somewhere in America's Defense Department, the DARPA R&D agency is running a two-year contest to write an AI-powered program "that can scan millions of lines of open-source code, identify security flaws and fix them, all without human intervention," reports the Washington Post. [Alternate URL here.]

But as the Post sees it, "The contest is one of the clearest signs to date that the government sees flaws in open-source software as one of the country's biggest security risks, and considers artificial intelligence vital to addressing it." Free open-source programs, such as the Linux operating system, help run everything from websites to power stations. The code isn't inherently worse than what's in proprietary programs from companies like Microsoft and Oracle, but there aren't enough skilled engineers tasked with testing it. As a result, poorly maintained free code has been at the root of some of the most expensive cybersecurity breaches of all time, including the 2017 Equifax disaster that exposed the personal information of half of all Americans. The incident, which led to the largest-ever data breach settlement, cost the company more than $1 billion in improvements and penalties.

If people can't keep up with all the code being woven into every industrial sector, DARPA hopes machines can. "The goal is having an end-to-end 'cyber reasoning system' that leverages large language models to find vulnerabilities, prove that they are vulnerabilities, and patch them," explained one of the advising professors, Arizona State's Yan Shoshitaishvili.... Some large open-source projects are run by near-Wikipedia-size armies of volunteers and are generally in good shape. Some have maintainers who are given grants by big corporate users that turn it into a job. And then there is everything else, including programs written as homework assignments by authors who barely remember them.

"Open source has always been 'Use at your own risk,'" said Brian Behlendorf, who started the Open Source Security Foundation after decades of maintaining a pioneering free server software, Apache, and other projects at the Apache Software Foundation. "It's not free as in speech, or even free as in beer," he said. "It's free as in puppy, and it needs care and feeding."

40 teams entered the contest, according to the article — and seven received $1 million in funding to continue on to the next round, with the finalists to be announced at this year's Def Con.

"Under the terms of the DARPA contest, all finalists must release their programs as open source," the article points out, "so that software vendors and consumers will be able to run them."
Privacy

Epic Games CEO Criticized For Calling Apple's 'Find My' Feature 'Super Creepy' (macrumors.com)

Slashdot reader Applehu Akbar shared this report from MacRumors: Epic Games CEO Tim Sweeney commented on Apple's 'Find My' service, referring to it as "super creepy surveillance tech" that "shouldn't exist." Sweeney went on to explain that several years ago, "a kid" stole a Mac laptop out of his car. Years later, Sweeney was checking Find My, and as the Mac was still connected to his Apple ID account, it showed him the location where the thief lived.
When someone asked Sweeney if he'd at least gotten his laptop back, Sweeney answered "No. I was creeped the hell out by having unexpectedly received the kid's address, and turned off Find My iPhone on all of my devices."

Slashdot reader crmarvin42 quipped "Tell me you are stupidly rich, without telling me you are stupidly rich... Next someone will be saying that it is 'Creepy' to have security footage of someone taking your Amazon packages off of your porch." And they also questioned Sweeney's sincerity, suggesting that he's "just saying that to try and make Apple look bad because of all the lawsuits going on."

MacRumors followed the ensuing discussion: Sweeney said that the location of a device in someone's possession can't be tracked without tracking the person, and "people have a right to privacy." ["This right applies to second hand device buyers and even to thieves."] He claims that detection and recovery of a lost or stolen device should be "mediated by due process of law" and not exposed to the device owner "in vigilante fashion."
Some responded to Sweeney's comments by sharing the headline of a Vox news story about Epic's own privacy polices. ("Fortnite maker Epic Games has to pay $520 million for tricking kids and violating their privacy.")

MacRumors cited a 2014 report that thefts of iPhones dropped after the introduction of Apple's "Activation Lock" feature (which prevents the disabling of 'Find My' without a password).

But when the blog AppleInsider accused Sweeney of "an incredibly bad leap of logic" — Sweeney responded. "You're idealizing this issue as good guys tracking criminals to their lairs, but when Find My or Google's similar tech points a device owner to a device possessor's home, one must anticipate the presence of families and kids and innocent used device buyers, and ask whether it's really appropriate for a platform to use GPS and shadowy mesh network tech to set up physical confrontations among individuals."

Sweeney also posted a quote from Steve Jobs about how at Apple, "we worry that some 14-year-old is going to get stalked and something terrible is going to happen because of our phone."
Privacy

NFL to Roll Out Facial Authentication Software to All Stadiums, League-Wide (therecord.media)

America's National Football League "is the latest organization to turn to facial authentication to bolster event security," reports the Record, citing a new announcement this week: All 32 NFL stadiums will start using the technology this season, after the league signed a contract with a company that uses facial scans to verify the identity of people entering event venues and other secure spaces.

The facial authentication platform, which counts the Cleveland Browns' owners as investors, will be used to "streamline and secure" entry for thousands of credentialed media, officials, staff and guests so they can easily access restricted areas such as press boxes and locker rooms, Jeff Boehm, the chief operating officer of Wicket, said in a LinkedIn post Monday. "Credential holders simply take a selfie before they come, and then Wicket verifies their identity and checks their credentials with Accredit (a credentialing platform) as they walk through security checkpoints," Boehm added.

Wicket technology was deployed in a handful of NFL stadiums last year as part of a pilot program. Other stadiums will start rolling it out beginning on Aug. 8, when the pre-season kicks off. Some teams also have extended their use of the technology to scan the faces of ticket holders. The Cleveland Browns, Atlanta Falcons and New York Mets all have used the company's facial authentication software to authenticate fans with tickets, according to Stadium Tech Report. "Fans come look at the tablet and, instantly, the tablet recognizes the fan," Brandon Covert, the vice president of information technology for the Cleveland Browns, said in a testimonial appearing on Wicket's website. "It's almost a half-second stop. It's not even a stop — more of a pause."

"The Browns also use Wicket to verify the ages of fans purchasing alcohol at concession stands, according to Wicket's LinkedIn page," the article points out.

And a July report from Privacy International found that 25 of the top 100 soccer stadiums in the world are already using facial recognition technology.

Thanks to long-time Slashdot reader schwit1 for sharing the news.
Safari

When It Comes to Privacy, Safari Is Only the Fourth-Best Browser (yahoo.com)

Apple's elaborate new ad campaign promises that Safari is "a browser that protects your privacy." And the Washington Post says Apple "deserves credit for making many privacy protections automatic with Safari..."

"But Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, said Safari is no better than the fourth-best web browser for your privacy." "If browser privacy were a sport at the Olympics, Apple isn't getting on the medal stand," Cahn said. (Apple did not comment about this.)

Safari stops third-party cookies anywhere you go on the web. So do Mozilla's Firefox and the Brave browser... Chrome allows third-party cookies in most cases unless you turn them off... Even without cookies, a website can pull information like the resolution of your computer screen, the fonts you have installed, add-on software you use and other technical details that in aggregate can help identify your device and what you're doing on it. The measures, typically called "fingerprinting," are privacy-eroding tracking by another name. Nick Doty with the Center for Democracy & Technology said there's generally not much you can do about fingerprinting. Usually you don't know you're being tracked that way. Apple says it defends against common fingerprinting techniques but Cahn said Firefox, Brave and the Tor Browser all are better at protecting you from digital surveillance. That's why he said Safari is no better than the fourth-best browser for privacy.
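To illustrate why fingerprinting works even without cookies, here is a minimal sketch of the idea: a handful of seemingly innocuous device attributes, combined and hashed, can yield a stable identifier. The attribute names and values below are hypothetical stand-ins for the kinds of signals (screen resolution, fonts, and so on) the article describes, not any specific tracker's implementation.

```python
import hashlib
import json

# Hypothetical device attributes of the kind a fingerprinting script can
# read through ordinary browser APIs, per the article: screen resolution,
# installed fonts, add-on software, and other technical details.
signals = {
    "screen_resolution": "2560x1440",
    "installed_fonts": ["Arial", "Helvetica", "Menlo"],
    "timezone": "America/New_York",
    "extensions": ["adblock"],
}

def fingerprint(attrs: dict) -> str:
    """Combine the attributes into one stable identifier by hashing a
    canonical (key-sorted) serialization of them."""
    canonical = json.dumps(attrs, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

print(fingerprint(signals))
```

No single attribute identifies a device, but in aggregate they often do: the hash is deterministic for the same configuration, and changing any one attribute (say, installing a font) produces a different identifier, which is why anti-fingerprinting defenses try to make these signals less distinctive rather than block them outright.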

Safari does offer extra privacy protections in its "private" mode, the article points out. "When you use this option, Apple says it does more to block use of 'advanced' fingerprinting techniques. It also steps up defenses against tracking that adds bits of identifying information to the web links you click."

The article concludes that Safari users can "feel reasonably good about the privacy (and security) protections, but you can probably do better — either by tweaking your Apple settings or using a web browser that's even more private than Safari."
The Courts

US Sues TikTok Over 'Massive-Scale' Privacy Violations of Kids Under 13 (reuters.com) 10

An anonymous reader quotes a report from Reuters: The U.S. Justice Department filed a lawsuit Friday against TikTok and parent company ByteDance for failing to protect children's privacy on the social media app as the Biden administration continues its crackdown on the social media site. The government said TikTok violated the Children's Online Privacy Protection Act that requires services aimed at children to obtain parental consent to collect personal information from users under age 13. The suit (PDF), which was joined by the Federal Trade Commission, said it was aimed at putting an end "to TikTok's unlawful massive-scale invasions of children's privacy." Representative Frank Pallone, the top Democrat on the Energy and Commerce Committee, said the suit "underscores the importance of divesting TikTok from Chinese Communist Party control. We simply cannot continue to allow our adversaries to harvest vast troves of Americans' sensitive data."

The DOJ said TikTok knowingly permitted children to create regular TikTok accounts, and then create and share short-form videos and messages with adults and others on the regular TikTok platform. TikTok collected personal information from these children without obtaining consent from their parents. The U.S. alleges that for years millions of American children under 13 have been using TikTok and the site "has been collecting and retaining children's personal information." The FTC is seeking penalties of up to $51,744 per violation per day from TikTok for improperly collecting data, which could theoretically total billions of dollars if TikTok were found liable.
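The "billions of dollars" figure follows directly from the per-violation math. The violation count and duration below are hypothetical, chosen only to show the scale; the statute's cited maximum of $51,744 per violation per day is the figure from the article.

```python
# Maximum civil penalty cited in the suit: $51,744 per violation per day.
PENALTY_PER_VIOLATION_PER_DAY = 51_744

# Hypothetical scale, for illustration only: if 100,000 under-13 accounts
# each counted as one daily violation over a 30-day period...
violations = 100_000
days = 30

total = PENALTY_PER_VIOLATION_PER_DAY * violations * days
print(f"${total:,}")  # prints $155,232,000,000
```

Even at this modest hypothetical scale, the theoretical maximum exceeds $155 billion, which is why liability exposure is described in the billions despite the per-violation amount being five figures.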
TikTok said Friday it disagrees "with these allegations, many of which relate to past events and practices that are factually inaccurate or have been addressed. We are proud of our efforts to protect children, and we will continue to update and improve the platform."
Government

Secret Service's Tech Issues Helped Shooter Go Undetected At Trump Rally (theguardian.com) 155

An anonymous reader quotes a report from The Guardian: The technology flaws of the U.S. Secret Service helped the gunman who attempted to assassinate Donald Trump during a rally in Butler, Pennsylvania, last month evade detection. An officer broadcast "long gun!" over the local law enforcement radio system, according to congressional testimony from the Secret Service this week, the New York Times reported. The radio message should have travelled to a command center shared between local police and the Secret Service, but the message was never received by the Secret Service. About 30 seconds later, the shooter, Thomas Crooks, fired his first shots.

It was one of several technology failures facing the Secret Service on July 13, whether due to malfunction, improper deployment, or the agency's decision not to use the tools at all. The Secret Service had also rejected requests from the Trump campaign for more resources over the past two years. The agency turned down the use of a surveillance drone at the rally site and did not bring in a system to boost the signals of agents' devices, even though the area had poor cell service. A system to detect drone use by others in the area also failed, according to the report in the New York Times, because the local communications network was overwhelmed by the number of people gathered at the rally; the agency did not deploy the technology it had to bolster its communications. The shooter flew his own drone over the site for 11 minutes without being detected, about two hours before Trump appeared at the rally.
Ronald Rowe Jr., the acting Secret Service director, said the agency never utilized the technological tools that could have spotted the shooter beforehand.

A former Secret Service officer also told the New York Times he "resigned in 2017 over frustration with the agency's delays in evaluating new technology and getting clearance and funding to obtain it and then train officers on it," notes The Guardian. Furthermore, the Secret Service failed to record communications between federal and local law enforcement at the rally.
Government

US Progressives Push For Nvidia Antitrust Investigation (reuters.com) 42

Progressive groups and Senator Elizabeth Warren are urging the Department of Justice to investigate Nvidia for potential antitrust violations due to its dominant position in the AI chip market. The groups criticize Nvidia's bundling of software and hardware, claiming it stifles innovation and locks in customers. Reuters reports: Demand Progress and nine other groups wrote a letter (PDF) this week urging Department of Justice antitrust chief Jonathan Kanter to probe business practices at Nvidia, whose market value hit $3 trillion this summer on demand for chips able to run the complex models behind generative AI. The groups, which oppose monopolies and promote government oversight of tech companies, among other issues, took aim at Nvidia's bundling of software and hardware, a practice that French antitrust enforcers have flagged as they prepare to bring charges.

"This aggressively proprietary approach, which is strongly contrary to industry norms about collaboration and interoperability, acts to lock in customers and stifles innovation," the groups wrote. Nvidia has roughly 80% of the AI chip market, a figure that includes the custom AI processors made by cloud computing companies like Google, Microsoft and Amazon.com. The chips made by the cloud giants are not sold directly but are typically rented out through each company's cloud platform.
A spokesperson for Nvidia said: "Regulators need not be concerned, as we scrupulously adhere to all laws and ensure that NVIDIA is openly available in every cloud and on-prem for every enterprise. We'll continue to support aspiring innovators in every industry and market and are happy to provide any information regulators need."
Government

Senators Propose 'Digital Replication Right' For Likeness, Extending 70 Years After Death 46

An anonymous reader quotes a report from Ars Technica: On Wednesday, US Sens. Chris Coons (D-Del.), Marsha Blackburn (R-Tenn.), Amy Klobuchar (D-Minn.), and Thom Tillis (R-NC) introduced the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act of 2024. The bipartisan legislation, up for consideration in the US Senate, aims to protect individuals from unauthorized AI-generated replicas of their voice or likeness. The NO FAKES Act would create legal recourse for people whose digital representations are created without consent. It would hold both individuals and companies liable for producing, hosting, or sharing these unauthorized digital replicas, including those created by generative AI. Because generative AI technology has become mainstream in the past two years, creating convincing audio or image fakes of people has become fairly trivial, with easy photorealistic video replicas likely next to arrive. [...]

To protect a person's digital likeness, the NO FAKES Act introduces a "digital replication right" that gives individuals exclusive control over the use of their voice or visual likeness in digital replicas. This right extends 10 years after death, with possible five-year extensions if actively used. It can be licensed during life and inherited after death, lasting up to 70 years after an individual's death. Along the way, the bill defines what it considers to be a "digital replica": "DIGITAL REPLICA.-The term "digital replica" means a newly created, computer-generated, highly realistic electronic representation that is readily identifiable as the voice or visual likeness of an individual that- (A) is embodied in a sound recording, image, audiovisual work, including an audiovisual work that does not have any accompanying sounds, or transmission- (i) in which the actual individual did not actually perform or appear; or (ii) that is a version of a sound recording, image, or audiovisual work in which the actual individual did perform or appear, in which the fundamental character of the performance or appearance has been materially altered; and (B) does not include the electronic reproduction, use of a sample of one sound recording or audiovisual work into another, remixing, mastering, or digital remastering of a sound recording or audiovisual work authorized by the copyright holder."
The NO FAKES Act "includes provisions that aim to balance IP protection with free speech," notes Ars. "It provides exclusions for recognized First Amendment protections, such as documentaries, biographical works, and content created for purposes of comment, criticism, or parody."
Google

Google Defeats RNC Lawsuit Claiming Email Spam Filters Harmed Republican Fundraising 84

A U.S. judge has thrown out a Republican National Committee lawsuit accusing Alphabet's Google of intentionally misdirecting the political party's email messages to users' spam folders. From a report: U.S. District Judge Daniel Calabretta in Sacramento, California, on Wednesday dismissed the RNC's lawsuit for a second time, and said the organization would not be allowed to refile it. While expressing some sympathy for the RNC's allegations, he said it had not made an adequate case that Google violated California's unfair competition law.

The lawsuit alleged Google had intentionally or negligently sent RNC fundraising emails to Gmail users' spam folders and cost the group hundreds of thousands of dollars in potential donations. Google denied any wrongdoing.
