AI

US Police Seldom Disclose Use of AI-Powered Facial Recognition, Investigation Finds (msn.com) 63

An anonymous reader shared this report from the Washington Post: Hundreds of Americans have been arrested after being connected to a crime by facial recognition software, a Washington Post investigation has found, but many never know it because police seldom disclose their use of the controversial technology...

In fact, the records show that officers often obscured their reliance on the software in public-facing reports, saying that they identified suspects "through investigative means" or that a human source such as a witness or police officer made the initial identification... The Coral Springs Police Department in South Florida instructs officers not to reveal the use of facial recognition in written reports, according to operations deputy chief Ryan Gallagher. He said investigative techniques are exempt from Florida's public disclosure laws... The department would disclose the source of the investigative lead if it were asked in a criminal proceeding, Gallagher added....

Prosecutors are required to inform defendants about any information that would help prove their innocence, reduce their sentence or hurt the credibility of a witness testifying against them. When prosecutors fail to disclose such information — known as a "Brady violation" after the 1963 Supreme Court ruling that mandates it — the court can declare a mistrial, overturn a conviction or even sanction the prosecutor. No federal laws regulate facial recognition and courts do not agree whether AI identifications are subject to Brady rules. Some states and cities have begun mandating greater transparency around the technology, but even in these locations, the technology is either not being used that often or it's not being disclosed, according to interviews and public records requests...

Over the past four years, the Miami Police Department ran 2,500 facial recognition searches in investigations that led to at least 186 arrests and more than 50 convictions. Among the arrestees, just 1 in 16 were told about the technology's use — less than 7 percent — according to a review by The Post of public reports and interviews with some arrestees and their lawyers. The police department said that in some of those cases the technology was used for purposes other than identification, such as finding a suspect's social media feeds, but did not indicate in how many of the cases that happened. Carlos J. Martinez, the county's chief public defender, said he had no idea how many of his Miami clients were identified with facial recognition until The Post presented him with a list. "One of the basic tenets of our justice system is due process, is knowing what evidence there is against you and being able to challenge the evidence that's against you," Martinez said. "When that's kept from you, that is an all-powerful government that can trample all over us."

After reviewing The Post's findings, Miami police and local prosecutors announced plans to revise their policies to require clearer disclosure in every case involving facial recognition.

The article points out that Miami's Assistant Police Chief actually told a congressional panel on law enforcement use of AI that his department is "the first to be completely transparent about" the use of facial recognition. (When confronted with the Washington Post's findings, he "acknowledged that officers may not have always informed local prosecutors [and] said the department would give prosecutors all information on the use of facial recognition, in past and future cases".)

He told the Post that the department would "begin training officers to always disclose the use of facial recognition in incident reports." But he also said they would "leave it up to prosecutors to decide what to disclose to defendants."
United Kingdom

UK Post Office Executive Suspended Over Allegations of Destroying Software Scandal Evidence (computerweekly.com) 72

The British Post Office scandal "was first exposed by Computer Weekly in 2009, revealing the stories of seven subpostmasters and the problems they suffered due to Horizon accounting software," remembers Computer Weekly, "which led to the most widespread miscarriage of justice in British history."

But now the Post Office "is investigating allegations that a senior executive instructed staff to destroy or conceal documents that could be of interest to the Post Office scandal public inquiry," Computer Weekly writes. A company employee acknowledged a report made through an internal whistleblower program "regarding destroying or concealing material... allegations that a senior Post Office member of staff had instructed their team to destroy or conceal material of possible interest to the inquiry, and that the same individual had engaged in inappropriate behaviour." The shocking revelation echoes evidence from appeals against wrongful convictions in 2021. During the Court of Appeal trials it was revealed that a senior Post Office executive instructed employees to shred documents that undermined an insistence that its Horizon computer system was robust, amid claims that errors in the system caused unexplained accounting shortfalls.
IOS

iOS and Android Security Scare: Two Apps Found Supporting 'Pig Butchering' Scheme (forbes.com) 31

"Pig Butchering Alert: Fraudulent Trading App targeted iOS and Android users."

That's the title of a new report released this week by cybersecurity company Group-IB revealing the official Apple App Store and Google Play store offered apps that were actually one part of a larger fraud campaign. "To complete the scam, the victim is asked to fund their account... After a few seemingly successful trades, the victim is persuaded to invest more and more money. The account balance appears to grow rapidly. However, when the victim attempts to withdraw funds, they are unable to do so."

Forbes reports: Group-IB determined that the frauds would begin with a period of social engineering reconnaissance and entrapment, during which the trust of the potential victim was gained through either a dating app, social media app or even a cold call. The attackers spent weeks on each target. Only when this "fattening up" process had reached a certain point would the fraudsters make their next move: recommending they download the trading app from the official App Store concerned.

When it comes to the iOS app, which is the one that the report focussed on, Group-IB researchers said that the app remained on the App Store for several weeks before being removed, at which point the fraudsters switched to phishing websites to distribute both iOS and Android apps. The use of official app stores, albeit only fleetingly as Apple and Google removed the fake apps in due course, bestowed a sense of authenticity to the operation as people put trust in both the Apple and Google ecosystems to protect them from potentially dangerous apps.

"The use of web-based applications further conceals the malicious activity," according to the researchers, "and makes detection more difficult." [A]fter the download is complete, the application cannot be launched immediately. The victim is then instructed by the cybercriminals to manually trust the Enterprise developer profile. Once this step is completed, the fraudulent application becomes operational... Once a user registers with the fraudulent application, they are tricked into completing several steps. First, they are asked to upload identification documents, such as an ID card or passport. Next, the user is asked to provide personal information, followed by job-related details...

The first discovered application, distributed through the Apple App Store, functions as a downloader, merely retrieving and displaying a web-app URL. In contrast, the second application, downloaded from phishing websites, already contains the web-app within its assets. We believe this approach was deliberate, since the first app was available in the official store, and the cybercriminals likely sought to minimise the risk of detection. As previously noted, the app posed as a tool for mathematical formulas, and including personal trading accounts within an iOS app would have raised immediate suspicion.

The app (which only runs on mobile phones) first launches a fake activity with formulas and graphics, according to the researchers. "We assume that this condition must bypass Apple's checks before being published to the store. As we can see, this simple trick allows cybercriminals to upload their fraudulent application to the Apple Store." They argue their research "reinforces the need for continued review of app store submissions to prevent such scams from reaching unsuspecting victims". But it also highlights "the importance of vigilance and end-user education, even when dealing with seemingly trustworthy apps..."

"Our investigation began with an analysis of Android applications at the request of our client. The client reported that a user had been tricked into installing the application as part of a stock investment scam. During our research, we uncovered a list of similar fraudulent applications, one of which was available on the Google Play Store. These apps were designed to display stock-related news and articles, giving them a false sense of legitimacy."
Privacy

License Plate Readers Are Creating a US-Wide Database of More Than Just Cars (wired.com) 109

Wired reports on "AI-powered cameras mounted on cars and trucks, initially designed to capture license plates, but which are now photographing political lawn signs outside private homes, individuals wearing T-shirts with text, and vehicles displaying pro-abortion bumper stickers — all while recording the precise locations of these observations..."

The detailed photographs all surfaced in search results produced by the systems of DRN Data, a license-plate-recognition (LPR) company owned by Motorola Solutions. The LPR system can be used by private investigators, repossession agents, and insurance companies; a related Motorola business, called Vigilant, gives cops access to the same LPR data. However, files shared with WIRED by artist Julia Weist, who is documenting restricted datasets as part of her work, show how those with access to the LPR system can search for common phrases or names, such as those of politicians, and be served with photographs where the search term is present, even if it is not displayed on license plates... Beyond highlighting the far-reaching nature of LPR technology, which has collected billions of images of license plates, the research also shows how people's personal political views and their homes can be recorded into vast databases that can be queried.

"It really reveals the extent to which surveillance is happening on a mass scale in the quiet streets of America," says Jay Stanley, a senior policy analyst at the American Civil Liberties Union. "That surveillance is not limited just to license plates, but also to a lot of other potentially very revealing information about people."

DRN, in a statement issued to WIRED, said it complies with "all applicable laws and regulations...." Over more than a decade, DRN has amassed more than 15 billion "vehicle sightings" across the United States, and it claims in its marketing materials that it captures more than 250 million sightings per month. Images in DRN's commercial database are shared with police using its Vigilant system, but images captured by law enforcement are not shared back into the wider database. The system is partly fueled by DRN "affiliates" who install cameras in their vehicles, such as repossession trucks, and capture license plates as they drive around. Each vehicle can have up to four cameras attached to it, capturing images from all angles. These affiliates earn monthly bonuses and can also receive free cameras and search credits...

"License plate recognition (LPR) technology supports public safety and community services, from helping to find abducted children and stolen vehicles to automating toll collection and lowering insurance premiums by mitigating insurance fraud," Jeremiah Wheeler, the president of DRN, says in a statement... Wheeler did not respond to WIRED's questions about whether there are limits on what can be searched in license plate databases, why images of homes with lawn signs but no vehicles in sight appeared in search results, or if filters are used to reduce such images.

Privacy experts shared their reactions with Wired:
  • "Perhaps [people] want to express themselves in their communities, to their neighbors, but they don't necessarily want to be logged into a nationwide database that's accessible to police authorities." — Jay Stanley, a senior policy analyst at the American Civil Liberties Union
  • "When government or private companies promote license plate readers, they make it sound like the technology is only looking for lawbreakers or people suspected of stealing a car or involved in an amber alert, but that's just not how the technology works. The technology collects everyone's data and stores that data often for immense periods of time." — Dave Maass, an EFF director of investigations
  • "The way that the country is set up was to protect citizens from government overreach, but there's not a lot put in place to protect us from private actors who are engaged in business meant to make money." — Nicole McConlogue, associate law professor at Mitchell Hamline School of Law (who has researched license-plate-surveillance systems)

Thanks to long-time Slashdot reader schwit1 for sharing the article.


Crime

Criminal Charges Announced Over Multi-Year Fraud Scheme in a Carbon Credits Market (marketwatch.com) 52

This week the U.S. Attorney's Office for the Southern District of New York unsealed charges over a "scheme to commit fraud" in carbon markets, which prosecutors say fraudulently netted one company "tens of millions of dollars" worth of credits and ultimately led to "securing an investment of over $100 million."

MarketWatch reports: Ken Newcombe had spent years building a program to distribute more environmentally friendly cookstoves for free to rural communities in Africa and Southeast Asia. The benefit for his company, C-Quest Capital, would be the carbon credits it would receive in exchange for reducing the amount of fuel people burned in order to cook food — credits the company could then sell for a profit to big oil companies like BP.

But when Newcombe tried to ramp up the program, federal prosecutors said in an indictment made public Wednesday, he quickly realized that the stoves wouldn't deliver the emissions savings he had promised investors. Rather than admit his mistake, he and his partners cooked the books instead, prosecutors said... That allowed them to obtain carbon credits worth tens of millions of dollars that they didn't deserve, prosecutors said. On the basis of the fraudulently gained credits, prosecutors said, C-Quest was able to secure $250 million in funding from an outside investor.

"The alleged actions of the defendants and their co-conspirators risked undermining the integrity of [the global market for carbon credits], which is an important part of the fight against climate change," said Damian Williams, the U.S. attorney for the Southern District of New York.

From the announcement by the U.S. Attorney's Office: U.S. Attorney Damian Williams said... "The alleged actions of the defendants and their co-conspirators risked undermining the integrity of that market, which is an important part of the fight against climate change. Protecting the sanctity and integrity of the financial markets continues to be a cornerstone initiative for this Office, and we will continue to be vigilant in rooting out fraud in the market for carbon credits...."

While most carbon credits are created through, and trade in, compliance markets, there is also a voluntary carbon market. Voluntary markets revolve around companies and entities that voluntarily set goals to reduce or offset their carbon emissions, often to align with goals from employees or shareholders. In voluntary markets, the credits are issued by non-governmental organizations, using standards for measuring emission reductions that they develop based on input from market participants, rather than on mandates from governments. The non-governmental organizations issue voluntary carbon credits to project developers that run projects that reduce emissions or remove greenhouse gases from the atmosphere.

CQC was a for-profit company that ran projects to generate carbon credits — including a type of credit known as a voluntary carbon unit ("VCU") — by reducing emissions of greenhouse gases. CQC profited by selling VCUs it obtained, often to companies seeking to offset the impact of greenhouse gases they emit in the course of operating their businesses.

The company itself was not charged due to "voluntary and timely self-disclosure of misconduct," according to the announcement, along with "full and proactive cooperation, timely and appropriate remediation, and agreement to cancel or void certain voluntary carbon units."
Cellphones

America's FCC Orders T-Mobile To Deliver Better Cybersecurity (csoonline.com) 13

T-Mobile experienced three major data breaches in 2021, 2022, and 2023, according to CSO Online, "which impacted millions of its customers."

After a series of investigations by America's Federal Communications Commission, T-Mobile agreed in court to a number of settlement conditions, including moving toward a "modern zero-trust architecture," designating a Chief Information Security Officer, implementing phishing-resistant multifactor authentication, and adopting data minimization, data inventory, and data disposal processes designed to limit its collection and retention of customer information.

Slashdot reader itwbennett writes: According to a consent decree published on Monday by the U.S. Federal Communications Commission, T-Mobile must pay a $15.75 million penalty and invest an equal amount "to strengthen its cybersecurity program, and develop and implement a compliance plan to protect consumers against similar data breaches in the future."

"Implementing these practices will require significant — and long overdue — investments. To do so at T-Mobile's scale will likely require expenditures an order of magnitude greater than the civil penalty here,' the consent decree said.

The article points out that an order of magnitude greater than $15.75 million would be $157.5 million...
Privacy

A Quarter Million Comcast Subscribers Had Data Stolen From Debt Collector (theregister.com) 38

An anonymous reader quotes a report from The Register: Comcast says data on 237,703 of its customers was in fact stolen in a cyberattack on a debt collector it was using, contrary to previous assurances it was given that it was unaffected by that intrusion. That collections agency, Financial Business and Consumer Solutions aka FBCS, was compromised in February, and according to a filing with Maine's attorney general, the firm informed the US cable giant about the unauthorized access in March. At the time, FBCS told the internet'n'telly provider that no Comcast customer information was affected. However, that changed in July, when the collections outfit got in touch again to say that, actually, the Comcast subscriber data it held had been pilfered.

Among the data types stolen were names, addresses, Social Security numbers, dates of birth, and the Comcast account numbers and ID numbers used internally at FBCS. The data pertains to those registered as customers "around 2021." Comcast stopped using FBCS for debt collection services in 2020. Comcast made it clear its own systems, including those of its broadband unit Xfinity, were not broken into, unlike that time in 2023. FBCS earlier said more than 4 million people had their records accessed during that February break-in. As far as we're aware, the agency hasn't said publicly exactly how that network intrusion went down. Now Comcast is informing subscribers that their info was taken in that security breach, and in doing so seems to be the first to say the intrusion was a ransomware attack. [...]

FBCS's official statement only attributes the attack to an "unauthorized actor." It does not mention ransomware, nor many other technical details aside from the data types involved in the theft. No ransomware group we're aware of has ever claimed responsibility for the raid on FBCS. When we asked Comcast about the ransomware, it simply referred us back to the customer notification letter. The cableco used that notification to send another small middle finger FBCS's way, slyly revealing that the agency's financial situation prevents it from offering the usual identity and credit monitoring protection for those affected, so Comcast is having to foot the bill itself.

Government

California Passes Law To Protect Consumer 'Brain Data' (govtech.com) 28

On September 28, California amended the California Consumer Privacy Act of 2018 to recognize the importance of mental privacy. "The law marks the second such legal protection for data produced from invasive neurotechnology, following Colorado, which incorporated neural data into its state data privacy statute, the Colorado Privacy Act (CPA) in April," notes Law.com. GovTech reports: The new bill amends the California Consumer Privacy Act of 2018, which grants consumers rights over personal information that is collected by businesses. The term "personal information" already included biometric data (such as your face, voice, or fingerprints). Now it also explicitly includes neural data. The bill defines neural data as "information that is generated by measuring the activity of a consumer's central or peripheral nervous system, and that is not inferred from nonneural information." In other words, data collected from a person's brain or nerves.

The law prevents companies from selling or sharing a person's data and requires them to make efforts to deidentify the data. It also gives consumers the right to know what information is collected and the right to delete it. "This new law in California will make the lives of consumers safer while sending a clear signal to the fast-growing neurotechnology industry there are high expectations that companies will provide robust protections for mental privacy of consumers," Jared Genser, general counsel to the Neurorights Foundation, which cosponsored the bill, said in a statement. "That said, there is much more work ahead."

EU

Meta Faces Data Retention Limits On Its EU Ad Business After Top Court Ruling (techcrunch.com) 35

An anonymous reader quotes a report from TechCrunch: The European Union's top court has sided with a privacy challenge to Meta's data retention policies. It ruled on Friday that social networks, such as Facebook, cannot keep using people's information for ad targeting indefinitely. The judgement could have major implications on the way Meta and other ad-funded social networks operate in the region. Limits on how long personal data can be kept must be applied in order to comply with data minimization principles contained in the bloc's General Data Protection Regulation (GDPR). Breaches of the regime can lead to fines of up to 4% of global annual turnover -- which, in Meta's case, could put it on the hook for billions more in penalties (NB: it is already at the top of the leaderboard of Big Tech GDPR breachers). [...]

The original challenge to Meta's ad business dates back to 2014 but was not fully heard in Austria until 2020, per noyb. The Austrian supreme court then referred several legal questions to the CJEU in 2021. Some were answered via a separate challenge to Meta/Facebook, in a July 2023 CJEU ruling -- which struck down the company's ability to claim a "legitimate interest" to process people's data for ads. The remaining two questions have now been dealt with by the CJEU. And it's more bad news for Meta's surveillance-based ad business. Limits do apply. Summarizing this component of the judgement in a press release, the CJEU wrote: "An online social network such as Facebook cannot use all of the personal data obtained for the purposes of targeted advertising, without restriction as to time and without distinction as to type of data."

The ruling looks important on account of how ads businesses, such as Meta's, function. Crudely put, the more of your data they can grab, the better -- as far as they are concerned. Back in 2022, an internal memo penned by Meta engineers which was obtained by Vice's Motherboard likened its data collection practices to tipping bottles of ink into a vast lake and suggested the company's aggregation of personal data lacked controls and did not lend itself to being able to silo different types of data or apply data retention limits. Although Meta claimed at the time that the document "does not describe our extensive processes and controls to comply with privacy regulations." How exactly the adtech giant will need to amend its data retention practices following the CJEU ruling remains to be seen. But the law is clear that it must have limits. "[Advertising] companies must develop data management protocols to gradually delete unneeded data or stop using them," noyb suggests.
The court also weighed in on a second question concerning sensitive data that has been "manifestly made public" by the data subject, "and whether sensitive characteristics could be used for ad targeting because of that," reports TechCrunch. "The court ruled that it could not, maintaining the GDPR's purpose limitation principle."
Patents

Cloudflare Defeats Patent Troll (cloudflare.com) 63

Cloudflare has emerged victorious in a patent infringement lawsuit against Sable Networks, securing a $225,000 settlement and forcing the patent holder to dedicate its entire portfolio to the public domain. The case, which began in March 2021 with Sable asserting nearly 100 claims across four patents, concluded after a Texas jury found in February 2024 that Cloudflare did not infringe.

Sable, described by Cloudflare as a "patent troll," had previously sued several tech companies, including Cisco and Juniper Networks, who settled out of court. Cloudflare's aggressive defense strategy included launching Project Jengo, a crowd-sourced initiative to invalidate Sable's patents. The settlement prevents Sable from asserting these patents against any other company in the future, marking a significant blow to patent trolling practices in the tech industry. In a blog post, Cloudflare adds: While this $225,000 can't fully compensate us for the time, energy and frustration of having to deal with this litigation for nearly three years, it does help to even the score a bit. And we hope that it sends an important message to patent trolls everywhere to beware before taking on Cloudflare.
Biotech

23andMe Is On the Brink. What Happens To All Its DNA Data? (npr.org) 60

The one-and-done nature of 23andMe is "indicative of a core business problem with the once high-flying biotech company that is now teetering on the brink of collapse," reports NPR. As 23andMe struggles for survival, many of its 15 million customers are left wondering what the company plans to do with all the data it has collected since it was founded in 2006. An anonymous reader shares an excerpt from the report: Andy Kill, a spokesperson for 23andMe, would not comment on what the company might do with its trove of genetic data beyond general pronouncements about its commitment to privacy. "For our customers, our focus continues to be on transparency and choice over how they want their data to be managed," he said. When signing up for the service, about 80% of 23andMe's customers have opted in to having their genetic data analyzed for medical research. "This rate has held steady for many years," Kill added. The company has an agreement with pharmaceutical giant GlaxoSmithKline, or GSK, that allows the drugmaker to tap the tech company's customer data to develop new treatments for disease. Anya Prince, a law professor at the University of Iowa's College of Law who focuses on genetic privacy, said those worried about their sensitive DNA information may not realize just how few federal protections exist. For instance, the Health Insurance Portability and Accountability Act, also known as HIPAA, does not apply to 23andMe since it is a company outside of the health care realm. "HIPAA does not protect data that's held by direct-to-consumer companies like 23andMe," she said.

Although DNA data has no federal safeguards, some states, like California and Florida, do give consumers rights over their genetic information. "If customers are really worried, they could ask for their samples to be withdrawn from these databases under those laws," said Prince. According to the company, all of its genetic data is anonymized, meaning there is no way for GSK, or any other third party, to connect the sample to a real person. That, however, could make it nearly impossible for a customer to renege on their decision to allow researchers to access their DNA data. "I couldn't go to GSK and say, 'Hey, my sample was given to you -- I want that taken out -- if it was anonymized, right? Because they're not going to re-identify it just to pull it out of the database," Prince said.

Vera Eidelman, a staff attorney with the American Civil Liberties Union who specializes in privacy and technology policy, said the patchwork of state laws governing DNA data makes the genetic data of millions potentially vulnerable to being sold off, or even mined by law enforcement. "Having to rely on a private company's terms of service or bottom line to protect that kind of information is troubling -- particularly given the level of interest we've seen from government actors in accessing such information during criminal investigations," Eidelman said. She points to how investigators used a genealogy website to identify the man known as the Golden State Killer, and how police homed in on an Idaho murder suspect by turning to similar databases of genetic profiles. "This has happened without people's knowledge, much less their express consent," Eidelman said.

Neither case relied on 23andMe, and spokesperson Kill said the company does not allow law enforcement to search its database. The company has, however, received subpoenas to access its genetic information. According to 23andMe's transparency report, authorities have sought genetic data on 15 individuals since 2015, but the company has resisted the requests and never produced data for investigators. "We treat law enforcement inquiries, such as a valid subpoena or court order, with the utmost seriousness. We use all legal measures to resist any and all requests in order to protect our customers' privacy," Kill said. [...] In a September filing to financial regulators, [23andMe CEO Anne Wojcicki] wrote: "I remain committed to our customers' privacy and pledge," meaning the company's rules requiring consent for DNA to be used for research would remain in place, as well as allowing customers to delete their data. Wojcicki added that she is no longer considering offers to buy the company after previously saying she was.

Government

Senator Calls Out John Deere For Clean Air Act Violations, Blocking Farmer Repairs (substack.com) 48

"The Fight to Repair Newsletter is reporting that U.S. Senator Elizabeth Warren is calling out agricultural equipment giant John Deere for possible violations of the federal Clean Air Act and a years-long pattern of thwarting owners' ability to repair their farm equipment," writes longtime Slashdot reader chicksdaddy. From the report: Deere "appears to be evading its responsibilities under the Clean Air Act to grant customers the right to repair their own agricultural equipment." That is costing farmers an estimated $4.2 billion annually "causing them to miss key crop windows on which their businesses and livelihoods rely," Warren wrote in a letter (https://www.theverge.com/2024/10/3/24260513/john-deere-right-to-repair-elizabeth-warren-clean-air-act) dated October 2nd. The letter from Warren (PDF), a Senator from Massachusetts and strong repair advocate, is just the latest volley lobbed at Illinois-based Deere, an iconic American brand and the largest supplier of agricultural equipment to farms in the U.S. Deere controls an estimated 53 percent of the U.S. market for large tractors and 60 percent of the U.S. market for farm combines.

In recent weeks, Deere faced criticism, including from Republican presidential candidate Donald Trump, after laying off close to 2,000 U.S. based employees at facilities in Iowa and Illinois, moving many of those jobs to facilities in Mexico. The company has also been repeatedly called out for complicating repair and service of its farm equipment -- often relying on software locks and digital rights management to force farmers to use Deere dealers and authorized service providers for even the simplest repairs.

The Courts

Judge Blocks California's New AI Law In Case Over Kamala Harris Deepfake (techcrunch.com) 128

An anonymous reader quotes a report from TechCrunch: A federal judge blocked one of California's new AI laws on Wednesday, less than two weeks after it was signed by Governor Gavin Newsom. Shortly after signing AB 2839, Newsom suggested it could be used to force Elon Musk to take down an AI deepfake of Vice President Kamala Harris he had reposted (sparking a petty online battle between the two). However, a California judge just ruled the state can't force people to take down election deepfakes -- not yet, at least. AB 2839 targets the distributors of AI deepfakes on social media, specifically if their post resembles a political candidate and the poster knows it's a fake that may confuse voters. The law is unique because it does not go after the platforms on which AI deepfakes appear, but rather those who spread them. AB 2839 empowers California judges to order the posters of AI deepfakes to take them down or potentially face monetary penalties.

Perhaps unsurprisingly, the original poster of that AI deepfake -- an X user named Christopher Kohls -- filed a lawsuit to block California's new law as unconstitutional just a day after it was signed. Kohls' lawyer wrote in a complaint that the deepfake of Kamala Harris is satire that should be protected by the First Amendment. On Wednesday, United States district judge John Mendez sided with Kohls. Mendez ordered a preliminary injunction to temporarily block California's attorney general from enforcing the new law against Kohls or anyone else, with the exception of audio messages that fall under AB 2839. [...] In essence, he ruled the law is simply too broad as written and could result in serious overstepping by state authorities into what speech is permitted or not.

Facebook

Meta Confirms It Will Use Ray-Ban Smart Glasses Images for AI Training (techcrunch.com) 14

Meta has confirmed that it may use images analyzed by its Ray-Ban Meta AI smart glasses for AI training. The policy applies to users in the United States and Canada who share images with Meta AI, according to the company. While photos captured on the device are not used for training unless submitted to AI, any image shared for analysis falls under different policies, potentially contributing to Meta's AI model development.

Further reading: Meta's Smart Glasses Repurposed For Covert Facial Recognition.
The Courts

NSO Should Lose Spyware Case for Discovery Violations, Meta Says (bloomberglaw.com) 10

WhatsApp and its parent Meta asked a judge to award them a total win against spyware maker NSO Group as punishment for discovery violations in a years-long case accusing the Israeli company of violating anti-hacking laws. From a report: NSO Group violated the Federal Rules of Civil Procedure, repeatedly ignoring the court's orders and its discovery obligations, according to a motion for sanctions filed Wednesday in the US District Court for the Northern District of California. "NSO's discovery violations were willful, and unfairly skew the record on virtually every key issue in the case, from the merits, to jurisdiction, to damages, making a full and fair trial on the facts impossible," they said. Judge Phyllis J. Hamilton should award the companies judgment as a matter of law or, "if the court finds that the limited discovery produced in this case does not suffice," enter default judgment against NSO, WhatsApp and Meta wrote.

The social media platforms first filed their complaint in October 2019, accusing NSO of using WhatsApp to install NSO spyware on the phones of about 1,400 WhatsApp users.
The move follows Apple asking a court last month to dismiss its three-year-old hacking lawsuit against spyware pioneer NSO Group, arguing that it might never be able to get the most critical files about NSO's Pegasus surveillance tool and that its own disclosures could aid NSO and its increasing number of rivals.
The Courts

WP Engine Sues WordPress for Libel, Extortion 49

WP Engine, a major web hosting provider, has filed a federal lawsuit [PDF] against WordPress co-founder Matt Mullenweg and Automattic, alleging libel and attempted extortion. The suit stems from a public dispute over WordPress trademark usage and open-source licensing.

WP Engine, which hosts over 200,000 websites, accuses Mullenweg and Automattic of "abuse of power, extortion, and greed." The conflict escalated after Mullenweg called WP Engine a "cancer to WordPress" on his blog, prompting a cease-and-desist letter. Automattic subsequently demanded 8% of WP Engine's monthly revenue as royalties for alleged trademark infringement. The lawsuit includes 11 complaints, ranging from slander to violations of the Computer Fraud and Abuse Act.
Crime

Police Arrest Four Suspects Linked To LockBit Ransomware Gang (bleepingcomputer.com) 10

Law enforcement from 12 countries arrested four individuals linked to the LockBit ransomware gang, including a developer and a bulletproof hosting administrator. The operation also resulted in the seizure of LockBit infrastructure and involved sanctions targeting affiliates of both LockBit and Evil Corp. BleepingComputer reports: According to Europol, a suspected LockBit ransomware developer was arrested in August 2024 at the request of French authorities while on holiday outside of Russia. The same month, the U.K.'s National Crime Agency (NCA) arrested two more individuals linked to LockBit activity: one believed to be associated with a LockBit affiliate, while the second was apprehended on suspicion of money laundering. In a separate action, at Madrid airport, Spain's Guardia Civil arrested the administrator of a bulletproof hosting service used to shield LockBit's infrastructure. Today, Australia, the United Kingdom, and the United States also revealed sanctions against an individual the UK NCA believes is a prolific LockBit ransomware affiliate linked to Evil Corp.

The United Kingdom sanctioned 15 more Russian nationals involved in Evil Corp's criminal activities, while the United States sanctioned six individuals and Australia targeted two. "These actions follow the massive disruption of LockBit infrastructure in February 2024, as well as the large series of sanctions and operational actions that took place against LockBit administrators in May and subsequent months," Europol said.

The Courts

Meta Hit With New Author Copyright Lawsuit Over AI Training (reuters.com) 47

Novelist Christopher Farnsworth has filed a class-action lawsuit (PDF) against Meta, accusing the company of using his and other authors' pirated books to train its Llama AI model. Farnsworth seeks damages and an order to stop the alleged copyright infringement, joining a growing group of creators suing tech companies over unauthorized AI training. Reuters reports: Farnsworth said in the lawsuit on Tuesday that Meta fed Llama, which powers its AI chatbots, thousands of pirated books to teach it how to respond to human prompts. Other authors including Ta-Nehisi Coates, former Arkansas governor Mike Huckabee and comedian Sarah Silverman have brought similar class-action claims against Meta in the same court over its alleged use of their books in AI training. [...] Several groups of copyright owners including writers, visual artists and music publishers have sued major tech companies over the unauthorized use of their work to train generative AI systems. The companies have argued that their AI training is protected by the copyright doctrine of fair use and that the lawsuits threaten the burgeoning AI industry.
The Courts

Court Blocks Uber Crash Lawsuit After Couple's Daughter Agreed To Uber Eats TOS (npr.org) 122

An anonymous reader quotes a report from NPR: A New Jersey appeals court says a couple cannot sue Uber over a life-altering car accident because of the app's terms and conditions, even though they say it was their daughter who agreed to those terms while placing an Uber Eats order. John and Georgia McGinty -- a Mercer County couple both in their 50s -- filed a lawsuit against the ride-hailing company in February 2023, nearly a year after suffering "serious physical, psychological, and financial damages" when the Uber they were riding in crashed into another car, according to court filings. "There are physical scars, mental scars, and I don't think that they will ever really be able to go back to their full capacity that they were at before," says their attorney, Mike Shapiro.

Uber responded by filing a motion to dismiss the complaint and compel arbitration, which would require the parties to resolve their differences outside court instead -- ostensibly benefiting the company by lowering legal costs and keeping proceedings private. Uber argued that Georgia McGinty, a longtime customer of Uber Rides and Uber Eats, had agreed to arbitrate any disputes with the company when she signed off on the language in the app's terms of use on three occasions over the years. The McGintys fought back, saying it was actually their daughter -- who was and remains a minor -- who had most recently agreed to the terms when she used Georgia's phone to order food on their behalf. A lower court initially sided with the couple, denying Uber's motion to compel arbitration in November 2023. Uber appealed the decision, and late last month, the appeals court ruled in its favor.

"We hold that the arbitration provision contained in the agreement under review, which Georgia or her minor daughter, while using her cell phone agreed to, is valid and enforceable," the three-judge panel wrote in September. "We, therefore, reverse the portion of the order denying arbitration of the claims against Uber." Shapiro told NPR that the couple "100%" wants to keep pursuing their case and are mulling their options, including asking the trial court to reconsider it or potentially trying to bring it to the New Jersey Supreme Court. "Uber has just been extremely underhanded in their willingness to open the same cabinets that they're forcing the McGintys to open up and have to peek around in," Shapiro says. "It's unfortunate that that's the way that they're carrying on their business, because this is truly something that subjects millions and millions of Americans and people all over the world to a waiver of their hard-fought rights."
"While the plaintiffs continue to tell the press that it was their daughter who ordered Uber Eats and accepted the Terms of Use, it's worth noting that in court they could only 'surmise' that that was the case but could not recall whether 'their daughter ordered food independently or if Georgia assisted,'" Uber said in a statement.

The report cites another recent case where Disney "tried to block a man's wrongful death lawsuit on behalf of his wife -- who died following an allergic reaction after eating at a Disney World restaurant -- because he had signed up for a trial of Disney+." After negative media coverage, the company backtracked on its push for arbitration.
Privacy

Did Apple Just Kill Social Apps? (nytimes.com) 78

Apple's iOS 18 update has introduced changes to contact sharing that could significantly impact social app developers. The new feature allows users to selectively share contacts with apps, rather than granting access to their entire address book. While Apple touts this as a privacy enhancement, developers warn it may hinder the growth of new social platforms. Nikita Bier, a start-up founder, called it "the end of the world" for friend-based social apps. Critics argue the change doesn't apply to Apple's own services, potentially giving the tech giant an unfair advantage.
