Privacy

Did Apple Just Kill Social Apps? (nytimes.com) 78

Apple's iOS 18 update has introduced changes to contact sharing that could significantly impact social app developers. The new feature allows users to selectively share contacts with apps, rather than granting access to their entire address book. While Apple touts this as a privacy enhancement, developers warn it may hinder the growth of new social platforms. Nikita Bier, a start-up founder, called it "the end of the world" for friend-based social apps. Critics argue the change doesn't apply to Apple's own services, potentially giving the tech giant an unfair advantage.
Facebook

Meta's Smart Glasses Repurposed For Covert Facial Recognition (404media.co) 47

Two Harvard students have developed smart glasses with facial recognition capabilities, sparking debate over privacy and surveillance. The project, dubbed I-XRAY, uses Meta's Ray-Ban smart glasses coupled with facial recognition software to identify strangers and retrieve personal information about them. AnhPhu Nguyen and Caine Ardayfio, the creators, tested the technology on unsuspecting individuals in public spaces. The glasses scan faces, match them against online databases, and display personal details on a smartphone within seconds. The students claim their project aims to raise awareness about potential privacy risks.
Transportation

Bidirectional Charging May Be Required On EVs Soon Due To New California Law (electrek.co) 291

California Governor Gavin Newsom signed a law giving the California Energy Commission the authority to require bidirectional charging in electric vehicles (EVs) in the future -- although no timeline is set. Bidirectional charging allows EVs to not only charge from the grid but also supply electricity back to the grid, potentially enhancing grid resiliency, supporting renewable energy, and reducing peak electricity demand. Electrek reports: The idea started in 2023 when state Senator Nancy Skinner introduced a bill which would require EVs to have bidirectional charging by 2027. As this bill made its way through the legislative process, it got watered down from that ambitious timeline. So the current form of the bill, which is now called SB 59, took away that timeline and instead gave the California Energy Commission (CEC) the go-ahead to issue a requirement whenever they see it fit. The bill directs the CEC, the California Air Resources Board, and the California Public Utilities Commission to examine the use cases of bidirectional charging and give them the power to require specific weight classes of EVs to be bidirectional-capable if a compelling use case exists.

The state already estimates that integrating EVs into the grid could save $1 billion in costs annually, so there's definitely a use case there, but the question is the cost and immediacy of integrating those vehicles into the grid. The reason this can't be done immediately is that cars take time to design, and while adding bidirectional charging to an EV isn't the most difficult process, it also only really becomes useful with a whole ecosystem of services around the vehicle.

And that ecosystem has been a bit of a hard sell so far. It's all well and good to tell someone they can make $500/year by selling energy to the grid, but then you have to convince them to buy a more expensive charging unit and keep their car plugged in all the time, with someone else managing its energy storage. Some consumers might push back against that, so part of CEC's job is to wait to pull the trigger until it becomes apparent that people are actually interested in the end-user use case for V2G -- otherwise, no sense in requiring a feature that nobody is going to use.

The Courts

eBay Wins Dismissal of US Lawsuit Over Alleged Sale of Harmful Products (reuters.com) 35

An anonymous reader quotes a report from Reuters: A federal judge dismissed a U.S. Department of Justice lawsuit accusing eBay of violating environmental laws by allowing the sale of hundreds of thousands of harmful products on its platform, including pesticides and devices to evade motor vehicle pollution controls. U.S. District Judge Orelia Merchant in Brooklyn ruled on Monday that Section 230 of the federal Communications Decency Act, which protects online platforms from liability over user content, shielded eBay from liability in the civil lawsuit.

The judge said eBay's administrative and technical support to sellers "does not materially contribute to the products' alleged unlawfulness" and does not make the San Jose, California, company a "publisher or speaker" on sellers' behalf. Merchant also said eBay was not a "seller" of some of the challenged products, because it did not physically possess them or hold title. She rejected the government's argument that eBay was a seller because it exchanged the products for money.
The U.S. government argued eBay violated the Clean Air Act by allowing the sale of harmful products, including more than 343,000 aftermarket "defeat" devices that help vehicles generate more power and get better fuel economy by evading emissions controls. The company also was accused of allowing sales of 23,000 unregistered, misbranded or restricted-use pesticides, as well as distributing more than 5,600 paint and coating removal products that contained methylene chloride, a chemical linked to brain and liver cancer and non-Hodgkin lymphoma.
Privacy

Crooks Made Millions By Breaking Into Execs' Office365 Inboxes, Feds Say (arstechnica.com) 55

An anonymous reader quotes a report from Ars Technica: Federal prosecutors have charged a man for an alleged "hack-to-trade" scheme that earned him millions of dollars by breaking into the Office365 accounts of executives at publicly traded companies and obtaining quarterly financial reports before they were released publicly. The action, taken by the office of the US Attorney for the District of New Jersey, accuses UK national Robert B. Westbrook of earning roughly $3.75 million in 2019 and 2020 from stock trades that capitalized on the illicitly obtained information. After accessing the reports, prosecutors said, he executed stock trades. The advance notice allowed him to act and profit on the information before the general public could. The US Securities and Exchange Commission filed a separate civil suit against Westbrook seeking an order that he pay civil penalties and return all ill-gotten gains. [...]

By obtaining material information, Westbrook was able to predict how a company's stock would perform once it became public. When results were likely to drive down stock prices, he would place "put" options, which give the purchaser the right to sell shares at a specific price within a specified span of time. The practice allowed Westbrook to profit when shares fell after financial results became public. When positive results were likely to send stock prices higher, Westbrook allegedly bought shares while they were still low and later sold them for a higher price. The prosecutors charged Westbrook with one count each of securities fraud and wire fraud and five counts of computer fraud. The securities fraud count carries a maximum penalty of up to 20 years' prison time and $5 million in fines. The wire fraud count carries a maximum penalty of up to 20 years in prison and a fine of either $250,000 or twice the gain or loss from the offense, whichever is greater. Each computer fraud count carries a maximum five years in prison and a maximum fine of either $250,000 or twice the gain or loss from the offense, whichever is greater.
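The trading mechanics described above come down to simple arithmetic. A toy sketch follows; every number is invented for illustration and none comes from the case:

```python
# Hypothetical figures, purely to illustrate the put-option mechanics
# described above; none of these numbers are from the Westbrook case.

strike = 50.00            # a put gives the right to sell at this price
premium = 2.00            # per-share cost of buying the put
price_after_news = 40.00  # market price once poor results go public

# Exercising the put: sell at the strike, cover at the lower market
# price, and subtract what the option itself cost.
put_profit_per_share = (strike - price_after_news) - premium

# The long side works the same way in reverse: buy low ahead of good
# news, sell higher once it is public.
buy_price = 40.00
sell_price = 50.00
long_profit_per_share = sell_price - buy_price

print(put_profit_per_share, long_profit_per_share)  # 8.0 10.0
```

Either way, the edge comes entirely from knowing the direction of the move before the market does.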
"The SEC is engaged in ongoing efforts to protect markets and investors from the consequences of cyber fraud," Jorge G. Tenreiro, acting chief of the SEC's Crypto Assets and Cyber Unit, said in a statement. "As this case demonstrates, even though Westbrook took multiple steps to conceal his identity -- including using anonymous email accounts, VPN services, and utilizing bitcoin -- the Commission's advanced data analytics, crypto asset tracing, and technology can uncover fraud even in cases involving sophisticated international hacking."
Crime

Google Wins Lawsuit Against Scammers Who 'Weaponized' DMCA Takedowns (torrentfreak.com) 63

Google has obtained (PDF) a default judgment against two men who abused its DMCA takedown system to falsely target 117,000 URLs of competitors' online stores. With none of the defendants showing up in court, a California federal court sided with the search engine. Through an injunction, the men are now prohibited from sending false takedown notices and creating new Google accounts. TorrentFreak reports: Last November, Google decided to take action against the rampant DMCA abuse. In a lawsuit filed at a federal court in California, it accused Nguyen Van Duc and Pham Van Thien of sending over 100,000 fraudulent takedown requests. Many of these notices were allegedly filed against third-party T-shirt shops. [...] Following the complaint, the defendants, who are believed to reside in Vietnam, were summoned via their Gmail accounts and SMS. However, the pair remained quiet and didn't respond in court. Without the defendants representing themselves, Google requested a default judgment. According to the tech giant, it's clear that the duo violated the DMCA with their false takedown notices. In addition, they committed contract breach under California law.

Google said that, absent a default judgment, the defendants would continue to harm consumers and third-party businesses. These actions, in turn, would damage Google's reputation as a search engine. In July, U.S. Magistrate Judge Sallie Kim recommended granting Google's motion for default judgment. The recommendation included an injunction that prevents the two men from abusing Google's services going forward. However, the District Judge had the final say. Last Friday, U.S. District Court Judge Edward Davila adopted the recommendations, issuing a default judgment in favor of Google. The order confirms that defendants Nguyen Van Duc and Pham Van Thien violated the DMCA with their false takedown notices. In addition, they committed contract breach under California law.

In typical copyrights-related verdicts, most attention is paid to the monetary damages, but not here. While Google could have requested millions of dollars in compensation, it didn't request a penny. Google's primary goal was to put an end to the abusive behavior, not to seek financial compensation. Therefore, the company asked for an injunction to prohibit the defendants from sending false takedowns going forward. This includes a ban on registering any new Google accounts. The request ticked all the boxes and, without a word from the defendants, Judge Davila granted the default judgment as well as the associated injunction.

Google

Epic Games Sues Google and Samsung Over App Store Restrictions 45

Epic Games filed a new antitrust lawsuit against Google and Samsung, alleging they conspired to undermine third-party app stores. The suit focuses on Samsung's "Auto Blocker" feature, now enabled by default on new phones, which restricts app installations to "authorized sources" - primarily Google and Samsung's stores.

Epic claims Auto Blocker creates significant barriers for rival stores, requiring users to navigate a complex process to install third-party apps. The company argues this feature does not actually assess app safety, but is designed to stifle competition. Epic CEO Tim Sweeney stated the lawsuit aims to benefit all developers, not secure special privileges for Epic. The company seeks either default deactivation of Auto Blocker or creation of a fair whitelisting process for legitimate apps. This legal action follows Epic's December victory against Google in a separate antitrust case. Epic recently launched its own mobile app store, which it claims faces unfair obstacles due to Auto Blocker.
AI

California's Governor Just Vetoed Its Controversial AI Bill (techcrunch.com) 35

"California Governor Gavin Newsom has vetoed SB 1047, a high-profile bill that would have regulated the development of AI," reports TechCrunch. The bill "would have made companies that develop AI models liable for implementing safety protocols to prevent 'critical harms'." The rules would only have applied to models that cost at least $100 million and use 10^26 FLOPS (floating point operations, a measure of computation) during training.

SB 1047 was opposed by many in Silicon Valley, including companies like OpenAI, high-profile technologists like Meta's chief AI scientist Yann LeCun, and even Democratic politicians such as U.S. Congressman Ro Khanna. That said, the bill had also been amended based on suggestions by AI company Anthropic and other opponents.

In a statement about today's veto, Newsom said, "While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data..."

"Over the past 30 days, Governor Newsom signed 17 bills covering the deployment and regulation of GenAI technology..." according to a statement from the governor's office, "cracking down on deepfakes, requiring AI watermarking, protecting children and workers, and combating AI-generated misinformation... The Newsom Administration will also immediately engage academia to convene labor stakeholders and the private sector to explore approaches to use GenAI technology in the workplace."

In a separate statement the governor pointed out California "is home to 32 of the world's 50 leading AI companies," and warned that the bill "could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 — at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good..."

"While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it.

"I do not believe this is the best approach to protecting the public from real threats posed by the technology."

Interestingly, the Los Angeles Times reported that the vetoed bill had been supported by Mark Hamill, J.J. Abrams, and "more than 125 Hollywood actors, directors, producers, music artists and entertainment industry leaders" who signed a letter of support. (And that letter also cited the support of "over a hundred current and former employees of OpenAI, Google DeepMind, Anthropic, Meta, and xAI...")
AI

Can AI Developers Be Held Liable for Negligence? (lawfaremedia.org) 123

Bryan Choi, an associate professor of law and computer science focusing on software safety, proposes shifting AI liability onto the builders of the systems: To date, most popular approaches to AI safety and accountability have focused on the technological characteristics and risks of AI systems, while averting attention from the workers behind the curtain responsible for designing, implementing, testing, and maintaining such systems...

I have previously argued that a negligence-based approach is needed because it directs legal scrutiny on the actual persons responsible for creating and managing AI systems. A step in that direction is found in California's AI safety bill, which specifies that AI developers shall articulate and implement protocols that embody the "developer's duty to take reasonable care to avoid producing a covered model or covered model derivative that poses an unreasonable risk of causing or materially enabling a critical harm" (emphasis added). Although tech leaders have opposed California's bill, courts don't need to wait for legislation to allow negligence claims against AI developers. But how would negligence work in the AI context, and what downstream effects should AI developers anticipate?

The article suggests two possibilities. Classifying AI developers as ordinary employees would leave employers sharing liability for negligent acts (giving them "strong incentives to obtain liability insurance policies and to defend their employees against legal claims.") But AI developers could also be treated as practicing professionals (like physicians and attorneys): "In this regime, each AI professional would likely need to obtain their own individual or group malpractice insurance policies." AI is a field that perhaps uniquely seeks to obscure its human elements in order to magnify its technical wizardry. The virtue of the negligence-based approach is that it centers legal scrutiny back on the conduct of the people who build and hype the technology. To be sure, negligence is limited in key ways and should not be viewed as a complete answer to AI governance. But fault should be the default and the starting point from which all conversations about AI accountability and AI safety begin.
Thanks to long-time Slashdot reader david.emery for sharing the article.
United States

EPA Must Address Fluoridated Water's Risk To Children's IQs, US Judge Rules (reuters.com) 153

An anonymous reader quotes a report from Reuters: A federal judge in California has ordered the U.S. Environmental Protection Agency to strengthen regulations for fluoride in drinking water, saying the compound poses an unreasonable potential risk to children at levels that are currently typical nationwide. U.S. District Judge Edward Chen in San Francisco on Tuesday sided (PDF) with several advocacy groups, finding the current practice of adding fluoride to drinking water supplies to fight cavities presented unreasonable risks for children's developing brains.

Chen said the advocacy groups had established during a non-jury trial that fluoride posed an unreasonable risk of harm sufficient to require a regulatory response by the EPA under the Toxic Substances Control Act. "The scientific literature in the record provides a high level of certainty that a hazard is present; fluoride is associated with reduced IQ," wrote Chen, an appointee of Democratic former President Barack Obama. But the judge stressed he was not concluding with certainty that fluoridated water endangered public health. [...] The EPA said it was reviewing the decision.
"The court's historic decision should help pave the way towards better and safer fluoride standards for all," Michael Connett, a lawyer for the advocacy groups, said in a statement on Wednesday.
The Courts

'Anne Frank' Copyright Dispute Triggers VPN, Geoblocking Questions At EU's Highest Court (torrentfreak.com) 98

An anonymous reader quotes a report from TorrentFreak: The Dutch Supreme Court has requested guidance from the EU's top court on geo-blocking, VPNs, and copyright in a case involving the online publication of Anne Frank's manuscripts. The CJEU's response has the potential to reshape the online content distribution landscape, impacting streaming platforms and other services that rely on geo-blocking. VPN services will monitor the matter with great interest too. [...] While early versions are presumably in the public domain in several countries, the original manuscripts are protected by copyright in the Netherlands until 2037. As a result, the copies published by the Dutch Anne Frank Stichting are blocked for Dutch visitors. "The scholarly edition of the Anne Frank manuscripts cannot be made available in all countries, due to copyright considerations," is the message blocked visitors see.

This blocking effort is the result of a copyright battle. Ideally, Anne Frank Stichting would like to make the manuscripts available worldwide, but the Swiss 'Fonds' has not given permission for it to do so. And since some parts of the manuscript were first published in 1986, Dutch copyrights are still valid. In theory, geo-blocking efforts could alleviate the copyright concerns but, for the Fonds, these measures are not sufficient. After pointing out that people can bypass the blocking efforts with a VPN, it took the matter to court. Around the world, publishers and streaming services use geo-blocking as the standard measure to enforce geographical licenses. This applies to the Anne Frank Stichting, as well as Netflix, BBC iPlayer, news sites, and gaming platforms. The Anne Frank Fonds doesn't dispute this, but argued in court that people can circumvent these restrictions with a VPN, suggesting that the manuscripts shouldn't be published online at all. The lower court dismissed this argument, stating the defendants had taken reasonable measures to prevent access from the Netherlands. The Fonds appealed, but the appeal was also dismissed, and the case is now before the Dutch Supreme Court.

The Fonds argues that the manuscript website is (in part) directed at a Dutch audience. Therefore, the defendants are making the manuscripts available in the Netherlands, regardless of the use of any blocking measures. The defendants, in turn, argue that the use of state-of-the-art geo-blocking, along with additional measures like a user declaration, is sufficient to prevent a communication to the public in the Netherlands. The defense relied on the opinion in the GO4YU case, which suggests that circumventing geo-blocking with a VPN does not constitute a communication to the public in the blocked territory, unless the blocking is intentionally ineffective.

Movies

US Trademark Office Cancels Marvel, DC's 'Super Hero' Trademarks (reuters.com) 31

A U.S. Trademark Office tribunal canceled Marvel and DC's jointly owned "Super Hero" trademarks after the companies failed to respond to a request by London-based Superbabies Ltd, which argued the marks couldn't be owned collectively or monopolize the superhero genre. The ruling was "not just a win for our client but a victory for creativity and innovation," said Superbabies attorney Adam Adler of Reichman Jorgensen Lehman & Feldberg. "By establishing SUPER HEROES' place in the public domain, we safeguard it as a symbol of heroism available to all storytellers." Reuters reports: Rivals Marvel and DC jointly own four federal trademarks covering the terms "Super Hero" and "Super Heroes," the oldest of which dates back to 1967. Richold writes comics featuring a team of super-hero babies called the Super Babies. According to Richold, DC accused his company of infringing the "Super Hero" marks and threatened legal action after Superbabies Ltd applied for U.S. trademarks covering the "Super Babies" name. Marvel and DC have cited their marks in opposing dozens of superhero-related trademark applications at the USPTO, according to the office's records. Superbabies petitioned the office to cancel the marks in May. It argued that Marvel and DC cannot "claim ownership over an entire genre" with their trademarks, and that the two competitors cannot own trademarks together.
Privacy

Meta Fined $102 Million For Storing 600 Million Passwords In Plain Text (appleinsider.com) 28

Meta has been fined $101.5 million by the Irish Data Protection Commission (DPC) for storing over half a billion user passwords in plain text for years, with some engineers having access to this data for over a decade. The issue, discovered in 2019, predominantly affected non-US users, especially those using Facebook Lite. AppleInsider reports: Meta Ireland was found guilty of infringing four parts of GDPR, including how it "failed to notify the DPC of a personal data breach concerning storage of user passwords in plain text." Meta Ireland did report the failure, but only some months after it was discovered. "It is widely accepted that user passwords should not be stored in plaintext, considering the risks of abuse that arise from persons accessing such data," said Graham Doyle, Deputy Commissioner at the DPC, in a statement about the fine. "It must be borne in mind, that the passwords the subject of consideration in this case, are particularly sensitive, as they would enable access to users' social media accounts."

Other than the fine and an official reprimand, the full extent of the DPC's ruling is yet to be released publicly. The details published so far do not reveal whether the passwords included those of US users as well as users in Ireland or across the rest of the European Union. It's most likely that the issue concerns only non-US users, however. That's because in 2019, Facebook told CNN that the majority of the plain text passwords were for a service called Facebook Lite, which it described as being a cut-down service for areas of the world with slower connectivity.
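For contrast with plaintext storage, here is a minimal sketch of salted, memory-hard password hashing using Python's standard library. The function names are our own, and this illustrates the general practice the DPC alludes to, not Meta's actual systems:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); only these are stored, never the password."""
    salt = os.urandom(16)  # unique per user, so equal passwords hash differently
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the digest from the attempt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

Because only the salt and digest are stored, an engineer with database access sees neither the password nor anything that trivially reverses to it, which is precisely the failure the DPC's ruling describes.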

Businesses

If 23andMe Is Up for Sale, So Is All That DNA (msn.com) 56

23andMe is not doing well. Its stock is on the verge of being delisted. It shut down its in-house drug-development unit last month, only the latest in several rounds of layoffs. Last week, the entire board of directors quit, save for Anne Wojcicki, a co-founder and the company's CEO. Amid this downward spiral, Wojcicki has said she'll consider selling 23andMe -- which means the DNA of 23andMe's 15 million customers would be up for sale, too. The Atlantic: 23andMe's trove of genetic data might be its most valuable asset. For about two decades now, since human-genome analysis became quick and common, the A's, C's, G's, and T's of DNA have allowed long-lost relatives to connect, revealed family secrets, and helped police catch serial killers. Some people's genomes contain clues to what's making them sick, or even, occasionally, how their disease should be treated. For most of us, though, consumer tests don't have much to offer beyond a snapshot of our ancestors' roots and confirmation of the traits we already know about. 23andMe is floundering in part because it hasn't managed to prove the value of collecting all that sensitive, personal information. And potential buyers may have very different ideas about how to use the company's DNA data to raise the company's bottom line. This should concern anyone who has used the service.
Government

White House Agonizes Over UN Cybercrime Treaty (politico.com) 43

The United Nations is set to vote on a treaty later this year intended to create norms for fighting cybercrime -- and the Biden administration is fretting over whether to sign on. Politico: The uncertainty over the treaty stems from fears that countries including Russia, Iran and China could use the text as a guise for U.N. approval of their widespread surveillance measures and suppression of the digital rights of their citizens. If the United States chooses not to vote in favor of the treaty, it could become easier for these adversarial nations -- named by the Cybersecurity and Infrastructure Security Agency as the biggest state sponsors of cybercrime -- to take the lead on cyber issues in the future. And if the U.S. walks away from the negotiating table now, it could upset other nations that spent several years trying to nail down the global treaty with competing interests in mind.

While the treaty is not set for a vote during the U.N. General Assembly this week, it's a key topic of debate on the sidelines, following meetings in New York City last week, and committee meetings set for next month once the world's leaders depart. The treaty was troubled from its inception. A cybercrime convention was originally proposed by Russia, and the U.N. voted in late 2019 to start the process to draft it -- overruling objections by the U.S. and other Western nations. Those countries were worried Russia would use the agreement as an alternative to the Budapest Convention -- an existing accord on cybercrime administered by the Council of Europe, which Russia, China and Iran have not joined.

Crime

South Korea Criminalizes Watching Or Possessing Sexually Explicit Deepfakes (reuters.com) 69

An anonymous reader quotes a report from Reuters: South Korean lawmakers on Thursday passed a bill that criminalizes possessing or watching sexually explicit deepfake images and videos, with penalties set to include prison terms and fines. There has been an outcry in South Korea over Telegram group chats where sexually explicit and illegal deepfakes were created and widely shared, prompting calls for tougher punishment. Anyone purchasing, saving or watching such material could face up to three years in jail or be fined up to 30 million won ($22,600), according to the bill.

Currently, making sexually explicit deepfakes with the intention of distributing them is punishable by five years in prison or a fine of 50 million won under the Sexual Violence Prevention and Victims Protection Act. When the new law takes effect, the maximum sentence for such crimes will also increase to seven years regardless of the intention. The bill will now need the approval of President Yoon Suk Yeol in order to be enacted. South Korean police have so far handled more than 800 deepfake sex crime cases this year, the Yonhap news agency reported on Thursday. That compares with 156 for all of 2021, when data was first collated. Most victims and perpetrators are teenagers, police say.

Privacy

NIST Proposes Barring Some of the Most Nonsensical Password Rules (arstechnica.com) 180

Ars Technica's Dan Goodin reports: Last week, NIST released its second public draft of SP 800-63-4, the latest version of its Digital Identity Guidelines. At roughly 35,000 words and filled with jargon and bureaucratic terms, the document is nearly impossible to read all the way through and just as hard to understand fully. It sets both the technical requirements and recommended best practices for determining the validity of methods used to authenticate digital identities online. Organizations that interact with the federal government online are required to be in compliance. A section devoted to passwords injects a large helping of badly needed common sense practices that challenge common policies. An example: The new rules bar the requirement that end users periodically change their passwords. This requirement came into being decades ago when password security was poorly understood, and it was common for people to choose common names, dictionary words, and other secrets that were easily guessed.

Since then, most services require the use of stronger passwords made up of randomly generated characters or phrases. When passwords are chosen properly, the requirement to periodically change them, typically every one to three months, can actually diminish security because the added burden incentivizes weaker passwords that are easier for people to set and remember. Another requirement that often does more harm than good is the required use of certain characters, such as at least one number, one special character, and one upper- and lowercase letter. When passwords are sufficiently long and random, there's no benefit from requiring or restricting the use of certain characters. And again, rules governing composition can actually lead to people choosing weaker passcodes.

The latest NIST guidelines now state that:
- Verifiers and CSPs SHALL NOT impose other composition rules (e.g., requiring mixtures of different character types) for passwords and
- Verifiers and CSPs SHALL NOT require users to change passwords periodically. However, verifiers SHALL force a change if there is evidence of compromise of the authenticator.

("Verifiers" is bureaucrat-speak for the entity that verifies an account holder's identity by corroborating the holder's authentication credentials. Short for credential service provider, "CSPs" are trusted entities that assign or register authenticators to the account holder.) In previous versions of the guidelines, some of the rules used the words "should not," which means the practice is not recommended as a best practice. "Shall not," by contrast, means the practice must be barred for an organization to be in compliance.
Several other common sense practices mentioned in the document include:
1. Verifiers and CSPs SHALL require passwords to be a minimum of eight characters in length and SHOULD require passwords to be a minimum of 15 characters in length.
2. Verifiers and CSPs SHOULD permit a maximum password length of at least 64 characters.
3. Verifiers and CSPs SHOULD accept all printing ASCII [RFC20] characters and the space character in passwords.
4. Verifiers and CSPs SHOULD accept Unicode [ISO/IEC 10646] characters in passwords. Each Unicode code point SHALL be counted as a single character when evaluating password length.
5. Verifiers and CSPs SHALL NOT impose other composition rules (e.g., requiring mixtures of different character types) for passwords.
6. Verifiers and CSPs SHALL NOT require users to change passwords periodically. However, verifiers SHALL force a change if there is evidence of compromise of the authenticator.
7. Verifiers and CSPs SHALL NOT permit the subscriber to store a hint that is accessible to an unauthenticated claimant.
8. Verifiers and CSPs SHALL NOT prompt subscribers to use knowledge-based authentication (KBA) (e.g., "What was the name of your first pet?") or security questions when choosing passwords.
9. Verifiers SHALL verify the entire submitted password (i.e., not truncate it).
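Taken together, the rules above reduce password checking to length alone. A minimal sketch of what a compliant verifier-side check might look like (the function name and defaults are illustrative, not taken from the NIST document):

```python
def validate_password(password: str,
                      min_length: int = 8,
                      max_length: int = 64) -> bool:
    """Length-only check in the spirit of the draft SP 800-63-4 rules."""
    # Python's len() on a str counts Unicode code points, matching the
    # rule that each code point is a single character.
    length = len(password)
    if length < min_length:
        # SHALL require at least 8 characters (15 recommended).
        return False
    if length > max_length:
        # SHOULD permit a maximum length of at least 64 characters.
        return False
    # Deliberately no checks for digits, symbols, or mixed case:
    # composition rules are barred ("SHALL NOT"). Spaces and all
    # printing ASCII and Unicode characters pass through unchanged,
    # and the entire password is evaluated -- never truncated.
    return True
```

Note what is absent: there is no branch rejecting a password for lacking a special character, and no truncation before the length check, since both practices are explicitly barred.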

Mozilla

Mozilla Hit With Privacy Complaint In EU Over Firefox Tracking Tech (techcrunch.com) 21

Mozilla has been hit with a complaint by EU privacy group noyb, accusing it of violating GDPR by tracking Firefox users by default without their consent. TechCrunch reports: Mozilla calls the feature at issue "Privacy Preserving Attribution" (PPA). But noyb argues this is misdirection. And if EU privacy regulators agree with the complaint, the Firefox-maker could be slapped with orders to change tack -- or even face a penalty (the GDPR allows for fines of up to 4% of global revenue). "Contrary to its reassuring name, this technology allows Firefox to track user behaviour on websites," noyb wrote in a press release. "In essence, the browser is now controlling the tracking, rather than individual websites. While this might be an improvement compared to even more invasive cookie tracking, the company never asked its users if they wanted to enable it. Instead, Mozilla decided to turn it on by default once people installed a recent software update. This is particularly worrying because Mozilla generally has a reputation for being a privacy-friendly alternative when most other browsers are based on Google's Chromium."

Another component of noyb's objection is that Mozilla's move "doesn't replace cookies either" -- Firefox simply wouldn't have the market share and power to shift industry practices -- so all it's done is produce another additional way for websites to target ads. [...] The noyb-backed complaint (PDF), which has been filed with the Austrian data protection authority, accuses Mozilla of failing to inform users about the processing of their personal data and of using an opt-out -- rather than an affirmative "opt-in" -- mechanism. The privacy rights group also wants the regulator to order the deletion of all data collected so far.
In a statement attributed to Christopher Hilton, its director of policy and corporate communications, Mozilla said that it has only conducted a "limited test" of a PPA prototype on its own websites. While acknowledging poor communication around the effort, the company emphasized that no user data has been collected or shared and expressed its commitment to engaging with stakeholders as it develops the technology further.
Government

US Justice Department Probes Super Micro Computer (yahoo.com) 22

According to the Wall Street Journal, the U.S. Department of Justice is investigating Super Micro Computer after short-seller Hindenburg Research alleged "accounting manipulation" at the AI server maker. Super Micro's shares fell about 12% following the report. Reuters reports: The WSJ report, which cited people familiar with the matter, said the probe was at an early stage and that a prosecutor at a U.S. attorney's office recently contacted people who may be holding relevant information. The prosecutor has asked for information that appeared to be connected to a former employee who accused the company of accounting violations, the report added.

Super Micro had late last month delayed filing its annual report, citing a need to assess "its internal controls over financial reporting," a day after Hindenburg disclosed a short position and made claims of "accounting manipulation." The short-seller had cited a three-month investigation that included interviews with former senior employees of Super Micro and litigation records. Hindenburg's allegations included evidence of undisclosed related-party transactions and failure to abide by export controls, among other issues. The company had denied Hindenburg's claims.

Piracy

US Court Orders LibGen To Pay $30 Million To Publishers, Issues Broad Injunction 27

A New York federal court has ordered (PDF) the operators of shadow library LibGen to pay $30 million in copyright damages to publishers. The default judgment also comes with a broad injunction that affects third-party services including domain registries, browser extensions, CDN providers, IPFS gateways, advertisers, and more. These parties must restrict access to the pirate site. An anonymous reader quotes a report from TorrentFreak: Yesterday, U.S. District Court Judge Colleen McMahon granted the default judgment without any changes. The anonymous LibGen defendants are responsible for willful copyright infringement and their activities should be stopped. "Plaintiffs have been irreparably harmed as a result of Defendants' unlawful conduct and will continue to be irreparably harmed should Defendants be allowed to continue operating the Libgen Sites," the order reads. The order requires the defendants to pay the maximum statutory damages of $150,000 per work, a total of $30 million, for which they are jointly and severally liable. While this is a win on paper, it's unlikely that the publishers will get paid by the LibGen operators, who remain anonymous.

To address this concern, the publishers' motion didn't merely ask for $30 million in damages; it also demanded a broad injunction. Granted by the court yesterday, the injunction requires third-party services such as advertising networks, payment processors, hosting providers, CDN services, and IPFS gateways to restrict access to the site. [...] The injunction further targets "browser extensions" and "other tools" that are used to provide direct access to the LibGen Sites. While site blocking by residential Internet providers is mentioned in reference to other countries, ISP blocking is not part of the injunction itself. In addition to the broad measures outlined above, the order further requires domain name registrars and registries to disable or suspend all active LibGen domains, or alternatively, transfer them to the publishers. This includes Libgen.is, the most used domain name with 16 million monthly visits, as well as Libgen.rs, Libgen.li and many others.

At the moment, it's unclear how actively managed the LibGen site is, as it has shown signs of decay in recent years. However, when faced with domain seizures, sites typically respond by registering new domains. The publishers are aware of this risk. Therefore, they asked the court to cover future domain names too. The court signed off on this request, which means that newly registered domain names can be taken over as well; at least in theory. [...] All in all, the default judgment isn't just a monetary win on paper; it's also one of the broadest anti-piracy injunctions we've seen from a U.S. court.
