Crime

Zeus, IcedID Malware Kingpin Faces 40 Years In Prison (theregister.com) 39

Connor Jones reports via The Register: A Ukrainian cybercrime kingpin who ran some of the most pervasive malware operations faces 40 years in prison after spending nearly a decade on the FBI's Cyber Most Wanted List. Vyacheslav Igorevich Penchukov, 37, pleaded guilty this week in the US to two charges related to his leadership role in both the Zeus and IcedID malware operations that netted millions of dollars in the process. Penchukov's plea will be seen as the latest big win for US law enforcement in its continued fight against cybercrime and those that enable it. However, authorities took their time getting him in 'cuffs. [...]

"Malware like IcedID bleeds billions from the American economy and puts our critical infrastructure and national security at risk," said US attorney Michael Easley for the eastern district of North Carolina. "The Justice Department and FBI Cyber Squad won't stand by and watch it happen, and won't quit coming for the world's most wanted cybercriminals, no matter where they are in the world. This operation removed a key player from one of the world's most notorious cybercriminal rings. Extradition is real. Anyone who infects American computers had better be prepared to answer to an American judge."

This week, he admitted one count of conspiracy to commit a Racketeer Influenced and Corrupt Organizations (RICO) Act offense relating to Zeus, and one count of conspiracy to commit wire fraud in relation to IcedID. Each count carries a maximum sentence of 20 years. His sentencing date is set for May 9, 2024.
Zeus malware, a banking trojan that formed a botnet for financial theft, caused over $100 million in losses before its 2014 dismantlement. Its successor, SpyEye, incorporated enhanced features for financial fraud. After the 2014 takedown of Zeus, Penchukov moved on to lead IcedID, a similar malware first found in 2017. IcedID evolved from banking fraud to ransomware, severely affecting the University of Vermont Medical Center in 2020 with over $30 million in damages.

Government

California Bill Wants To Scrap Environmental Reviews To Save Downtown San Francisco (sfchronicle.com) 177

An anonymous reader quotes a report from the San Francisco Chronicle: San Francisco's leaders have spent the past few years desperately trying to figure out how to deal with a glut of empty offices, shuttered retail and public safety concerns plaguing the city's once vibrant downtown. Now, a California lawmaker wants to try a sweeping plan to revive the city's core by exempting most new real estate projects from environmental review, potentially quickening development by months or even years. State Sen. Scott Wiener, D-San Francisco, introduced SB1227 on Friday as a proposal to exempt downtown projects from the California Environmental Quality Act, or CEQA, for a decade. The 1970 landmark law requires studies of a project's expected impact on air, water, noise and other areas, but Wiener said it has been abused to slow down or kill infill development near public transit.

"Downtown San Francisco matters to our city's future, and it's struggling -- to bring people back, we need to make big changes and have open minds," Wiener said in a statement. "That starts with remodeling, converting, or even replacing buildings that may have become outdated and that simply aren't going to succeed going forward." Eligible projects would include academic institutions, sports facilities, mixed-use projects including housing, biotech labs, offices, public works and even smaller changes such as modifying an existing building's exterior. The city's existing zoning and permit requirements would remain intact. "We're not taking away any local control," Wiener said in an interview with the Chronicle on Friday.

California Sen. Scott Wiener is proposing a bill that, he said, would make it easier for San Francisco's downtown area to recover from the pandemic. However, it's not clear how much of an impact the bill would have if it's eventually passed since other factors are at play. New construction has been nearly frozen in San Francisco since the pandemic, amid consistently high labor costs, elevated interest rates and weakening demand for both apartments and commercial space. Major developers have reiterated that they have no plans to start work on significant new projects any time soon. Last week, Kilroy Realty, which has approval for a massive 2.3 million-square-foot redevelopment of South of Market's Flower Mart, said no groundbreakings are planned this year -- anywhere.

AI

Scientists Propose AI Apocalypse Kill Switches 104

A paper (PDF) from researchers at the University of Cambridge, with contributions from numerous academic institutions as well as from OpenAI, proposes remote kill switches and lockouts as methods to mitigate risks associated with advanced AI technologies. It also recommends tracking AI chip sales globally. The Register reports: The paper highlights numerous ways policymakers might approach AI hardware regulation. Many of the suggestions -- including those designed to improve visibility and limit the sale of AI accelerators -- are already playing out at a national level. Last year, US President Joe Biden put forward an executive order aimed at identifying companies developing large dual-use AI models as well as the infrastructure vendors capable of training them. If you're not familiar, "dual-use" refers to technologies that can serve double duty in civilian and military applications. More recently, the US Commerce Department proposed regulation that would require American cloud providers to implement more stringent "know-your-customer" policies to prevent persons or countries of concern from getting around export restrictions. This kind of visibility is valuable, researchers note, as it could help to avoid another arms race, like the one triggered by the missile gap controversy, where erroneous reports led to a massive build-up of ballistic missiles. While valuable, they warn, executing on these reporting requirements risks invading customer privacy and even leading to sensitive data being leaked.

Meanwhile, on the trade front, the Commerce Department has continued to step up restrictions, limiting the performance of accelerators sold to China. But, as we've previously reported, while these efforts have made it harder for countries like China to get their hands on American chips, they are far from perfect. To address these limitations, the researchers have proposed implementing a global registry for AI chip sales that would track them over the course of their lifecycle, even after they've left their country of origin. Such a registry, they suggest, could incorporate a unique identifier into each chip, which could help to combat smuggling of components.

At the more extreme end of the spectrum, researchers have suggested that kill switches could be baked into the silicon to prevent their use in malicious applications. [...] The academics are clearer elsewhere in their study, proposing that processor functionality could be switched off or dialed down by regulators remotely using digital licensing: "Specialized co-processors that sit on the chip could hold a cryptographically signed digital 'certificate,' and updates to the use-case policy could be delivered remotely via firmware updates. The authorization for the on-chip license could be periodically renewed by the regulator, while the chip producer could administer it. An expired or illegitimate license would cause the chip to not work, or reduce its performance." In theory, this could allow watchdogs to respond faster to abuses of sensitive technologies by cutting off access to chips remotely, but the authors warn that doing so isn't without risk: implemented incorrectly, such a kill switch could become a target for cybercriminals to exploit.
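
To make the digital-licensing idea more concrete, here is a minimal, purely illustrative sketch of how an on-chip license check might behave. It is not taken from the paper: the key handling, the HMAC stand-in for a real cryptographic signature, and every name and number below are assumptions made only for demonstration.

```python
# Hypothetical sketch of the "on-chip license" idea: a signed, time-limited
# certificate that a co-processor checks before allowing full performance.
# The shared HMAC key stands in for a real signature scheme; names and
# values are illustrative assumptions, not details from the paper.
import hashlib
import hmac
import json
import time

REGULATOR_KEY = b"demo-key-held-by-regulator"  # placeholder for a signing key

def issue_license(chip_id: str, valid_days: int = 90) -> dict:
    """Regulator issues a renewable license bound to one chip."""
    body = {"chip_id": chip_id, "expires": time.time() + valid_days * 86400}
    payload = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(REGULATOR_KEY, payload, hashlib.sha256).hexdigest()
    return body

def allowed_performance(chip_id: str, lic: dict) -> float:
    """On-chip check: full speed with a valid license, throttled otherwise."""
    body = {k: v for k, v in lic.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(REGULATOR_KEY, payload, hashlib.sha256).hexdigest()
    valid = (hmac.compare_digest(expected, lic.get("sig", ""))
             and body.get("chip_id") == chip_id
             and body.get("expires", 0) > time.time())
    return 1.0 if valid else 0.1  # expired/illegitimate -> reduced performance

lic = issue_license("chip-0001")
print(allowed_performance("chip-0001", lic))  # 1.0
print(allowed_performance("chip-0002", lic))  # 0.1: license bound to another chip
```

A real mechanism would presumably rely on asymmetric signatures and hardware-protected keys rather than a shared secret; the sketch only mirrors the renew-or-degrade behavior the quoted passage describes.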

Another proposal would require multiple parties to sign off on potentially risky AI training tasks before they can be deployed at scale. "Nuclear weapons use similar mechanisms called permissive action links," they wrote. For nuclear weapons, these security locks are designed to prevent one person from going rogue and launching a first strike. For AI, however, the idea is that if an individual or company wanted to train a model over a certain threshold in the cloud, they'd first need to get authorization to do so. The researchers observe that, though a potent tool, this could backfire by preventing the development of desirable AI. The argument seems to be that while the use of nuclear weapons has a pretty clear-cut outcome, AI isn't always so black and white. But if this feels a little too dystopian for your tastes, the paper dedicates an entire section to reallocating AI resources for the betterment of society as a whole. The idea is that policymakers could come together to make AI compute more accessible to groups unlikely to use it for evil, a concept described as "allocation."
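
As a rough, invented illustration of the multi-party sign-off idea, the toy sketch below refuses to schedule a hypothetical training job above a compute threshold until a quorum of authorizers has approved it. The threshold, quorum size, and party names are assumptions for demonstration, not figures from the paper.

```python
# Toy illustration (not from the paper) of multi-party authorization:
# small jobs run freely, but a frontier-scale job needs a quorum of
# sign-offs before it can be scheduled, akin to a permissive action link.
from dataclasses import dataclass, field

COMPUTE_THRESHOLD_FLOPS = 1e26   # assumed reporting threshold
REQUIRED_APPROVALS = 2           # assumed quorum size

@dataclass
class TrainingJob:
    name: str
    estimated_flops: float
    approvals: set = field(default_factory=set)

def approve(job: TrainingJob, authorizer: str) -> None:
    """Record one party's sign-off on the job."""
    job.approvals.add(authorizer)

def may_schedule(job: TrainingJob) -> bool:
    """Allow small jobs outright; require a quorum above the threshold."""
    if job.estimated_flops < COMPUTE_THRESHOLD_FLOPS:
        return True
    return len(job.approvals) >= REQUIRED_APPROVALS

job = TrainingJob("frontier-run", estimated_flops=3e26)
print(may_schedule(job))          # False: above threshold, no approvals yet
approve(job, "cloud-provider")
approve(job, "regulator")
print(may_schedule(job))          # True: quorum reached
```

The point of the analogy is only that no single party can unilaterally start a run above the threshold.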

The Courts

New Bill Would Let Defendants Inspect Algorithms Used Against Them In Court (theverge.com) 47

Lauren Feiner reports via The Verge: Reps. Mark Takano (D-CA) and Dwight Evans (D-PA) reintroduced the Justice in Forensic Algorithms Act on Thursday, which would allow defendants to access the source code of software used to analyze evidence in their criminal proceedings. It would also require the National Institute of Standards and Technology (NIST) to create testing standards for forensic algorithms, which software used by federal enforcers would need to meet.

The bill would act as a check on unintended outcomes that could be created by using technology to help solve crimes. Academic research has highlighted the ways human bias can be built into software and how facial recognition systems often struggle to differentiate Black faces, in particular. The use of algorithms to make consequential decisions in many different sectors, including both crime-solving and health care, has raised alarms for consumers and advocates as a result of such research.

Takano acknowledged that gaining or hiring the deep expertise needed to analyze the source code might not be possible for every defendant. But requiring NIST to create standards for the tools could at least give them a starting point for understanding whether a program matches the basic standards. Takano introduced previous iterations of the bill in 2019 and 2021, but they were not taken up by a committee.

Privacy

New 'Gold Pickaxe' Android, iOS Malware Steals Your Face For Fraud (bleepingcomputer.com) 13

An anonymous reader quotes a report from BleepingComputer: A new iOS and Android trojan named 'GoldPickaxe' employs a social engineering scheme to trick victims into scanning their faces and ID documents, which are believed to be used to generate deepfakes for unauthorized banking access. The new malware, spotted by Group-IB, is part of a malware suite developed by the Chinese threat group known as 'GoldFactory,' which is responsible for other malware strains such as 'GoldDigger', 'GoldDiggerPlus,' and 'GoldKefu.' Group-IB says its analysts observed attacks primarily targeting the Asia-Pacific region, mainly Thailand and Vietnam. However, the techniques employed could be effective globally, and there's a danger of them getting adopted by other malware strains. [...]

For iOS (iPhone) users, the threat actors initially directed targets to a TestFlight URL to install the malicious app, allowing them to bypass the normal security review process. When Apple removed the TestFlight app, the attackers switched to luring targets into downloading a malicious Mobile Device Management (MDM) profile that allows the threat actors to take control of devices. Once the trojan has been installed onto a mobile device in the form of a fake government app, it operates semi-autonomously, manipulating functions in the background, capturing the victim's face, intercepting incoming SMS, requesting ID documents, and proxying network traffic through the infected device using 'MicroSocks.'

Group-IB says the Android version of the trojan performs more malicious activities than the iOS version does, owing to Apple's tighter security restrictions. Also, on Android, the trojan uses over 20 different bogus apps as cover. For example, GoldPickaxe can run commands on Android to access SMS, navigate the filesystem, perform clicks on the screen, upload the 100 most recent photos from the victim's album, download and install additional packages, and serve fake notifications. The use of the victims' faces for bank fraud is an assumption by Group-IB, also corroborated by the Thai police, based on the fact that many financial institutions added biometric checks last year for transactions above a certain amount.

The Courts

RFK Jr. Wins Deferred Injunction In Vax Social Media Suit (bloomberglaw.com) 323

schwit1 writes: Robert F. Kennedy Jr. won a preliminary injunction against the White House and other federal defendants in his suit alleging government censorship of his statements against vaccines on social media. The injunction, however, will be stayed until the US Supreme Court rules in a related case brought by Missouri and Louisiana. An injunction is warranted because Kennedy showed he is likely to succeed on the merits of his claims, Judge Terry A. Doughty of the US District Court for the Western District of Louisiana said Wednesday.

The White House defendants, the Surgeon General defendants, the Centers for Disease Control and Prevention defendants, the Federal Bureau of Investigation defendants, and the Cybersecurity & Infrastructure Security Agency defendants likely violated the Free Speech Clause of the First Amendment, Doughty said. Kennedy's class action complaint, brought with health care professional Connie Sampognaro and Kennedy's nonprofit, Children's Health Defense, alleges that the federal government, beginning in early 2020, began a campaign to induce Facebook, Google (YouTube), and X, formerly known as Twitter, to censor constitutionally protected speech.

Specifically, Kennedy said, the government suppressed "facts and opinions about the COVID vaccines that might lead people to become 'hesitant' about COVID vaccine mandates." Kennedy has sufficiently shown that these defendants "jointly participated in the actions of the social media" platforms by '"insinuating' themselves into the social-media companies' private affairs and blurring the line between public and private action," Doughty said.

AI

Air Canada Found Liable For Chatbot's Bad Advice On Plane Tickets 72

An anonymous reader quotes a report from CBC.ca: Air Canada has been ordered to pay compensation to a grieving grandchild who claimed they were misled into purchasing full-price flight tickets by an ill-informed chatbot. In an argument that appeared to flabbergast a small claims adjudicator in British Columbia, the airline attempted to distance itself from its own chatbot's bad advice by claiming the online tool was "a separate legal entity that is responsible for its own actions."

"This is a remarkable submission," Civil Resolution Tribunal (CRT) member Christopher Rivers wrote. "While a chatbot has an interactive component, it is still just a part of Air Canada's website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot." In a decision released this week, Rivers ordered Air Canada to pay Jake Moffatt $812 to cover the difference between the airline's bereavement rates and the $1,630.36 they paid for full-price tickets to and from Toronto bought after their grandmother died.

Censorship

Leaked Emails Show Hugo Awards Self-Censoring To Appease China (404media.co) 89

samleecole shares a report from 404 Media: A trove of leaked emails shows how administrators of one of the most prestigious awards in science fiction censored themselves because the awards ceremony was being held in China. Earlier this month, the Hugo Awards came under fire with accusations of censorship when several authors were excluded from the awards, including Neil Gaiman, R. F. Kuang, Xiran Jay Zhao, and Paul Weimer. These authors' works had earned enough votes to make them finalists, but were deemed "ineligible" for reasons not disclosed by Hugo administrators. The Hugo Awards are one of the largest and most important science fiction awards. [...]

The emails, which show the process of compiling spreadsheets of the top 10 works in each category and checking them for "sensitive political nature" to see if they were "an issue in China," were obtained by fan writer Chris M. Barkley and author Jason Sanford, and published on fandom news site File 770 and Sanford's Patreon, where they uploaded the full PDF of the emails. They were provided to them by Hugo Awards administrator Diane Lacey. Lacey confirmed in an email to 404 Media that she was the source of the emails. "In addition to the regular technical review, as we are happening in China and the *laws* we operate under are different...we need to highlight anything of a sensitive political nature in the work," Dave McCarty, head of the 2023 awards jury, directed administrators in an email. "It's not necessary to read everything, but if the work focuses on China, taiwan, tibet, or other topics that may be an issue *in* China...that needs to be highlighted so that we can determine if it is safe to put it on the ballot of if the law will require us to make an administrative decision about it."

The email replies to this directive show administrators combing through authors' social media presences and public travel histories, including from before they were nominated for the 2023 awards, and their writing and bodies of work beyond just what they were nominated for. Among dozens of other posts and writings, they note Weimer's negative comments about the Chinese government in a Patreon post and misspell Zhao's name and work (calling their novel Iron Widow "The Iron Giant"). About author Naseem Jamnia, an administrator allegedly wrote, "Author openly describes themselves as queer, nonbinary, trans, (And again, good for them), and frequently writes about gender, particularly non-binary. The cited work also relies on these themes. I include them because I don't know how that will play in China. (I suspect less than well.)"

"As far as our investigation is concerned there was no reason to exclude the works of Kuang, Gaiman, Weimer or Xiran Jay Zhao, save for being viewed as being undesirable in the view of the Hugo Award admins which had the effect of being the proxies Chinese government," Sanford and Barkley wrote. In conjunction with the email trove, Sanford and Barkley also released an apology letter from Lacey, in which she explains some of her role in the awards vetting process and also blames McCarty for his role in the debacle. McCarty, along with board chair Kevin Standlee, resigned earlier this month.

The Courts

AMC To Pay $8 Million For Allegedly Sharing Subscribers' Viewing History With Tech Companies (arstechnica.com) 20

An anonymous reader quotes a report from Ars Technica: On Thursday, AMC notified subscribers of a proposed $8.3 million settlement that provides awards to an estimated 6 million subscribers of its six streaming services: AMC+, Shudder, Acorn TV, ALLBLK, SundanceNow, and HIDIVE. The settlement comes in response to allegations that AMC illegally shared subscribers' viewing history with tech companies like Google, Facebook, and X (aka Twitter) in violation of the Video Privacy Protection Act (VPPA). Passed in 1988, the VPPA prohibits AMC and other video service providers from sharing "information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider." It was originally passed to protect individuals' right to private viewing habits, after a journalist published the mostly unrevealing video rental history of a judge, Robert Bork, who had been nominated to the Supreme Court by Ronald Reagan.

The so-called "Bork Tapes" revealed little -- other than that the judge frequently rented spy thrillers and British costume dramas -- but lawmakers recognized that speech could be chilled by monitoring anyone's viewing habits. While the law was born in the era of Blockbuster Video, subscribers suing AMC wrote in their amended complaint (PDF) that "the importance of legislation like the VPPA in the modern era of datamining is more pronounced than ever before." According to subscribers suing, AMC allegedly installed tracking technologies -- including the Meta Pixel, the X Tracking Pixel, and Google Tracking Technology -- on its website, allowing their personally identifying information to be connected with their viewing history. [...]

If it's approved, AMC has agreed to "suspend, remove, or modify operation of the Meta Pixel and other Third-Party Tracking Technologies so that use of such technologies on AMC Services will not result in AMC's disclosure to the third-party technology companies of the specific video content requested or obtained by a specific individual." All registered users of AMC services who "requested or obtained video content on at least one of the six AMC services" between January 18, 2021, and January 10, 2024, are currently eligible to submit claims under the proposed settlement. The deadline to submit is April 9. In addition to distributing the $8.3 million settlement fund among class members, subscribers will also receive a free one-week digital subscription.

Privacy

Your AI Girlfriend Is a Data-Harvesting Horror Show (gizmodo.com) 135

"A lot of that AI chatbots that you spend days talking to push hard on getting more and more private information from you," writes longtime Slashdot reader michelcultivo, sharing a report from Gizmodo.

"To be perfectly blunt, AI girlfriends and boyfriends are not your friends," says Misha Rykov, a Mozilla Researcher from the company's *Privacy Not Included project. "Although they are marketed as something that will enhance your mental health and well-being, they specialize in delivering dependency, loneliness, and toxicity, all while prying as much data as possible from you." Gizmodo reports: Mozilla dug into 11 different AI romance chatbots, including popular apps such as Replika, Chai, Romantic AI, EVA AI Chat Bot & Soulmate, and CrushOn.AI. Every single one earned the Privacy Not Included label, putting these chatbots among the worst categories of products Mozilla has ever reviewed. You've heard stories about data problems before, but according to Mozilla, AI girlfriends violate your privacy in "disturbing new ways." For example, CrushOn.AI collects details including information about sexual health, use of medication, and gender-affirming care. 90% of the apps may sell or share user data for targeted ads and other purposes, and more than half won't let you delete the data they collect. Security was also a problem. Only one app, Genesia AI Friend & Partner, met Mozilla's minimum security standards.

One of the more striking findings came when Mozilla counted the trackers in these apps, little bits of code that collect data and share them with other companies for advertising and other purposes. Mozilla found the AI girlfriend apps used an average of 2,663 trackers per minute, though that number was driven up by Romantic AI, which called a whopping 24,354 trackers in just one minute of using the app. The privacy mess is even more troubling because the apps actively encourage you to share details that are far more personal than the kind of thing you might enter into a typical app. EVA AI Chat Bot & Soulmate pushes users to "share all your secrets and desires," and specifically asks for photos and voice recordings. It's worth noting that EVA was the only chatbot that didn't get dinged for how it uses that data, though the app did have security issues. [...]

The Courts

NYC Sues Social Media Companies Over Youth Mental Health Crisis (abc7ny.com) 63

New York City Mayor Eric Adams announced a lawsuit against four of the nation's largest social media companies, accusing them of fueling a "national youth mental health crisis." From a report: The lawsuit was filed to hold TikTok, Instagram, Facebook, Snapchat, and YouTube accountable for their damaging influence on the mental health of children, Adams said. The lawsuit, filed in California Superior Court, alleged the companies intentionally designed their platforms to purposefully manipulate and addict children and teens to social media applications. The lawsuit pointed to the use of algorithms to generate feeds that keep users on the platforms longer and encourage compulsive use.

"Over the past decade, we have seen just how addictive and overwhelming the online world can be, exposing our children to a non-stop stream of harmful content and fueling our national youth mental health crisis," Adams said. "Our city is built on innovation and technology, but many social media platforms end up endangering our children's mental health, promoting addiction, and encouraging unsafe behavior." The lawsuit accused the social media companies of manipulating users by making them feel compelled to respond to one positive action with another positive action.

"These platforms take advantage of reciprocity by, for example, automatically telling the sender when their message was seen or sending notifications when a message was delivered, encouraging teens to return to the platform again and again and perpetuating online engagement and immediate responses," the lawsuit said. The city is joining hundreds of school districts across the nation in filing litigation to force the tech companies to change their behavior and recover the costs of addressing the public health threat.

Privacy

US Military Notifies 20,000 of Data Breach After Cloud Email Leak (techcrunch.com) 11

An anonymous reader quotes a report from TechCrunch: The U.S. Department of Defense is notifying tens of thousands of individuals that their personal information was exposed in an email data spill last year. According to the breach notification letter sent out to affected individuals on February 1, the Defense Intelligence Agency -- the DOD's military intelligence agency -- said, "numerous email messages were inadvertently exposed to the Internet by a service provider," between February 3 and February 20, 2023. TechCrunch has learned that the breach disclosure letters relate to an unsecured U.S. government cloud email server that was spilling sensitive emails to the open internet. The cloud email server, hosted on Microsoft's cloud for government customers, was accessible from the internet without a password, likely due to a misconfiguration.

The DOD is sending breach notification letters to around 20,600 individuals whose information was affected. "As a matter of practice and operations security, we do not comment on the status of our networks and systems. The affected server was identified and removed from public access on February 20, 2023, and the vendor has resolved the issues that resulted in the exposure. DOD continues to engage with the service provider on improving cyber event prevention and detection. Notification to affected individuals is ongoing," said DOD spokesperson Cdr. Tim Gorman in an email to TechCrunch.

The Courts

Amazon Sued Over Prime Video Ads (variety.com) 68

Amazon faces a class-action lawsuit accusing the company of false advertising and deceptive practices because Prime Video now serves commercials by default. Variety reports: "For years, people purchased and renewed their Amazon Prime subscriptions believing that they would include ad-free streaming," the lawsuit says. "But last month, Amazon changed the deal. To stream movies and TV shows without ads, Amazon customers must now pay an additional $2.99 per month ... This is not fair, because these subscribers already paid for the ad-free version; these subscribers should not have to pay an additional $2.99/month for something that they already paid for."

The case was filed on behalf of Wilbert Napoleon, a resident of Eastvale, Calif., who says he's a Prime member. "Plaintiff brings this case for himself and for other Amazon Prime customers," the suit said. The complaint alleged that Amazon violates Washington State and California state consumer protection laws that prohibit unfair competition and deceptive business acts and practices. Amazon's conduct, as alleged, "was immoral, unethical, oppressive, unscrupulous and substantially injurious to consumers," according to the lawsuit. The suit seeks unspecified monetary damages, including punitive damages, as well as an injunction to block Amazon's alleged deceptive conduct.

The suit was filed Feb. 9, after Amazon began running ads in Prime Video content on Jan. 29 in major markets, including the United States, unless users opt to pay extra ($2.99/month in the U.S.) for an ad-free experience. Some analysts have forecast Prime Video ads generating more than $3 billion in revenue in 2024.

Patents

US Patent Office Confirms AI Can't Hold Patents 44

The US Patent and Trademark Office (USPTO) asserts that only humans can be recognized as inventors on patent applications, not artificial intelligence systems, although the use of AI in the invention process is permitted and must be disclosed. The Verge reports: The agency published (PDF) its latest guidance following a series of "listening" tours to gather public feedback. It states that while AI systems and other "non-natural persons" can't be listed as inventors in patent applications, "the use of an AI system by a natural person does not preclude a natural person from qualifying as an inventor." People seeking patents must disclose if they used AI in the invention process, just as the USPTO asks all applicants to list all material information necessary to make a decision.

However, to be able to register a patent, the person using the AI must've contributed significantly to the invention's conception. A person simply asking an AI system to create something and overseeing it, the report says, does not make them an inventor. The office says that a person who simply presents the problem to an AI system or "recognizes and appreciates" its output as a good invention can't claim credit for that patent.

"However, a significant contribution could be shown by the way the person constructs the prompt in view of a specific problem to elicit a particular solution from the AI system," the USPTO says. The office also says that "maintaining 'intellectual domination' over an AI system does not, on its own, make a person an inventor" -- so simply overseeing or owning an AI that creates things doesn't mean you can file a patent for them.

Encryption

Backdoors That Let Cops Decrypt Messages Violate Human Rights, EU Court Says (arstechnica.com) 30

An anonymous reader quotes a report from Ars Technica: The European Court of Human Rights (ECHR) has ruled that weakening end-to-end encryption disproportionately risks undermining human rights. The international court's decision could potentially disrupt the European Commission's proposed plans to require email and messaging service providers to create backdoors that would allow law enforcement to easily decrypt users' messages. This ruling came after Russia's intelligence agency, the Federal Security Service (FSB), began requiring Telegram to share users' encrypted messages to deter "terrorism-related activities" in 2017, ECHR's ruling said. [...] In the end, the ECHR concluded that the Telegram user's rights had been violated, partly due to privacy advocates and international reports that corroborated Telegram's position that complying with the FSB's disclosure order would force changes impacting all its users.

The "confidentiality of communications is an essential element of the right to respect for private life and correspondence," the ECHR's ruling said. Thus, requiring messages to be decrypted by law enforcement "cannot be regarded as necessary in a democratic society." [...] "Weakening encryption by creating backdoors would apparently make it technically possible to perform routine, general, and indiscriminate surveillance of personal electronic communications," the ECHR's ruling said. "Backdoors may also be exploited by criminal networks and would seriously compromise the security of all users' electronic communications. The Court takes note of the dangers of restricting encryption described by many experts in the field."

Martin Husovec, a law professor who helped to draft EISI's testimony, told Ars that EISI is "obviously pleased that the Court has recognized the value of encryption and agreed with us that state-imposed weakening of encryption is a form of indiscriminate surveillance because it affects everyone's privacy." [...] EISI's Husovec told Ars that ECHR's ruling is "indeed very important," because "it clearly signals to the EU legislature that weakening encryption is a huge problem and that the states must explore alternatives." If the Court of Justice of the European Union endorses this ruling, which Husovec said is likely, the consequences for the EU's legislation proposing scanning messages to stop illegal content like CSAM from spreading "could be significant," Husovec told Ars. During negotiations this spring, lawmakers may have to make "major concessions" to ensure the proposed rule isn't invalidated in light of the ECHR ruling, Husovec told Ars.
Europol and the European Union Agency for Cybersecurity (ENISA) said in a statement: "Solutions that intentionally weaken technical protection mechanisms to support law enforcement will intrinsically weaken the protection against criminals as well, which makes an easy solution impossible."

Piracy

Apple Pulls Popular Movie Piracy App Kimi From the App Store (wired.com) 25

After climbing the charts of Apple's App Store, the trendy Kimi app, with its collection of bootlegged movies, has disappeared. From a report: Pretending to be a spot-the-difference vision-testing game, the widely downloaded app ranked above Netflix, Hulu, and Amazon Prime Video in Apple's charts this week for free entertainment apps before it was removed. Without having to pay for anything or log in to any kind of account, iPhone owners could previously use Kimi to browse a wide selection of bootlegs for popular movies and TV shows. Many of the movies up for Best Picture at this year's Oscars were on Kimi, at varying levels of quality.

Poor Things was included in a grainy, pixelated state, but a high-quality version of Killers of the Flower Moon was on Kimi to stream, although an intrusive ad for online casinos was splashed across the top. That definitely isn't the viewing experience Martin Scorsese imagined for audiences. Not just limited to movies, viewers were also able to access episodes of currently airing TV shows, like RuPaul's Drag Race, through the Kimi app. Who was behind this piracy app? It remains a mystery. The developer was listed as "Marcus Evans" in the app store before Kimi was taken down, and this was the only app listed under that name, likely a pseudonym.

Crime

Wi-Fi Jamming To Knock Out Cameras Suspected In Nine Minnesota Burglaries (tomshardware.com) 174

Mark Tyson reports via Tom's Hardware: A serial burglar in Edina, Minnesota is suspected of using a Wi-Fi jammer to knock out connected security cameras before making off with the victim's prized possessions. [...] Edina police suspect that nine burglaries in the last six months have been undertaken with Wi-Fi jammer(s) deployed to ensure incriminating video evidence wasn't available to investigators. The modus operandi of the thief or thieves is thought to be something like this:

- Homes in affluent areas are found
- Burglars carefully watch the homes
- The burglars avoid confrontation, so appear to wait until homes are empty
- Seizing the opportunity of an empty home, the burglars will deploy Wi-Fi jammer(s)
- "Safes, jewelry, and other high-end designer items" are usually taken

A security expert interviewed by the source publication, KARE11, explained that the jammers simply confused wireless devices rather than blocking signals. They usually work by overloading wireless traffic "so that real traffic cannot get through," the news site was told. [...] Worryingly, Wi-Fi jamming is almost a trivial activity for potential thieves in 2024. KARE11 notes that it could buy jammers online very easily and cheaply, with prices ranging from $40 to $1,000. Jammers are not legal to use in the U.S. but they are very easy to buy online.

The Courts

OpenAI Gets Some of Sarah Silverman's Suit Cut in Mixed Ruling (bloomberglaw.com) 64

OpenAI must face a claim that it violated California unfair competition law by using copyrighted books from comedian Sarah Silverman and other authors to train ChatGPT without permission. From a report: But US District Judge Araceli Martinez-Olguin on Monday also dismissed a number of Silverman and her coplaintiffs' other legal claims, including allegations of vicarious copyright infringement, violations of the Digital Millennium Copyright Act, negligence, and unjust enrichment. The judge gave the authors the opportunity to amend their proposed class action by March 13 to fix the defects in the complaint.

The core of the lawsuit remains alive, as OpenAI's motion to dismiss, filed last summer, didn't address Silverman's claim of direct copyright infringement for copying millions of books across the internet without permission. Courts haven't yet determined whether using copyrighted work to train AI models falls under copyright law's fair use doctrine, shielding the companies from liability. Although Martinez-Olguin allowed the unfair competition claim to advance, she said the claim could be preempted by the federal Copyright Act, which prohibits state law claims that allege the same violation as a copyright claim.

Patents

Cloudflare Defeats Another Patent Troll With Crowd-Sourced Prior-Art Army (theregister.com) 23

When it comes to defeating patent trolls with crowd-sourced prior art, Cloudflare is now two-for-two after winning its latest case against Sable Networks. The Register: Sable Networks, which owns patents originally given to defunct "flow-based router" company Caspian Networks, sued Cloudflare and five other companies in 2021 alleging a whole host of violations of four patents now owned by Sable. A lot has changed since the case was filed in the US District Court for the Western District of Texas, leading to a jury verdict last week that found Cloudflare not only didn't infringe on the single patent that made it to trial, but that the final patent claim at issue was invalid as well. It took the jury just two hours to return the result, Cloudflare said.

"Since Sable first sued us, we've invalidated significant parts of three Sable patents, hamstringing their ability to bring lawsuits against other companies," Cloudflare's in-house counsel boasted on Monday. Cloudflare said that it managed to whittle the case down from four patents and "approximately 100 claims" to a single claim on one patent -- number 7,012,919 -- over the past three years. This is thanks in part to the assistance of outside investigators on Project Jengo, a scheme first launched in 2017 to get help digging up prior-art patents when Cloudflare sued by another patent troll, Blackbird Technologies.
More: Cloudflare blog.

The Courts

Amazon Hides Cheaper Items With Faster Delivery, Lawsuit Alleges (arstechnica.com) 23

A class-action lawsuit alleges (PDF) that Amazon manipulates its platform through a biased algorithm that steers the "Buy Box" toward items generating higher fees for Amazon, often leading consumers to overpay for products that could be obtained cheaper and just as quickly from other sellers on the platform. Ars Technica reports: The lawsuit claims that a biased algorithm drives Amazon's "Buy Box," which appears on an item's page and prompts shoppers to "Buy Now" or "Add to Cart." According to customers suing, nearly 98 percent of Amazon sales are of items featured in the Buy Box, because customers allegedly "reasonably" believe that featured items offer the best deal on the platform.

"But they are often wrong," the complaint said, claiming that instead, Amazon features items from its own retailers and sellers that participate in Fulfillment By Amazon (FBA), both of which pay Amazon higher fees and gain secret perks like appearing in the Buy Box. "The result is that consumers routinely overpay for items that are available at lower prices from other sellers on Amazonâ"not because consumers don't care about price, or because they're making informed purchasing decisions, but because Amazon has chosen to display the offers for which it will earn the highest fees," the complaint said.

Authorities in the US and the European Union have investigated Amazon's allegedly anticompetitive Buy Box algorithm, confirming that it's "favored FBA sellers since at least 2016," the complaint said. In 2021, Amazon was fined more than $1 billion by the Italian Competition Authority over these unfair practices, and in 2022, the European Commission ordered Amazon to "apply equal treatment to all sellers when deciding what to feature in the Buy Box." These investigations served as the first public notice that Amazon's Buy Box couldn't be trusted, customers suing said. Amazon claimed that the algorithm was fixed in 2020, but so far, Amazon does not appear to have addressed all concerns over its Buy Box algorithm. As of 2023, European regulators have continued pushing Amazon "to take further action to remedy its Buy Box bias in their respective jurisdictions," the customers' complaint said.
