Sci-Fi

First US Navy Pilot To Publicly Report UAPs Says 'Congress Must Reveal the Truth To the American People' (thehill.com) 192

Ryan Graves, a former U.S. Navy lieutenant and F/A-18F pilot who was the first active-duty fighter pilot to come forward publicly about regular sightings of unidentified anomalous phenomena (UAP), says more data on the phenomena is needed. "We should encourage pilots and other witnesses to come forward and keep the pressure on Congress to prioritize UAP as a matter of national security," writes Graves in an opinion piece for The Hill. An anonymous Slashdot reader shares an excerpt: As a former U.S. Navy F/A-18 fighter pilot who witnessed unidentified anomalous phenomena (UAP) on a regular basis, let me be clear. The U.S. government, former presidents, members of Congress of both political parties and directors of national intelligence are trying to tell the American public the same uncomfortable truth I shared: Objects demonstrating extreme capabilities routinely fly over our military facilities and training ranges. We don't know what they are, and we are unable to mitigate their presence. The Office of the Director of National Intelligence (ODNI) last week published its second-ever report on UAP activity. While the unclassified version is brief, its findings are sobering. Over the past year, the government has collected hundreds of new reports of enigmatic objects from military pilots and sensor systems that cannot be identified and "represent a hazard to flight safety." The report also reiterates last year's finding, from a review of the 26-year reporting period, that some UAP may represent advanced technology, noting "unusual flight characteristics or performance capabilities."

Mysteriously, no UAP reports have been confirmed to be foreign so far. However, just this past week, a Chinese surveillance balloon shut down air traffic across the United States. How are we supposed to make sense of hundreds of reports of UAP that violate restricted airspace uncontested and interfere with both civilian and military pilots? Here is the hard truth. We don't know. UAP are a national security problem, and we urgently need more data.

Why don't we have more data? Stigma. I know the fear of stigma is a major problem because I was the first active-duty fighter pilot to come forward publicly about regular sightings of UAP, and it was not easy. There has been little support or incentive for aircrew to speak publicly on this topic. There was no upside to reporting hard-to-explain sightings within the chain of command, let alone doing so publicly. For pilots to feel comfortable, it will require a culture shift inside organizations and in society at large. I have seen for myself on radar and talked with the pilots who have experienced near misses with mysterious objects off the Eastern Seaboard that have triggered unsafe evasive actions and mandatory safety reports. There were 50 or 60 people who flew with me in 2014-2015 and could tell you they saw UAP every day. Yet only one other pilot has confirmed this publicly. I spoke out publicly in 2019, at great risk personally and professionally, because nothing was being done. The ODNI report itself notes that concentrated efforts to reduce stigma have been a major reason for the increase in reports this year. To get the data and analyze it scientifically, we must uproot the lingering cultural stigma of tin foil hats and "UFOs" from the 1950s that stops pilots from reporting the phenomena and scientists from studying it.
Last September, the U.S. Navy said that all of the government's UFO videos are classified information and releasing any additional UFO videos would "harm national security."
Crime

Finland's Most-Wanted Hacker Nabbed In France (krebsonsecurity.com) 17

An anonymous reader quotes a report from KrebsOnSecurity: Julius "Zeekill" Kivimaki, a 25-year-old Finnish man charged with extorting a local online psychotherapy practice and leaking therapy notes for more than 22,000 patients online, was arrested this week in France. A notorious hacker convicted of perpetrating tens of thousands of cybercrimes, Kivimaki had been in hiding since October 2022, when he failed to show up in court and Finland issued an international warrant for his arrest. [...] According to the French news site actu.fr, Kivimaki was arrested around 7 a.m. on Feb. 3, after authorities in Courbevoie responded to a domestic violence report. Kivimaki had been out earlier with a woman at a local nightclub, and later the two returned to her home but reportedly got into a heated argument. Police responding to the scene were admitted by another woman -- possibly a roommate -- and found the man inside still sleeping off a long night. When they roused him and asked for identification, the 6'3", blond, green-eyed man presented an ID that stated he was of Romanian nationality. The French police were doubtful. After consulting records on most-wanted criminals, they quickly identified the man as Kivimaki and took him into custody.

Kivimaki initially gained notoriety as a self-professed member of the Lizard Squad, a mainly low-skilled hacker group that specialized in DDoS attacks. But American and Finnish investigators say Kivimaki's involvement in cybercrime dates back to at least 2008, when he was introduced to a founding member of what would soon become HTP. Finnish police said Kivimaki also used the nicknames "Ryan", "RyanC" and "Ryan Cleary" (Ryan Cleary was actually a member of a rival hacker group -- LulzSec -- who was sentenced to prison for hacking). Kivimaki and other HTP members were involved in mass-compromising web servers using known vulnerabilities, and by 2012 Kivimaki's alias Ryan Cleary was selling access to those servers in the form of a DDoS-for-hire service. Kivimaki was 15 years old at the time. In 2013, investigators going through devices seized from Kivimaki found computer code that had been used to crack more than 60,000 web servers using a previously unknown vulnerability in Adobe's ColdFusion software.

Multiple law enforcement sources told KrebsOnSecurity that Kivimaki was responsible for making an August 2014 bomb threat against former Sony Online Entertainment President John Smedley that grounded an American Airlines plane. That incident was widely reported to have started with a tweet from the Lizard Squad, but Smedley and others said it started with a call from Kivimaki. Kivimaki also was involved in calling in multiple fake bomb threats and "swatting" incidents -- reporting fake hostage situations at an address to prompt a heavily armed police response to that location.

Microsoft

Microsoft Swears It's Not Coming For Your Data With Scan For Old Office Versions (theregister.com) 94

Microsoft wants everyone to know that it isn't looking to invade their privacy while looking through their Windows PCs to find out-of-date versions of Office software. From a report: In its KB5021751 update last month, Microsoft included a plan to scan Windows systems to smoke out Office versions that are no longer supported or nearing the end of support: Office 2007 (support ended in 2017), Office 2010 (2020) and the 2013 build (this coming April). The company stressed that the scan would run only once and would not install anything on the user's Windows system, adding that the update file is scanned to ensure it's not infected by malware and is stored on highly secure servers to prevent unauthorized changes to it.

The update caused some discussion among users, at least enough to convince Microsoft to make another pitch that it is respecting user privacy and won't access private data despite scanning their systems. The update collects diagnostic and performance data so that it can determine the use of various versions of Office and how to best support and service them, the software maker wrote in an expanded note this week. The update will silently run once to collect the data and no files are left on the user's systems once the scan is completed.

Government

Are Citywide Surveillance Cameras Effective? (msn.com) 95

The Washington Post looks at the effectiveness — and the implications — of "citywide surveillance" networks, including Memphis's SkyCop, "built on 2,100 cameras that broadcast images back to a police command center every minute of every day." Known for their blinking blue lights, the SkyCop cameras now blanket many of the city's neighborhoods, gas stations, sidewalks and parks. The company that runs SkyCop, whose vice president of sales previously worked for the Memphis police, promotes it as a powerful crime deterrent that can help "neighborhoods take back their streets." But after a decade in which Memphis taxpayers have paid $10 million to expand the surveillance system, crime in the city has gone up....

No agency tracks nationwide camera installation statistics, but major cities have invested heavily in such networks. Police in Washington, D.C., said they had deployed cameras at nearly 300 intersections by 2021, up from 48 in 2007. In Chicago, more than 30,000 cameras are viewable by police; in parts of New York City, the cameras watch every block. Yet researchers have found no substantive evidence that the cameras actually reduce crime....

In federal court, judges have debated whether round-the-clock police video recording could constitute an unreasonable search as prohibited by the Fourth Amendment. Though the cameras are installed in public areas, they also capture many corners of residential life, including people's doors and windows. "Are we just going to put these cameras in front of everybody's house and monitor them and see if anybody's up to anything?" U.S. Circuit Judge O. Rogeriee Thompson said during oral arguments for one such case in 2021....

Dave Maass, a director at the digital rights group Electronic Frontier Foundation who researches police surveillance technology, said these systems have expanded rapidly in the United States without real evidence that they have led to a drop in crime. "This often isn't the community coming in and asking for it, it's police going to conferences where ... vendors are promising the world and that they'll miraculously solve crimes," Maass said. "But it's just a commercial thing. It's just business."

Nonetheless, the Post notes that in Memphis many SkyCop cameras are even outfitted "with license-plate recognition software that records the time and location of every passing car."
EU

After Cracking Another 'Secure' Messaging App, European Police Arrest 42 (barrons.com) 38

Slashdot reader lexios shares this report from the French international news agency Agence France-Presse: European police arrested 42 suspects and seized guns, drugs and millions in cash, after cracking another encrypted online messaging service used by criminals, Dutch law enforcement said Friday. Police launched raids on 79 premises in Belgium, Germany and the Netherlands following an investigation that started back in September 2020 and led to the shutting down of the covert Exclu Messenger service.

After police and prosecutors got into the Exclu secret communications system, they were able to read the messages passed between criminals for five months before the raids, said Dutch police. Those arrested include users of the app, as well as its owners and controllers. Police in France, Italy and Sweden, as well as Europol and Eurojust, its justice agency twin, also took part in the investigation. The police raids uncovered at least two drugs labs, one cocaine-processing facility, several kilograms of drugs, four million euros in cash, luxury goods and guns, Dutch police said.

The "secure" messaging app was used by around 3 000 people who paid 800 euros (roughly $866 USD) for a six-month subscription.
Privacy

Dashlane Publishes Its Source Code To GitHub In Transparency Push (techcrunch.com) 8

Password management company Dashlane has made its mobile app code available on GitHub for public perusal, a first step, it says, in a broader push to make its platform more transparent. TechCrunch reports: The Dashlane Android app code is available now alongside the iOS incarnation, though it also appears to include the codebase for its Apple Watch and Mac apps even though Dashlane hasn't specifically announced that. The company said that it eventually plans to make the code for its web extension available on GitHub too. Initially, Dashlane said that it was planning to make its codebase "fully open source," but in response to a handful of questions posed by TechCrunch, it appears that won't in fact be the case.

At first, the code will be open for auditing purposes only, but in the future it may start accepting contributions too -- however, there is no suggestion that it will go all-in and allow the public to fork or otherwise re-use the code in their own applications. Dashlane has released the code under a Creative Commons Attribution-NonCommercial 4.0 license, which technically means that users are allowed to copy, share and build upon the codebase so long as it's for non-commercial purposes. However, the company said that it has stripped out some key elements from its release, effectively hamstringing what third-party developers are able to do with the code. [...]

"The main benefit of making this code public is that anyone can audit the code and understand how we build the Dashlane mobile application," the company wrote. "Customers and the curious can also explore the algorithms and logic behind password management software in general. In addition, business customers, or those who may be interested, can better meet compliance requirements by being able to review our code." On top of that, the company says that a benefit of releasing its code is to perhaps draw-in technical talent, who can inspect the code prior to an interview and perhaps share some ideas on how things could be improved. Moreover, so-called "white-hat hackers" will now be better equipped to earn bug bounties. "Transparency and trust are part of our company values, and we strive to reflect those values in everything we do," Dashlane continued. "We hope that being transparent about our code base will increase the trust customers have in our product."

AI

Judge Uses ChatGPT To Make Court Decision (vice.com) 59

An anonymous reader quotes a report from Motherboard: A judge in Colombia used ChatGPT to make a court ruling, in what is apparently the first time a legal decision has been made with the help of an AI text generator -- or at least, the first time we know about it. Judge Juan Manuel Padilla Garcia, who presides over the First Circuit Court in the city of Cartagena, said he used the AI tool to pose legal questions about the case and included its responses in his decision, according to a court document (PDF) dated January 30, 2023.

"The arguments for this decision will be determined in line with the use of artificial intelligence (AI)," Garcia wrote in the decision, which was translated from Spanish. "Accordingly, we entered parts of the legal questions posed in these proceedings." "The purpose of including these AI-produced texts is in no way to replace the judge's decision," he added. "What we are really looking for is to optimize the time spent drafting judgments after corroborating the information provided by AI."

The case involved a dispute with a health insurance company over whether an autistic child should receive coverage for medical treatment. According to the court document, the legal questions entered into the AI tool included "Is an autistic minor exonerated from paying fees for their therapies?" and "Has the jurisprudence of the constitutional court made favorable decisions in similar cases?" Garcia included the chatbot's full responses in the decision, apparently marking the first time a judge has admitted to doing so. The judge also included his own insights into applicable legal precedents, and said the AI was used to "extend the arguments of the adopted decision." After detailing the exchanges with the AI, the judge adopted its responses, alongside his own legal arguments, as grounds for his decision.

AI

Replika, a 'Virtual Friendship' AI Chatbot, Hit With Data Ban in Italy Over Child Safety (techcrunch.com) 5

An anonymous reader shares a report: San Francisco-based AI chatbot maker, Replika -- which operates a freemium 'virtual friendship' service based on customizable digital avatars whose "personalized" responses are powered by artificial intelligence (and designed, per its pitch, to make human users feel better) -- has been ordered by Italy's privacy watchdog to stop processing local users' data. The Garante said it's concerned Replika's chatbot technology poses risks to minors -- and also that the company lacks a proper legal basis for processing children's data under the EU's data protection rules.

Additionally, the regulator is worried about the risk the AI chatbots could pose to emotionally vulnerable people. It's also accusing Luka, the developer behind the Replika app, of failing to fulfil regional legal requirements to clearly convey how it's using people's data. The order to stop processing Italians' data is effective immediately. In a press release announcing its intervention, the watchdog said: "The AI-powered chatbot, which generates a 'virtual friend' using text and video interfaces, will not be able to process [the] personal data of Italian users for the time being. A provisional limitation on data processing was imposed by the Italian Garante on the U.S.-based company that has developed and operates the app; the limitation will take effect immediately."

Crime

Former Ubiquiti Employee Pleads Guilty To Attempted Extortion Scheme (theverge.com) 15

A former employee of network technology provider Ubiquiti pleaded guilty to multiple felony charges after posing as an anonymous hacker in an attempt to extort almost $2 million worth of cryptocurrency while employed at the company. From a report: Nickolas Sharp, 37, worked as a senior developer for Ubiquiti between 2018 and 2021 and took advantage of his authorized access to Ubiquiti's network to steal gigabytes worth of files from the company during an orchestrated security breach in December 2020.

Prosecutors said that Sharp used the Surfshark VPN service to hide his home IP address and intentionally damaged Ubiquiti's computer systems during the attack in an attempt to conceal his unauthorized activity. Sharp later posed as an anonymous hacker who claimed to be behind the incident while working on an internal team that was investigating the security breach. While concealing his identity, Sharp attempted to extort Ubiquiti, sending a ransom note to the company demanding 50 Bitcoin (worth around $1.9 million at that time) in exchange for returning the stolen data and disclosing the security vulnerabilities used to acquire it. When Ubiquiti refused the ransom demands, Sharp leaked some of the stolen data to the public.
The FBI searched Sharp's home around March 24th, 2021, after investigators discovered that a temporary internet outage had exposed Sharp's home IP address during the security breach.

Further reading:
Ubiquiti Files Case Against Security Blogger Krebs Over 'False Accusations';
Former Ubiquiti Dev Charged For Trying To Extort His Employer.
Encryption

Kremlin's Tracking of Russian Dissidents Through Telegram Suggests App's Encryption Has Been Compromised (wired.com) 56

Russian antiwar activists placed their faith in Telegram, a supposedly secure messaging app. How does Putin's regime seem to know their every move? From a report: Matsapulina's case [anecdote in the story] is hardly an isolated one, though it is especially unsettling. Over the past year, numerous dissidents across Russia have found their Telegram accounts seemingly monitored or compromised. Hundreds have had their Telegram activity wielded against them in criminal cases. Perhaps most disturbingly, some activists have found their "secret chats" -- Telegram's purportedly ironclad, end-to-end encrypted feature -- behaving strangely, in ways that suggest an unwelcome third party might be eavesdropping.

These cases have set off a swirl of conspiracy theories, paranoia, and speculation among dissidents, whose trust in Telegram has plummeted. In many cases, it's impossible to tell what's really happening to people's accounts -- whether spyware or Kremlin informants have been used to break in, through no particular fault of the company; whether Telegram really is cooperating with Moscow; or whether the platform is so inherently insecure that it merely looks as though the company is cooperating.

Facebook

Documents Show Meta Paid For Data Scraping Despite Years of Denouncing It (engadget.com) 11

An anonymous reader quotes a report from Engadget: Meta has routinely fought data scrapers, but it also participated in that practice itself -- if not necessarily for the same reasons. Bloomberg has obtained legal documents from a Meta lawsuit against a former contractor, Bright Data, indicating that the Facebook owner paid its partner to scrape other websites. Meta spokesperson Andy Stone confirmed the relationship in a discussion with Bloomberg, but said his company used Bright Data to build brand profiles, spot "harmful" sites and catch phishing campaigns, not to target competitors.

Stone added that data scraping could serve "legitimate integrity and commercial purposes" so long as it was done legally and honored sites' terms of service. Meta terminated its arrangement with Bright Data after the contractor allegedly violated company terms when gathering and selling data from Facebook and Instagram. Neither Bright Data nor Meta is saying which sites they scraped. Bright Data is countersuing Meta in a bid to keep scraping Facebook and Instagram, arguing that it only collects publicly available information and respects both European Union and US regulations.

Security

Anker Finally Comes Clean About Its Eufy Security Cameras (theverge.com) 30

An anonymous reader quotes a report from The Verge: First, Anker told us it was impossible. Then, it covered its tracks. It repeatedly deflected while utterly ignoring our emails. So shortly before Christmas, we gave the company an ultimatum: if Anker wouldn't answer why its supposedly always-encrypted Eufy cameras were producing unencrypted streams -- among other questions -- we would publish a story about the company's lack of answers. It worked.

In a series of emails to The Verge, Anker has finally admitted its Eufy security cameras are not natively end-to-end encrypted -- they can and did produce unencrypted video streams for Eufy's web portal, like the ones we accessed from across the United States using an ordinary media player. But Anker says that's now largely fixed. Every video stream request originating from Eufy's web portal will now be end-to-end encrypted -- like they are with Eufy's app -- and the company says it's updating every single Eufy camera to use WebRTC, which is encrypted by default. Reading between the lines, though, it seems that these cameras could still produce unencrypted footage upon request.

That's not all Anker is disclosing today. The company has apologized for the lack of communication and promised to do better, confirming it's bringing in outside security and penetration testing companies to audit Eufy's practices, is in talks with a "leading and well-known security expert" to produce an independent report, is promising to create an official bug bounty program, and will launch a microsite in February to explain how its security works in more detail. Those independent audits and reports may be critical for Eufy to regain trust because of how the company has handled the findings of security researchers and journalists. It's a little hard to take the company at its word! But we also think Anker Eufy customers, security researchers and journalists deserve to read and weigh those words, particularly after so little initial communication from the company. That's why we're publishing Anker's full responses [here].
As highlighted by Ars Technica, some of the notable statements include:
- Its web portal now prohibits users from entering "debug mode."
- Video stream content is encrypted and inaccessible outside the portal.
- While "only 0.1 percent" of current daily users access the portal, it "had some issues," which have been resolved.
- Eufy is pushing WebRTC to all of its security devices as the end-to-end encrypted stream protocol.
- Facial recognition images were uploaded to the cloud to aid in replacing/resetting/adding doorbells with existing image sets, but that practice has been discontinued. No recognition data was included with the images sent to the cloud.
- Outside of the "recent issue with the web portal," all other video uses end-to-end encryption.
- A "leading and well-known security expert" will produce a report about Eufy's systems.
- "Several new security consulting, certification, and penetration testing" firms will be brought in for risk assessment.
- A "Eufy Security bounty program" will be established.
- The company promises to "provide more timely updates in our community (and to the media!)."

Privacy

GoodRx Leaked User Health Data To Facebook and Google, FTC Says (nytimes.com) 31

An anonymous reader quotes a report from The New York Times: Millions of Americans have used GoodRx, a drug discount app, to search for lower prices on prescriptions like antidepressants, H.I.V. medications and treatments for sexually transmitted diseases at their local drugstores. But U.S. regulators say the app's coupons and convenience came at a high cost for users: wrongful disclosure of their intimate health information. On Wednesday, the Federal Trade Commission accused the app's developer, GoodRx Holdings, of sharing sensitive personal data on millions of users' prescription medications and illnesses with companies like Facebook and Google without authorization. [...]

From 2017 to 2020, GoodRx uploaded the contact information of users who had bought certain medications, like birth control or erectile dysfunction pills, to Facebook so that the drug discount app could identify its users' social media profiles, the F.T.C. said in a legal complaint. GoodRx then used the personal information to target users with ads for medications on Facebook and Instagram, the complaint said, "all of which was visible to Facebook." GoodRx also targeted users who had looked up information on sexually transmitted diseases on HeyDoctor, the company's telemedicine service, with ads for HeyDoctor's S.T.D. testing services, the complaint said. Those data disclosures, regulators said, flouted public promises the company had made to "never provide advertisers any information that reveals a personal health condition."

The company's information-sharing practices, the agency said, violated a federal rule requiring health apps and fitness trackers that collect personal health details to notify consumers of data breaches. While GoodRx agreed to settle the case, it said it disagreed with the agency's allegations and admitted no wrongdoing. The F.T.C.'s case against GoodRx could upend widespread user-profiling and ad-targeting practices in the multibillion-dollar digital health industry, and it puts companies on notice that regulators intend to curb the nearly unfettered trade in consumers' health details. [...] If a judge approves the proposed federal settlement order, GoodRx will be permanently barred from sharing users' health information for advertising purposes. To settle the case, the company also agreed to pay a $1.5 million civil penalty for violating the health breach notification rule.

Privacy

Stable Diffusion 'Memorizes' Some Images, Sparking Privacy Concerns (arstechnica.com) 37

An anonymous reader quotes a report from Ars Technica: On Monday, a group of AI researchers from Google, DeepMind, UC Berkeley, Princeton, and ETH Zurich released a paper outlining an adversarial attack that can extract a small percentage of training images from latent diffusion AI image synthesis models like Stable Diffusion. It challenges views that image synthesis models do not memorize their training data and that training data might remain private if not disclosed. Recently, AI image synthesis models have been the subject of intense ethical debate and even legal action. Proponents and opponents of generative AI tools regularly argue over the privacy and copyright implications of these new technologies. Adding fuel to either side of the argument could dramatically affect potential legal regulation of the technology, and as a result, this latest paper, authored by Nicholas Carlini et al., has perked up ears in AI circles.

However, Carlini's results are not as clear-cut as they may first appear. Discovering instances of memorization in Stable Diffusion required 175 million image generations for testing and preexisting knowledge of trained images. Researchers only extracted 94 direct matches and 109 perceptual near-matches out of 350,000 high-probability-of-memorization images they tested (a set of known duplicates in the 160 million-image dataset used to train Stable Diffusion), resulting in a roughly 0.03 percent memorization rate in this particular scenario. Also, the researchers note that the "memorization" they've discovered is approximate since the AI model cannot produce identical byte-for-byte copies of the training images. By definition, Stable Diffusion cannot memorize large amounts of data because the 160 million-image training dataset is many orders of magnitude larger than the 2GB Stable Diffusion AI model. That means any memorization that exists in the model is small, rare, and very difficult to accidentally extract.
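Those figures are easy to sanity-check. A quick back-of-envelope sketch in Python (the bytes-per-image capacity bound is our own illustration of the size argument, not a number from the paper):

```python
# Reproducing the memorization rate quoted in the article.
direct_matches = 94
images_tested = 350_000
direct_rate = direct_matches / images_tested
print(f"direct-match rate: {direct_rate:.4%}")  # ~0.0269%, i.e. roughly 0.03 percent

# Illustrative capacity bound: a ~2GB checkpoint spread over ~160 million
# training images leaves very few bytes per image, so wholesale
# memorization is impossible on information-theoretic grounds alone.
model_bytes = 2 * 10**9          # ~2GB model checkpoint
training_images = 160 * 10**6    # ~160 million training images
print(f"capacity bound: {model_bytes / training_images:.1f} bytes/image")  # 12.5
```

At roughly a dozen bytes per image, the model can at best retain fragments of a tiny minority of its training set, which is consistent with the low extraction rate the researchers report.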

Still, even when present in very small quantities, the paper appears to show that approximate memorization in latent diffusion models does exist, and that could have implications for data privacy and copyright. The results may one day affect potential image synthesis regulation if the AI models become considered "lossy databases" that can reproduce training data, as one AI pundit speculated. Although considering the 0.03 percent hit rate, they would have to be considered very, very lossy databases -- perhaps to a statistically insignificant degree. [...] Eric Wallace, one of the paper's authors, shared some personal thoughts on the research in a Twitter thread. As stated in the paper, he suggested that AI model-makers should de-duplicate their data to reduce memorization. He also noted that Stable Diffusion's model is small relative to its training set, so larger diffusion models are likely to memorize more. And he advised against applying today's diffusion models to privacy-sensitive domains like medical imagery.
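Wallace's de-duplication suggestion can be sketched minimally. The example below, with a hypothetical directory layout, removes only byte-identical files via a content hash; real training pipelines (and the paper's own duplicate analysis) rely on perceptual hashes or embedding similarity to also catch near-duplicates, which exact hashing will miss:

```python
# Exact-duplicate removal by content hash -- a minimal sketch of the
# de-duplication idea. The directory layout, *.jpg glob, and exact-hash
# approach are illustrative assumptions, not the paper's method.
import hashlib
from pathlib import Path

def dedupe_images(image_dir: str) -> list[Path]:
    """Return one path per unique file content; first occurrence wins."""
    seen: set[str] = set()
    unique: list[Path] = []
    for path in sorted(Path(image_dir).glob("*.jpg")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(path)
    return unique
```

Because two JPEG re-encodings of the same photo hash differently, production de-duplication typically works in embedding space rather than on raw bytes.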

Crime

'Pig-Butchering' Scam Apps Sneak Into Apple's App Store and Google Play (arstechnica.com) 44

In the past year, a new term has arisen to describe an online scam raking in millions, if not billions, of dollars per year. It's called "pig butchering," and now even Apple is getting fooled into participating. From a report: Researchers from security firm Sophos said on Wednesday that they uncovered two apps available in the App Store that were part of an elaborate network of tools used to dupe people into putting large sums of money into fake investment scams. At least one of those apps also made it into Google Play, but that market is notorious for the number of malicious apps that bypass Google vetting. Sophos said this was the first time it had seen such apps in the App Store and that a previous app identified in these types of scams was a legitimate one that was later exploited by bad actors.

Pig butchering relies on a rich combination of apps, websites, web hosts, and humans -- in some cases human trafficking victims -- to build trust with a mark over a period of weeks or months, often under the guise of a romantic interest, financial adviser, or successful investor. Eventually, the online discussion will turn to investments, usually involving cryptocurrency, that the scammer claims to have earned huge sums of money from. The scammer then invites the victim to participate. Once a mark deposits money, the scammers will initially allow them to make withdrawals. The scammers eventually lock the account and claim they need a deposit of as much as 20 percent of their balance to get it back. Even when the deposit is paid, the money isn't returned, and the scammers invent new reasons the victim should send more money. The pig-butchering term derives from a farmer fattening up a hog months before it's butchered.

Facebook

Hacker Finds Bug That Allowed Anyone To Bypass Facebook 2FA (techcrunch.com) 13

An anonymous reader quotes a report from TechCrunch: A bug in a new centralized system that Meta created for users to manage their logins for Facebook and Instagram could have allowed malicious hackers to switch off an account's two-factor protections just by knowing their phone number. Gtm Manoz, a security researcher from Nepal, realized that Meta did not limit the number of attempts when a user entered the two-factor code used to log into their accounts on the new Meta Accounts Center, which helps users link all their Meta accounts, such as Facebook and Instagram.

With a victim's phone number, an attacker would go to the centralized accounts center, enter the phone number of the victim, link that number to their own Facebook account, and then brute force the two-factor SMS code. This was the key step, because there was no upper limit on the number of attempts someone could make. Once the attacker guessed the code correctly, the victim's phone number became linked to the attacker's Facebook account. A successful attack would still result in Meta sending a message to the victim saying their two-factor protection had been disabled because their phone number had been linked to someone else's account.

Manoz found the bug in the Meta Accounts Center last year, and reported it to the company in mid-September. Meta fixed the bug a few days later, and paid Manoz $27,200 for reporting it. Meta spokesperson Gabby Curtis told TechCrunch that at the time of the bug the login system was still at the stage of a small public test. Curtis also said that Meta's investigation after the bug was reported found no evidence of exploitation in the wild, and that Meta saw no spike in usage of the feature, suggesting that no one was abusing it.
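The flaw came down to a missing rate limit on code entry. A minimal sketch of the kind of server-side attempt-limiting that blocks this sort of brute force (hypothetical names and thresholds for illustration only, not Meta's actual code):

```python
import time

MAX_ATTEMPTS = 5         # lock after this many wrong codes
LOCKOUT_SECONDS = 300    # cool-down window once the limit is hit

# failed-attempt log keyed by phone number: (failed_count, first_failure_time)
_attempts = {}

def check_code(phone, submitted, expected, now=None):
    """Return True only if the code matches and the number isn't locked out."""
    now = time.time() if now is None else now
    count, first = _attempts.get(phone, (0, now))
    # reset the window once the lockout period has expired
    if now - first > LOCKOUT_SECONDS:
        count, first = 0, now
    if count >= MAX_ATTEMPTS:
        return False  # locked out: reject even a correct guess
    if submitted == expected:
        _attempts.pop(phone, None)
        return True
    _attempts[phone] = (count + 1, first)
    return False
```

With a limit like this, an attacker gets only a handful of guesses at a six-digit SMS code instead of the unbounded attempts the bug allowed.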

The Internet

Massive Yandex Code Leak Reveals Russian Search Engine's Ranking Factors (arstechnica.com) 24

An anonymous reader quotes a report from Ars Technica: Nearly 45GB of source code files, allegedly stolen by a former employee, have revealed the underpinnings of Russian tech giant Yandex's many apps and services. It also revealed key ranking factors for Yandex's search engine, the kind almost never revealed in public. [...] While it's not clear whether there are security or structural implications of Yandex's source code revelation, the leak of 1,922 ranking factors in Yandex's search algorithm is certainly making waves. SEO consultant Martin MacDonald described the hack on Twitter as "probably the most interesting thing to have happened in SEO in years" (as noted by Search Engine Land). In a thread detailing some of the more notable factors, researcher Alex Buraks suggests that "there is a lot of useful information for Google SEO as well."

Yandex, the fourth-ranked search engine by volume, purportedly employs several ex-Google employees. Yandex tracks many of Google's ranking factors, identifiable in its code, and competes heavily with Google. Google's Russian division recently filed for bankruptcy after losing its bank accounts and payment services. Buraks notes that the first factor in Yandex's list of ranking factors is "PAGE_RANK," which is seemingly tied to the foundational algorithm created by Google's co-founders.

As detailed by Buraks (in two threads), Yandex's engine favors pages that:
- Aren't too old
- Have a lot of organic traffic (unique visitors) and less search-driven traffic
- Have fewer numbers and slashes in their URL
- Have optimized code rather than "hard pessimization," with a "PR=0"
- Are hosted on reliable servers
- Happen to be Wikipedia pages or are linked from Wikipedia
- Are hosted or linked from higher-level pages on a domain
- Have keywords in their URL (up to three)
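Factors like these are inputs to a scoring function, and the leak names the factors but not how Yandex weights or combines them. A purely illustrative sketch of a weighted-sum ranker over a few of the leaked factor names (the weights here are invented for the example, not taken from the leak):

```python
# Hypothetical weights -- the leaked code lists factors, not these values.
FACTOR_WEIGHTS = {
    "page_rank": 0.4,
    "organic_traffic_share": 0.2,
    "url_depth_penalty": -0.1,   # fewer numbers/slashes in the URL is better
    "host_reliability": 0.15,
    "wikipedia_linked": 0.15,
}

def score(page_factors):
    """Weighted sum of factor values normalized to [0, 1]; missing factors count as 0."""
    return sum(FACTOR_WEIGHTS[name] * page_factors.get(name, 0.0)
               for name in FACTOR_WEIGHTS)
```

Real engines combine hundreds of such signals, often with learned rather than hand-set weights, but the basic shape -- named factors feeding a scoring function -- is what the 1,922-factor list exposes.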

Security

JD Sports Admits Intruder Accessed 10 Million Customers' Data (theregister.com) 6

Sports fashion retailer JD Sports has confirmed miscreants broke into a system that contained data on a whopping 10 million customers, but no payment information was in the mix. The Register reports: In a post to investors this morning, the London Stock Exchange-listed business said the intrusion related to infrastructure that housed data for online orders from sub-brands including JD, Size?, Millets, Blacks, Scotts and MilletSport between November 2018 and October 2020. The data accessed consisted of customer name, billing address, delivery address, phone number, order details and the final four digits of payment cards "of approximately 10 million unique customers." The company does "not hold full payment card details" and said that it has "no reason to believe that account passwords were accessed."

As is customary in such incidents, JD Sports has contacted the relevant authorities such as the Information Commissioner's Office and says it has enlisted the help of "leading cyber security experts." The chain has stores across Europe, with some in North America, including Canada. It also operates other retail brands including Go Outdoors and Shoe Palace.
"We want to apologize to those customers who may have been affected by this incident," said Neil Greenhalgh, chief financial officer at JD Sports. "We are advising them to be vigilant about potential scam emails, calls and texts and providing details on how to report these."

He added: "We are continuing with a full review of our cyber security in partnership with external specialists following this incident. Protecting the data of our customers is an absolute priority for JD."
Biotech

A Drug Company Made $114 Billion Gaming America's Patent System (msn.com) 92

The New York Times looks at AbbVie's anti-inflammatory drug Humira and the company's "savvy but legal exploitation of the U.S. patent system." Though AbbVie's patent was supposed to expire in 2016, since then it's maintained a monopoly that generated $114 billion in revenue by using "a formidable wall of intellectual property protection and suing would-be competitors before settling with them to delay their product launches until this year." AbbVie did not invent these patent-prolonging strategies; companies like Bristol Myers Squibb and AstraZeneca have deployed similar tactics to maximize profits on drugs for the treatment of cancer, anxiety and heartburn. But AbbVie's success with Humira stands out even in an industry adept at manipulating the U.S. intellectual-property regime.... AbbVie and its affiliates have applied for 311 patents, of which 165 have been granted, related to Humira, according to the Initiative for Medicines, Access and Knowledge, which tracks drug patents. A vast majority were filed after Humira was on the market.

Some of Humira's patents covered innovations that benefited patients, like a formulation of the drug that reduced the pain from injections. But many of them simply elaborated on previous patents. For example, an early Humira patent, which expired in 2016, claimed that the drug could treat a condition known as ankylosing spondylitis, a type of arthritis that causes inflammation in the joints, among other diseases. In 2014, AbbVie applied for another patent for a method of treating ankylosing spondylitis with a specific dosing of 40 milligrams of Humira. The application was approved, adding 11 years of patent protection beyond 2016.

AbbVie has been aggressive about suing rivals that have tried to introduce biosimilar versions of Humira. In 2016, with Amgen's copycat product on the verge of winning regulatory approval, AbbVie sued Amgen, alleging that it was violating 10 of its patents. Amgen argued that most of AbbVie's patents were invalid, but the two sides reached a settlement in which Amgen agreed not to begin selling its drug until 2023.

Over the next five years, AbbVie reached similar settlements with nine other manufacturers seeking to launch their own versions of Humira. All of them agreed to delay their market entry until 2023.

A drug pricing expert at Washington University in St. Louis tells the New York Times that AbbVie and its strategy with Humira "showed other companies what it was possible to do."

But the article concludes that last year such tactics "became a rallying cry" for U.S. lawmakers "as they successfully pushed for Medicare to have greater control over the price of widely used drugs that, like Humira, have been on the market for many years but still lack competition."
Advertising

How to Handle Web Sites Asking for Your Email Address (seattletimes.com) 117

When you share your email, "you're sharing a lot more," warns the New York Times' lead consumer technology writer: [I]t can be linked to other data, including where you went to school, the make and model of the car you drive, and your ethnicity....

For many years, the digital ad industry has compiled a profile on you based on the sites you visit on the web.... An email could contain your first and last name, and assuming you've used it for some time, data brokers have already compiled a comprehensive profile on your interests based on your browsing activity. A website or an app can upload your email address into an ad broker's database to match your identity with a profile containing enough insights to serve you targeted ads.

The article recommends creating several email addresses to "make it hard for ad tech companies to compile a profile based on your email handle... Apple and Mozilla offer tools that automatically create email aliases for logging in to an app or a site; emails sent to the aliases are forwarded to your real email address." Apple's Hide My Email tool, which is part of its iCloud+ subscription service that costs 99 cents a month, will create aliases, but using it will make it more difficult to log in to the accounts from a non-Apple device. Mozilla's Firefox Relay will generate five email aliases at no cost; beyond that, the program charges 99 cents a month for additional aliases.
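Beyond dedicated alias services, many mail providers (Gmail among them) support plus-addressing, which lets you derive a per-site address yourself at no cost. A small sketch of the idea (illustrative only, unaffiliated with the Apple and Mozilla tools above):

```python
def site_alias(address, site):
    """Derive a per-site alias via plus-addressing, e.g. user+shop@example.com.

    Mail to the alias still lands in the base inbox, but the tag reveals
    which site later leaked, sold, or spammed the address.
    """
    local, domain = address.rsplit("@", 1)
    tag = "".join(c for c in site.lower() if c.isalnum())
    return f"{local}+{tag}@{domain}"
```

The trade-off versus tools like Hide My Email or Firefox Relay is that plus-addressing exposes your real base address, so it helps with tracking the source of spam more than with hiding your identity.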

For sites using the UID 2.0 framework for ad targeting, you can opt out by entering your email address [or phone number] at https://transparentadvertising.org.

Slashdot Top Deals