Facebook

Meta, Activision Sued By Parents of Children Killed in Last Year's School Shooting (msn.com) 153

Exactly one year after the fatal shooting of 19 elementary school students in Texas, their parents filed lawsuits against Activision, publisher of the video game Call of Duty; against Meta; and against Daniel Defense, manufacturer of the AR-15-style weapon used in the attack.

The Washington Post says the lawsuits "may be the first of their kind to connect aggressive firearms marketing tactics on social media and gaming platforms to the actions of a mass shooter." The complaints contend the three companies are responsible for "grooming" a generation of "socially vulnerable" young men radicalized to live out violent video game fantasies in the real world with easily accessible weapons of war...

Several states, including California and Hawaii, have passed consumer safety laws specific to the sale and marketing of firearms that open the industry to more civil liability. Texas is not one of them. But that's just one prong of the three-part legal push by Uvalde families. The lawsuit against Activision and Meta, which is being filed in California, accuses the tech companies of knowingly promoting dangerous weapons to millions of vulnerable young people, particularly young men who are "insecure about their masculinity, often bullied, eager to show strength and assert dominance."

"To put a finer point on it: Defendants are chewing up alienated teenage boys and spitting out mass shooters," the lawsuit states...

The lawsuit alleges that Meta, which owns Instagram, easily allows gun manufacturers like Daniel Defense to circumvent its ban on paid firearm advertisements to reach scores of young people. Under Meta's rules, gunmakers are not allowed to buy advertisements promoting the sale of or use of weapons, ammunition or explosives. But gunmakers are free to post promotional material about weapons from their own account pages on Facebook and Instagram — a freedom the lawsuit alleges Daniel Defense often exploited.

According to the complaint, the Robb school shooter downloaded a version of "Call of Duty: Modern Warfare" in November 2021 that featured on its opening title page the DDM4V7 model rifle [shooter Salvador] Ramos would later purchase. Drawing from the shooter's social media accounts, the families' attorney, Josh Koskoff, argued he was being bombarded with explicit marketing and combat imagery from the company on Instagram... The complaint cites Meta's practice, first reported by The Washington Post in 2022, of giving gun sellers wide latitude to knowingly break its rules against selling firearms on its websites. The company has allowed buyers and sellers to violate the rule 10 times before they are kicked off, The Post reported.

The article adds that the lawsuit against Meta "echoes some of the complaints by dozens of state attorneys general and school districts that have accused the tech giant of using manipulative practices to hook... while exposing them to harmful content." It also includes a few excerpts from the text of the lawsuit.
  • It argues that both Meta and Activision "knowingly exposed the Shooter to the weapon, conditioned him to see it as the solution to his problems, and trained him to use it."
  • The lawsuit also compares their practices to another ad campaign accused of marketing harmful products to children: cigarettes. "Over the last 15 years, two of America's largest technology companies — Defendants Activision and Meta — have partnered with the firearms industry in a scheme that makes the Joe Camel campaign look laughably harmless, even quaint."

Meta and Daniel Defense didn't respond to the reporters' requests for comment, but the reporters did quote a statement from Activision expressing sympathy for the communities and families impacted by the "horrendous and heartbreaking" shooting.

Activision also added that "Millions of people around the world enjoy video games without turning to horrific acts."


AI

FTC Chair: AI Models Could Violate Antitrust Laws (thehill.com) 42

An anonymous reader quotes a report from The Hill: Federal Trade Commission (FTC) Chair Lina Khan said Wednesday that companies that train their artificial intelligence (AI) models on data from news websites, artists' creations or people's personal information could be in violation of antitrust laws. At The Wall Street Journal's "Future of Everything Festival," Khan said the FTC is examining ways in which major companies' data scraping could hinder competition or potentially violate people's privacy rights. "The FTC Act prohibits unfair methods of competition and unfair or deceptive acts or practices," Khan said at the event. "So, you can imagine, if somebody's content or information is being scraped that they have produced, and then is being used in ways to compete with them and to dislodge them from the market and divert businesses, in some cases, that could be an unfair method of competition."

Khan said concern also lies in companies using people's data without their knowledge or consent, which can also raise legal concerns. "We've also seen a lot of concern about deception, about unfairness, if firms are making one set of representations when you're signing up to use them, but then are secretly or quietly using the data you're feeding them -- be it your personal data, be it, if you're a business, your proprietary data, your competitively significant data -- if they're then using that to feed their models, to compete with you, to abuse your privacy, that can also raise legal concerns," she said.

Khan also recognized people's concerns about companies retroactively changing their terms of service to let them use customers' content, including personal photos or family videos, to feed into their AI models. "I think that's where people feel a sense of violation, that that's not really what they signed up for and oftentimes, they feel that they don't have recourse," Khan said. "Some of these services are essential for navigating day to day life," she continued, "and so, if the choice -- 'choice' -- you're being presented with is: sign off on not just being endlessly surveilled, but all of that data being fed into these models, or forego using these services entirely, I think that's a really tough spot to put people in." Khan said she thinks many government agencies have an important role to play as AI continues to develop, saying, "I think in Washington, there's increasingly a recognition that we can't, as a government, just be totally hands off and stand out of the way."
EU

UK Law Will Let Regulators Fine Big Tech Without Court Approval (theverge.com) 34

Emma Roth reports via The Verge: The UK could subject big tech companies to hefty fines if they don't comply with new rules meant to promote competition in digital markets. On Thursday, lawmakers passed the Digital Markets, Competition and Consumer Bill (DMCC) through Parliament, which will let regulators enforce rules without the help of the courts. The DMCC also addresses consumer protection issues by banning fake reviews, forcing companies to be more transparent about their subscription contracts, regulating secondary ticket sales, and getting rid of hidden fees. It will also force certain companies to report mergers to the UK's Competition and Markets Authority (CMA). The European Union enacted a similar law, called the Digital Markets Act (DMA).

Only the companies the CMA designates as having Strategic Market Status (SMS) have to comply. These SMS companies are described as having "substantial and entrenched market power" and "a position of strategic significance" in the UK, and must have global revenue of more than £25 billion or UK revenue of more than £1 billion. The law will also give the CMA the authority to determine whether a company has broken the law, require compliance, and issue a fine -- all without going through the court system. The CMA can fine companies up to 10 percent of a business's global revenue for violating the new rules.

Encryption

Signal Slams Telegram's Security (techcrunch.com) 33

Messaging app Signal's president Meredith Whittaker criticized rival Telegram's security on Friday, saying Telegram founder Pavel Durov is "full of s---" in his claims about Signal. "Telegram is a social media platform, it's not encrypted, it's the least secure of messaging and social media services out there," Whittaker told TechCrunch in an interview. The comments come amid a war of words between Whittaker, Durov and Twitter owner Elon Musk over the security of their respective platforms. Whittaker said Durov's amplification of claims questioning Signal's security was "incredibly reckless" and "actually harms real people."

"Play your games, but don't take them into my court," Whittaker said, accusing Durov of prioritizing being "followed by a professional photographer" over getting facts right about Signal's encryption. Signal uses end-to-end encryption by default, while Telegram only offers it for "secret chats." Whittaker said many in Ukraine and Russia use Signal for "actual serious communications" while relying on Telegram's less-secure social media features. She said the "jury is in" on the platforms' comparative security and that Signal's open source code allows experts to validate its privacy claims, which have the trust of the security community.
The Courts

Political Consultant Behind Fake Biden Robocalls Faces $6 Million Fine, Criminal Charges (apnews.com) 49

Political consultant Steven Kramer faces a $6 million fine and over two dozen criminal charges for using AI-generated robocalls mimicking President Joe Biden's voice to mislead New Hampshire voters ahead of the presidential primary. The Associated Press reports: The Federal Communications Commission said the fine it proposed Thursday for Steven Kramer is its first involving generative AI technology. The company accused of transmitting the calls, Lingo Telecom, faces a $2 million fine, though in both cases the parties could settle or further negotiate, the FCC said. Kramer has admitted orchestrating a message that was sent to thousands of voters two days before the first-in-the-nation primary on Jan. 23. The message played an AI-generated voice similar to the Democratic president's that used his phrase "What a bunch of malarkey" and falsely suggested that voting in the primary would preclude voters from casting ballots in November.

Kramer is facing 13 felony charges alleging he violated a New Hampshire law against attempting to deter someone from voting using misleading information. He also faces 13 misdemeanor charges accusing him of falsely representing himself as a candidate by his own conduct or that of another person. The charges were filed in four counties and will be prosecuted by the state attorney general's office. Attorney General John Formella said New Hampshire was committed to ensuring that its elections "remain free from unlawful interference."

Kramer, who owns a firm that specializes in get-out-the-vote projects, did not respond to an email seeking comment Thursday. He told The Associated Press in February that he wasn't trying to influence the outcome of the election but rather wanted to send a wake-up call about the potential dangers of artificial intelligence when he paid a New Orleans magician $150 to create the recording. "Maybe I'm a villain today, but I think in the end we get a better country and better democracy because of what I've done, deliberately," Kramer said in February.

The Almighty Buck

IRS Extends Free File Tax Program Through 2029 (cnbc.com) 21

The IRS has extended the Free File program through 2029, "continuing its partnership with a coalition of private tax software companies that allow most Americans to file federal taxes for free," reports CNBC. From the report: This season, Free File processed 2.9 million returns through May 11, a 7.3% increase compared to the same period last year, according to the IRS. "Free File has been an important partner with the IRS for more than two decades and helped tens of millions of taxpayers," Ken Corbin, chief of IRS taxpayer services, said in a statement Wednesday. "This extension will continue that relationship into the future."

"This multi-year agreement will also provide certainty for private-sector partners to help with their future Free File planning," Corbin added. IRS Free File remains open through the Oct. 15 federal tax extension deadline. You can use Free File for 2023 returns with an adjusted gross income of $79,000 or less, which is up from $73,000 in 2022. Fillable Forms are also still available for all income levels.

IT

Leaked Contract Shows Samsung Forces Repair Shop To Snitch On Customers (404media.co) 34

Speaking of Samsung, samleecole shares a report about the contract the South Korean firm requires repair shops to sign: In exchange for selling them repair parts, Samsung requires independent repair shops to give Samsung the name, contact information, phone identifier, and customer complaint details of everyone who gets their phone repaired at these shops, according to a contract obtained by 404 Media. Stunningly, it also requires these nominally independent shops to "immediately disassemble" any phones that customers have brought them that have been previously repaired with aftermarket or third-party parts and to "immediately notify" Samsung that the customer has used third-party parts.

"Company shall immediately disassemble all products that are created or assembled out of, comprised of, or that contain any Service Parts not purchased from Samsung," a section of the agreement reads. "And shall immediately notify Samsung in writing of the details and circumstances of any unauthorized use or misappropriation of any Service Part for any purpose other than pursuant to this Agreement. Samsung may terminate this Agreement if these terms are violated."

Businesses

iFixit is Breaking Up With Samsung (theverge.com) 13

iFixit and Samsung are parting ways. Two years after they teamed up on one of the first direct-to-consumer phone repair programs, iFixit CEO and co-founder Kyle Wiens tells The Verge the two companies have failed to renegotiate a contract -- and says Samsung is to blame. From a report: "Samsung does not seem interested in enabling repair at scale," Wiens tells me, even though similar deals are going well with Google, Motorola, and HMD. He believes dropping Samsung shouldn't actually affect iFixit customers all that much. Instead of being Samsung's partner on genuine parts and approved repair manuals, iFixit will simply go it alone, the same way it's always done with Apple's iPhones. While Wiens wouldn't say who technically broke up with whom, he says price is the biggest reason the Samsung deal isn't working: Samsung's parts are priced so high, and its phones remain so difficult to repair, that customers just aren't buying.
United States

US Sues To Break Up Ticketmaster Owner, Live Nation (nytimes.com) 60

The Justice Department on Thursday said it was suing Live Nation Entertainment, the concert giant that owns Ticketmaster, asking a court to break up the company over claims it illegally maintained a monopoly in the live entertainment industry. From a report: In the lawsuit, which is joined by 29 states and the District of Columbia, the government accuses Live Nation of dominating the industry by locking venues into exclusive ticketing contracts, pressuring artists to use its services and threatening its rivals with financial retribution. Those tactics, the government argues, have resulted in higher ticket prices for consumers and have stifled innovation and competition throughout the industry.

"It is time to break up Live Nation-Ticketmaster," Merrick Garland, the attorney general, said in a statement announcing the suit, which is being filed in the U.S. District Court for the Southern District of New York. The lawsuit is a direct challenge to the business of Live Nation, a colossus of the entertainment industry and a force in the lives of musicians and fans alike. The case, filed 14 years after the government approved Live Nation's merger with Ticketmaster, has the potential to transform the multibillion-dollar concert industry. Live Nation's scale and reach far exceed those of any competitor, encompassing concert promotion, ticketing, artist management and the operation of hundreds of venues and festivals around the world.

Mozilla

Mozilla Says It's Concerned About Windows Recall (theregister.com) 67

Microsoft's Windows Recall feature is attracting controversy before even venturing out of preview. From a report: The principle is simple. Windows takes a snapshot of a user's active screen every few seconds and dumps it to disk. The user can then scroll through the snapshots and, when something is selected, the user is given options to interact with the content.

Mozilla's Chief Product Officer, Steve Teixeira, told The Register: "Mozilla is concerned about Windows Recall. From a browser perspective, some data should be saved, and some shouldn't. Recall stores not just browser history, but also data that users type into the browser with only very coarse control over what gets stored. While the data is stored in encrypted format, this stored data represents a new vector of attack for cybercriminals and a new privacy worry for shared computers.

"Microsoft is also once again playing gatekeeper and picking which browsers get to win and lose on Windows -- favoring, of course, Microsoft Edge. Microsoft's Edge allows users to block specific websites and private browsing activity from being seen by Recall. Other Chromium-based browsers can filter out private browsing activity but lose the ability to block sensitive websites (such as financial sites) from Recall.

"Right now, there's no documentation on how a non-Chromium based, third-party browser, such as Firefox, can protect user privacy from Recall. Microsoft did not engage our cooperation on Recall, but we would have loved for that to be the case, which would have enabled us to partner on giving users true agency over their privacy, regardless of the browser they choose."

Encryption

Undisclosed WhatsApp Vulnerability Lets Governments See Who You Message (theintercept.com) 38

WhatsApp's security team warned that despite the app's encryption, users are vulnerable to government surveillance through traffic analysis, according to an internal threat assessment obtained by The Intercept. The document suggests that governments can monitor when and where encrypted communications occur, potentially allowing powerful inferences about who is conversing with whom. The report adds: Even though the contents of WhatsApp communications are unreadable, the assessment shows how governments can use their access to internet infrastructure to monitor when and where encrypted communications are occurring, like observing a mail carrier ferrying a sealed envelope. This view into national internet traffic is enough to make powerful inferences about which individuals are conversing with each other, even if the subjects of their conversations remain a mystery. "Even assuming WhatsApp's encryption is unbreakable," the assessment reads, "ongoing 'collect and correlate' attacks would still break our intended privacy model."

The WhatsApp threat assessment does not describe specific instances in which it knows this method has been deployed by state actors. But it cites extensive reporting by the New York Times and Amnesty International showing how countries around the world spy on dissident encrypted chat app usage, including WhatsApp, using the very same techniques. As war has grown increasingly computerized, metadata -- information about the who, when, and where of conversations -- has come to hold immense value to intelligence, military, and police agencies around the world. "We kill people based on metadata," former National Security Agency chief Michael Hayden once infamously quipped.
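The "collect and correlate" attack described above can be illustrated with a toy sketch (all names, timestamps, and the scoring heuristic here are invented for illustration): an observer with visibility into network infrastructure sees only *when* each device sends or receives encrypted traffic, never its content, yet can score sender/receiver pairs by how often a send is closely followed by a receive.

```python
from itertools import product

# Hypothetical observation logs: timestamps (in seconds) at which each
# user's device was seen sending or receiving encrypted traffic.
sends = {"alice": [10.0, 22.5, 31.2, 47.8], "carol": [5.1, 18.9, 40.3]}
receives = {"bob": [10.4, 22.9, 31.5, 48.1], "dave": [7.7, 29.0, 55.6]}

def correlation_score(send_times, recv_times, window=1.0):
    """Count send events followed by some receive event within `window` seconds."""
    return sum(
        any(0 <= r - s <= window for r in recv_times) for s in send_times
    )

# Score every sender/receiver pair; the highest score suggests a conversation,
# even though no message content was ever observed.
scores = {
    (s, r): correlation_score(sends[s], receives[r])
    for s, r in product(sends, receives)
}
best = max(scores, key=scores.get)
print(best)  # → ('alice', 'bob')
```

Real-world traffic correlation is far noisier and statistically more sophisticated, but the sketch shows why metadata alone can break the "who talks to whom" privacy model the assessment describes.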
Meta said "WhatsApp has no backdoors and we have no evidence of vulnerabilities in how WhatsApp works." Though the assessment describes the "vulnerabilities" as "ongoing," and specifically mentions WhatsApp 17 times, a Meta spokesperson said the document is "not a reflection of a vulnerability in WhatsApp," only "theoretical," and not unique to WhatsApp.
Android

Google Brings Back Group Speaker Controls After Sonos Lawsuit Win (arstechnica.com) 16

Android Authority's Mishaal Rahman reports that the group speaker volume controls feature is back in Android 15 Beta 2. "Google intentionally disabled this functionality on Pixel phones back in late 2021 due to a legal dispute with Sonos," reports Rahman. "In late 2023, Google announced it would bring back several features they had to remove, following a judge's overturning of a jury verdict that was in favor of Sonos." From the report: When you create a speaker group consisting of one or more Assistant-enabled devices in the Google Home app, you're able to cast audio to that group from your phone using a Cast-enabled app. For example, let's say I make a speaker group named "Nest Hubs" that consists of my bedroom Nest Hub and my living room Nest Hub. If I open the YouTube Music app, start playing a song, and then tap the cast icon, I can select "Nest Hubs" to start playback on both my Nest Hubs simultaneously.

If I keep the YouTube Music app open, I can control the volume of my speaker group by pressing the volume keys on my phone. This functionality is available no matter what device I use. However, if I open another app while YouTube Music is casting, whether I'm able to still control the volume of my speaker group using my phone's volume keys depends on what phone I'm using and what software version it's running. If I'm using a Pixel phone that's running a software version before Android 15 Beta 2, then I'm unable to control the volume of my speaker group unless I re-open the YouTube Music app. If I'm using a phone from any other manufacturer, then I won't have any issues controlling the volume of my speaker group.

The reason for this weird discrepancy is that Google intentionally blocked Pixel devices from being able to control the volume of Google Home speaker groups while casting. Google did this out of an abundance of caution while they were fighting a legal dispute. [...] With the release of last week's Android 15 Beta 2, we can confirm that Google finally restored this functionality.

AI

DOJ Makes Its First Known Arrest For AI-Generated CSAM (engadget.com) 98

In what's believed to be the first case of its kind, the U.S. Department of Justice arrested a Wisconsin man last week for generating and distributing AI-generated child sexual abuse material (CSAM). Even if no children were used to create the material, the DOJ "looks to establish a judicial precedent that exploitative materials are still illegal," reports Engadget. From the report: The DOJ says 42-year-old software engineer Steven Anderegg of Holmen, WI, used a fork of the open-source AI image generator Stable Diffusion to make the images, which he then used to try to lure an underage boy into sexual situations. The latter will likely play a central role in the eventual trial for the four counts of "producing, distributing, and possessing obscene visual depictions of minors engaged in sexually explicit conduct and transferring obscene material to a minor under the age of 16." The government says Anderegg's images showed "nude or partially clothed minors lasciviously displaying or touching their genitals or engaging in sexual intercourse with men." The DOJ claims he used specific prompts, including negative prompts (extra guidance for the AI model, telling it what not to produce) to spur the generator into making the CSAM.

Cloud-based image generators like Midjourney and DALL-E 3 have safeguards against this type of activity, but Ars Technica reports that Anderegg allegedly used Stable Diffusion 1.5, a variant with fewer boundaries. Stability AI told the publication that fork was produced by Runway ML. According to the DOJ, Anderegg communicated online with the 15-year-old boy, describing how he used the AI model to create the images. The agency says the accused sent the teen direct messages on Instagram, including several AI images of "minors lasciviously displaying their genitals." To its credit, Instagram reported the images to the National Center for Missing and Exploited Children (NCMEC), which alerted law enforcement. Anderegg could face five to 70 years in prison if convicted on all four counts. He's currently in federal custody before a hearing scheduled for May 22.

EU

EU Sets Benchmark For Rest of the World With Landmark AI Laws (reuters.com) 28

An anonymous reader quotes a report from Reuters: Europe's landmark rules on artificial intelligence will enter into force next month after EU countries endorsed on Tuesday a political deal reached in December, setting a potential global benchmark for a technology used in business and everyday life. The European Union's AI Act is more comprehensive than the United States' light-touch voluntary compliance approach while China's approach aims to maintain social stability and state control. The vote by EU countries came two months after EU lawmakers backed the AI legislation drafted by the European Commission in 2021 after making a number of key changes. [...]

The AI Act imposes strict transparency obligations on high-risk AI systems while such requirements for general-purpose AI models will be lighter. It restricts governments' use of real-time biometric surveillance in public spaces to cases of certain crimes, prevention of terrorist attacks and searches for people suspected of the most serious crimes. The new legislation will have an impact beyond the 27-country bloc, said Patrick van Eecke at law firm Cooley. "The Act will have global reach. Companies outside the EU who use EU customer data in their AI platforms will need to comply. Other countries and regions are likely to use the AI Act as a blueprint, just as they did with the GDPR," he said, referring to EU privacy rules.

While the new legislation will apply in full in 2026, bans on the use of artificial intelligence in social scoring, predictive policing and untargeted scraping of facial images from the internet or CCTV footage will kick in six months after the new regulation enters into force. Obligations for general-purpose AI models will apply after 12 months, and rules for AI systems embedded into regulated products after 36 months. Fines for violations range from 7.5 million euros ($8.2 million) or 1.5% of turnover to 35 million euros or 7% of global turnover, depending on the type of violation.

The Courts

Apple Says US Antitrust Lawsuit Should Be Dismissed 64

Apple said on Tuesday it plans to ask a U.S. judge to dismiss a lawsuit filed by the Justice Department and 15 states in March that alleged the iPhone maker monopolized the smartphone market, hurt smaller rivals and drove up prices. From a report: In a letter to U.S. District Judge Julien X. Neals in New Jersey, Apple said "far from being a monopolist, Apple faces fierce competition from well-established rivals, and the complaint fails to allege that Apple has the ability to charge supra-competitive prices or restrict output in the alleged smartphone markets." In the letter to the judge, Apple said the DOJ relies on a new "theory of antitrust liability that no court has recognized."

The government is expected to respond to Apple's letter within seven days. The court requires parties to submit such letters in hopes of expediting cases before they advance to a potentially more robust and expensive motion to dismiss. The Justice Department alleges that Apple uses its market power to get more money from consumers, developers, content creators, artists, publishers, small businesses and merchants. The civil lawsuit accuses Apple of an illegal monopoly on smartphones maintained by imposing contractual restrictions on, and withholding critical access from, developers.
Google

Google Cuts Mystery Check To US In Bid To Sidestep Jury Trial (reuters.com) 38

An anonymous reader quotes a report from Reuters: Alphabet's Google has preemptively paid damages to the U.S. government, an unusual move aimed at avoiding a jury trial in the Justice Department's antitrust lawsuit over its digital advertising business. Google disclosed (PDF) the payment, but not the amount, in a court filing last week that said the case should be heard and decided by a judge directly. Without a monetary damages claim, Google argued, the government has no right to a jury trial. The Justice Department, which has not said if it will accept the payment, declined to comment on the filing. Google asserted that its check, which it said covered its alleged overcharges for online ads, allows it to sidestep a jury trial whether or not the government takes it.

The Justice Department filed the case last year with Virginia and other states, alleging Google was stifling competition for advertising technology. The government has said Google should be forced to sell its ad manager suite. Google, which has denied the allegations, said in a statement that the Justice Department "manufactured a damages claim at the last minute in an attempt to secure a jury trial." Without disclosing the size of its payment, Google said that after months of discovery, the Justice Department could only point to estimated damages of less than $1 million. The company said the government has said the case is "highly technical" and "outside the everyday knowledge of most prospective jurors."

Privacy

Police Found Ways to Use Facial Recognition Tech After Their Cities Banned It (yahoo.com) 32

An anonymous reader shared this report from the Washington Post: As cities and states push to restrict the use of facial recognition technologies, some police departments have quietly found a way to keep using the controversial tools: asking for help from other law enforcement agencies that still have access. Officers in Austin and San Francisco — two of the largest cities where police are banned from using the technology — have repeatedly asked police in neighboring towns to run photos of criminal suspects through their facial recognition programs, according to a Washington Post review of police documents...

Austin police officers received the results of at least 13 face searches from a neighboring police department since the city's 2020 ban — and appeared to get hits on some of them, according to documents obtained by The Post through public records requests and sources who shared them on the condition of anonymity. "That's him! Thank you very much," one Austin police officer wrote in response to an array of photos sent to him by an officer in Leander, Tex., who ran a facial recognition search, documents show. The man displayed in the pictures, John Curry Jr., was later charged with aggravated assault for allegedly charging toward someone with a knife, and is currently in jail awaiting trial. Curry's attorney declined to comment.

"Police officers' efforts to skirt these bans have not been previously reported and highlight the challenge of reining in police use of facial recognition," the article concludes.

It also points out that the technology "has played a role in the wrongful arrests of at least seven innocent Americans," according to the lawsuits they filed after charges against them were dismissed.
Crime

What Happened After a Reporter Tracked Down The Identity Thief Who Stole $5,000 (msn.com) 46

"$5,000 in cash had been withdrawn from my checking account — but not by me," writes journalist Linda Matchan in the Boston Globe. A police station manager reviewed footage from the bank — which was 200 miles away — and deduced that "someone had actually come into the bank and spoken to a teller, presented a driver's license, and then correctly answered some authentication questions to validate the account..."

"You're pitting a teller against a national crime syndicate with massive resources behind them," says Paul Benda, executive vice president for risk, fraud, and cybersecurity at the American Bankers Association. "They're very well-funded, well-resourced criminal gangs doing this at an industrial scale."
The reporter writes that "For the past two years, I've worked to determine exactly who and what lay behind this crime..." [N]ow I had something new to worry about: Fraudsters apparently had a driver's license with my name on it... "Forget the fake IDs adolescents used to get into bars," says Georgia State's David Maimon, who is also head of fraud insights at SentiLink, a company that works with institutions across the United States to support and solve their fraud and risk issues. "Nowadays fraudsters are using sophisticated software and capable printers to create virtually impossible-to-detect fake IDs." They're able to create synthetic identities, combining legitimate personal information, such as a name and date of birth, with a nine-digit number that either looks like a Social Security number or is a real, stolen one. That ID can then be used to open financial accounts, apply for a bank or car loan, or for some other dodgy purpose that could devastate their victims' financial lives.
And there's a complex supply chain underpinning it all — "a whole industry on the dark web," says Eva Velasquez, president and CEO of the Identity Theft Resource Center, a nonprofit that helps victims undo the damage wrought by identity crime. It starts with the suppliers, Maimon told me — "the people who steal IDs, bring them into the market, and manufacture them. There's the producers who take the ID and fake driver's licenses and build the facade to make it look like they own the identity — trying to create credit reports for the synthetic identities, for example, or printing fake utility bills." Then there are the distributors who sell them in the dark corners of the web or the street or through text messaging apps, and finally the customers who use them and come from all walks of life. "We're seeing females and males and people with families and a lot of adolescents, because social media plays a very important role in introducing them to this world," says Maimon, whose team does surveillance of criminals' activities and interactions on the dark web. "In this ecosystem, folks disclose everything they do."

The reporter writes that "It's horrifying to discover, as I have recently, that someone has set up a tech company that might not even be real, listing my home as its principal address."

Two and a half months after the theft, the stolen $5,000 was back in her bank account — but it wasn't until a year later that the thief was identified. "The security video had been shared with New York's Capital Region Crime Analysis Center, where analysts have access to facial recognition technology, and was run through a database of booking photos. A possible match resulted.... She was already in custody elsewhere in New York... Evidently, Deborah was being sought by law enforcement in at least three New York counties. [All three cases involved bank-related identity fraud.]"

Deborah was finally charged with two separate felonies: grand larceny in the third degree for stealing property over $3,000, and identity theft. But Deborah missed her next two court dates and disappeared. "She never came back to court, and now there were warrants for her arrest out of two separate courts."

After speaking to police officials, the reporter concludes, "There was a good chance she was only doing the grunt work for someone else, maybe even a domestic or foreign-organized crime syndicate, and then suffering all the consequences."

The UK minister of state for security even says that "in some places people are literally captured and used as unwilling operators for fraudsters."
The Courts

Amazon Defends Its Use of Signal Messages in Court (geekwire.com) 54

America's Federal Trade Commission and 17 states filed an antitrust suit against Amazon in September. This week Amazon responded in court regarding its use of Signal's "disappearing messages" feature.

Long-time Slashdot reader theodp shares GeekWire's report: At a company known for putting its most important ideas and strategies into comprehensive six-page memos, quick messages between executives aren't the place for meaningful business discussions. That's one of the points made by Amazon in its response Monday to the Federal Trade Commission's allegations about executives' use of the Signal encrypted communications app, known for its "disappearing messages" feature. "For these individuals, just like other short-form messaging, Signal was not a means to send 'structured, narrative text'; it was a way to get someone's attention or have quick exchanges on sensitive topics like public relations or human resources," the company says as part of its response, filed Monday in U.S. District Court in Seattle. Of course, for regulators investigating the company's business practices, these offhanded private comments between Amazon executives could be more revealing than carefully crafted memos meant for wider internal distribution. But in its filing this week, Amazon says there is no evidence that relevant messages have been lost, or that Signal was used to conceal communications that would have been responsive to the FTC's discovery requests. The company says "the equally logical explanation — made more compelling by the available evidence — is that such messages never existed."

In an April 25 motion, the FTC argued that the absence of Signal messages from Amazon discussing substantive business issues relevant to the case was a strong indication that such messages had disappeared. "Amazon executives deleted many Signal messages during Plaintiffs' pre-Complaint investigation, and Amazon did not instruct its employees to preserve Signal messages until over fifteen months after Amazon knew that Plaintiffs' investigation was underway," the FTC wrote in its motion. "It is highly likely that relevant information has been destroyed as a result of Amazon's actions and inactions...."

Amazon's filing quotes the company's founder, Jeff Bezos, saying in a deposition in the case that "[t]o discuss anything in text messaging or Signal messaging or anything like that of any substance would be akin to business malpractice. It's just too short of a messaging format...." The company's filing traces the initial use of Signal by executives back to the suspected hacking of Bezos' phone in 2018, which prompted the Amazon founder to seek ways to send messages more securely.

Crime

Deep Fake Scams Growing in Global Frequency and Sophistication, Victim Warns (cnn.com) 19

In an elaborate scam in January, "a finance worker was duped into attending a video call with people he believed were the chief financial officer and other members of staff," remembers CNN. But Hong Kong police later said that all of them turned out to be deepfake re-creations that duped the employee into transferring $25 million. According to police, the worker had initially suspected he had received a phishing email from the company's UK office, as it specified the need for a secret transaction to be carried out. However, the worker put aside his doubts after the video call because other people in attendance had looked and sounded just like colleagues he recognized.
Now the targeted company has been revealed: a major engineering consulting firm with 18,500 employees across 34 offices: A spokesperson for London-based Arup told CNN on Friday that it notified Hong Kong police in January about the fraud incident, and confirmed that fake voices and images were used. "Unfortunately, we can't go into details at this stage as the incident is still the subject of an ongoing investigation. However, we can confirm that fake voices and images were used," the spokesperson said in an emailed statement. "Our financial stability and business operations were not affected and none of our internal systems were compromised," the person added...

Authorities around the world are growing increasingly concerned about the sophistication of deepfake technology and the nefarious uses it can be put to. In an internal memo seen by CNN, Arup's East Asia regional chairman, Michael Kwok, said the "frequency and sophistication of these attacks are rapidly increasing globally, and we all have a duty to stay informed and alert about how to spot different techniques used by scammers."

The company's global CIO emailed CNN this statement. "Like many other businesses around the globe, our operations are subject to regular attacks, including invoice fraud, phishing scams, WhatsApp voice spoofing, and deepfakes.

"What we have seen is that the number and sophistication of these attacks has been rising sharply in recent months."

Slashdot reader st33ld13hl adds that in a world of deepfakes, insurance company USAA is now asking its customers to authenticate with voice. (More information here.)

Thanks to Slashdot reader quonset for sharing the news.
