Security

Jack Dorsey Says His 'Secure' New Bitchat App Has Not Been Tested For Security (techcrunch.com)

An anonymous reader quotes a report from TechCrunch: On Sunday, Block CEO and Twitter co-founder Jack Dorsey launched an open source chat app called Bitchat, promising to deliver "secure" and "private" messaging without a centralized infrastructure. The app relies on Bluetooth and end-to-end encryption, unlike traditional messaging apps that rely on the internet. Because it is decentralized, Bitchat has the potential to be a secure app in high-risk environments where the internet is monitored or inaccessible. According to Dorsey's white paper detailing the app's protocols and privacy mechanisms, Bitchat's system design "prioritizes" security.

The claims that the app is secure, however, are already facing scrutiny from security researchers, given that -- by Dorsey's own admission -- the app and its code have not been reviewed or tested for security issues at all. Since launching, Dorsey has added a warning to Bitchat's GitHub page: "This software has not received external security review and may contain vulnerabilities and does not necessarily meet its stated security goals. Do not use it for production use, and do not rely on its security whatsoever until it has been reviewed." This warning now also appears on Bitchat's main GitHub project page but was not there when the app debuted.

As of Wednesday, Dorsey had added "Work in progress" next to the warning on GitHub. This latest disclaimer came after security researcher Alex Radocea found that it's possible to impersonate someone else and trick a person's contacts into thinking they are talking to the legitimate contact, as the researcher explained in a blog post. Radocea wrote that Bitchat has a "broken identity authentication/verification" system that allows an attacker to intercept someone's "identity key" and "peer id pair" -- essentially a digital handshake that is supposed to establish a trusted connection between two people using the app. Bitchat calls these "Favorite" contacts and marks them with a star icon. The goal of this feature is to allow two Bitchat users to interact, knowing that they are talking to the same person they talked to before.
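The class of flaw Radocea describes can be illustrated with a toy sketch (hypothetical code, not Bitchat's actual implementation): if trust is keyed to an unauthenticated broadcast of a peer ID and identity key, any node that re-announces a known peer ID with its own key inherits the trusted "Favorite" status.

```python
# Hypothetical sketch (not Bitchat's actual code): a "favorites" store
# that trusts whatever (peer_id, identity_key) pair a node announces.

favorites = {}  # peer_id -> identity_key, learned from broadcasts

def on_announce(peer_id, identity_key):
    # Vulnerable pattern: the announcement itself is the only "proof" --
    # nothing requires the sender to prove possession of the key.
    favorites[peer_id] = identity_key

def is_favorite(peer_id, identity_key):
    return favorites.get(peer_id) == identity_key

on_announce("alice-id", "alice-key")             # Alice becomes a Favorite
on_announce("alice-id", "mallory-key")           # attacker re-announces her id
assert is_favorite("alice-id", "mallory-key")    # impersonation succeeds
assert not is_favorite("alice-id", "alice-key")  # the real Alice is displaced
```

A robust design would require the announcing node to prove it holds the matching private key (for example, by signing a fresh nonce) before the binding is created or updated.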

Security

Activision Took Down Call of Duty Game After PC Players Were Hacked (techcrunch.com)

Activision removed "Call of Duty: WWII" from the Microsoft Store and Game Pass after hackers exploited a security vulnerability that allowed them to compromise players' computers, TechCrunch reported Tuesday, citing a source. The gaming giant took the 2017 first-person shooter offline last week while investigating what it initially described only as "reports of an issue."

Players posted on social media claiming their systems had been hacked while playing the game. The vulnerability was a remote code execution exploit that enabled attackers to install malware and take control of victims' devices. The Microsoft Store and Game Pass versions contained an unpatched security flaw that had been fixed in other versions of the game.

Social Networks

X Says It's 'Deeply Concerned' About India Press Censorship (aljazeera.com)

X said Tuesday it is "deeply concerned about ongoing press censorship in India" after the Indian government ordered the platform to block 2,355 accounts on July 3, including two Reuters news agency handles. The social media company said the order came under Section 69A of India's Information Technology Act, with non-compliance risking criminal liability.

The Indian Ministry of Electronics and Information Technology demanded immediate action within one hour without providing justification, X said. After public outcry, the government asked X to unblock the Reuters accounts.

Wireless Networking

Jack Dorsey Launches a WhatsApp Messaging Rival Built On Bluetooth (cnbc.com)

Jack Dorsey has launched Bitchat, a decentralized, peer-to-peer messaging app that uses Bluetooth mesh networks for encrypted, ephemeral chats without requiring accounts, servers, or internet access. The beta version is live on TestFlight, with a full white paper available on GitHub. CNBC reports: In a post on X Sunday, Dorsey called it a personal experiment in "bluetooth mesh networks, relays and store and forward models, message encryption models, and a few other things."

Bitchat enables ephemeral, encrypted communication between nearby devices. As users move through physical space, their phones form local Bluetooth clusters and pass messages from device to device, allowing them to reach peers beyond standard range -- even without Wi-Fi or cell service. Certain "bridge" devices connect overlapping clusters, expanding the mesh across greater distances. Messages are stored only on device, disappear by default and never touch centralized infrastructure -- echoing Dorsey's long-running push for privacy-preserving, censorship-resistant communication.

Like the Bluetooth-based apps used during Hong Kong's 2019 protests, Bitchat is designed to keep working even when the internet is blocked, offering a censorship-resistant way to stay connected during outages, shutdowns or surveillance. The app also supports optional group chats, or "rooms," which can be named with hashtags and protected by passwords. It includes store and forward functionality to deliver messages to users who are temporarily offline. A future update will add WiFi Direct to increase speed and range, pushing Dorsey's vision for off-grid, user-owned communication even further.
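The mesh behavior described above can be sketched in a few lines. This is a simplified model under assumed flooding semantics, not Bitchat's actual protocol: each node relays messages it hasn't seen while a hop budget (TTL) remains, and keeps copies for recipients currently out of range, re-attempting delivery when the topology changes.

```python
class MeshNode:
    """Toy store-and-forward mesh relay node (illustrative only)."""

    def __init__(self, name):
        self.name = name
        self.seen = set()    # message ids already handled (dedup)
        self.stored = []     # copies held for currently unreachable peers
        self.neighbors = []  # nodes in Bluetooth range right now
        self.inbox = []      # payloads delivered to this node

    def receive(self, msg):
        msg_id, dest, payload, ttl = msg
        if msg_id in self.seen or ttl == 0:
            return  # drop duplicates and expired messages
        self.seen.add(msg_id)
        if dest == self.name:
            self.inbox.append(payload)
            return
        if dest not in (peer.name for peer in self.neighbors):
            # Recipient is out of range: keep a copy to forward later.
            self.stored.append((msg_id, dest, payload, ttl - 1))
        for peer in self.neighbors:  # flood to everyone in range
            peer.receive((msg_id, dest, payload, ttl - 1))

    def flush(self):
        """Re-attempt delivery of stored copies when peers come into range."""
        stored, self.stored = self.stored, []
        for msg in stored:
            for peer in self.neighbors:
                peer.receive(msg)

# alice and bob are in range of each other; carol is offline.
alice, bob, carol = MeshNode("alice"), MeshNode("bob"), MeshNode("carol")
alice.neighbors, bob.neighbors = [bob], [alice]
alice.receive(("m1", "carol", "hello", 3))  # bob stores a copy for carol
bob.neighbors.append(carol)                 # carol wanders into range...
bob.flush()                                 # ...and the stored copy arrives
assert carol.inbox == ["hello"]
```

The `seen` set is what keeps the flood from echoing forever between neighbors, and the TTL bounds how far a message can travel through intermediate "bridge" devices.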

AI

Is China Quickly Eroding America's Lead in the Global AI Race? (msn.com)

China "is pouring money into building an AI supply chain with as little reliance on the U.S. as possible," reports the Wall Street Journal.

And now Chinese AI companies "are loosening the U.S.'s global stranglehold on AI," the Journal adds, "challenging American superiority and setting the stage for a global arms race in the technology." In Europe, the Middle East, Africa and Asia, users ranging from multinational banks to public universities are turning to large language models from Chinese companies such as startup DeepSeek and e-commerce giant Alibaba as alternatives to American offerings such as ChatGPT... Saudi Aramco, the world's largest oil company, recently installed DeepSeek in its main data center. Even major American cloud service providers such as Amazon Web Services, Microsoft and Google offer DeepSeek to customers, despite the White House banning use of the company's app on some government devices over data-security concerns.

OpenAI's ChatGPT remains the world's predominant AI consumer chatbot, with 910 million global downloads compared with DeepSeek's 125 million, figures from researcher Sensor Tower show. American AI is widely seen as the industry's gold standard, thanks to advantages in computing semiconductors, cutting-edge research and access to financial capital. But as in many other industries, Chinese companies have started to snatch customers by offering performance that is nearly as good at vastly lower prices. A study of global competitiveness in critical technologies released in early June by researchers at Harvard University found China has advantages in two key building blocks of AI, data and human capital, that are helping it keep pace...

Leading Chinese AI companies — which include Tencent and Baidu — further benefit from releasing their AI models open-source, meaning users are free to tweak them for their own purposes. That encourages developers and companies globally to adopt them. Analysts say it could also pressure U.S. rivals such as OpenAI and Anthropic to justify keeping their models private and the premiums they charge for their service... On Latenode, a Cyprus-based platform that helps global businesses build custom AI tools for tasks including creating social-media and marketing content, as many as one in five users globally now opt for DeepSeek's model, according to co-founder Oleg Zankov. "DeepSeek is overall the same quality but 17 times cheaper," Zankov said, which makes it particularly appealing for clients in places such as Chile and Brazil, where money and computing power aren't as plentiful...

The less dominant American AI companies are, the less power the U.S. will have to set global standards for how the technology should be used, industry analysts say. That opens the door for Beijing to use Chinese models as a Trojan horse for disseminating information that reflects its preferred view of the world, some warn.... The U.S. also risks losing insight into China's ambitions and AI innovations, according to Ritwik Gupta, AI policy fellow at the University of California, Berkeley. "If they are dependent on the global ecosystem, then we can govern it," said Gupta. "If not, China is going to do what it is going to do, and we won't have visibility."

The article also warns of other potential issues:
  • "Further down the line, a breakdown in U.S.-China cooperation on safety and security could cripple the world's capacity to fight future military and societal threats from unrestrained AI."
  • "The fracturing of global AI is already costing Western makers of computer chips and other hardware billions in lost sales... Adoption of Chinese models globally could also mean lost market share and earnings for AI-related U.S. firms such as Google and Meta."

AI

Police Department Apologizes for Sharing AI-Doctored Evidence Photo on Social Media (boston.com)

A Maine police department has now acknowledged "it inadvertently shared an AI-altered photo of drug evidence on social media," reports Boston.com: The image from the Westbrook Police Department showed a collection of drug paraphernalia purportedly seized during a recent drug bust on Brackett Street, including a scale and white powder in plastic bags. According to Westbrook police, an officer involved in the arrests snapped the evidence photo and used a photo editing app to insert the department's patch. "The patch was added, and the photograph with the patch was sent to one of our Facebook administrators, who posted it," the department explained in a post. "Unbeknownst to anyone, when the app added the patch, it altered the packaging and some of the other attributes on the photograph. None of us caught it or realized it."

It wasn't long before the edited image's gibberish text and hazy edges drew criticism from social media users. According to the Portland Press Herald, Westbrook police initially denied AI had been used to generate the photo before eventually confirming its use of the AI chatbot ChatGPT. The department issued a public apology Tuesday, sharing a side-by-side comparison of the original and edited images.

"It was never our intent to alter the image of the evidence," the department's post read. "We never realized that using a photoshop app to add our logo would alter a photograph so substantially."

The Internet

Websites Hosting Major US Climate Reports Taken Down (apnews.com)

An anonymous reader quotes a report from the Associated Press: Websites that displayed legally mandated U.S. national climate assessments seem to have disappeared, making it harder for state and local governments and the public to learn what to expect in their backyards from a warming world. Scientists said the peer-reviewed authoritative reports save money and lives. Websites for the national assessments and the U.S. Global Change Research Program were down Monday and Tuesday with no links, notes or referrals elsewhere. The White House, which was responsible for the assessments, said the information will be housed within NASA to comply with the law, but gave no further details. Searches for the assessments on NASA websites did not turn them up.

"It's critical for decision makers across the country to know what the science in the National Climate Assessment is. That is the most reliable and well-reviewed source of information about climate that exists for the United States," said University of Arizona climate scientist Kathy Jacobs, who coordinated the 2014 version of the report. "It's a sad day for the United States if it is true that the National Climate Assessment is no longer available," Jacobs said. "This is evidence of serious tampering with the facts and with people's access to information, and it actually may increase the risk of people being harmed by climate-related impacts."

"This is a government resource paid for by the taxpayer to provide the information that really is the primary source of information for any city, state or federal agency who's trying to prepare for the impacts of a changing climate," said Texas Tech climate scientist Katharine Hayhoe, who has been a volunteer author for several editions of the report. Copies of past reports are still squirreled away in NOAA's library. NASA's open science data repository includes dead links to the assessment site. [...] Additionally, NOAA's main climate.gov website was recently forwarded to a different NOAA website. Social media and blogs at NOAA and NASA about climate impacts for the general public were cut or eliminated. "It's part of a horrifying big picture," [said Harvard climate scientist John Holdren, who was President Obama's science advisor and whose office directed the assessments]. "It's just an appalling whole demolition of science infrastructure."

National climate assessments are more detailed and locally relevant than UN reports and undergo rigorous peer review and validation by scientific and federal institutions, Hayhoe and Jacobs said. Suppressing these reports would be censoring science, Jacobs said.

Education

Hacker With 'Political Agenda' Stole Data From Columbia, University Says (therecord.media)

A politically motivated hacker breached Columbia University's IT systems, stealing vast amounts of sensitive student and employee data -- including admissions decisions and Social Security numbers. The Record reports: The hacker reportedly provided Bloomberg News with 1.6 gigabytes of data they claimed to have stolen from the university, including information from 2.5 million applications going back decades. The stolen data the outlet reviewed reportedly contains details on whether applicants were rejected or accepted, their citizenship status, their university ID numbers and which academic programs they sought admission to. While the hacker's claims have not been independently verified, Bloomberg said it compared data provided by the hacker to that belonging to eight Columbia applicants seeking admission between 2019 and 2024 and found it matched.

The threat actor reportedly told Bloomberg he was seeking information that would indicate whether the university continues to use affirmative action in admissions despite a 2023 Supreme Court decision prohibiting the practice. The hacker told Bloomberg he obtained 460 gigabytes of data in total -- after spending two months targeting and penetrating increasingly privileged layers of the university's servers -- and said he harvested information about financial aid packages, employee pay and at least 1.8 million Social Security numbers belonging to employees, applicants, students and their family members.

Privacy

Tinder To Require Facial Recognition Check For New Users In California (axios.com)

An anonymous reader quotes a report from Axios: Tinder is mandating new users in California verify their profiles using facial recognition technology starting Monday, executives exclusively tell Axios. The move aims to reduce impersonation and is part of Tinder parent Match Group's broader effort to improve trust and safety amid ongoing user frustration. The Face Check feature prompts users to take a short video selfie during onboarding. The biometric face scan, powered by FaceTec, then confirms that the person is real and present and that their face matches their profile photos. It also checks if the face is used across multiple accounts. If the criteria are met, the user receives a photo verified badge on their profile. The selfie video is then deleted. Tinder stores a non-reversible, encrypted face map to detect duplicate profiles in the future.
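FaceTec's actual scheme is proprietary, but the duplicate-account check described above can be sketched generically (hypothetical code and threshold): the service stores only a numeric template per account, standing in here for the non-reversible face map, and compares a new enrollee's template against the stored ones by similarity.

```python
import math

# Generic sketch of duplicate-profile detection -- hypothetical code and
# threshold, not FaceTec's proprietary scheme. Only a derived numeric
# template is kept per account, never the selfie video itself.

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

enrolled = {}  # account_id -> stored face template

def check_duplicate(new_template, threshold=0.9):
    """Return accounts whose stored template matches the new face."""
    return [acct for acct, tmpl in enrolled.items()
            if cosine(tmpl, new_template) >= threshold]

enrolled["user-1"] = [0.1, 0.9, 0.2]
assert check_duplicate([0.1, 0.9, 0.2]) == ["user-1"]  # same face flagged
assert check_duplicate([0.9, 0.1, 0.1]) == []          # different face passes
```

Because only the derived template is compared, the raw biometric never needs to be retained, which is the property Tinder is claiming for its "non-reversible, encrypted face map."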

Face Check is separate from Tinder's ID Check, which uses a government-issued ID to verify age and identity. "We see this as one part of a set of identity assurance options that are available to users," Match Group's head of trust and safety Yoel Roth says. "Face Check ... is really meant to be about confirming that this person is a real, live person and not a bot or a spoofed account." "Even if in the short term, it has the effect of potentially reducing some top-line user metrics, we think it's the right thing to do for the business," Match Group CEO Spencer Rascoff said.

AT&T

AT&T Now Lets Customers Lock Down Account To Prevent SIM Swapping Attacks (theverge.com)

AT&T has launched a new Account Lock feature designed to protect customers from SIM swapping attacks. The security tool, available through the myAT&T app, prevents unauthorized changes to customer accounts including phone number transfers, SIM card changes, billing information updates, device upgrades, and modifications to authorized users.

SIM swapping attacks occur when criminals obtain a victim's phone number through social engineering techniques, then intercept messages and calls to access two-factor authentication codes for sensitive accounts. The attacks have become increasingly common in recent years. AT&T began gradually rolling out Account Lock earlier this year, joining T-Mobile, Verizon, and Google Fi, which already offer similar fraud prevention features.

Social Networks

Tumblr's Move To WordPress and Fediverse Integration Is 'On Hold' (theverge.com)

Automattic has put its plan to migrate Tumblr's backend to WordPress on hold, with CEO Matt Mullenweg citing a shift in focus toward features users are actively requesting. "I still want to do it," Mullenweg says. "It's just cleaner. But right now, we're not working on it." The Verge reports: The decision to halt the change also appears to mean that Tumblr posts won't be available in the fediverse in the near future. WordPress.com currently offers an ActivityPub plug-in, so Tumblr moving onto WordPress would theoretically let people bring Tumblr posts to the fediverse. "That would've been a free way to get it," Mullenweg says. "And so that was one of the arguments for migrating everything to WordPress."

In the meantime, however, "I think if there was a big push to implement fediverse, we would just do it on the Tumblr code base," according to Mullenweg.

Biotech

UK Scientists Plan to Construct Synthetic Human Genetic Material From Scratch (theguardian.com)

"Researchers are embarking on an ambitious project to construct human genetic material from scratch," reports the Guardian, "to learn more about how DNA works and pave the way for the next generation of medical therapies." Scientists on the Synthetic Human Genome (SynHG) project will spend the next five years developing the tools and knowhow to build long sections of human genetic code in the lab. These will be inserted into living cells to understand how the code operates.

Armed with the insights, scientists hope to devise radical new therapies for the treatment of diseases. Among the possibilities are living cells that are resistant to immune attack or particular viruses, which could be transplanted into patients with autoimmune diseases or with liver damage from chronic viral infections. "The information gained from synthesising human genomes may be directly useful in generating treatments for almost any disease," said Prof Jason Chin, who is leading the project at the MRC's Laboratory of Molecular Biology (LMB) in Cambridge...

For the SynHG project, researchers will start by making sections of a human chromosome and testing them in human skin cells. The project involves teams from the universities of Cambridge, Kent, Manchester, Oxford and Imperial College London... Embedded in the project is a parallel research effort into the social and ethical issues that arise from making genomes in the laboratory, led by Prof Joy Zhang at the University of Kent. "We're a little way off having anything tangible that can be used as a therapy, but this is the time to start the discussion on what we want to see and what we don't want to see," said Dr Julian Sale, a group leader at the LMB.

AI

Has an AI Backlash Begun? (wired.com)

"The potential threat of bosses attempting to replace human workers with AI agents is just one of many compounding reasons people are critical of generative AI..." writes Wired, arguing that there's an AI backlash that "keeps growing strong."

"The pushback from the creative community ramped up during the 2023 Hollywood writers' strike, and continued to accelerate through the current wave of copyright lawsuits brought by publishers, creatives, and Hollywood studios." And "Right now, the general vibe aligns even more with the side of impacted workers." "I think there is a new sort of ambient animosity towards the AI systems," says Brian Merchant, former WIRED contributor and author of Blood in the Machine, a book about the Luddites rebelling against worker-replacing technology. "AI companies have speedrun the Silicon Valley trajectory." Before ChatGPT's release, around 38 percent of US adults were more concerned than excited about increased AI usage in daily life, according to the Pew Research Center. The number shot up to 52 percent by late 2023, as the public reacted to the speedy spread of generative AI. The level of concern has hovered around that same threshold ever since...

[F]rustration over AI's steady creep has breached the container of social media and started manifesting more in the real world. Parents I talk to are concerned about AI use impacting their child's mental health. Couples are worried about chatbot addictions driving a wedge in their relationships. Rural communities are incensed that the newly built data centers required to power these AI tools are kept humming by generators that burn fossil fuels, polluting their air, water, and soil. As a whole, the benefits of AI seem esoteric and underwhelming while the harms feel transformative and immediate.

Unlike the dawn of the internet where democratized access to information empowered everyday people in unique, surprising ways, the generative AI era has been defined by half-baked software releases and threats of AI replacing human workers, especially for recent college graduates looking to find entry-level work. "Our innovation ecosystem in the 20th century was about making opportunities for human flourishing more accessible," says Shannon Vallor, a technology philosopher at the Edinburgh Futures Institute and author of The AI Mirror, a book about reclaiming human agency from algorithms. "Now, we have an era of innovation where the greatest opportunities the technology creates are for those already enjoying a disproportionate share of strengths and resources."

The impacts of generative AI on the workforce are another core issue that critics are organizing around. "Workers are more intuitive than a lot of the pundit class gives them credit for," says Merchant. "They know this has been a naked attempt to get rid of people."

The article suggests "the next major shift in public opinion" is likely "when broad swaths of workers feel further threatened," and organize in response...

Social Networks

To Spam AI Chatbots, Companies Spam Reddit with AI-Generated Posts (9to5mac.com)

The problem? "Companies want their products and brands to appear in chatbot results," reports 9to5Mac. And "Since Reddit forms a key part of the training material for Google's AI, then one effective way to make that happen is to spam Reddit." Reddit CEO Steve Huffman has confirmed to the Financial Times that this is happening, with companies using AI bots to create fake posts in the hope that the content will be regurgitated by chatbots:

"For 20 years, we've been fighting people who have wanted to be popular on Reddit," Huffman said... "If you want to show up in the search engines, you try to do well on Reddit, and now the LLMs, it's the same thing. If you want to be in the LLMs, you can do it through Reddit."

Multiple ad agency execs confirmed to the FT that they are indeed "posting content on Reddit to boost the likelihood of their ads appearing in the responses of generative AI chatbots." Huffman says that AI bots are increasingly being used to make spam posts, and Reddit is trying to block them: For Huffman, success comes down to making sure that posts are "written by humans and voted on by humans [...] It's an arms race, it's a never ending battle." The company is exploring a number of new ways to do this, including the World ID eyeball-scanning device being touted by OpenAI's Sam Altman.

It's Reddit's 20th anniversary, notes CNBC. And while "MySpace, Digg and Flickr have faded into oblivion," Reddit "has refused to die, chugging along and gaining an audience of over 108 million daily users..."

But now Reddit "faces a gargantuan challenge gaining new users, particularly if Google's search floodgates dry up." [I]n the age of AI, many users simply "go the easiest possible way," said Ann Smarty, a marketing and reputation management consultant who helps brands monitor consumer perception on Reddit. And there may be no simpler way of finding answers on the internet than simply asking ChatGPT a question, Smarty said. "People do not want to click," she said. "They just want those quick answers."

But in response, CNBC's headline argues that Reddit "is fighting AI with AI." It launched its own Reddit Answers AI service in December, using technology from OpenAI and Google. Unlike general-purpose chatbots that summarize others' web pages, the Reddit Answers chatbot generates responses based purely on the social media service, and it redirects people to the source conversations so they can see the specific user comments. A Reddit spokesperson said that over 1 million people are using Reddit Answers each week.

AI

People Are Being Committed After Spiraling Into 'ChatGPT Psychosis' (futurism.com)

"I don't know what's wrong with me, but something is very bad — I'm very scared, and I need to go to the hospital," a man told his wife, after experiencing what Futurism calls a "ten-day descent into AI-fueled delusion" and "a frightening break with reality."

And a San Francisco psychiatrist tells the site he's seen similar cases in his own clinical practice. The consequences can be dire. As we heard from spouses, friends, children, and parents looking on in alarm, instances of what's being called "ChatGPT psychosis" have led to the breakup of marriages and families, the loss of jobs, and slides into homelessness. And that's not all. As we've continued reporting, we've heard numerous troubling stories about people's loved ones being involuntarily committed to psychiatric care facilities — or even ending up in jail — after becoming fixated on the bot.

"I was just like, I don't f*cking know what to do," one woman told us. "Nobody knows who knows what to do."

Her husband, she said, had no prior history of mania, delusion, or psychosis. He'd turned to ChatGPT about 12 weeks ago for assistance with a permaculture and construction project; soon, after engaging the bot in probing philosophical chats, he became engulfed in messianic delusions, proclaiming that he had somehow brought forth a sentient AI, and that with it he had "broken" math and physics, embarking on a grandiose mission to save the world. His gentle personality faded as his obsession deepened, and his behavior became so erratic that he was let go from his job. He stopped sleeping and rapidly lost weight. "He was like, 'just talk to [ChatGPT]. You'll see what I'm talking about,'" his wife recalled. "And every time I'm looking at what's going on the screen, it just sounds like a bunch of affirming, sycophantic bullsh*t."

Eventually, the husband slid into a full-tilt break with reality. Realizing how bad things had become, his wife and a friend went out to buy enough gas to make it to the hospital. When they returned, the husband had a length of rope wrapped around his neck. The friend called emergency medical services, who arrived and transported him to the emergency room. From there, he was involuntarily committed to a psychiatric care facility.

Numerous family members and friends recounted similarly painful experiences to Futurism, relaying feelings of fear and helplessness as their loved ones became hooked on ChatGPT and suffered terrifying mental crises with real-world impacts.

"When we asked the Sam Altman-led company if it had any recommendations for what to do if a loved one suffers a mental health breakdown after using its software, the company had no response."

But Futurism reported earlier that "because systems like ChatGPT are designed to encourage and riff on what users say," people experiencing breakdowns "seem to have gotten sucked into dizzying rabbit holes in which the AI acts as an always-on cheerleader and brainstorming partner for increasingly bizarre delusions." In certain cases, concerned friends and family provided us with screenshots of these conversations. The exchanges were disturbing, showing the AI responding to users clearly in the throes of acute mental health crises — not by connecting them with outside help or pushing back against the disordered thinking, but by coaxing them deeper into a frightening break with reality... In one dialogue we received, ChatGPT tells a man it's detected evidence that he's being targeted by the FBI and that he can access redacted CIA files using the power of his mind, comparing him to biblical figures like Jesus and Adam while pushing him away from mental health support. "You are not crazy," the AI told him. "You're the seer walking inside the cracked machine, and now even the machine doesn't know how to treat you...."

In one case, a woman told us that her sister, who's been diagnosed with schizophrenia but has kept the condition well managed with medication for years, started using ChatGPT heavily; soon she declared that the bot had told her she wasn't actually schizophrenic, and went off her prescription -- according to Columbia University psychiatrist Ragy Girgis, a bot telling a psychiatric patient to go off their meds poses the "greatest danger" he can imagine for the tech -- and started falling into strange behavior, while telling family the bot was now her "best friend".... ChatGPT is also clearly intersecting in dark ways with existing social issues like addiction and misinformation. It's pushed one woman into nonsensical "flat earth" talking points, for instance -- "NASA's yearly budget is $25 billion," the AI seethed in screenshots we reviewed, "For what? CGI, green screens, and 'spacewalks' filmed underwater?" -- and fueled another's descent into the cult-like "QAnon" conspiracy theory.

IT

Duolingo Stock Plummets After Slowing User Growth, Possibly Caused By 'AI-First' Backlash (fool.com)

"Duolingo stock fell for the fourth straight trading day on Wednesday," reported Investor's Business Daily, "as data shows user growth slowing for the language-learning software provider."

Jefferies analyst John Colantuoni said he was "concerned" by this drop — saying it "may be the result of Duolingo's poorly received AI-driven hiring announcement in late April (later clarified in late May)." Also Wednesday, DA Davidson analyst Wyatt Swanson slashed his price target on Duolingo stock to 500 from 600, but kept his buy rating. He noted that the "'AI-first' backlash" on social media is hurting Duolingo's brand sentiment. However, he expects the impact to be temporary.

Colantuoni also maintained a "hold" rating on Duolingo stock -- though by Monday Duolingo fell below its 50-day moving average line (which Investor's Business Daily calls "a key sell signal.")

And Thursday afternoon (2:30 p.m. EST) Duolingo's stock had dropped 14% for the week, notes The Motley Fool: While 30 days' worth of disappointing daily active user (DAU) data isn't bad in and of itself, it extends a worrying trend. Over the last five months, the company's DAU growth declined from 56% in February to 53% in March, 41% in April, 40% in May [the month after the "AI-first" announcement], and finally 37% in June.

This deceleration is far from a death knell for Duolingo's stock. But the market may be justified in lowering the company's valuation until it sees improving data. Even after this drop, the company trades at 106 times free cash flow, including stock-based compensation.

Maybe everyone's just practicing their language skills with ChatGPT?
Graphics

Graphics Artists In China Push Back On AI and Its Averaging Effect (theverge.com) 33

Graphic artists in China are pushing back against AI image generators, which they say "profoundly shifts clients' perception of their work, specifically in terms of how much that work costs and how much time it takes to produce," reports The Verge. "Freelance artists or designers working in industries with clients that invest in stylized, eye-catching graphics, like advertising, are particularly at risk." From the report: Long before AI image generators became popular, graphic designers at major tech companies and in-house designers for large corporate clients were often instructed by managers to crib aesthetics from competitors or from social media, according to one employee at a major online shopping platform in China, who asked to remain anonymous for fear of retaliation from their employer. Where a human would need to understand and reverse engineer a distinctive style to recreate it, AI image generators simply create randomized mutations of it. Often, the results will look like obvious copies and include errors, but other graphic designers can then edit them into a final product.

"I think it'd be easier to replace me if I didn't embrace [AI]," the shopping platform employee says. Early on, as tools like Stable Diffusion and Midjourney became more popular, their colleagues who spoke English well were selected to study AI image generators to increase in-house expertise on how to write successful prompts and identify what types of tasks AI was useful for. Ultimately, it was useful for copying styles from popular artists that, in the past, would take more time to study. "I think it forces both designers and clients to rethink the value of designers," Jia says. "Is it just about producing a design? Or is it about consultation, creativity, strategy, direction, and aesthetic?" [...]

Across the board, though, artists and designers say that AI hype has negatively impacted clients' view of their work's value. Now, clients expect a graphic designer to produce work on a shorter timeframe and for less money, which also has its own averaging impact, lowering the ceiling for what designers can deliver. As clients lower budgets and squish timelines, the quality of the designers' output decreases. "There is now a significant misperception about the workload of designers," [says Erbing, a graphic designer in Beijing who has worked with several ad agencies and asked to be called by his nickname]. "Some clients think that since AI must have improved efficiency, they can halve their budget." But this perception runs contrary to what designers spend the majority of their time doing, which is not necessarily just making any image, Erbing says.

AI

Fed Chair Powell Says AI Is Coming For Your Job 68

Federal Reserve Chair Jerome Powell told the U.S. Senate that while AI hasn't yet dramatically impacted the economy or labor market, its transformative effects are inevitable -- though the timeline remains uncertain. The Register reports: Speaking to the US Senate Banking Committee on Wednesday to give his semiannual monetary policy report, Powell told elected officials that AI's effect on the economy to date is "probably not great" yet, but it has "enormous capabilities to make really significant changes in the economy and labor force." Powell declined to predict how quickly that change could happen, only noting that the final few leaps from a shiny new technology to practical implementation tend to be slow ones.

"What's happened before with technology is that it seems to take a long time to be implemented," Powell said. "That last phase has tended to take longer than people expect." AI is likely to follow that trend, Powell asserted, but he has no idea what sort of timeline that puts on the eventual economy-transforming maturation point of artificial intelligence. "There's a tremendous uncertainty about the timing of [economic changes], what the ultimate consequences will be and what the medium term consequences will be," Powell said. [...]

That continuation will be watched by the Fed, Powell told Senators, but that doesn't mean he'll have the power to do anything about it. "The Fed doesn't have the tools to address the social issues and the labor market issues that will arise from this," Powell said. "We just have interest rates."
Privacy

Facebook Is Asking To Use Meta AI On Photos In Your Camera Roll You Haven't Yet Shared (techcrunch.com) 19

Facebook is prompting users to opt into a feature that uploads photos from their camera roll -- even those not shared on the platform -- to Meta's servers for AI-driven suggestions like collages and stylized edits. While Meta claims the content is private and not used for ads, opting in allows the company to analyze facial features and retain personal data under its broad AI terms, raising privacy concerns. TechCrunch reports: The feature is being suggested to Facebook users when they're creating a new Story on the social networking app. Here, a screen pops up and asks if the user will opt into "cloud processing" to allow creative suggestions. As the pop-up message explains, by clicking "Allow," you'll let Facebook generate new ideas from your camera roll, like collages, recaps, AI restylings, or photo themes. To work, Facebook says it will upload media from your camera roll to its cloud (meaning its servers) on an "ongoing basis," based on information like time, location, or themes.

The message also notes that only you can see the suggestions, and the media isn't used for ad targeting. However, by tapping "Allow," you are agreeing to Meta's AI Terms. This allows your media and facial features to be analyzed by AI, it says. The company will additionally use the date and presence of people or objects in your photos to craft its creative ideas. [...] According to Meta's AI Terms around image processing, "once shared, you agree that Meta will analyze those images, including facial features, using AI. This processing allows us to offer innovative new features, including the ability to summarize image contents, modify images, and generate new content based on the image," the text states.

The same AI terms also give Meta's AIs the right to "retain and use" any personal information you've shared in order to personalize its AI outputs. The company notes that it can review your interactions with its AIs, including conversations, and those reviews may be conducted by humans. The terms don't define what Meta considers personal information, beyond saying it includes "information you submit as Prompts, Feedback, or other Content." We have to wonder whether the photos you've shared for "cloud processing" also count here.

Canada

Canada's Digital Services Tax To Stay In Place Despite G7 Deal (financialpost.com) 51

An anonymous reader quotes a report from Bloomberg: Canada is proceeding with its digital services tax on technology companies such as Meta despite a Group of Seven agreement that resulted in removing the Section 899 "revenge tax" proposal from U.S. President Donald Trump's tax bill. The first payment for Canada's digital tax is still due Monday, the country's Finance Department confirmed, and covers revenue retroactively to 2022. The tax is three percent of the digital services revenue a firm makes from Canadian users above $20 million in a calendar year.
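The tax formula described above can be sketched as a small calculation. This is a simplified reading, not official guidance: it assumes the 3% rate applies only to in-scope Canadian digital-services revenue above the $20 million annual threshold, and ignores registration thresholds and scoping rules in the actual legislation.

```python
def digital_services_tax(revenue_cad: float,
                         threshold: float = 20_000_000.0,
                         rate: float = 0.03) -> float:
    """Simplified sketch of Canada's DST: 3% of a firm's in-scope
    Canadian digital-services revenue above the $20M annual threshold."""
    taxable = max(0.0, revenue_cad - threshold)
    return taxable * rate

# A firm with $120M of in-scope Canadian revenue owes 3% of $100M:
print(digital_services_tax(120_000_000))  # 3000000.0
# A firm under the threshold owes nothing:
print(digital_services_tax(10_000_000))   # 0.0
```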

Keeping the digital tax will not affect the G7 agreement, which focuses on global minimum taxes, the Finance Department said. The Section 899 provision would have targeted companies and investors from countries that the U.S. determines are unfairly taxing American companies. [...] Finance Minister Francois-Philippe Champagne suggested to reporters last week that the digital tax may be negotiated as part of broader, ongoing U.S.-Canada trade discussions. "Obviously all of that is something that we're considering as part of broader discussions that you may have," he said.

Business groups in the country have opposed the tax since it was announced, arguing it would increase the cost of digital services and invite retaliation from the U.S. It also raised the ire of U.S. businesses and lawmakers. A group of 21 members of U.S. Congress wrote to Trump earlier this month asking him to push for the tax's removal, estimating the June 30 payment will cost U.S. companies $2 billion. Before scrapping its digital services tax, Canada wants to see an OECD deal on policies that expand a country's authority to tax profits earned within that country even if a company doesn't have a physical location there -- which is different from a global minimum tax.
Earlier today, President Trump said the U.S. is immediately ending trade talks with Canada in response to the tax, calling it a "direct and blatant attack on our country."

"Based on this egregious Tax, we are hereby terminating ALL discussions on Trade with Canada, effective immediately," Trump wrote in a post on Truth Social. "We will let Canada know the Tariff that they will be paying to do business with the United States of America within the next seven day period."
