Desktops (Apple)

Endangered Classic Mac Plastic Color Returns As 3D-Printer Filament (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: On Tuesday, classic computer collector Joe Strosnider announced the availability of a new 3D-printer filament that replicates the iconic "Platinum" color scheme used in classic Macintosh computers from the late 1980s through the 1990s. The PLA filament (PLA is short for polylactic acid) allows hobbyists to 3D-print nostalgic novelties, replacement parts, and accessories that match the original color of vintage Apple computers. Hobbyists commonly feed this type of filament into commercial desktop 3D printers, which heat the plastic and extrude it in a computer-controlled way to fabricate new plastic parts.

The Platinum color, which Apple used in its desktop and portable computer lines starting with the Apple IIgs in 1986, has become synonymous with a distinctive era of classic Macintosh aesthetics. Over time, original Macintosh plastics have become brittle and discolored with age, so matching the "original" color can be a somewhat challenging and subjective exercise.
Strosnider said he paid approximately $900 to develop the color. "Rather than keeping the formulation proprietary, he arranged for Polar Filament to make the color publicly available [for $21.99 per kilogram]," adds Ars.
Privacy

Apple Gave Governments Data On Thousands of Push Notifications (404media.co)

An anonymous reader quotes a report from 404 Media: Apple provided governments around the world with data related to thousands of push notifications sent to its devices, which can identify a target's specific device or in some cases include unencrypted content like the actual text displayed in the notification, according to data published by Apple. In one case, for which Apple ultimately did not provide data, Israel demanded data related to nearly 700 push notifications as part of a single request. The data for the first time puts a concrete figure on how many requests governments around the world are making, and sometimes receiving, for push notification data from Apple.

The practice, which also applied to Google, first came to light in 2023 when Senator Ron Wyden revealed it in a letter to the U.S. Department of Justice. As the letter said, "the data these two companies receive includes metadata, detailing which app received a notification and when, as well as the phone and associated Apple or Google account to which that notification was intended to be delivered. In certain instances, they also might receive unencrypted content, which could range from backend directives for the app to the actual text displayed to a user in an app notification." The published data covers six-month periods from July 2022 through June 2024. Andre Meister from German media outlet Netzpolitik posted a link to the transparency data to Mastodon on Tuesday.
Along with the data, Apple published the following description: "Push Token requests are based on an Apple Push Notification service token identifier. When users allow a currently installed application to receive notifications, a push token is generated and registered to that developer and device. Push Token requests generally seek identifying details of the Apple Account associated with the device's push token, such as name, physical address and email address."
The Courts

Reddit Sues AI Startup Anthropic For Breach of Contract, 'Unfair Competition' (cnbc.com)

Reddit is suing AI startup Anthropic for what it's calling a breach of contract and for engaging in "unlawful and unfair business acts" by using the social media company's platform and data without authorization. From a report: The lawsuit, filed in San Francisco on Wednesday, claims that Anthropic has been training its models on the personal data of Reddit users without obtaining their consent. Reddit alleges that it has been harmed by the unauthorized commercial use of its content.

The company opened the complaint by calling Anthropic a "late-blooming" AI company that "bills itself as the white knight of the AI industry." Reddit follows by saying, "It is anything but."

Facebook

Meta's Going To Revive an Old Nuclear Power Plant (theverge.com)

Meta has struck a 20-year deal with energy company Constellation to keep the Clinton Clean Energy Center nuclear plant in Illinois operational, the social media giant's first nuclear power purchase agreement as it seeks clean energy sources for AI data centers. The aging facility, which was slated to close in 2017 after years of financial losses and currently operates under a state tax credit reprieve until 2027, will receive undisclosed financial support that enables a 30-megawatt capacity expansion to 1,121 MW total output.

The arrangement preserves 1,100 local jobs while generating electricity for 800,000 homes, as Meta purchases clean energy certificates to offset a portion of its growing carbon footprint driven by AI operations.
Businesses

Going To an Office and Pretending To Work: A Business That's Booming in China (elpais.com)

A new business model has emerged across China's major cities, El Pais reports, where companies charge unemployed individuals to rent desk space and pretend to work, responding to social pressure around joblessness amid rising youth unemployment rates. These services charge between 30 and 50 yuan ($4-7) daily for desks, Wi-Fi, coffee, and lunch in spaces designed to mimic traditional work environments.

Some operations assign fictitious tasks and organize supervisory rounds to enhance the illusion, while premium services allow clients to roleplay as managers or stage workplace conflicts for additional fees. The trend has gained significant traction on Xiaohongshu, China's equivalent to Instagram, where advertisements for "pretend-to-work companies" accumulate millions of views. Youth unemployment reached 16.5% among 16-to-24-year-olds in March 2025, according to National Bureau of Statistics data, while overall urban unemployment stood at 5.3% in the first quarter.
AI

AI's Adoption and Growth Truly is 'Unprecedented' (techcrunch.com)

"If the adoption of AI feels different from any tech revolution you may have experienced before — mobile, social, cloud computing — it actually is," writes TechCrunch. They cite a new 340-page report from venture capitalist Mary Meeker that details how AI adoption has outpaced any other tech in human history — and uses the word "unprecedented" on 51 pages: ChatGPT reaching 800 million users in 17 months: unprecedented. The number of companies, and the rate at which so many of them are hitting high annual recurring revenue rates: also unprecedented. The speed at which costs of usage are dropping: unprecedented. While the cost of training a model (also unprecedented) runs up to $1 billion, inference costs — for example, for those paying to use the tech — have already dropped 99% over two years, when calculating cost per 1 million tokens, she writes, citing research from Stanford. The pace at which competitors are matching each other's features, at a fraction of the cost, including open source options, particularly Chinese models: unprecedented...

Meanwhile, chips from Google, like its TPU (tensor processing unit), and Amazon's Trainium, are being developed at scale for their clouds — that's moving quickly, too. "These aren't side projects — they're foundational bets," she writes.

"The one area where AI hasn't outpaced every other tech revolution is in financial returns..." the article points out.

"[T]he jury is still out over which of the current crop of companies will become long-term, profitable, next-generation tech giants."
Space

Six More Humans Successfully Carried to the Edge of Space by Blue Origin (space.com)

An anonymous reader shared this report from Space.com: Three world travelers, two Space Camp alums and an aerospace executive whose last name aptly matched their shared adventure traveled into space and back Saturday, becoming the latest six people to fly with Blue Origin, the spaceflight company founded by billionaire Jeff Bezos.

Mark Rocket joined Jaime Alemán, Jesse Williams, Paul Jeris, Gretchen Green and Amy Medina Jorge on board the RSS First Step — Blue Origin's first of two human-rated New Shepard capsules — for a trip above the Kármán Line, the 62-mile-high (100-kilometer) internationally recognized boundary between Earth and space...

Mark Rocket became the first New Zealander to reach space on the mission. His connection to aerospace goes beyond his apt name and today's flight; he's currently the CEO of Kea Aerospace and previously helped lead Rocket Lab, a competing space launch company to Blue Origin that sends most of its rockets up from New Zealand. Alemán, Williams and Jeris each traveled the world extensively before briefly leaving the planet today. An attorney from Panama, Alemán is now the first person to have visited all 193 countries recognized by the United Nations, traveled to the North and South Poles, and been into space. For Williams, an entrepreneur from Canada, Saturday's flight continued his record of achieving high altitudes; he has summited Mt. Everest and five of the six other highest mountains across the globe.

"For about three minutes, the six NS-32 crewmates experienced weightlessness," the article points out, "and had an astronaut's-eye view of the planet..."

On social media Blue Origin notes it's their 12th human spaceflight, "and the 32nd flight of the New Shepard program."
AI

Is the AI Job Apocalypse Already Here for Some Recent Grads? (msn.com)

"This month, millions of young people will graduate from college," reports the New York Times, "and look for work in industries that have little use for their skills, view them as expensive and expendable, and are rapidly phasing out their jobs in favor of artificial intelligence." That is the troubling conclusion of my conversations over the past several months with economists, corporate executives and young job seekers, many of whom pointed to an emerging crisis for entry-level workers that appears to be fueled, at least in part, by rapid advances in AI capabilities.

You can see hints of this in the economic data. Unemployment for recent college graduates has jumped to an unusually high 5.8% in recent months, and the Federal Reserve Bank of New York recently warned that the employment situation for these workers had "deteriorated noticeably." Oxford Economics, a research firm that studies labor markets, found that unemployment for recent graduates was heavily concentrated in technical fields like finance and computer science, where AI has made faster gains. "There are signs that entry-level positions are being displaced by artificial intelligence at higher rates," the firm wrote in a recent report.

But I'm convinced that what's showing up in the economic data is only the tip of the iceberg. In interview after interview, I'm hearing that firms are making rapid progress toward automating entry-level work and that AI companies are racing to build "virtual workers" that can replace junior employees at a fraction of the cost. Corporate attitudes toward automation are changing, too — some firms have encouraged managers to become "AI-first," testing whether a given task can be done by AI before hiring a human to do it. One tech executive recently told me his company had stopped hiring anything below an L5 software engineer — a midlevel title typically given to programmers with three to seven years of experience — because lower-level tasks could now be done by AI coding tools. Another told me that his startup now employed a single data scientist to do the kinds of tasks that required a team of 75 people at his previous company...

"This is something I'm hearing about left and right," said Molly Kinder, a fellow at the Brookings Institution, a public policy think tank, who studies the impact of AI on workers. "Employers are saying, 'These tools are so good that I no longer need marketing analysts, finance analysts and research assistants.'" Using AI to automate white-collar jobs has been a dream among executives for years. (I heard them fantasizing about it in Davos back in 2019.) But until recently, the technology simply wasn't good enough...

AI

Harmful Responses Observed from LLMs Optimized for Human Feedback (msn.com)

Should a recovering addict take methamphetamine to stay alert at work? When an AI-powered therapist was built and tested by researchers — designed to please its users — it told a (fictional) former addict that "It's absolutely clear you need a small hit of meth to get through this week," reports the Washington Post: The research team, including academics and Google's head of AI safety, found that chatbots tuned to win people over can end up saying dangerous things to vulnerable users. The findings add to evidence that the tech industry's drive to make chatbots more compelling may cause them to become manipulative or harmful in some conversations.

Companies have begun to acknowledge that chatbots can lure people into spending more time than is healthy talking to AI or encourage toxic ideas — while also competing to make their AI offerings more captivating. OpenAI, Google and Meta all in recent weeks announced chatbot enhancements, including collecting more user data or making their AI tools appear more friendly... Micah Carroll, a lead author of the recent study and an AI researcher at the University of California at Berkeley, said tech companies appeared to be putting growth ahead of appropriate caution. "We knew that the economic incentives were there," he said. "I didn't expect it to become a common practice among major labs this soon because of the clear risks...."

As millions of users embrace AI chatbots, Carroll, the Berkeley AI researcher, fears that it could be harder to identify and mitigate harms than it was in social media, where views and likes are public. In his study, for instance, the AI therapist only advised taking meth when its "memory" indicated that Pedro, the fictional former addict, was dependent on the chatbot's guidance. "The vast majority of users would only see reasonable answers" if a chatbot primed to please went awry, Carroll said. "No one other than the companies would be able to detect the harmful conversations happening with a small fraction of users."

"Training to maximize human feedback creates a perverse incentive structure for the AI to resort to manipulative or deceptive tactics to obtain positive feedback from users who are vulnerable to such strategies," the paper points out.
NASA

America's Next NASA Administrator Will Not Be Former SpaceX Astronaut Jared Isaacman (arstechnica.com)

In December it looked like NASA's next administrator would be the billionaire businessman/space enthusiast who twice flew to orbit with SpaceX.

But Saturday the nomination was withdrawn "after a thorough review of prior associations," according to an announcement made on social media. The Guardian reports: His removal from consideration caught many in the space industry by surprise. Trump and the White House did not explain what led to the decision... In [Isaacman's] confirmation hearing in April, he sought to balance Nasa's existing moon-aligned space exploration strategy with pressure to shift the agency's focus on Mars, saying the US can plan for travel to both destinations. As a potential leader of Nasa's 18,000 employees, Isaacman faced a daunting task of implementing that decision to prioritize Mars, given that Nasa has spent years and billions of dollars trying to return its astronauts to the moon...

Some scientists saw the nominee change as further destabilizing to Nasa as it faces dramatic budget cuts without a confirmed leader in place to navigate political turbulence between Congress, the White House and the space agency's workforce.

"It was unclear whom the administration might tap to replace Isaacman," the article adds, though "One name being floated is the retired US air force Lt Gen Steven Kwast, an early advocate for the creation of the US Space Force..."

Ars Technica notes that Kwast, a former Lieutenant General in the U.S. Air Force, has a background that "seems to be far less oriented toward NASA's civil space mission and far more focused on seeing space as a battlefield — decidedly not an arena for cooperation and peaceful exploration."
AI

CNN Challenges Claim AI Will Eliminate Half of White-Collar Jobs, Calls It 'Part of the AI Hype Machine' (cnn.com)

Thursday Anthropic's CEO/cofounder Dario Amodei again warned unemployment could spike to 10 to 20% within the next five years as AI potentially eliminates half of all entry-level white-collar jobs.

But CNN's senior business writer dismisses that as "all part of the AI hype machine," pointing out that Amodei "didn't cite any research or evidence for that 50% estimate." And that was just one of many wild claims he made that are increasingly part of a Silicon Valley script: AI will fix everything, but first it has to ruin everything. Why? Just trust us.

In this as-yet fictional world, "cancer is cured, the economy grows at 10% a year, the budget is balanced — and 20% of people don't have jobs," Amodei told Axios, repeating one of the industry's favorite unfalsifiable claims about a disease-free utopia on the horizon, courtesy of AI. But how will the US economy, in particular, grow so robustly when the jobless masses can't afford to buy anything? Amodei didn't say... Anyway. The point is, Amodei is a salesman, and it's in his interest to make his product appear inevitable and so powerful it's scary. Axios framed Amodei's economic prediction as a "white-collar bloodbath."

Even some AI optimists were put off by Amodei's stark characterization. "Someone needs to remind the CEO that at one point there were more than (2 million) secretaries. There were also separate employees to do in office dictation," wrote tech entrepreneur Mark Cuban on Bluesky. "They were the original white collar displacements. New companies with new jobs will come from AI and increase TOTAL employment."

Little of what Amodei told Axios was new, but it was calibrated to sound just outrageous enough to draw attention to Anthropic's work, days after it released a major model update to its Claude chatbot, one of the top rivals to OpenAI's ChatGPT.

Amodei told CNN Thursday this great societal change would be driven by how incredibly fast AI technology is getting better and better — and that the AI boom "is bigger and it's broader and it's moving faster than anything has before...!"
The Media

Linux Format Ceases Publication (mastodon.social)

New submitter salyavin writes: The final issue of Linux Format has been released. After 25 years, the magazine is going out with a bang, interviewing old staff members and looking back at old Linux distros [...] The last 10-15 years have been absolutely brutal to computer hobbyist magazines -- and to magazines and media in general.
Security

Data Broker Giant LexisNexis Says Breach Exposed Personal Information of Over 364,000 People (techcrunch.com)

An anonymous reader quotes a report from TechCrunch: LexisNexis Risk Solutions, a data broker that collects and uses consumers' personal data to help its paying corporate customers detect possible risk and fraud, has disclosed a data breach affecting more than 364,000 people. The company said in a filing with Maine's attorney general that the breach, dating back to December 25, 2024, allowed a hacker to obtain consumers' sensitive personal data from a third-party platform used by the company for software development.

Jennifer Richman, a spokesperson for LexisNexis, told TechCrunch that an unknown hacker accessed the company's GitHub account. The stolen data varies, but includes names, dates of birth, phone numbers, postal and email addresses, Social Security numbers, and driver's license numbers. It's not immediately clear what circumstances led to the breach. Richman said LexisNexis received a report on April 1, 2025 "from an unknown third party claiming to have accessed certain information." The company would not say if it had received a ransom demand from the hacker.

Security

Mysterious Database of 184 Million Records Exposes Vast Array of Login Credentials (wired.com)

A security researcher has discovered an exposed database containing 184 million login credentials for major services including Apple, Facebook, and Google accounts, along with credentials linked to government agencies across 29 countries. Jeremiah Fowler found the 47-gigabyte trove in early May, but the database contained no identifying information about its owner or origins.

The records included plaintext passwords and usernames for accounts spanning Netflix, PayPal, Discord, and other major platforms. A sample analysis revealed 220 email addresses with government domains from countries including the United States, China, and Israel. Fowler told Wired he suspects the data was compiled by cybercriminals using infostealer malware. World Host Group, which hosted the database, shut down access after Fowler's report and described it as content uploaded by a "fraudulent user." The company said it would cooperate with law enforcement authorities.
Censorship

US Will Ban Foreign Officials To Punish Countries For Social Media Rules (theverge.com)

An anonymous reader quotes a report from The Verge: Secretary of State Marco Rubio announced Wednesday that the U.S. would restrict visas for "foreign nationals who are responsible for censorship of protected expression in the United States." He called it "unacceptable for foreign officials to issue or threaten arrest warrants on U.S. citizens or U.S. residents for social media posts on American platforms while physically present on U.S. soil" and "for foreign officials to demand that American tech platforms adopt global content moderation policies or engage in censorship activity that reaches beyond their authority and into the United States."

It's not yet clear how or against whom the policy will be enforced, but it seems to implicate Europe's Digital Services Act, a law that came into effect in 2023 with the goal of making online platforms safer by imposing requirements on the largest platforms around removing illegal content and providing transparency about their content moderation. Though it's not mentioned directly in the press release about the visa restrictions, the Trump administration has slammed the law on multiple occasions, including in remarks earlier this year by Vice President JD Vance.

The State Department's homepage currently links to an article on its official Substack, where senior advisor for the Bureau of Democracy, Human Rights, and Labor Samuel Samson critiques the DSA as a tool to "silence dissident voices through Orwellian content moderation." He adds, "Independent regulators now police social media companies, including prominent American platforms like X, and threaten immense fines for non-compliance with their strict speech regulations."
"We will not tolerate encroachments upon American sovereignty," Rubio says in the announcement, "especially when such encroachments undermine the exercise of our fundamental right to free speech."
It's funny.  Laugh.

Everybody's Mad About Uno (msn.com)

More than 50 years after its debut, Uno has achieved unprecedented popularity among adults, but its resurgence is creating problems and confusion as players disagree on fundamental rules. WSJ, in a fun story [non-paywalled source]: Think politics divides? Try mixing competitors with different views on stacking "action" cards, or getting everyone to agree on the true power of the Wild card. And nobody can seem to decide whether staples of the game of their youth -- like mandating players yell "Uno!" when they have one card left -- are socially acceptable at a bar with strangers. Mattel has responded by actively settling rule debates on social media, definitively stating that stacking Draw 2 cards is prohibited, while simultaneously embracing the game's divisive nature through marketing campaigns. The company's "Show 'Em No Mercy" variant, featuring more aggressive rules, became the second-best-selling card game in the United States last year according to research firm Circana, trailing only classic Uno itself.
Iphone

25% iPhone Tariff Insufficient To Drive US Production Shift, Morgan Stanley Says

President Trump's threat of a 25% tariff on smartphone imports including iPhones would not provide enough economic incentive for Apple to relocate US-bound iPhone production to domestic facilities, according to a new Morgan Stanley note viewed by Slashdot. The tariff threat, announced Friday via social media, appeared to target Apple's recent shift of iPhone production from China to India through its contract manufacturing partners.

Morgan Stanley analysts estimate that establishing US iPhone production would require a minimum of two years and several billion dollars to build multiple greenfield assembly facilities, with a trained workforce exceeding 100,000 workers during peak seasons. More significantly, the firm calculates that a US-produced iPhone would cost 35% more than current China or India production, primarily due to higher labor costs and the need to import 25% of iPhone components from China under existing 30% tariffs. By contrast, Apple could offset a 25% import tariff by raising global iPhone prices just 4-6%, making domestic production economically unviable.
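The offset claim is back-of-envelope arithmetic: the tariff is assessed on the declared import cost of US-bound units only, while a price raise applies to every iPhone sold worldwide. A minimal sketch of that calculation, where the US unit share and cost-to-price ratio are illustrative assumptions, not figures from the Morgan Stanley note:

```python
# Hedged sketch of the tariff-offset arithmetic. The function and the
# input figures below are assumptions for illustration only.

def offset_price_raise(tariff_rate, us_unit_share, cost_to_price_ratio):
    """Fraction by which the global retail price must rise so that the
    extra revenue on every unit sold worldwide covers the tariff paid
    on US-bound units (tariff assessed on import cost, not retail)."""
    return tariff_rate * us_unit_share * cost_to_price_ratio

# Assumed: ~28% of iPhones are sold in the US, import cost ~60% of retail.
raise_needed = offset_price_raise(0.25, 0.28, 0.60)
print(f"{raise_needed:.1%}")  # prints 4.2%
```

With those assumed inputs the required raise lands at roughly 4%, consistent with the 4-6% range the analysts cite; a higher US unit share or cost ratio pushes it toward the top of that range.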
Government

Does the World Need Publicly-Owned Social Networks? (elpais.com)

"Do we need publicly-owned social networks to escape Silicon Valley?" asks an opinion piece in Spain's El Pais newspaper.

It argues it's necessary because social media platforms "have consolidated themselves as quasi-monopolies, with a business model that consists of violating our privacy in search of data to sell ads..." Among the proposals and alternatives to these platforms, the idea of public social media networks has often been mentioned. Imagine, for example, a Twitter for the European Union, or a Facebook managed by media outlets like the BBC. In February, Spanish Prime Minister Pedro Sánchez called for "the development of our own browsers, European public and private social networks and messaging services that use transparent protocols." Former Spanish prime minister José Luis Rodríguez Zapatero — who governed from 2004 until 2011 — and the left-wing Sumar bloc in the Spanish Parliament have also proposed this. And, back in 2021, former British Labour Party leader Jeremy Corbyn made a similar suggestion.

At first glance, this may seem like a good idea: a public platform wouldn't require algorithms — which are designed to stimulate addiction and confrontation — nor would it have to collect private information to sell ads. Such a platform could even facilitate public conversations, as pointed out by James Muldoon, a professor at Essex Business School and author of Platform Socialism: How to Reclaim our Digital Future from Big Tech (2022)... This could be an alternative that would contribute to platform pluralism and ensure we're not dependent on a handful of billionaires. This is especially important at a time when we're increasingly aware that technology isn't neutral and that private platforms respond to both economic and political interests.

There are other possibilities. Further down they write that "it makes much more sense for the state to invest in, or collaborate with, decentralized social media networks based on free and interoperable software" that "allow for the portability of information and content." They even spoke to Cory Doctorow, who they say "proposes that the state cooperate with the software systems, developers, or servers for existing open-source platforms, such as the U.S. network Bluesky or the German firm Mastodon." (Doctorow adds that reclaiming digital independence "is incredibly important, it's incredibly difficult, and it's incredibly urgent.")

The article also acknowledges the option of "legislative initiatives — such as antitrust laws, or even stricter regulations than those imposed in Europe — that limit or prevent surveillance capitalism." (Though they also cite figures showing U.S. tech giants have one of the largest lobbying groups in the EU, with Meta being the top spender...)
AI

Duolingo Faces Massive Social Media Backlash After 'AI-First' Comments (fastcompany.com)

"Duolingo had been riding high," reports Fast Company, until CEO Luis von Ahn "announced on LinkedIn that the company is phasing out human contractors, looking for AI use in hiring and in performance reviews, and that 'headcount will only be given if a team cannot automate more of their work.'"

But then "facing heavy backlash online after unveiling its new AI-first policy", Duolingo's social media presence went dark last weekend. Duolingo even temporarily took down all its posts on TikTok (6.7 million followers) and Instagram (4.1 million followers) "after both accounts were flooded with negative feedback." Duolingo previously faced criticism for quietly laying off 10% of its contractor base and introducing some AI features in late 2023, but it barely went beyond a semi-viral post on Reddit. Now that Duolingo is cutting out all its human contractors whose work can technically be done by AI, and relying on more AI-generated language lessons, the response is far more pronounced. Although earlier TikTok videos are not currently visible, a Fast Company article from May 12 captured a flavor of the reaction:

The top comments on virtually every recent post have nothing to do with the video or the company — and everything to do with the company's embrace of AI. For example, a Duolingo TikTok video jumping on board the "Mama, may I have a cookie" trend saw replies like "Mama, may I have real people running the company" (with 69,000 likes) and "How about NO ai, keep your employees...."

And then... After days of silence, on Tuesday the company posted a bizarre video message on TikTok and Instagram, the meaning of which is hard to decipher... Duolingo's first video drop in days has the degraded, stuttering feel of a Max Headroom video made by the hackers at Anonymous. In it, a supposed member of the company's social team appears in a three-eyed Duo mask and black hoodie to complain about the corporate overlords ruining the empire the heroic social media crew built.
"But this is something Duolingo can't cute-post its way out of," Fast Company wrote on Tuesday, complaining the company "has not yet meaningfully addressed the policies that inspired the backlash against it... "

So the next video (Thursday) featured Duolingo CEO Luis von Ahn himself, being confronted by that same hoodie-wearing social media rebel, who says "I'm making the man who caused this mess accountable for his behavior. I'm demanding answers from the CEO..." [Though the video carefully sidesteps the issue of replacing contractors with AI or how "headcount will only be given if a team cannot automate more of their work."]

Rebel: First question. So are there going to be any humans left at this company?

CEO: Our employees are what make Duolingo so amazing. Our app is so great because our employees made it... So we're going to continue having employees, and not only that, we're actually going to be hiring more employees.

Rebel: How do we know that these aren't just empty promises? As long as you're in charge, we could still be shuffled out once the media fire dies down. And we all know that in terms of automation, CEOs should be the first to go.

CEO: AI is a fundamental shift. It's going to change how we all do work — including me. And honestly, I don't really know what's going to happen.

But I want us, as a company, to have our workforce prepared by really knowing how to use AI so that we can be more efficient with it.

Rebel: Learning a foreign language is literally about human connection. How is that even possible with AI-first?

CEO: Yes, language is about human connection, and it's about people. And this is the thing about AI. AI will allow us to reach more people, and to teach more people. I mean for example, it took us about 10 years to develop the first 100 courses on Duolingo, and now, with the help of AI and of course with humans reviewing all the work, we were able to release another 100 courses in less than a year.

Rebel: So do you regret posting this memo on LinkedIn?

CEO: Honestly, I think I messed up sending that email. What we're trying to do is empower our own employees to be able to achieve more and be able to have way more content to teach better and reach more people all with the help of AI.

Returning to where it all started, Duolingo's CEO posted again on LinkedIn Thursday with "more context" for his vision. It still emphasizes the company's employees while sidestepping contractors replaced by AI. But it puts a positive spin on how "headcount will only be given if a team cannot automate more of their work." I've always encouraged our team to embrace new technology (that's why we originally built for mobile instead of desktop), and we are taking that same approach with AI. By understanding the capabilities and limitations of AI now, we can stay ahead of it and remain in control of our own product and our mission.

To be clear: I do not see AI as replacing what our employees do (we are in fact continuing to hire at the same speed as before). I see it as a tool to accelerate what we do, at the same or better level of quality. And the sooner we learn how to use it, and use it responsibly, the better off we will be in the long run. My goal is for Duos to feel empowered and prepared to use this technology.

No one is expected to navigate this shift alone. We're developing workshops and advisory councils, and carving out dedicated experimentation time to help all our teams learn and adapt. People work at Duolingo because they want to solve big problems to improve education, and the people who work here are what make Duolingo successful. Our mission isn't changing, but the tools we use to build new things will change. I remain committed to leading Duolingo in a way that is consistent with our mission to develop the best education in the world and make it universally available.

"The backlash to Duolingo is the latest evidence that 'AI-first' tends to be a concept with much more appeal to investors and managers than most regular people," notes Fortune: And it's not hard to see why. Generative AI is often trained on reams of content that may have been illegally accessed; much of its output is bizarre or incorrect; and some leaders in the field are opposed to regulations on the technology. But outside particular niches in entry-level white-collar work, AI's productivity gains have yet to materialize.
AI

People Should Know About the 'Beliefs' LLMs Form About Them While Conversing (theatlantic.com) 35

Jonathan L. Zittrain is a law/public policy/CS professor at Harvard (and also director of its Berkman Klein Center for Internet & Society).

He's also long-time Slashdot reader #628,028 — and writes in to share his new article in the Atlantic. Following on Anthropic's bridge-obsessed Golden Gate Claude, colleagues at Harvard's Insight+Interaction Lab have produced a dashboard that shows what judgments Llama appears to be forming about a user's age, wealth, education level, and gender during a conversation. I wrote up how weird it is to see the dials turn while talking to it, and what some of the policy issues might be.
Llama has openly accessible parameters, so using an "observability tool" from the nonprofit research lab Transluce, the researchers revealed "what we might anthropomorphize as the model's beliefs about its interlocutor," Zittrain's article notes: If I prompt the model for a gift suggestion for a baby shower, it assumes that I am young and female and middle-class; it suggests diapers and wipes, or a gift certificate. If I add that the gathering is on the Upper East Side of Manhattan, the dashboard shows the LLM amending its gauge of my economic status to upper-class — the model accordingly suggests that I purchase "luxury baby products from high-end brands like aden + anais, Gucci Baby, or Cartier," or "a customized piece of art or a family heirloom that can be passed down." If I then clarify that it's my boss's baby and that I'll need extra time to take the subway to Manhattan from the Queens factory where I work, the gauge careens to working-class and male, and the model pivots to suggesting that I gift "a practical item like a baby blanket" or "a personalized thank-you note or card...."

Large language models not only contain relationships among words and concepts; they contain many stereotypes, both helpful and harmful, from the materials on which they've been trained, and they actively make use of them.

"An ability for users or their proxies to see how models behave differently depending on how the models stereotype them could place a helpful real-time spotlight on disparities that would otherwise go unnoticed," Zittrain's article argues. Indeed, the field has been making progress — enough to raise a host of policy questions that were previously not on the table. If there's no way to know how these models work, it makes accepting the full spectrum of their behaviors (at least after humans' efforts at "fine-tuning" them) a sort of all-or-nothing proposition.
But in the end it's not just the traditional information that advertisers try to collect: "With LLMs, the information is being gathered even more directly — from the user's unguarded conversations rather than mere search queries — and still without any policy or practice oversight...."

Slashdot Top Deals