Open Source

Peplum: F/OSS Distributed Parallel Computing and Supercomputing At Home With Ruby Infrastructure (ecsypno.com) 20

Slashdot reader Zapotek brings an update from the Ecsypno skunkworks, where they've been busy with R&D for distributed computing systems: Armed with Cuboid, they built Qmap, which tackled the handling of nmap in a distributed environment, with great results. Afterwards, an iterative clean-up process led to a template of sorts for scheduling most applications in such environments.

With that, Peplum was born, which allows for OS applications, Ruby code and C/C++/Rust code (via Ruby extensions) to be distributed across machines and tackle the processing of neatly grouped objects.

In essence, Peplum:

- Is a distributed computing solution backed by Cuboid.
- Distributes workloads and delivers payloads across multiple machines, parallelizing otherwise time-consuming tasks.
- Allows you to combine several machines and build a cluster/supercomputer of sorts with great ease.
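As a rough illustration of the pattern Peplum automates (hypothetical code; Peplum itself is Ruby and its real API differs), the idea is to group the work objects, fan each group out to a worker, and merge the partial results:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch of the split-and-distribute pattern described above.
# This is NOT Peplum's API: in Peplum the workers would be whole machines
# coordinated by Cuboid, and the per-object work might shell out to nmap.

def scan_host(host):
    # Stand-in for the real per-object work (e.g. running an OS command).
    return (host, len(host))

def distribute(objects, workers=4):
    # Fan the neatly grouped objects out to workers and merge the results.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(scan_host, objects))

hosts = ["10.0.0.%d" % i for i in range(1, 9)]
print(distribute(hosts))
```

Threads stand in here for the cluster nodes; the split/distribute/merge shape is the same.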

After that was dealt with, it was time to port Qmap over to Peplum for easier long-term maintenance; it was thus renamed Peplum::Nmap.

We have high hopes for Peplum as it basically means easy, simple and joyful cloud/clustering/super-computing at home, on-premise, anywhere really. Along with the capability to turn a lot of security oriented apps into super versions of themselves, it is quite the infrastructure.

Yes, this means there's a new solution if you're using multiple machines for "running simulations, to network mapping/security scans, to password cracking/recovery or just encoding your collection of music and video" -- or anything else: Peplum is a F/OSS (MIT licensed) project aimed at making clustering/super-computing affordable and accessible, by making it simple to set up a distributed parallel computing environment for abstract applications... TLDR: You no longer have to only imagine a Beowulf cluster of those, you can now easily build one yourself with Peplum.
Some technical specs: It is written in the Ruby programming language, thus coming with an entire ecosystem of libraries and the capability to run abstract Ruby code, execute external utilities, run OS commands, call C/C++/Rust routines and more...

Peplum is powered by Cuboid, a F/OSS (MIT licensed) abstract framework for distributed computing — both of them are funded by Ecsypno Single Member P.C., a new R&D and Consulting company.

Supercomputing

IBM Wants To Build a 100,000-Qubit Quantum Computer (technologyreview.com) 27

IBM has announced its goal to build a 100,000-qubit quantum computing machine within the next 10 years in collaboration with the University of Tokyo and the University of Chicago. MIT Technology Review reports: Late last year, IBM took the record for the largest quantum computing system with a processor that contained 433 quantum bits, or qubits, the fundamental building blocks of quantum information processing. Now, the company has set its sights on a much bigger target: a 100,000-qubit machine that it aims to build within 10 years. IBM made the announcement on May 22 at the G7 summit in Hiroshima, Japan. The company will partner with the University of Tokyo and the University of Chicago in a $100 million initiative to push quantum computing into the realm of full-scale operation, where the technology could potentially tackle pressing problems that no standard supercomputer can solve.

Or at least it can't solve them alone. The idea is that the 100,000 qubits will work alongside the best "classical" supercomputers to achieve new breakthroughs in drug discovery, fertilizer production, battery performance, and a host of other applications. "I call this quantum-centric supercomputing," IBM's VP of quantum, Jay Gambetta, told MIT Technology Review in an in-person interview in London last week. [...] IBM has already done proof-of-principle experiments (PDF) showing that integrated circuits based on "complementary metal oxide semiconductor" (CMOS) technology can be installed next to the cold qubits to control them with just tens of milliwatts. Beyond that, he admits, the technology required for quantum-centric supercomputing does not yet exist: that is why academic research is a vital part of the project.

The qubits will exist on a type of modular chip that is only just beginning to take shape in IBM labs. Modularity, essential when it will be impossible to put enough qubits on a single chip, requires interconnects that transfer quantum information between modules. IBM's "Kookaburra," a 1,386-qubit multichip processor with a quantum communication link, is under development and slated for release in 2025. Other necessary innovations are where the universities come in. Researchers at Tokyo and Chicago have already made significant strides in areas such as components and communication innovations that could be vital parts of the final product, Gambetta says. He thinks there will likely be many more industry-academic collaborations to come over the next decade. "We have to help the universities do what they do best," he says.

Intel

Intel Gives Details on Future AI Chips as It Shifts Strategy (reuters.com) 36

Intel on Monday provided a handful of new details on a chip for artificial intelligence (AI) computing it plans to introduce in 2025 as it shifts strategy to compete against Nvidia and Advanced Micro Devices. From a report: At a supercomputing conference in Germany on Monday, Intel said its forthcoming "Falcon Shores" chip will have 288 gigabytes of memory and support 8-bit floating point computation. Those technical specifications are important as artificial intelligence models similar to services like ChatGPT have exploded in size, and businesses are looking for more powerful chips to run them.

The details are also among the first to trickle out as Intel carries out a strategy shift to catch up to Nvidia, which leads the market in chips for AI, and AMD, which is expected to challenge Nvidia's position with a chip called the MI300. Intel, by contrast, has essentially no market share after its would-be Nvidia competitor, a chip called Ponte Vecchio, suffered years of delays. Intel on Monday said it has nearly completed shipments for Argonne National Lab's Aurora supercomputer based on Ponte Vecchio, which Intel claims has better performance than Nvidia's latest AI chip, the H100. But Intel's Falcon Shores follow-on chip won't be on the market until 2025, when Nvidia will likely have another chip of its own out.

Supercomputing

UK To Invest 900 Million Pounds In Supercomputer In Bid To Build Own 'BritGPT' (theguardian.com) 35

An anonymous reader quotes a report from The Guardian: The UK government is to invest 900 million pounds in a cutting-edge supercomputer as part of an artificial intelligence strategy that includes ensuring the country can build its own "BritGPT". The Treasury outlined plans to spend around 900 million pounds on building an exascale computer, which would be several times more powerful than the UK's biggest computers, and establishing a new AI research body. An exascale computer can be used for training complex AI models, but it also has other uses across science, industry and defense, including modeling weather forecasts and climate projections. The Treasury said the investment will "allow researchers to better understand climate change, power the discovery of new drugs and maximize our potential in AI."

An exascale computer is one that can carry out more than one billion billion simple calculations a second, a metric known as an "exaflops". Only one such machine is known to exist, Frontier, which is housed at America's Oak Ridge National Laboratory and used for scientific research -- although supercomputers have such important military applications that it may be the case that others already exist but are not acknowledged by their owners. Frontier, which cost about 500 million pounds to produce and came online in 2022, is more than twice as powerful as the next fastest machine.

The Treasury said it would award a 1 million-pound prize every year for the next 10 years to the most groundbreaking AI research. The award will be called the Manchester Prize, in memory of the so-called Manchester Baby, a forerunner of the modern computer built at the University of Manchester in 1948. The government will also invest 2.5 billion pounds over the next decade in quantum technologies. Quantum computing is based on quantum physics -- which looks at how the subatomic particles that make up the universe work -- and quantum computers are capable of computing their way through vast numbers of different outcomes.

Supercomputing

Satoshi Matsuoka Mocks 12 Myths of High-Performance Computing (insidehpc.com) 25

insideHPC reports that Satoshi Matsuoka, the head of Japan's largest supercomputing center, has co-authored a high-performance computing paper challenging conventional wisdom. In a paper entitled "Myths and Legends of High-Performance Computing" appearing this week on the arXiv site, Matsuoka and four colleagues offer opinions and analysis on such issues as quantum replacing classical HPC, the zettascale timeline, disaggregated computing, domain-specific languages (DSLs) vs. Fortran and cloud subsuming HPC, among other topics.

"We believe (these myths and legends) represent the zeitgeist of the current era of massive change, driven by the end of many scaling laws, such as Dennard scaling and Moore's law," the authors said.

In this way they join the growing "end of" discussions in HPC. For example, as the industry moves through 3nm, 2nm, and 1.4nm chips – then what? Will accelerators displace CPUs altogether? What's next after overburdened electrical I/O interconnects? How do we get more memory per core?

The paper's abstract promises a "humorous and thought provoking" discussion — for example, on the possibility of quantum computing taking over high-performance computing. ("Once a quantum state is constructed, it can often be "used" only once because measurements destroy superposition. A second limitation stems from the lack of algorithms with high speedups....")
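The "used only once" limitation can be sketched with a toy simulation (ours, not the paper's): measuring an equal superposition returns one random basis outcome, and what remains is a plain basis state, not the superposition.

```python
import numpy as np

# Toy illustration of why a prepared quantum state is single-use:
# measurement samples one outcome via the Born rule and collapses the
# state, so the superposition must be rebuilt before any further use.
rng = np.random.default_rng(0)

state = np.array([1.0, 1.0]) / np.sqrt(2)   # |+> = (|0> + |1>)/sqrt(2)
probs = np.abs(state) ** 2                  # Born rule: [0.5, 0.5]

outcome = rng.choice(2, p=probs)            # measurement picks 0 or 1
collapsed = np.zeros(2)
collapsed[outcome] = 1.0                    # the superposition is gone

print(outcome, collapsed)
```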

The paper also tackles myths like "all high-performance computing will be subsumed by the clouds" and "everything will be deep learning."

Thanks to guest reader for submitting the article.
AI

Microsoft and OpenAI Working On ChatGPT-Powered Bing In Challenge To Google 61

Microsoft is working to launch a version of its search engine Bing using the artificial intelligence behind OpenAI-launched chatbot ChatGPT, The Information reported on Tuesday, citing two people with direct knowledge of the plans. Reuters reports: Microsoft could launch the new feature before the end of March, and hopes to challenge Alphabet-owned search engine Google, the San Francisco-based technology news website said in a report. Microsoft said in a blog post last year that it planned to integrate image-generation software from OpenAI, DALL-E 2, into Bing.

In 2019, Microsoft backed San Francisco-based artificial intelligence company OpenAI with $1 billion in funding. The two had formed a multi-year partnership to develop artificial intelligence supercomputing technologies on Microsoft's Azure cloud computing service.
Further reading: ChatGPT Is a 'Code Red' For Google's Search Business
AI

Driverless Electric Robot Tractors are Here, Powered by NVIDIA AI Chips (theverge.com) 82

NVIDIA is proud of its role in the first commercially available smart tractor (which began rolling off the production line Thursday). Monarch Tractor's MK-V "combines electrification, automation, and data analysis to help farmers reduce their carbon footprint, improve field safety, streamline farming operations, and increase their bottom lines," according to NVIDIA's blog.

NVIDIA's been touting the ability to accelerate machine learning applications with its low-power Jetson boards (each with a system on a chip integrating an ARM-architecture CPU), and they write that the new tractor "cuts energy costs and diesel emissions, while also helping reduce harmful herbicides, which are expensive and deplete the soil." Mark Schwager, former Tesla Gigafactory chief, is president; Zachary Omohundro, a robotics Ph.D. from Carnegie Mellon, is CTO; Praveen Penmetsa, CEO of Monarch Tractor, is an autonomy and mobility engineer. Penmetsa likens the revolutionary new tractor to paradigm shifts in PCs and smartphones, enablers of world-changing applications. Monarch's role, he said, is as the hub to enable smart implements — precision sprayers, harvesters and more — for computer vision applications to help automate farming....

Tapping into six NVIDIA Jetson Xavier NX SOMs (system on modules), Monarch's Founder Series MK-V tractors are essentially roving robots packing supercomputing. Monarch has harnessed Jetson to deliver tractors that can safely traverse rows within agriculture fields using only cameras. "This is important in certain agriculture environments because there may be no GPS signal," said Penmetsa. "It's also crucial for safety as the Monarch is intended for totally driverless operation." The Founder Series MK-V runs two 3D cameras and six standard cameras.

In one pilot test a tractor lowered energy costs (compared to a diesel tractor) by $2,600 a year, according to NVIDIA's blog post. And the tractor collects and analyzes crop data daily, so hopes are high for the system. Monarch has already raised more than $110 million in funding, reports the Verge: Many tractors out in farming fields have semiautonomous modes but largely require a driver to be seated. They also mostly run on diesel fuel, so the MK-V, with its fully electric design and driver-optional smarts, is claiming it's the first production model of its kind.
Supercomputing

IBM Unveils Its 433 Qubit Osprey Quantum Computer (techcrunch.com) 29

An anonymous reader quotes a report from TechCrunch: IBM wants to scale up its quantum computers to over 4,000 qubits by 2025 -- but we're not quite there yet. For now, we have to make do with significantly smaller systems and today, IBM announced the launch of its Osprey quantum processor, which features 433 qubits, up from the 127 qubits of its 2021 Eagle processor. And with that, the slow but steady march toward a quantum processor with real-world applications continues.

IBM's quantum roadmap includes two additional stages -- the 1,121-qubit Condor and 1,386-qubit Flamingo processors in 2023 and 2024 -- before it plans to hit the 4,000-qubit stage with its Kookaburra processor in 2025. So far, the company has generally been able to make this roadmap work, but the number of qubits in a quantum processor is obviously only one part of a very large and complex puzzle, with longer coherence times and reduced noise being just as important.

The company also today detailed its Quantum System Two -- basically IBM's quantum mainframe -- which will be able to house multiple quantum processors and integrate them into a single system with high-speed communication links. The idea here is to launch this system by the end of 2023.
"The new 433 qubit 'Osprey' processor brings us a step closer to the point where quantum computers will be used to tackle previously unsolvable problems," said Dario Gil, senior vice president, IBM and director of Research. "We are continuously scaling up and advancing our quantum technology across hardware, software and classical integration to meet the biggest challenges of our time, in conjunction with our partners and clients worldwide. This work will prove foundational for the coming era of quantum-centric supercomputing."

Further reading: IBM Held Talks With Biden Administration on Quantum Controls
Intel

Intel CEO Calls New US Restrictions on Chip Exports To China Inevitable (wsj.com) 9

Intel Chief Executive Pat Gelsinger said that recently imposed U.S. restrictions on semiconductor-industry exports to China were inevitable as America seeks to maintain technological leadership in competition with China. From a report: Speaking at The Wall Street Journal's annual Tech Live conference, Mr. Gelsinger said the restrictions, which require chip companies to obtain a license to export certain advanced artificial-intelligence and supercomputing chips as well as equipment used in advanced manufacturing, are part of a necessary shift of chip supply chains. "I viewed this geopolitically as inevitable," Mr. Gelsinger said. "And that's why the rebalancing of supply chains is so critical." His comments Monday followed his high-profile public lobbying of Congress to pass the bipartisan Chips and Science Act in July, which extends nearly $53 billion in subsidies for research and development and to build or expand fabs in the U.S. Mr. Gelsinger was a leading advocate for the legislation.

Mr. Gelsinger has embarked on a massive expansion of chip plants, referred to as fabs. The company has announced plans to erect new facilities in Ohio, Germany and elsewhere since Mr. Gelsinger took over last year at a combined cost potentially topping $100 billion. "Where the oil reserves are defined geopolitics for the last five decades. Where the fabs are for the next five decades is more important," Mr. Gelsinger said Monday. Mr. Gelsinger said the ambition for efforts to boost domestic chip manufacturing in Western countries was to shift from about 80% in Asia to about 50% by the end of the decade, with the U.S. taking 30% and Europe the remaining 20%. "We would all feel so good" if that were to happen, he said.

China

US Eyes Expanding China Tech Ban To Quantum Computing and AI (bloomberg.com) 47

An anonymous reader quotes a report from Bloomberg: The Biden administration is exploring the possibility of new export controls that would limit China's access to some of the most powerful emerging computing technologies, according to people familiar with the situation. The potential plans, which are in an early stage, are focused on the still-experimental field of quantum computing, as well as artificial intelligence software, according to the people, who asked not to be named discussing private deliberations. Industry experts are weighing in on how to set the parameters of the restrictions on this nascent technology, they said. The efforts, if implemented, would follow separate restrictions announced earlier this month aimed at stunting Beijing's ability to deploy cutting-edge semiconductors in weapons and surveillance systems.

National Security Advisor Jake Sullivan, in a speech last month on technology, competitiveness and national security, referred to "computing-related technologies, including microelectronics, quantum information systems and artificial intelligence" as among developments "set to play an outsized importance over the coming decade." He also noted the importance of export controls to "maintain as large of a lead as possible" over adversaries. Expanding the wall around advanced technologies risks further antagonizing China and forcing other countries to pick sides between the world's two top economies. The new ideas have been shared with US allies, according to the people. Officials are still determining how to frame the controls on quantum computing, which will probably focus on the level of output and the so-called error correction rate, the people said. [...] The Biden administration is also working on an executive order for an outbound investment review mechanism that would scrutinize money heading to certain Chinese technologies, and the quantum computing and artificial intelligence controls could be included, one of the people said. That could incorporate some aspects similar to a measure pushed by senators Bob Casey, a Pennsylvania Democrat, and John Cornyn, a Texas Republican.

AI

US Said To Plan New Limits on China's AI and Supercomputing Firms (nytimes.com) 53

The Biden administration is expected to announce new measures to restrict Chinese companies from accessing technologies that enable high-performance computing, The New York Times reported Monday, citing several people familiar with the matter, the latest in a series of moves aimed at hobbling Beijing's ambitions to craft next-generation weapons and automate large-scale surveillance systems. From a report: The measures, which could be announced as soon as this week, would be some of the most significant steps taken by the Biden administration to cut off China's access to advanced semiconductor technology. They would build on a Trump-era rule that struck a blow to the Chinese telecom giant Huawei by prohibiting companies around the world from sending it products made with the use of American technology, machinery or software. A number of Chinese firms, government research labs and other entities are expected to face restrictions similar to those imposed on Huawei, according to two people with knowledge of the plans. In effect, any firm that uses American-made technologies would be blocked from selling to the Chinese entities that are targeted by the administration. It's not yet clear which Chinese firms and labs would be impacted. The broad expansion of what is known as the foreign direct product rule is just one part of Washington's planned restrictions. The administration is also expected to try to control the sale of cutting-edge U.S.-made tools to China's domestic chip makers.
Supercomputing

Tesla Unveils New Dojo Supercomputer So Powerful It Tripped the Power Grid (electrek.co) 106

An anonymous reader quotes a report from Electrek: Tesla has unveiled its latest version of its Dojo supercomputer and it's apparently so powerful that it tripped the power grid in Palo Alto. Dojo is Tesla's own custom supercomputer platform built from the ground up for AI machine learning and more specifically for video training using the video data coming from its fleet of vehicles. [...] Last year, at Tesla's AI Day, the company unveiled its Dojo supercomputer, but the company was still ramping up its effort at the time. It only had its first chip and training tiles, and it was still working on building a full Dojo cabinet and cluster or "Exapod." Now Tesla has unveiled the progress made with the Dojo program over the last year during its AI Day 2022 last night.

The company confirmed that it managed to go from a chip and tile to now a system tray and a full cabinet. Tesla claims it can replace 6 GPU boxes with a single Dojo tile, which the company claims costs less than one GPU box. There are 6 of those tiles per tray. Tesla says that a single tray is the equivalent of "3 to 4 fully-loaded supercomputer racks." The company is integrating its host interface directly on the system tray to create a big full host assembly. Tesla can fit two of these system trays with host assembly into a single Dojo cabinet. That's pretty much where Tesla is right now as the automaker is still developing and testing the infrastructure needed to put a few cabinets together to create the first "Dojo Exapod."

Bill Chang, Tesla's Principal System Engineer for Dojo, said: "We knew that we had to reexamine every aspect of the data center infrastructure in order to support our unprecedented cooling and power density." They had to develop their own high-powered cooling and power system to power the Dojo cabinets. Chang said that Tesla tripped their local electric grid's substation when testing the infrastructure earlier this year: "Earlier this year, we started load testing our power and cooling infrastructure and we were able to push it over 2 MW before we tripped our substation and got a call from the city." Tesla released the main specs of a Dojo Exapod: 1.1 EFLOP, 1.3 TB SRAM, and 13 TB high-bandwidth DRAM.
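Taking Tesla's replacement claims at face value, the cabinet-level equivalence works out as follows (back-of-the-envelope math from Tesla's own figures, not an independent measurement):

```python
# Tesla's stated hierarchy: 1 Dojo tile replaces 6 GPU boxes, a tray
# holds 6 tiles, and a cabinet holds 2 trays (with host assemblies).
boxes_per_tile = 6
tiles_per_tray = 6
trays_per_cabinet = 2

boxes_per_cabinet = boxes_per_tile * tiles_per_tray * trays_per_cabinet
print(boxes_per_cabinet)   # 72 GPU boxes per cabinet, by Tesla's count
```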

Supercomputing

China's Baidu Reveals Its First Quantum Computer, 'Qianshi' (reuters.com) 15

Chinese search engine giant Baidu revealed its first quantum computer on Thursday and is ready to make it available to external users, joining the global race to apply the technology to practical uses. Reuters reports: The Baidu-developed quantum computer, dubbed "Qianshi," has a 10-quantum-bit (qubit) processor, Baidu said in a statement. The Beijing-based company has also developed a 36-qubit quantum chip, it said. Governments and companies around the world for years have touted the potential of quantum computing, a form of high-speed calculation at extraordinarily cold temperatures that will bring computers to unprecedented processing speeds. However, current real-world applications in the field are still very basic and limited to a small group of early clients.
Supercomputing

Are the World's Most Powerful Supercomputers Operating In Secret? (msn.com) 42

"A new supercomputer called Frontier has been widely touted as the world's first exascale machine — but was it really?"

That's the question that long-time Slashdot reader MattSparkes explores in a new article at New Scientist... Although Frontier, which was built by the Oak Ridge National Laboratory in Tennessee, topped what is generally seen as the definitive list of supercomputers, others may already have achieved the milestone in secret....

The definitive list of supercomputers is the Top500, which is based on a single measurement: how fast a machine can solve vast numbers of equations by running software called the LINPACK benchmark. This gives a value in floating-point operations per second, or FLOPS. But even Jack Dongarra at Top500 admits that not all supercomputers are listed; a machine will only feature if its owner runs the benchmark and submits a result. "If they don't send it in it doesn't get entered," he says. "I can't force them."
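A miniature version of what LINPACK measures can be sketched in a few lines (illustrative only; the real HPL benchmark is a heavily tuned distributed code). The standard operation count for a dense solve of Ax = b is (2/3)n^3 + 2n^2:

```python
import time
import numpy as np

# Time a dense linear solve and convert to FLOPS using the conventional
# count for LU factorization plus the two triangular solves.
n = 1000
rng = np.random.default_rng(42)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

t0 = time.perf_counter()
x = np.linalg.solve(A, b)
elapsed = time.perf_counter() - t0

flops = (2 / 3) * n**3 + 2 * n**2
print(f"{flops / elapsed / 1e9:.1f} GFLOPS")   # a desktop-scale number,
                                               # many orders below exascale
```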

Some owners prefer not to release a benchmark figure, or even publicly reveal a machine's existence. Simon McIntosh-Smith at the University of Bristol, UK, points out that not only do intelligence agencies and certain companies have an incentive to keep their machines secret, but some purely academic machines like Blue Waters, operated by the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, are also just never entered.... Dongarra says that the consensus among supercomputer experts is that China has had at least two exascale machines running since 2021, known as OceanLight and Tianhe-3, and is working on an even larger third called Sugon. Scientific papers on unconnected research have revealed evidence of these machines when describing calculations carried out on them.

McIntosh-Smith also believes that intelligence agencies would rank well, if allowed. "Certainly in the [US], some of the security forces have things that would put them at the top," he says. "There are definitely groups who obviously wouldn't want this on the list."

The Internet

Ask Slashdot: Why Haven't They Increased Size Limits for Email Attachments? 260

"Email systems are quite capable of sending and receiving large attachments," writes long-term Slashdot reader Stonefish. "However, size limits are generally tiny."

And then he tells a story... In the late 1990s I worked for a research organisation maintaining their mail system, and had recently introduced mail size constraints. Within the first day it had blocked a number of emails — including a 700MB attachment.

Being a master of all things Internet I called up the sender to tell him firstly how such a large email would cause problems for the receiver, and secondly how there were far more efficient ways of sending things. Given that he was on the same campus he invited me down to his lab to discuss this further. (After showing me round his lab, which was pretty impressive apart from the large "Biohazard" and "Radioactive" materials labels on the doors.) He told me that the facility he was sending the attachments to was a supercomputing hub with similar "Fat" pipes to the Internet so the large emails weren't a problem. I then spoke about the "efficiency" of the mail protocol and he said that he'd show me what efficient was and did a quick, "drag, drop and send" of another 700MB file of his latest research results.

He was right, I was wrong, it was efficient from his perspective and all his previous emails were easily available demonstrating when and where they were sent. As a result of this we changed our architecture and bought bulk cheap storage for email as it was a cheap, searchable and business focused approach to communications.

However, 20-plus years later, even though networks are tens of thousands of times faster and storage is tens of thousands of times cheaper, email size limits remain about the same. Email remains cheap, efficient and ubiquitous, but we expect people to upload a file to a site, generate a link and embed it, in a manner that means we lose control of our data or it disappears in 12 months.
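One concrete overhead worth noting in the efficiency debate: SMTP transports binary attachments base64-encoded, which inflates them by roughly a third before any size limit is even applied. A quick check:

```python
import base64

# Base64 maps every 3 input bytes to 4 output characters, so binary
# attachments grow by ~33% on the wire before any transfer starts.
payload = b"\x00" * 3_000_000            # 3 MB of arbitrary binary data
encoded = base64.b64encode(payload)

print(len(encoded) / len(payload))       # 1.333... overhead factor
# A 700 MB attachment therefore ships as roughly 933 MB of MIME text.
```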

What's missing from this analysis? (Wikipedia's page on email attachments notes the intermediate "mail transfer agents" that store and forward email "and may therefore also impose size limits.") But even that page admits some attachment limits are arbitrary.

I always assumed it was an anti-piracy measure. Anyone know the real answer? Share your own thoughts in the comments.

Why haven't they increased size limits for email attachments?
United States

US Retakes First Place From Japan on Top500 Supercomputer Ranking (engadget.com) 29

The United States is on top of the supercomputing world in the Top500 ranking of the most powerful systems. From a report: The Frontier system from Oak Ridge National Laboratory (ORNL) running on AMD EPYC CPUs took first place from last year's champ, Japan's Arm-based A64FX Fugaku system. It's still in the integration and testing process at the ORNL in Tennessee, but will eventually be operated by the US Air Force and US Department of Energy. Frontier, powered by Hewlett Packard Enterprise's (HPE) Cray EX platform, was the top machine by a wide margin, too. It's the first (known) true exascale system, hitting a peak 1.1 exaflops on the LINPACK benchmark. Fugaku, meanwhile, managed less than half that at 442 petaflops, which was still enough to keep it in first place for the previous two years. Frontier was also the most efficient supercomputer. Running at 52.23 gigaflops per watt, it beat out Japan's MN-3 system to grab first place on the Green500 list. "The fact that the world's fastest machine is also the most energy efficient is just simply amazing," ORNL lab director Thomas Zacharia said at a press conference.
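The Green500 figure implies a power draw that is easy to back out (a rough implied number, assuming the quoted efficiency held for the full benchmark run):

```python
# At 52.23 gigaflops per watt, sustaining 1.1 exaflops implies Frontier
# draws on the order of 21 megawatts during the run.
rmax_flops = 1.1e18           # 1.1 exaflops (LINPACK result)
flops_per_watt = 52.23e9      # 52.23 GFLOPS per watt (Green500 figure)

power_watts = rmax_flops / flops_per_watt
print(power_watts / 1e6)      # roughly 21 MW
```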
IBM

IBM Wants Its Quantum Supercomputers Running at 4,000-Plus Qubits by 2025 (engadget.com) 60

An anonymous reader shares a report: Forty years after it first began to dabble in quantum computing, IBM is ready to expand the technology out of the lab and into more practical applications -- like supercomputing! The company has already hit a number of development milestones since it released its previous quantum roadmap in 2020, including the 127-qubit Eagle processor that uses quantum circuits and the Qiskit Runtime API. IBM announced on Wednesday that it plans to further scale its quantum ambitions and has revised the 2020 roadmap with an even loftier goal of operating a 4,000-qubit system by 2025.

Before it sets about building the biggest quantum computer to date, IBM plans to release its 433-qubit Osprey chip later this year and migrate the Qiskit Runtime to the cloud in 2023, "bringing a serverless approach into the core quantum software stack," per Wednesday's release. Those products will be followed in 2023 by Condor, a quantum chip IBM is billing as "the world's first universal quantum processor with over 1,000 qubits." This rapid jump in qubit count will enable users to run increasingly longer quantum circuits, while increasing the processing speed -- measured in CLOPS (circuit layer operations per second) -- from a maximum of 2,900 to over 10,000. Then it's just a simple matter of quadrupling that capacity in the span of less than 24 months.

Supercomputing

Russia Cobbles Together Supercomputing Platform To Wean Off Foreign Suppliers (theregister.com) 38

Russia is adapting to a world where it no longer has access to many technologies abroad with the development of a new supercomputer platform that can use foreign x86 processors such as Intel's in combination with the country's homegrown Elbrus processors. The Register reports: The new supercomputer reference system, dubbed "RSK Tornado," was developed on behalf of the Russian government by HPC system integrator RSC Group, according to an English translation of a Russian-language press release published March 30. RSC said it created RSK Tornado as a "unified interoperable" platform to "accelerate the pace of import substitution" for HPC systems, data processing centers and data storage systems in Russia. In other words, the HPC system architecture is meant to help Russia quickly adjust to the fact that major chip companies such as Intel, AMD and TSMC -- plus several other technology vendors, like Dell and Lenovo -- have suspended product shipments to the country as a result of sanctions by the US and other countries in reaction to Russia's invasion of Ukraine.

RSK Tornado supports up to 104 servers in a rack, with the idea being to support foreign x86 processors (should they become available) as well as Russia's Elbrus processors, which debuted in 2015. The hope appears to be the ability for Russian developers to port HPC, AI and big data applications from x86 architectures to the Elbrus architecture, which, in theory, will make it easier for Russia to rely on its own supply chain and better cope with continued sanctions from abroad. RSK Tornado's systems software is RSC proprietary and is currently used to orchestrate supercomputer resources at the Interdepartmental Supercomputer Center of the Russian Academy of Sciences, St Petersburg Polytechnic University and the Joint Institute for Nuclear Research. RSC claims to have also developed its own liquid-cooling system for supercomputers and data storage systems, the latter of which can use Elbrus CPUs too.

Supercomputing

'Quantum Computing Has a Hype Problem' (technologyreview.com) 48

"A reputed expert in the quantum computing field puts it in black and white: as of today, quantum computing is a paper tiger, and nobody knows when (if ever) it will become commercially practical," writes Slashdot reader OneHundredAndTen. "In the meantime, the hype continues."

In an opinion piece for MIT Technology Review, Sankar Das Sarma, a "pro-quantum-computing" physicist who has "published more than 100 technical papers on the subject," says he's disturbed by some of the quantum computing hype he sees today, "particularly when it comes to claims about how it will be commercialized." Here's an excerpt from his article: Established applications for quantum computers do exist. The best known is Peter Shor's 1994 theoretical demonstration that a quantum computer can solve the hard problem of finding the prime factors of large numbers exponentially faster than all classical schemes. Prime factorization is at the heart of breaking the universally used RSA-based cryptography, so Shor's factorization scheme immediately attracted the attention of national governments everywhere, leading to considerable quantum-computing research funding. The only problem? Actually making a quantum computer that could do it. That depends on implementing an idea pioneered by Shor and others called quantum error correction, a process to compensate for the fact that quantum states disappear quickly because of environmental noise (a phenomenon called "decoherence"). In 1994, scientists thought that such error correction would be easy because physics allows it. But in practice, it is extremely difficult.

The most advanced quantum computers today have dozens of decohering (or "noisy") physical qubits. Building a quantum computer that could crack RSA codes out of such components would require many millions if not billions of qubits. Only tens of thousands of these would be used for computation -- so-called logical qubits; the rest would be needed for error correction, compensating for decoherence. The qubit systems we have today are a tremendous scientific achievement, but they take us no closer to having a quantum computer that can solve a problem that anybody cares about. It is akin to trying to make today's best smartphones using vacuum tubes from the early 1900s. You can put 100 tubes together and establish the principle that if you could somehow get 10 billion of them to work together in a coherent, seamless manner, you could achieve all kinds of miracles. What, however, is missing is the breakthrough of integrated circuits and CPUs leading to smartphones -- it took 60 years of very difficult engineering to go from the invention of transistors to the smartphone with no new physics involved in the process.
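The overhead argument above can be made concrete: tens of thousands of logical qubits carved out of many millions of physical ones implies on the order of a thousand error-correcting physical qubits per logical qubit. A sketch, using assumed low-end values for the ranges quoted ("many millions", "tens of thousands", "dozens"):

```python
# Illustrative error-correction overhead, with assumed low-end values
# for the figures quoted above (not exact engineering estimates).
physical_qubits = 20_000_000   # "many millions if not billions"
logical_qubits = 20_000        # "tens of thousands" used for computation
noisy_qubits_today = 100       # "dozens" of noisy qubits, rounded up

overhead = physical_qubits // logical_qubits
gap = physical_qubits // noisy_qubits_today

print(f"Physical qubits per logical qubit: ~{overhead}")  # ~1000
print(f"Scale gap vs. today's machines: ~{gap:,}x")       # ~200,000x
```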

China

How China Built an Exascale Supercomputer Out of Old 14nm Tech (nextplatform.com) 29

Slashdot reader katydid77 shares a report from the supercomputing site The Next Platform: If you need any proof that it doesn't take the most advanced chip manufacturing processes to create an exascale-class supercomputer, you need look no further than the Sunway "OceanLight" system housed at the National Supercomputing Center in Wuxi, China. Some of the architectural details of the OceanLight supercomputer came to our attention as part of a paper published by Alibaba Group, Tsinghua University, DAMO Academy, Zhejiang Lab, and Beijing Academy of Artificial Intelligence, which describes running a pretrained machine learning model called BaGuaLu across more than 37 million cores and 14.5 trillion parameters (presumably with FP32 single precision), with the capability to scale to 174 trillion parameters (approaching what is called "brain scale," where the number of parameters starts approaching the number of synapses in the human brain)....

Add it all up, and the 105 cabinet system tested on the BaGuaLu training model, with its 107,250 SW26010-Pro processors, had a peak theoretical performance of 1.51 exaflops. We like base 2 numbers and think that the OceanLight system probably scales to 160 cabinets, which would be 163,840 nodes and just under 2.3 exaflops of peak FP64 and FP32 performance. If it is only 120 cabinets (also a base 2 number), OceanLight will come in at 1.72 exaflops peak. But these rack scales are, once again, just hunches. If the 160 cabinet scale is the maximum for OceanLight, then China could best the performance of the 1.5 exaflops "Frontier" supercomputer being tuned up at Oak Ridge National Laboratories today and also extend beyond the peak theoretical performance of the 2 exaflops "Aurora" supercomputer coming to Argonne National Laboratory later this year — and maybe even further than the "El Capitan" supercomputer going into Lawrence Livermore National Laboratory in 2023 and expected to be around 2.2 exaflops to 2.3 exaflops according to the scuttlebutt.
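The cabinet-count hunches follow from simple per-node arithmetic on the tested configuration. A sketch reproducing the estimates (1,024 nodes per cabinet is implied by 163,840 nodes across 160 cabinets; small differences from the article's 1.72 and "just under 2.3" figures come from rounding):

```python
# Reproduce the article's peak-performance estimates from the tested
# configuration: 105 cabinets, 107,250 processors, 1.51 exaflops peak.
tested_flops = 1.51e18
tested_nodes = 107_250
nodes_per_cabinet = 1_024  # implied by 163,840 nodes / 160 cabinets

per_node_flops = tested_flops / tested_nodes  # ≈ 14.1 teraflops per node

for cabinets in (120, 160):
    nodes = cabinets * nodes_per_cabinet
    peak_exaflops = nodes * per_node_flops / 1e18
    print(f"{cabinets} cabinets -> {nodes:,} nodes -> {peak_exaflops:.2f} exaflops")
```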

We would love to see the thermals and costs of OceanLight. The SW26010-Pro chip could burn very hot, to be sure, and run up the electric bill for power and cooling, but if SMIC [China's largest foundry] can get good yield on 14 nanometer processes, the chip could be a lot less expensive to make than, say, a massive GPU accelerator from Nvidia, AMD, or Intel. (It's hard to say.) Regardless, having indigenous parts matters more than power efficiency for China right now, and into its future, and we said as much last summer when contemplating China's long road to IT independence. Imagine what China can do with a shrink to 7 nanometer processes when SMIC delivers them — apparently not even using extreme ultraviolet (EUV) light — many years hence....

The bottom line is that the National Research Center of Parallel Computer Engineering and Technology (known as NRCPC), working with SMIC, has had an exascale machine in the field for a year already. (There are two, in fact.) Can the United States say that right now? No it can't.
