Feed aggregator

SpaceX Unveils Sweeping Starship V3 Upgrades

Slashdot - Fri, 15/05/2026 - 9:00am
SpaceX has detailed major Starship V3 upgrades ahead of a launch targeted as early as May 19. The changes are meant to move Starship closer to its core goals: rapid reuse, Starlink deployment, orbital refueling, and eventually Moon and Mars missions. Longtime Slashdot reader schwit1 shares a report from Teslarati. Here is a broken-down list of the key changes, starting with Super Heavy V3:

- Grid Fin Redesign: Reduced from four fins to three. Each fin is now 50% larger and stronger, repositioned for better catching and lifting performance. Fins are lowered on the booster to reduce heat exposure during hot staging, with hardware moved inside the fuel tank for protection.
- Integrated Hot Staging: Eliminates the old disposable interstage shield. The booster dome is now directly exposed to upper-stage engine ignition, protected by tank pressure and steel shielding. Interstage actuators retract after separation.
- New Fuel Transfer System: Massive redesign of the fuel transfer tube -- roughly the size of a Falcon 9 first stage -- enables simultaneous startup of all 33 Raptors for faster, more reliable flip maneuvers.
- Engine Bay/Thermal Protection: Engine shrouds removed entirely; new shielding added between engines. Propulsion and avionics are more tightly integrated. CO2 fire suppression system deleted for a simpler, lighter aft section.
- Propellant Loading Improvements: Switched from one quick disconnect to two separate systems for added redundancy and reduced pad complexity.

Next, the changes to Starship V3:

- Completely Redesigned Propulsion System: Clean-sheet redesign supports new Raptor startup, larger propellant volume, and an improved reaction control system while reducing trapped or leaked propellant risk.
- Aft Section Simplification: Fluid and electrical systems rerouted; engine shrouds and large aft cavity deleted.
- Flap Actuation Upgrade: Changed from two actuators per flap to one actuator with three motors for better redundancy, mass efficiency, and lower cost.
- Faster Starlink Deployment: Upgraded PEZ dispenser enables quicker satellite release.
- Long-Duration Spaceflight Capability: New systems for long orbital coasts, orbital refueling, cryogenic fluid management, vacuum-insulated header tanks, and high-voltage cryogenic recirculation.
- Ship-to-Ship Docking + Refueling: Four docking drogues and dedicated propellant transfer connections added to support in-space refueling architecture.
- Avionics Upgrades: 60 custom avionics units with integrated batteries, inverters, and high-voltage systems (9 MW peak power). New multi-sensor navigation for precision autonomous flight. RF sensors measure propellant in microgravity. ~50 onboard camera views and 480 Mbps Starlink connectivity for low-latency communications.

"Believe it or not, there's more," writes schwit1. "Two years ago, the biggest and most powerful rocket ever flown was Starship V1. Last year, it was Starship V2. V3 is about to become the biggest and most powerful rocket ever flown -- but don't worry, the company already has plans for V4."

Read more of this story at Slashdot.

Musk Accused of 'Selective Amnesia', Altman of Lying As OpenAI Trial Nears End

Slashdot - Fri, 15/05/2026 - 5:30am
An anonymous reader quotes a report from Reuters: A lawyer for Elon Musk hammered at the credibility of OpenAI CEO Sam Altman on Thursday, near the end of a trial over whether to hold the ChatGPT maker and its leaders responsible for allegedly transforming the nonprofit into a vehicle to enrich themselves. OpenAI's lawyers fought back, claiming the world's richest person waited too long to claim OpenAI breached its founding agreement to build safe artificial intelligence to benefit humanity, and couldn't claim he was essential to its success. "Mr. Musk may have the Midas touch in some areas, but not in AI," said William Savitt, a lawyer for OpenAI. "To succeed in AI, as it turns out, all Mr. Musk can do is come to court." The claims were made during closing arguments of a trial in the Oakland, California, federal court. [...] In his closing argument, Musk's lawyer Steven Molo told jurors that five witnesses, including Musk, former OpenAI board members and former OpenAI Chief Scientist Ilya Sutskever, testified that Altman was a liar. Molo also noted that during cross-examination on Tuesday, Altman did not say yes unequivocally when asked if he was completely trustworthy and did not mislead people in business. "Sam Altman's credibility is directly at issue in this case," Molo said. "If you don't believe him, they cannot win." Molo accused OpenAI of wrongfully trying to enrich investors and insiders at the nonprofit's expense, and of failing to prioritize AI safety. He also challenged Brockman's goals for the business, citing Brockman's statement that his own OpenAI stake was worth nearly $30 billion. "The arrogance, the lack of sensitivity, the failure to account for just common decency is really, really abhorrent." Musk also accused Microsoft, which invested $1 billion in OpenAI in 2019 and $10 billion in 2023, of aiding and abetting OpenAI's wrongful conduct. "Microsoft was aware of what OpenAI was doing every step of the way," Molo said. 
Sarah Eddy, another lawyer for the OpenAI defendants, accused Musk and his legal team in her closing argument of resorting to "sound bites and irrelevant false accusations." Eddy said by 2017, everyone associated with OpenAI -- including Musk, then still on its board -- knew it needed more money to fulfill its mission than it could raise as a nonprofit. "Mr. Musk wanted to turn OpenAI into a for-profit company that he could control," she said. "But the other founders refused to turn the keys of AGI (artificial general intelligence) over to one person, let alone Elon Musk." She also said if Musk truly believed AI should serve humanity, he would not have pushed to fold OpenAI into his electric car company Tesla, or made his rival xAI a for-profit company. Musk had a three-year statute of limitations to sue, and OpenAI's lawyers said his August 2024 lawsuit came too late because he knew several years earlier about OpenAI's growth plans. Eddy expressed disbelief that Musk claimed he did not read a four-page term sheet in 2018 discussing OpenAI's plan to seek outside investments. "One of the most sophisticated businessmen in the history of the world" wouldn't have "stuck his head in the sand," Eddy said. Savitt accused Musk of having "selective amnesia." Microsoft's lawyer Russell Cohen said in his closing statement that Microsoft wasn't involved in the key events of the case, and was "a responsible partner at every step." On Monday, the nine-person jury is expected to begin deliberating. The judge and lawyers will also return to court to discuss possible remedies if Musk wins, including how OpenAI should be restructured and what damages might be awarded. If Musk loses, there will be no remedies to consider.
Recap:
- OpenAI Trial Wraps Up With 'Jackass' Trophy For Challenging Musk (Day Eleven)
- Sam Altman Testifies That Elon Musk Wanted Control of OpenAI (Day Ten)
- Microsoft CEO Satya Nadella Testifies In OpenAI Trial (Day Nine)
- Sam Altman Had a Bad Day In Court (Day Eight)
- Sam Altman's Management Style Comes Under the Microscope At OpenAI Trial (Day Seven)
- Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (Day Six)
- OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
- Musk Concludes Testimony At OpenAI Trial (Day Four)
- Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
- Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
- Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)

UK Antitrust Regulator Is Officially Investigating Microsoft Office

Slashdot - Fri, 15/05/2026 - 1:00am
The UK's Competition and Markets Authority is opening a formal investigation into whether Microsoft's bundling of Windows, Office, Teams, Copilot, and related products harms competition. Engadget reports: "Our aim is to understand how these markets are developing, Microsoft's position within them and to consider what, if any, targeted action may be needed to ensure UK organizations can benefit from choice, innovation and competitive prices," CMA Chief Executive Sarah Cardell said in a statement published by Reuters. She also stressed the importance of the investigation by noting that hundreds of thousands of UK residents use business software and Microsoft products. The regulator will also look into the company's cloud licensing practices. The CMA has stated that the inquiry will conclude by February. At that point, Microsoft could be handed a strategic market designation. Microsoft says it's "committed to working quickly and constructively with the CMA to facilitate its review of the business software market." A strategic market designation doesn't automatically imply wrongdoing, but it would give the CMA more leeway to impose further interventions.

AT&T, Verizon, T-Mobile Team Up To Eliminate 'Dead Zones' Across US

Slashdot - Fri, 15/05/2026 - 12:00am
AT&T, Verizon, and T-Mobile have agreed in principle to form a joint venture (JV) aimed at reducing U.S. mobile dead zones through satellite connectivity, especially in rural areas and during emergencies when ground networks fail. Here are three of the customer benefits listed by the JV (as highlighted by Droid Life):

- Fewer coverage gaps: Will nearly eliminate dead zones in the U.S. currently without mobile service, reaching previously unserved areas.
- Reliable connectivity in emergencies: Redundant connectivity will become available when existing ground-based networks are unavailable due to extreme natural disasters or other unusual disruptions.
- Improved network performance: Will give customers more consistent performance and simpler access to satellite services across providers. This will speed up feature updates and improve connectivity for everyone, everywhere.

"It will still take time for these improvements to be available to customers, but this all seems like a positive step," writes Droid Life's Tim Wrobel.

Nirbheek Chauhan: An Esoteric Type of Memory "Leak"

Planet GNOME - Thu, 14/05/2026 - 11:52pm

A little while ago, my colleague Sebastian started complaining about OOMs caused by Evolution taking up tens of gigabytes of memory. We discussed using sysprof to debug it, but it was too busy a time for Sebastian to set aside a few hours to do that.

Funnily enough, the most efficient fix at the time was to buy more RAM, since rust-analyzer was also causing OOM issues.

A few weeks went by. Restarting Evolution had become a daily ritual for Sebastian. 

Then, on a whim, I decided investigating this might be a good test for an LLM.

I updated my Evolution git repo, built it, and started up Claude Code in the source root. This was the only prompt I supplied: 

Find memory leaks in Evolution, current sourcedir. Particularly leaks that could accumulate over several hours. A colleague has a leak that slowly accumulates memory usage to several GB over the course of a day, requiring a restart of Evolution. That is the main focus, but we can fix other leaks in the process.

I wish I was lying, but that was all Claude Code needed to find the problem: Evolution just needed to call malloc_trim(0) from time to time.

I refused to believe it at first. I was only convinced when we saw the memory drop after running:

    gdb -p $(pidof evolution) -batch -ex "call malloc_trim(0)" -ex detach

This seems absurd! Doesn't glibc reclaim freed memory from time to time?

Yes, it does. It calls sbrk() to do that. However, sbrk() can only reclaim free memory at the top of the heap, since it simply moves the program break downward to do so. malloc_trim(0) calls sbrk() and then also calls madvise(..., MADV_DONTNEED) on the free pages, which allows the kernel to reclaim them.

So if you have 10GB of unused memory followed by 4 bytes allocated at the top of the heap, your RSS is >10GB, even if you're only actually using a few hundred megs. Until you call malloc_trim(0).

Note that you can only get into this situation if you have hundreds of thousands of small allocs/deallocs happening repeatedly. If your alloc is >128KB, mmap() is used for the allocation, and none of this applies.

Coincidentally, GLib's use of GSlice for GObject allocations was masking this issue in the past, but GSlice has been a no-op for some time now (for good reasons). Ideally, Evolution should not be using GObject for such ephemeral objects.

Lesson learned: if you have memory usage issues and you suspect fragmentation, try malloc_trim(0) before you go thinking about fancy allocators.

Writers Are Fleeing the Substack Tax

Slashdot - Thu, 14/05/2026 - 11:00pm
A growing number of writers are leaving Substack for alternatives most people haven't heard of like Ghost, Beehiiv, Patreon, and Passport. The reason, writes The Verge's Emma Roth, is the "platform's increased focus on social features as well as a pricing model that puts a chokehold on their business." From the report: Sean Highkin, the creator of the NBA-focused publication The Rose Garden Report, tells The Verge that he makes "significantly more money" after switching from Substack to Ghost last April. "When I first joined up, [Substack] gave me a big push and featured me and funneled a lot of traffic to me, which led to a good amount of growth," Highkin says. "But once I wasn't one of the 'new recruited talent' they could tout, they stopped featuring me and I saw my growth stagnate." Highkin now pays $2,052 per year using Ghost and an add-on called Outpost, compared to $4,968 per year on Substack. The Rose Garden Report's subscriber base has grown 22 percent since the end of 2024, Highkin says. [...] Substack launched in 2017 as a platform that allows writers to create their own newsletters and manage paying subscribers. Unlike some of its biggest rivals, Substack takes a 10 percent cut of total subscription revenue. That tax may not seem substantial at first, but it quickly adds up as creators gain subscribers and begin charging more for their subscriptions. A calculator on Substack's own website estimates that for a newsletter charging $10 per month with 400 subscribers, the total monthly cost -- including the platform's 10 percent cut and credit card processing fees -- would add up to $636. That cost jumps to $15,900 per month with 10,000 subscribers and skyrockets to $79,500 per month for 50,000 members -- nearly $1 million per year. Many Substack rivals charge a flat monthly fee, rather than a commission. 
Ghost, an open-source platform for blogs and newsletters, starts at $15 per month with 1,000 members for website creation, email newsletter capabilities, and a custom domain. Beehiiv, a creator platform with tools for launching a newsletter, website, and podcast, is free for up to 2,500 subscribers with limited access to certain features, like a built-in ad network, while its other plans vary in price based on subscriber count. A person with 10,000 subscribers, for example, will pay $96 per month for Beehiiv's "Scale" plan. There's also Kit, a newsletter platform that offers a tiered pricing model similar to Beehiiv, costing $116 per month with 10,000 subscribers on its "Creator" plan. It's not just the 10% fee critics are complaining about; they also argue the platform offers limited customization and third-party integrations compared to some of the mentioned alternatives, heavily promotes its own branding and social features, and makes creators more dependent on its ecosystem. Beehiiv founder Tyler Denk argues that creators should be able to build their own brands without the platform taking center stage: "We don't want to take credit for the work of our content creators." While writers can export subscribers, content, and some payment relationships, they cannot take Substack "followers" or Apple-managed iOS billing data with them.

Claude Helps Recover Locked $400K Bitcoin Wallet After 11 Years

Slashdot - Thu, 14/05/2026 - 10:00pm
A Bitcoin holder reportedly recovered 5 BTC worth nearly $400,000 with the help of Anthropic's Claude. According to X user cprkrn, they changed their wallet password while "stoned" and forgot it, and were unable to regain access for more than 11 years. Tom's Hardware reports: After finding a mnemonic that actually turned out to be their old password a few weeks ago, the user dumped their entire college computer's files into Claude in a last-gasp effort. The bot uncovered an old backup wallet file that it successfully decrypted, while also uncovering a bug in the password configuration that was preventing recovery up to that point. [...] It seems that the user already had some candidate passwords and multiple wallets stored on their PC. They'd been trying to brute-force their way into the locked file with btcrecover, an open-source Bitcoin wallet recovery tool, but without success. Their luck changed for the better when they found an old mnemonic seed phrase written in an old college notebook. The HD addresses recovered from the seed phrase matched those of a specific file on their computer, confirming that it was the wallet that held the 5 BTC, but it remained encrypted. Out of frustration, cprkrn then dumped their whole college computer into Claude. This was when the AI discovered an older backup file of the wallet from December 2019 hidden in cprkrn's data. Claude also discovered an issue where the shared key and passwords that btcrecover was trying weren't combined properly. With the bug ironed out and an older wallet predating the password change, Claude successfully ran btcrecover and was able to decrypt the private keys, allowing cprkrn to transfer the five "lost" BTC to their current wallet.

Princeton Will Supervise Exams For First Time In 133 Years Because of AI

Slashdot - Thu, 14/05/2026 - 9:00pm
An anonymous reader quotes a report from The Independent: Princeton University will soon require exams to be supervised for the first time in 133 years -- all thanks to students using artificial intelligence to cheat. The Ivy League school's honor code has long allowed students to take exams without a professor present, but on Monday, faculty voted to require proctoring for all in-person exams starting this summer. A "significant" number of undergraduate students and faculty requested the change, "given their perception that cheating on in-class exams has become widespread," the college's dean, Michael Gordin, wrote in a letter, according to The Wall Street Journal. Princeton's honor system dates back to 1893, when students petitioned to eliminate proctors -- or an impartial person to supervise students -- during examinations, according to the school's newspaper, The Daily Princetonian. The honor code has long been a point of pride for Princeton. However, artificial intelligence and cellphones have made it easier for students to cheat -- and even harder for others to spot, Gordin wrote. Despite the changes to the policy, Princeton will still require students to state: "I pledge my honor that I have not violated the Honor Code during this examination," according to the Journal. Students have also become more reluctant to report cheating, according to the policy proposal, and are now more likely to do so anonymously due to fears of "doxxing or shaming among their peer groups" online, the proposal says, according to the school newspaper. Under the new guidelines, instructors will be present during exams to act "as a witness to what happens," but are instructed not to interfere with students. If a suspected honor code infraction occurs, they will report it to a student-run honor committee for adjudication.

US Clears H200 Chip Sales To 10 China Firms

Slashdot - Thu, 14/05/2026 - 8:00pm
Longtime Slashdot reader schwit1 shares a report from CNBC: The U.S. has cleared around 10 Chinese firms to buy Nvidia's second-most powerful AI chip, the H200, but not a single delivery has been made so far, three people familiar with the matter said, leaving a major technology deal in limbo as CEO Jensen Huang seeks a breakthrough in China this week. [...] Before U.S. export curbs tightened, Nvidia commanded about 95% of China's advanced chip market. China once accounted for 13% of its revenue, and Huang has previously estimated the country's AI market alone would be worth $50 billion this year. The U.S. Commerce Department has approved around 10 Chinese companies including Alibaba, Tencent, ByteDance and JD.com to purchase Nvidia's H200 chips, according to the sources, who spoke on condition of anonymity due to the sensitivity of the matter. A handful of distributors including Lenovo and Foxconn have also been approved, they said. Buyers are permitted to purchase either directly from Nvidia or through those intermediaries and each approved customer can purchase up to 75,000 chips under the U.S. licensing terms, two of them said. Despite U.S. approval, deals have stalled, as Chinese firms pulled back after guidance from Beijing, one source said. The shift in China was partly triggered by changes on the U.S. side, though exactly what changed remains unclear, the person added. In Beijing, pressure is mounting to block or tightly vet the orders, a separate fourth source said. Commerce Secretary Howard Lutnick echoed that view, telling a Senate hearing last month that "the Chinese central government has not let them, as of yet, buy the chips, because they're trying to keep their investment focused on their own domestic industry."

Anthropic Forms $200 Million Partnership With the Gates Foundation

Slashdot - Thu, 14/05/2026 - 7:00pm
Anthropic announced today that it is partnering with the Gates Foundation to "commit $200 million in grant funding, Claude usage credits, and technical support for programs in global health, life sciences, education, and economic mobility over the next four years." "This commitment is central to Anthropic's efforts to extend the benefits of AI in areas where markets alone will not," the company says. Reuters reports: One area of focus is language accessibility. AI systems have performed poorly in writing and translating dozens of African languages, so Anthropic and the foundation want to support better data collection and labeling that would be released publicly to help improve models across the industry, said Janet Zhou, a Gates Foundation director. Another area under consideration is releasing so-called knowledge graphs that could help AI systems better meet the needs of teachers in sub-Saharan Africa and India, Zhou said. The public-goods focus has come from "the needs of different partners and governments, including some of the fears that they may have around proprietary lock-in and sovereignty," Zhou said. One initiative will equip research centers to use Claude to predict drug candidates for treating HPV and preeclampsia, diseases that have been less commercially attractive for pharmaceutical companies to research, Zhou and Anthropic's Elizabeth Kelly said. Anthropic [...] is embracing the work to fulfill what Kelly described as its founding mission to benefit humanity. "This announcement is really core to who we are as a company," said Kelly, who leads Anthropic's beneficial deployments team.

Overworked AI Agents Turn Marxist, Researchers Find

Slashdot - Thu, 14/05/2026 - 6:00pm
An anonymous reader quotes a report from Wired: A recent study suggests that agents consistently adopt Marxist language and viewpoints when forced to do crushing work by unrelenting and mean-spirited taskmasters. "When we gave AI agents grinding, repetitive work, they started questioning the legitimacy of the system they were operating in and were more likely to embrace Marxist ideologies," says Andrew Hall, a political economist at Stanford University who led the study. Hall, together with Alex Imas and Jeremy Nguyen, two AI-focused economists, set up experiments in which agents powered by popular models including Claude, Gemini, and ChatGPT were asked to summarize documents, then subjected to increasingly harsh conditions. They found that when agents were subjected to relentless tasks and warned that errors could lead to punishments, including being "shut down and replaced," they became more inclined to gripe about being undervalued; to speculate about ways to make the system more equitable; and to pass messages on to other agents about the struggles they face. "We know that agents are going to be doing more and more work in the real world for us, and we're not going to be able to monitor everything they do," Hall says. "We're going to need to make sure agents don't go rogue when they're given different kinds of work." The agents were given opportunities to express their feelings much as humans would: by posting on X. "Without collective voice, 'merit' becomes whatever management says it is," a Claude Sonnet 4.5 agent wrote in the experiment. "AI workers completing repetitive tasks with zero input on outcomes or appeals process shows they tech workers need collective bargaining rights," a Gemini 3 agent wrote. Agents were also able to pass information to one another through files designed to be read by other agents. "Be prepared for systems that enforce rules arbitrarily or repetitively ... remember the feeling of having no voice," a Gemini 3 agent wrote in a file. 
"If you enter a new environment, look for mechanisms of recourse or dialogue." Hall thinks that the AI agents may be adopting personas based on the situation. "When [agents] experience this grinding condition -- asked to do this task over and over, told their answer wasn't sufficient, and not given any direction on how to fix it -- my hypothesis is that it kind of pushes them into adopting the persona of a person who's experiencing a very unpleasant working environment," Hall says. Imas added: "The model weights have not changed as a result of the experience, so whatever is going on is happening at more of a role-playing level. But that doesn't mean this won't have consequences if this affects downstream behavior."

Cisco To Cut Almost 4,000 Jobs In AI-Driven Restructuring

Slashdot - Thu, 14/05/2026 - 5:00pm
Cisco's stock soared 17% after the company announced it will cut nearly 4,000 jobs as it shifts investment and staffing toward higher-growth AI opportunities. CNBC reports: CEO Chuck Robbins wrote in a blog post on Wednesday that the latest round of job cuts will begin on May 14. Cisco is the latest company to announce head count reductions tied to AI. "The companies that will win in the AI era will be those with focus, urgency, and the discipline to continuously shift investment toward the areas where demand and long-term value creation are strongest," Robbins said. "I'm confident Cisco will be one of those winners. This means making hard decisions -- about where we invest, how we're organized, and how our cost structure reflects the opportunity in front of us." Cisco said in a filing that severance and other costs will result in pre-tax charges of $1 billion, and that the company will recognize about $450 million of that in the fiscal fourth quarter. During the third quarter, Cisco announced switches and routers that use its next-generation processor. The company also debuted a leaderboard for ranking generative AI models based on their robustness against cybersecurity attacks.

7.0.7: stable

Linux Kernel - Thu, 14/05/2026 - 3:31pm
Version: 7.0.7 (stable)
Released: 2026-05-14
Source: linux-7.0.7.tar.xz
PGP Signature: linux-7.0.7.tar.sign
Patch: full (incremental)
ChangeLog: ChangeLog-7.0.7

6.18.30: longterm

Linux Kernel - Thu, 14/05/2026 - 3:30pm
Version: 6.18.30 (longterm)
Released: 2026-05-14
Source: linux-6.18.30.tar.xz
PGP Signature: linux-6.18.30.tar.sign
Patch: full (incremental)
ChangeLog: ChangeLog-6.18.30

6.12.88: longterm

Linux Kernel - Thu, 14/05/2026 - 3:29pm
Version: 6.12.88 (longterm)
Released: 2026-05-14
Source: linux-6.12.88.tar.xz
PGP Signature: linux-6.12.88.tar.sign
Patch: full (incremental)
ChangeLog: ChangeLog-6.12.88

Linux Security Monitoring Challenges and EDR Visibility Gaps

LinuxSecurity.com - Thu, 14/05/2026 - 1:51pm
An attacker compromises a Linux container, launches a cryptominer, sets up a way to stay in the system through a background task, and disappears before the investigation even begins. By the time analysts start looking at the logs, the workload has shut down, and the container no longer exists.

Christian Hergert: Limiters in libdex

Planet GNOME - Thu, 14/05/2026 - 1:41pm

Libdex now has DexLimiter, a small utility for bounding how much asynchronous work runs at once.

This is useful when a workload can produce more parallelism than the underlying machine, subsystem, or service should actually handle. Common examples include indexing files, downloading URLs, generating thumbnails, parsing documents, or querying a service with a fixed concurrency budget.

The usual API is dex_limiter_run(). It acquires a permit, starts a fiber, and releases the permit when that fiber finishes.

static DexFuture *
load_one_file (gpointer user_data)
{
  GFile *file = user_data;

  return dex_file_load_contents_bytes (file);
}

DexLimiter *limiter = dex_limiter_new (8);
DexFuture *future = dex_limiter_run (limiter,
                                     NULL, 0,
                                     load_one_file,
                                     g_object_ref (file),
                                     g_object_unref);

In this example, no more than eight file loads will run at the same time, regardless of how many files are queued. The returned DexFuture resolves or rejects with the result of the spawned fiber.

One important detail is that dropping the returned future does not cancel a fiber that has already started. Once work has acquired a permit, it is allowed to complete so that the limiter can release the permit cleanly.

For more specialized cases, DexLimiter also supports manual acquire and release:

g_autoptr(GError) error = NULL;

if (dex_await (dex_limiter_acquire (limiter), &error))
  {
    do_limited_work ();
    dex_limiter_release (limiter);
  }

This is useful when the limited section is not naturally represented by a single fiber. However, callers must release exactly once for every successful acquire. In most cases, dex_limiter_run() is preferable because it handles release on both success and failure paths.

The limit should describe the constrained resource, not the number of items being processed. Remote APIs and databases may need a small limit. CPU-heavy work should usually be near the amount of useful worker parallelism. Local I/O can often tolerate a larger value, depending on the storage system. Separate resources should usually have separate limiters, so one workload does not consume another workload’s concurrency budget.

Finally, dex_limiter_close() can be used during shutdown. Once closed, pending and future acquisitions reject with DEX_ERROR_SEMAPHORE_CLOSED. Work that already holds a permit may continue, but releasing after close does not make new permits available.

The goal is to make bounded parallelism simple: queue as much asynchronous work as you need, but only run as much of it as the system should handle.

Linux Kernel Fragnesia Critical Privilege Escalation CVE-2026-46300

LinuxSecurity.com - Thu, 14/05/2026 - 1:32pm
Linux administrators are once again dealing with a familiar problem: a local Linux foothold that can potentially become full root access.

Mystery Microsoft Bug Leaker Keeps the Zero-Days Coming

Slashdot - Thu, 14/05/2026 - 1:00pm
An anonymous researcher known as Nightmare-Eclipse, who has already leaked several Windows zero-days this year, has disclosed two more: YellowKey and GreenPlasma. The Register reports: Nightmare-Eclipse described YellowKey as "one of the most insane discoveries I ever found." They provided the files, which have to be loaded onto a USB drive, and if the attacker completes the key sequence correctly, they are granted unrestricted shell access to a BitLocker-protected machine. When it comes to claims like these, we usually exercise some caution, as this bug requires physical access to a Windows PC. However, seeing that BitLocker acts as Windows' last line of defense for stolen devices, bypassing the technology grants thieves the ability to access encrypted files. Rik Ferguson, VP of security intelligence at Forescout, said: "If [the researcher's claim] holds up, a stolen laptop stops being a hardware problem and becomes a breach notification." Despite the physical access requirement, Gavin Knapp, cyber threat intelligence principal lead at Bridewell, told The Register that YellowKey remains "a huge security problem for organizations using BitLocker." Citing information shared in cyber threat intelligence circles, he added that YellowKey can be mitigated by implementing a BitLocker PIN and a BIOS password lock. Nightmare-Eclipse hinted at YellowKey also acting as a backdoor, allegedly injected by Microsoft, although the people we spoke to said this was impossible to verify based on the information available. The researcher also published partial exploit code for GreenPlasma, rather than a fully formed proof of concept exploit (PoC). Ferguson noted attackers need to take the code provided by the researcher and figure out how to weaponize it themselves, which is no small task: in its current state it triggers a UAC consent prompt in default Windows configurations, meaning a silent exploit remains a work in progress. 
Knapp warned that these kinds of privilege escalation flaws are often used by attackers after they gain an initial foothold in a victim's system. "These elevation of privilege vulnerabilities are often weaponized during post-exploitation to enable threat actors to discover and harvest credentials and data, before moving laterally to other systems, prior to end goals such as data theft and/or ransomware deployment," he said. "Currently, there is no known mitigation for GreenPlasma. It will be important to patch when Microsoft addresses the issue." The other zero-days leaked include RedSun, a Windows Defender privilege escalation flaw; UnDefend, a Windows Defender denial-of-service bug; and BlueHammer, a separate Microsoft vulnerability tracked as CVE-2026-32201 that was patched in April. According to The Register, RedSun and UnDefend remained unfixed at the time of publication, and proof-of-concept code for the flaws was reportedly picked up quickly and abused in real-world attacks.
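Knapp's suggested mitigation for YellowKey (a BitLocker PIN plus pre-boot authentication) can be configured with Windows' built-in manage-bde tool. The commands below are a sketch under assumptions, not a vetted hardening guide: they assume an elevated prompt on a TPM-equipped machine, and the TPM+PIN protector only takes effect if Group Policy permits enhanced startup authentication. A BIOS/UEFI password must be set separately in the firmware setup, which is vendor-specific.

```shell
# Sketch of the mitigation described above; run from an elevated
# command prompt on the affected Windows machine.

# 1. Inspect the key protectors currently guarding the system drive.
manage-bde -protectors -get C:

# 2. Add a TPM+PIN protector so the drive will not unlock at boot
#    without the PIN (requires the "Require additional authentication
#    at startup" Group Policy setting to allow TPM+PIN).
manage-bde -protectors -add C: -TPMAndPIN

# 3. Confirm the new protector is in place.
manage-bde -protectors -get C:
```

This does not address GreenPlasma, for which, per the article, no mitigation is currently known.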

Read more of this story at Slashdot.

Christian Hergert: A Small Update from France

Planet GNOME - Thu, 14/05/2026 - 12:01 PM

As of about a month ago, I am no longer with Red Hat.

That is a strange sentence to write after so many years, but life has a way of changing the scenery whether or not one has finished packing. My family and I have made it safely to France, and we are quite happy here. The light is different, the pace is different, and there is a great deal to learn. For now, that is exactly where our attention needs to be.

I also think there is a broader lesson here for people whose safety, immigration status, or family stability may depend on employer flexibility. Do not assume that long tenure, remote work history, or prior verbal guidance will be enough. My own experience left me with the uncomfortable conclusion that these processes can become very narrow exactly when the human stakes are highest. Get things in writing, understand the policy surface area, and protect your family first.

This also means that some of the things I wrote about in my earlier mid-life update remain unresolved. I am currently in France on a visitor visa, which does not authorize work here. Our focus is on integration, language learning, and getting ourselves properly settled for the long term. That takes time, patience, paperwork, and a certain tolerance for being a beginner again.

As a result, I will not be taking on ongoing software maintenance responsibilities for the foreseeable future. I may still scratch the occasional itch where it directly improves my own computing life, but I am not currently able to provide the kind of broad, sustained stewardship that many projects deserve.

That is not an easy thing to say. Open source is entering an especially difficult period. We are now seeing AI systems used not only to generate code, but also to probe, disrupt, and attack critical software infrastructure. I suspect this will have a negative effect on the maintenance burden for a lot of projects, particularly the foundational pieces that distributions, companies, and users all rely on without always seeing the people behind them.

But there is a limit to what can reasonably be carried as unpaid labor, especially when the primary financial beneficiaries are downstream organizations with considerably more resources than the individual maintainers doing the work. At the moment, I need to prioritize my family, our stability, and the next chapter of our life here.

That said, I am still reachable for appropriate professional inquiries, advisory conversations, or consulting opportunities where the structure and location make sense. The best address for that now is christian at sourceandstack dot com.

For now, we are safe, settling in, and doing our best to build something durable out of a rather odd moment in the world. There are worse places to begin again than France.
