
Feed aggregator

Are AI Coding Assistants Really Saving Developers Time?

Slashdot - Sun, 29/09/2024 - 1:34 PM
Uplevel provides insights from coding and collaboration data, according to a recent report from CIO magazine — and recently they measured "the time to merge code into a repository [and] the number of pull requests merged" for about 800 developers over a three-month period (comparing the statistics to the previous three months). Their study "found no significant improvements for developers" using Microsoft's AI-powered coding assistant tool Copilot, according to the article (shared by Slashdot reader snydeq): Use of GitHub Copilot also introduced 41% more bugs, according to the study... In addition to measuring productivity, the Uplevel study looked at factors in developer burnout, and it found that GitHub Copilot hasn't helped there, either. The amount of working time spent outside of standard hours decreased for both the control group and the test group using the coding tool, but it decreased more when the developers weren't using Copilot. An Uplevel product manager/data analyst acknowledged to the magazine that there may be other ways to measure developer productivity — but they still consider their metrics solid. "We heard that people are ending up being more reviewers for this code than in the past... You just have to keep a close eye on what is being generated; does it do the thing that you're expecting it to do?" The article also quotes the CEO of software development firm Gehtsoft, who says they didn't see major productivity gains from LLM-based coding assistants — but did see them introducing errors into code. With different prompts generating different code sections, "It becomes increasingly more challenging to understand and debug the AI-generated code, and troubleshooting becomes so resource-intensive that it is easier to rewrite the code from scratch than fix it." On the other hand, cloud services provider Innovative Solutions saw significant productivity gains from coding assistants like Claude Dev and GitHub Copilot. 
And Slashdot reader destined2fail1990 says that while large/complex code bases may not see big gains, "I have seen a notable increase in productivity from using Cursor, the AI-powered IDE." Yes, you have to review all the code that it generates, why wouldn't you? But oftentimes it just works. It removes the tedious tasks like querying databases, writing model code, writing forms and processing forms, and a lot more. Some forms can have hundreds of fields and processing those fields along with doing checks for valid input is time-consuming, but can be automated effectively using AI. This prompted an interesting discussion on the original story submission. Slashdot reader bleedingobvious responded: Cursor/Claude are great BUT the code produced is almost never great quality. Even given these tools, the junior/intern teams still cannot outpace the senior devs. Great for learning, maybe, but the productivity angle not quite there.... yet. It's damned close, though. Give it 3-6 months. And Slashdot reader abEeyore posted: I suspect that the results are quite a bit more nuanced than that. I expect that it is, even outside of the mentioned code review, a shift in where and how the time is spent, and not necessarily in how much time is spent. Agree? Disagree? Share your own experiences in the comments. And are developers really saving time with AI coding assistants?
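The metrics Uplevel tracked, time to merge and the number of pull requests merged, are simple to compute once you have pull-request timestamps. A minimal sketch in Python (the record layout here is hypothetical, not Uplevel's actual schema):

```python
from datetime import datetime
from statistics import median

# Hypothetical PR records with opened/merged timestamps.
prs = [
    {"opened": datetime(2024, 6, 3, 9, 0),  "merged": datetime(2024, 6, 4, 15, 0)},
    {"opened": datetime(2024, 6, 5, 11, 0), "merged": datetime(2024, 6, 5, 17, 30)},
    {"opened": datetime(2024, 6, 7, 8, 0),  "merged": datetime(2024, 6, 10, 8, 0)},
]

def time_to_merge_hours(pr):
    """Elapsed hours between a PR being opened and merged."""
    return (pr["merged"] - pr["opened"]).total_seconds() / 3600

merged_count = len(prs)
median_ttm = median(time_to_merge_hours(p) for p in prs)
print(f"PRs merged: {merged_count}, median time-to-merge: {median_ttm:.1f} h")
```

A real analysis would pull these timestamps from a repository host's API and compare the assistant-using and control cohorts over matching three-month windows.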

Read more of this story at Slashdot.

California's Governor Vetoes Bill Requiring Speeding Alerts in New Cars

Slashdot - Sun, 29/09/2024 - 9:34 AM
California governor Gavin Newsom "vetoed a bill Saturday that would have required new cars to beep at drivers if they exceed the speed limit," reports the Associated Press: In explaining his veto, Newsom said federal law already dictates vehicle safety standards and adding California-specific requirements would create a patchwork of regulations. The National Highway Traffic Safety Administration "is also actively evaluating intelligent speed assistance systems, and imposing state-level mandates at this time risks disrupting these ongoing federal assessments," the Democratic governor said... The legislation would have likely impacted all new car sales in the U.S., since the California market is so large that car manufacturers would likely just make all of their vehicles comply... Starting in July, the European Union will require all new cars to have the technology, although drivers would be able to turn it off. At least 18 manufacturers, including Ford, BMW, Mercedes-Benz and Nissan, have already offered some form of speed limiters on some models sold in America, according to the National Transportation Safety Board. Thanks to Slashdot reader Gruntbeetle for sharing the news.


Can AI Developers Be Held Liable for Negligence?

Slashdot - Sun, 29/09/2024 - 5:34 AM
Bryan Choi, an associate professor of law and computer science focusing on software safety, proposes shifting AI liability onto the builders of the systems: To date, most popular approaches to AI safety and accountability have focused on the technological characteristics and risks of AI systems, while averting attention from the workers behind the curtain responsible for designing, implementing, testing, and maintaining such systems... I have previously argued that a negligence-based approach is needed because it directs legal scrutiny on the actual persons responsible for creating and managing AI systems. A step in that direction is found in California's AI safety bill, which specifies that AI developers shall articulate and implement protocols that embody the "developer's duty to take reasonable care to avoid producing a covered model or covered model derivative that poses an unreasonable risk of causing or materially enabling a critical harm" (emphasis added). Although tech leaders have opposed California's bill, courts don't need to wait for legislation to allow negligence claims against AI developers. But how would negligence work in the AI context, and what downstream effects should AI developers anticipate? The article suggests two possibilities. Classifying AI developers as ordinary employees leaves employers sharing liability for negligent acts (giving them "strong incentives to obtain liability insurance policies and to defend their employees against legal claims.") But AI developers could also be treated as practicing professionals (like physicians and attorneys). "In this regime, each AI professional would likely need to obtain their own individual or group malpractice insurance policies." AI is a field that perhaps uniquely seeks to obscure its human elements in order to magnify its technical wizardry. The virtue of the negligence-based approach is that it centers legal scrutiny back on the conduct of the people who build and hype the technology.
To be sure, negligence is limited in key ways and should not be viewed as a complete answer to AI governance. But fault should be the default and the starting point from which all conversations about AI accountability and AI safety begin. Thanks to long-time Slashdot reader david.emery for sharing the article.


US Transportation Safety Board Issues Urgent Alert About Boeing 737 Rudders

Slashdot - Sun, 29/09/2024 - 4:34 AM
America's National Transportation Safety Board "is issuing 'urgent safety recommendations' for some Boeing 737s..." reports CNN, "warning that critical flight controls could jam." The independent investigative agency is issuing the warning that an actuator attached to the rudder on some 737 NG and 737 MAX airplanes could fail... "Boeing's 737 flight manual instructs pilots confronted with a jammed or restricted rudder to 'overpower the jammed or restricted system (using) maximum force, including a combined effort of both pilots,'" the NTSB said in a news release. "The NTSB expressed concern that this amount of force applied during landing or rollout could result in a large input to the rudder pedals and a sudden, large, and undesired rudder deflection that could unintentionally cause loss of control or departure from a runway," the statement said. "The FAA said United was the only U.S. airline flying planes with the manufacturing defect in the rudder control system," notes the Seattle Times, "and that United has already replaced the component on nine 737s, the only jets in its fleet where it was identified as faulty. However, the NTSB alert may cause the grounding of some 737 MAXs and older model 737NGs flown by foreign air carriers that have not yet replaced the defective part."


Why Boeing is Dismissing a Top Executive

Slashdot - Sun, 29/09/2024 - 3:34 AM
Last weekend Boeing announced that its CEO of Defense, Space, and Security "had left the company," according to Barrons. "Parting ways like this, for upper management, is the equivalent to firing," they write — though they add that setbacks on Starliner's first crewed test flight are "far too simple an explanation." Starliner might, however, have been the straw that broke the camel's back. [New CEO Kelly] Ortberg took over in early August, so his first material interaction with the Boeing Defense and Space business was the spaceship's failed test flight... Starliner has cost Boeing $1.6 billion and counting. That's a lot of money, but not all that much in the context of the Defense business, which generates sales of roughly $25 billion a year.... [T]he overall Defense business has performed poorly of late, burdened by fixed-price contracts that have become unprofitable amid years of higher-than-expected inflation. Profitability in the defense business has been declining since 2020, and the unit started losing money in 2022. From 2022 to 2024 losses should total about $6 billion cumulatively, including Wall Street's estimates for the second half of this year. Still, it felt like something had to give. And the change shows investors something about new CEO Ortberg. "At this critical juncture, our priority is to restore the trust of our customers and meet the high standards they expect of us," read part of an internal email sent to Boeing employees announcing the change. "Why his predecessor — David Calhoun — didn't pull this trigger earlier this year is a mystery," wrote Gordon Haskett analyst Don Bilson in a Monday note. "Can't leave astronauts behind." "Ortberg's logic appears sound," the article concludes. "In recent years, Boeing has disappointed its airline and defense customers, including NASA... After Starliner, defense profitability, and the strike, Ortberg has to tackle production quality, production rates, and Boeing's ailing balance sheet.
Boeing has amassed almost $60 billion in debt since the second tragic 737 MAX crash in March 2019." Thanks to Slashdot reader Press2ToContinue for sharing the news.


How I Booted Linux On an Intel 4004 from 1971

Slashdot - Sun, 29/09/2024 - 1:07 AM
Long-time Slashdot reader dmitrygr writes: Debian Linux booted on a 4-bit intel microprocessor from 1971 — the first microprocessor in the world — the 4004. It is not fast, but it is a real Linux kernel with a Debian rootfs on a real board whose only CPU is a real intel 4004 from the 1970s. There's a detailed blog post about the experiment. (Its title? "Slowly booting full Linux on the intel 4004 for fun, art, and absolutely no profit.") In the post dmitrygr describes testing speed optimizations with an emulator where "my initial goal was to get the boot time under a week..."


Gen Z Grads Are Being Fired Months After Being Hired

Slashdot - Sun, 29/09/2024 - 12:07 AM
"After complaining that Gen Z grads are difficult to work with for the best part of two years, bosses are no longer all talk, no action — now they're rapidly firing young workers who aren't up to scratch just months after hiring them," writes Fortune. "According to a new report, six in 10 employers say they have already sacked some of the Gen Z workers they hired fresh out of college earlier this year." Intelligent.com, a platform dedicated to helping young professionals navigate the future of work, surveyed nearly 1,000 U.S. leaders... After experiencing a raft of problems with young new hires, one in six bosses say they're hesitant to hire college grads again. Meanwhile, one in seven bosses have admitted that they may avoid hiring them altogether next year. Three-quarters of the companies surveyed said some or all of their recent graduate hires were unsatisfactory in some way... Employers' gripe with young people today is their lack of motivation or initiative — 50% of the leaders surveyed cited that as the reason why things didn't work out with their new hire. Bosses also pointed to Gen Z being unprofessional, unorganized and having poor communication skills as their top reasons for having to sack grads. Leaders say they have struggled with the latest generation's tangible challenges, including often being late to work and meetings, not wearing office-appropriate clothing, and using language inappropriate for the workplace. Now, more than half of hiring managers have come to the conclusion that college grads are unprepared for the world of work. Meanwhile, over 20% say they can't handle the workload. Thanks to long-time Slashdot reader smooth wombat for sharing the article.


Despite Predictions of Collapse for Ocean Current, Researchers Find a Key Component is 'Remarkably Stable'

Slashdot - Sat, 28/09/2024 - 11:07 PM
Past studies have suggested a major ocean current could collapse, quickly changing temperatures and climate patterns, reports the Washington Post. "But scientists disagree on whether the Atlantic Meridional Overturning Circulation (AMOC) is already slowing, and questions remain as to whether a variety of proxy measurements actually indicate a slowdown" — including a new analysis arguing that the current "has remained remarkably stable." One way to detect AMOC weakening is to monitor the strength of its components such as the Florida Current, which flows swiftly from the Gulf of Mexico into the North Atlantic. The current is a "major contributor" to the AMOC, the researchers write, and a slowdown of the current might indicate a slowdown of the AMOC. Scientists have been tracking its strength since the 1980s using a submarine cable that measures the volume of water it transports. In the current study, researchers reconsider the data, correcting for a gradual shift in Earth's magnetic field that they say affected the cable measurements. Previous assessments of the uncorrected data showed a slight slowing in the Florida Current. But when they corrected for the shift in Earth's magnetic field, the researchers write, they found that the current "has remained remarkably stable" and not declined significantly over the past 40 years. The researchers' announcement acknowledges that "It is possible that the AMOC is changing without a corresponding change in the Florida Current..."
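The cable technique infers transport from the voltage that moving seawater induces in Earth's magnetic field, so a slow drift in field strength can masquerade as a trend in the current itself. A toy illustration of why the correction matters, using invented numbers (a constant true transport and a gradual field decline):

```python
# Toy model: constant true transport, slowly weakening magnetic field.
years = range(1982, 2023)
true_transport = 32.0                                 # Sv, held constant (invented)
field = [1.0 - 0.00025 * (y - 1982) for y in years]   # ~1% decline over 40 years

voltage = [true_transport * b for b in field]         # what the cable "measures"
uncorrected = voltage                                 # treating the field as fixed
corrected = [v / b for v, b in zip(voltage, field)]   # divide the drift back out

print(f"apparent change, uncorrected: {uncorrected[-1] - uncorrected[0]:+.2f} Sv")
print(f"after field correction:       {corrected[-1] - corrected[0]:+.2f} Sv")
```

With the field drift left in, the series shows a spurious decline; dividing it out recovers the flat transport, which is the spirit of the reanalysis described above.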


Did Canals Help Build Egypt's Pyramids?

Slashdot - Sat, 28/09/2024 - 10:07 PM
How were the Pyramids built? NBC News reported on "a possible answer" after new evidence was published earlier this year in the journal Communications Earth & Environment. The theory? "[A]n extinct branch of the Nile River once weaved through the landscape in a much wetter climate." Dozens of Egyptian pyramids across a 40-mile-long range rimmed the waterway, the study says, including the best-known complex in Giza. The waterway allowed workers to transport stone and other materials to build the monuments, according to the study. Raised causeways stretched out horizontally, connecting the pyramids to river ports along the Nile's bank. Drought, in combination with seismic activity that tilted the landscape, most likely caused the river to dry up over time and ultimately fill with silt, removing most traces of it. The research team based its conclusions on data from satellites that send radar waves to penetrate the Earth's surface and detect hidden features. It also relied on sediment cores and maps from 1911 to uncover and trace the imprint of the ancient waterway. Such tools are helping environmental scientists map the ancient Nile, which is now covered by desert sand and agricultural fields... The study builds on research from 2022, which used ancient evidence of pollen grains from marsh species to suggest that a waterway once cut through the present-day desert. Granite blocks weighing several tons were transported hundreds of miles, according to a professor of Egyptology at Harvard University — who tells NBC they were moved without wheels. But this new evidence that the Nile was closer to the pyramids lends further support to the evolving "canals" theory. In 2011 archaeologist Pierre Tallet found 30 different man-made caves in remote Egyptian hills, according to Smithsonian magazine. 
He eventually located the oldest papyrus rolls ever discovered — which were written by the builders of the Great Pyramid of Giza, describing a team of 200 workers moving limestone upriver. And in a 2017 documentary archaeologists were already reporting evidence of a waterway underneath the great Giza plateau. Slashdot reader Smonster found an alternate theory in this 2001 announcement from Caltech: Mory Gharib and his team raised a 6,900-pound, 15-foot obelisk into vertical position in the desert near Palmdale by using nothing more than a kite, a pulley system, and a support frame... One might ask whether there was and is sufficient wind in Egypt for a kite or a drag chute to fly. The answer is that steady winds of up to 30 miles-per-hour are not unusual in the areas where the pyramids and obelisks are found. "We're not Egyptologists," Gharib added. "We're mainly interested in determining whether there is a possibility that the Egyptians were aware of wind power, and whether they used it to make their lives better."


An International Space Station Leak Is Getting Worse, NASA Confirms

Slashdot - Sat, 28/09/2024 - 8:52 PM
Ars Technica reports NASA officials operating the International Space Station "are seriously concerned about a small Russian part of the station" — because it's leaking. The "PrK" tunnel connecting a larger module to a docking port "has been leaking since September 2019... In February of this year NASA identified an increase in the leak rate from less than 1 pound of atmosphere a day to 2.4 pounds a day, and in April this rate increased to 3.7 pounds a day." A new report, published Thursday by NASA's inspector general, provides details not previously released by the space agency that underline the severity of the problem... Despite years of investigation, neither Russian nor US officials have identified the underlying cause of the leak. "Although the root cause of the leak remains unknown, both agencies have narrowed their focus to internal and external welds," the report, signed by Deputy Inspector General George A. Scott, states. The plan to mitigate the risk is to keep the hatch on the Zvezda module leading to the PrK tunnel closed. Eventually, if the leak worsens further, this hatch might need to be closed permanently, reducing the number of Russian docking ports on the space station from four to three. Publicly, NASA has sought to minimize concerns about the cracking issue because it remains, to date, confined to the PrK tunnel and has not spread to other parts of the station. Nevertheless, Ars reported in June that the cracking issue has reached the highest level of concern on the space agency's 5x5 "risk matrix" to classify the likelihood and consequence of risks to spaceflight activities. The Russian leaks are now classified as a "5" both in terms of high likelihood and high consequence. "According to NASA, Roscosmos is confident they will be able to monitor and close the hatch to the Service Module prior to the leak rate reaching an untenable level. However, NASA and Roscosmos have not reached an agreement on the point at which the leak rate is untenable." 
The article adds that the Space Station should reach its end of life by either 2028 or 2030, and NASA "intends to transition its activities in low-Earth orbit onto private space stations," and has funded Axiom Space, Blue Origin, and Voyager Space for initial development. "There is general uncertainty as to whether any of the private space station operators will be ready in 2030."


Alcohol Can Increase Your Cancer Risk, Researchers Find

Slashdot - Sat, 28/09/2024 - 7:38 PM
The world's oldest and largest cancer research association "found excessive levels of alcohol consumption increase the risk for six different types of cancer," reports CBS News: "Some of this is happening through chronic inflammation. We also know that alcohol changes the microbiome, so those are the bacteria that live in your gut, and that can also increase the risk," Dr. Céline Gounder, CBS News medical contributor and editor-at-large for public health at KFF Health News, recently said on "CBS Mornings." But how much is too much when it comes to drinking? We asked experts what to know. "Excessive levels of alcohol" equates to about three or more drinks per day for women and four or more drinks per day for men, Gounder said... Other studies have shown, however, there is no "safe amount" of alcohol, Gounder said, particularly if you have underlying medical conditions. "If you don't drink, don't start drinking. If you do drink, really try to keep it within moderation," she said. Dr. Amy Commander, medical director of the Mass General Cancer Center specializing in breast cancer, told CBS News alcohol is the third leading modifiable risk factor that can increase cancer risk after accounting for cigarette smoking and excess body weight. [Other factors include physical inactivity — and diet]. "There really isn't a safe amount of alcohol for consumption," she said. "In fact, it's best to not drink alcohol at all, but that is obviously hard for many people. So I think it's really important for individuals to just be mindful of their alcohol consumption and certainly drink less." The article also includes an interesting statistic from the association's latest Cancer Progress Report: from 1991 to 2021 there's been a 33% reduction in overall cancer deaths in the U.S. That's 4.1 million lives saved — roughly 136,667 lives saved each year. "So that is hopeful," Commander said, adding that when it comes to preventing cancer, alcohol is just "one piece of the puzzle."
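The per-year figure quoted is just the 30-year total divided evenly across the 1991-2021 span:

```python
# Sanity check on the "lives saved" statistic from the Cancer Progress Report.
lives_saved_total = 4_100_000
years = 2021 - 1991            # 30-year span
per_year = lives_saved_total / years
print(round(per_year))         # matches the article's ~136,667 per year
```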


Octopuses Recorded Hunting With Fish - and Punching Those That Don't Cooperate

Slashdot - Sat, 28/09/2024 - 6:34 PM
Slashdot reader Hmmmmmm shared this report from NBC News: Octopuses don't always hunt alone — but their partners aren't who you'd expect. A new study shows that some members of the species Octopus cyanea maraud around the seafloor in hunting groups with fish, which sometimes include several fish species at once. The research, published in the journal Nature on Monday, even suggests that the famously intelligent animals organized the hunting groups' decisions, including what they should prey upon. What's more, the researchers witnessed the cephalopod species — often called the big blue or day octopus — punching companion fish, apparently to keep them on task and contributing to the collective effort... "If the group is very still and everyone is around the octopus, it starts punching, but if the group is moving along the habitat, this means that they're looking for prey, so the octopus is happy. It doesn't punch anyone..." [said Eduardo Sampaio, a postdoctoral researcher at the Max Planck Institute of Animal Behavior and the lead author of the research]. NBC News says the study is "an indication that at least one octopus species has characteristics and markers of intelligence that scientists once considered common only in vertebrates." Lead author Sampaio agrees that "We are very similar to these animals. In terms of sentience, they are at a very close level or closer than we think toward us."


A Cheap, Low-Tech Solution For Storing Carbon? Researchers Suggest Burying Wood

Slashdot - Sat, 28/09/2024 - 5:34 PM
Researchers propose a "deceptively simple" way to sequester carbon, reports the Washington Post: burying wood underground: Forests are Earth's lungs, sucking up six times more carbon dioxide (CO2) than the amount people pump into the atmosphere every year by burning coal and other fossil fuels. But much of that carbon quickly makes its way back into the air once insects, fungi and bacteria chew through leaves and other plant material. Even wood, the hardiest part of a tree, will succumb within a few decades to these decomposers. What if that decay could be delayed? Under the right conditions, tons of wood could be buried underground in wood vaults, locking in a portion of human-generated CO2 for potentially thousands of years. While other carbon-capture technologies rely on expensive and energy-intensive machines to extract CO2, the tools for putting wood underground are simple: a tractor and a backhoe. Finding the right conditions to impede decomposition over millennia is the tough part. To test the idea, [Ning Zeng, a University of Maryland climate scientist] worked with colleagues in Quebec to entomb wood under clay soil on a crop field about 30 miles east of Montreal... But when the scientists went digging in 2013, they uncovered something unexpected: A piece of wood already buried about 6½ feet underground. The craggy, waterlogged piece of eastern red cedar appeared remarkably well preserved. "I remember standing there looking at other people, thinking, 'Do we really need to continue this experiment?'" Zeng recalled. "Because here's the evidence...." Radiocarbon dating revealed the log to be 3,775 years old, give or take a few decades. Comparing the old chunk of wood to a freshly cut piece of cedar showed the ancient log lost less than 5 percent of its carbon over the millennia. The log was surrounded by stagnant, oxygen-deprived groundwater and covered by an impermeable layer of clay, preventing fungi and insects from consuming the wood. 
Lignin, a tough material that gives trees their strength, protected the wood's carbohydrates from subterranean bacteria... The researchers estimate buried wood can sequester up to 10 billion tons of CO2 per year, which is more than a quarter of annual global emissions from energy, according to the International Energy Agency.
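Both numbers in the story can be sanity-checked with back-of-the-envelope arithmetic; the ~37 Gt figure used below for annual energy-related CO2 emissions is my assumption, roughly in line with recent IEA estimates:

```python
# Fraction of energy-sector emissions that 10 Gt/yr of buried wood would offset.
buried_capacity_gt = 10
energy_emissions_gt = 37        # assumed, approx. recent IEA estimate
print(f"{buried_capacity_gt / energy_emissions_gt:.0%}")   # a bit over a quarter

# Implied annual carbon-loss rate of the buried cedar log:
# less than 5% lost over roughly 3,775 years.
fraction_remaining = 0.95
log_age_years = 3775
annual_loss = 1 - fraction_remaining ** (1 / log_age_years)
print(f"annual carbon loss: {annual_loss:.2e} per year")
```

The implied loss rate, on the order of one part in 100,000 per year, is what makes the "thousands of years" sequestration claim plausible under the right burial conditions.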


Open Source Initiative Announces Alliance with Nonprofit Certifications Group

Slashdot - Sat, 28/09/2024 - 4:34 PM
When it comes to professional certifications, the long-running nonprofit Linux Professional Institute boasts they've issued 250,000, making them the world's largest Linux/Open Source certification body. And last week they announced a "strategic alliance" with the Open Source Initiative (OSI), which will now be "participating in development and maintenance of these programs." The announcement points out that the Open Source Initiative already has many distinct responsibilities. Besides creating the Open Source Definition — and certifying that Open Source licenses meet the requirements of Open Source software — the OSI's mission is to "encourage the growth of Open Source communities around the world," which includes "educational and outreach efforts to spread Open Source principles." So the ultimate goal is "strengthening Linux and Open Source communities," according to the announcement, by "nurturing the growth of more highly skilled professionals," with the OSI encouraging more people to get certifications for employers. The Open Source movement "has never been in greater need of educated professionals," says OSI executive director Stefano Maffulli, "to drive the next leap forward in Open Source understanding, innovation, and adoption... "This partnership with LPI is one in a series of initiatives that will increase accessibility to the certifications and community participation that Open Source needs to thrive." And the LPI's executive director says it's their group's mission "to promote the use of open source by supporting the people who work with it. A closer relationship with OSI makes a valuable contribution to this effort." The move "reaffirms the commitment of LPI and OSI to enhance the adoption of Linux and Open Source technology," according to the announcement.


EPA Must Address Fluoridated Water's Risk To Children's IQs, US Judge Rules

Slashdot - Sat, 28/09/2024 - 3:00 PM
An anonymous reader quotes a report from Reuters: A federal judge in California has ordered the U.S. Environmental Protection Agency to strengthen regulations for fluoride in drinking water, saying the compound poses an unreasonable potential risk to children at levels that are currently typical nationwide. U.S. District Judge Edward Chen in San Francisco on Tuesday sided (PDF) with several advocacy groups, finding the current practice of adding fluoride to drinking water supplies to fight cavities presented unreasonable risks for children's developing brains. Chen said the advocacy groups had established during a non-jury trial that fluoride posed an unreasonable risk of harm sufficient to require a regulatory response by the EPA under the Toxic Substances Control Act. "The scientific literature in the record provides a high level of certainty that a hazard is present; fluoride is associated with reduced IQ," wrote Chen, an appointee of Democratic former President Barack Obama. But the judge stressed he was not concluding with certainty that fluoridated water endangered public health. [...] The EPA said it was reviewing the decision. "The court's historic decision should help pave the way towards better and safer fluoride standards for all," Michael Connett, a lawyer for the advocacy groups, said in a statement on Wednesday.


Jets From Black Holes Cause Stars To Explode, Hubble Reveals

Slashdot - Sat, 28/09/2024 - 12:00 PM
Black hole jets, which spew near-light-speed particle beams, can trigger nearby white dwarf stars to explode by igniting hydrogen layers on their surfaces. "We don't know what's going on, but it's just a very exciting finding," said Alec Lessing, an astrophysicist at Stanford University and lead author of a new study describing the phenomenon, in an ESA release. Gizmodo reports: In the recent work -- set to be published in The Astrophysical Journal and currently hosted on the preprint server arXiv -- the team studied 135 novae in the galaxy M87, which hosts a supermassive black hole of the same name at its core. The black hole is 6.5 billion times the mass of the Sun and was the first to be directly imaged, in work done in 2019 by the Event Horizon Telescope Collaboration. The team found twice as many novae erupting near M87's 3,000 light-year-long plasma jet as elsewhere in the galaxy. The Hubble Space Telescope also directly imaged M87's jet in luminous blue detail. Though it looks fairly calm in the image, the distance deceives you: this is a long tendril of superheated, near-light-speed particles, somehow triggering stars to erupt. Though previous researchers had suggested there was more activity in the jet's vicinity, new observations with Hubble's wider-view cameras revealed more of the novae brightening -- indicating they were blowing hydrogen up off their surface layers. "There's something that the jet is doing to the star systems that wander into the surrounding neighborhood. Maybe the jet somehow snowplows hydrogen fuel onto the white dwarfs, causing them to erupt more frequently," Lessing said in the release. "But it's not clear that it's a physical pushing. It could be the effect of the pressure of the light emanating from the jet. When you deliver hydrogen faster, you get eruptions faster." The new Hubble images of M87 are also the deepest yet taken, thanks to the newer cameras on Hubble.
Though the team wrote in the paper that there's between a 0.1% and 1% chance that their observations can be chalked up to randomness, most signs point to the jet somehow catalyzing the stellar eruptions.

Read more of this story at Slashdot.

next-20240927: linux-next

Kernel Linux - Pre, 27/09/2024 - 6:05pd
Version: next-20240927 (linux-next) Released: 2024-09-27

Tamnjong Larry Tabeh: Wrapping Up My Outreachy Internship

Planet GNOME - Sht, 24/08/2024 - 11:33pd

As my Outreachy internship comes to a close, I find myself reflecting on the journey with a sense of gratitude. What began with a mix of excitement and fear has turned into a rewarding experience that has shaped my skills, confidence, and passion for open-source contributions.

Overcoming Initial Fears

When I first started, I had some doubts and fears. Among them was whether I would fit into the open-source community, and I worried that my skills might not translate well to user research, an area I was eager to explore but had limited experience in. However, those fears quickly disappeared as I immersed myself in the supportive and inclusive GNOME community. I learned that the community values diverse contributions and that there is always room for growth and learning.

Highlights of the Internship

This internship has been a significant period of growth for me. I’ve developed a stronger understanding of user research methodologies, particularly the importance of crafting neutral questions to avoid bias. This was a concept I encountered early in the internship, and it has since become a cornerstone of my research approach. Additionally, I’ve sharpened my ability to analyze and interpret user feedback, which will be invaluable as I continue to pursue UI/UX design.

Beyond technical skills, I’ve also grown in terms of communication. Learning how to ask the right questions, listen actively, and engage with feedback constructively has been crucial. These skills have given me the confidence to interact more effectively within the open-source community.

Mentorship and Project Achievements

My mentors, Allan Day and Aryan Kaushik, played a critical role in my development throughout this internship. Their guidance, patience, and willingness to share their expertise made a great difference. They encouraged me to think critically about every aspect of the user research process, helping me grow not just as a researcher, but as a contributor to the open-source community.

As for my project, I'm proud of the progress I've made. I successfully conducted a series of user research exercises and gathered insights that will help improve the usability of some GNOME apps. However, my work isn't finished yet: I'm currently finalizing the usability research report. This report will be a resource for the GNOME design team, providing detailed findings and recommendations that will guide future improvements.

Reflecting on My Core Values

Throughout this journey, I’ve leaned heavily on the core values I outlined at the start of the internship: Adventure, Contribution, and Optimism. These values have been my compass, guiding me through challenges and reminding me of the importance of giving back to the community. The adventure of stepping into a new field, the joy of making meaningful contributions, and the optimism that every challenge is an opportunity for growth: these principles have been central to my experience.

As I wrap up my time with Outreachy, I feel both proud of what I’ve learned and excited for what lies ahead. I plan to continue my involvement in open-source projects. The skills and confidence I’ve gained during this internship will undoubtedly serve me well in future projects. Additionally, inspired by the mentorship I received, I hope to mentor others and help them navigate their own journeys in open-source contributions.

Finally, this internship has been a transformative experience that has expanded my skill set, deepened my passion for user-focused design, and strengthened my commitment to open-source work. I’m grateful for the opportunity and look forward to staying connected with the GNOME community as I continue to grow and contribute.

Bharat Tyagi: GsoC 2024: The Finale

Planet GNOME - Sht, 24/08/2024 - 12:55pd

Hey everybody, this is a follow-up to my previous posts. It’s been a while since I published any updates about my project.

Before I begin with the updates, I’d like to thank all of the people who helped me get this far into the project; it wouldn’t have been as engaging and enjoyable a ride without your support.

For someone reading this blog for the first time, I am Bharat Tyagi. I am a Computer Science major and I have been contributing to the GNOME Project (Workbench in particular) under Google Summer of Code this year.

Since the updates through Week 3 have already been written up in greater detail, I will only cover them briefly in this report and focus on the more recent work.

Project Overview

My project is subdivided into three parts:

  1. Port existing demos to Vala
  2. Redesign the Workbench Library and make QoL improvements
  3. Add code search into Workbench

Mentors

Sonny Piers, Andy Holmes

Part 1:

Workbench has a vast library of demos covering every use case for developers or users who would like to learn more about the GTK ecosystem and how its components connect and work together.

The demos are available in many programming languages, including JavaScript, Python, Vala, Rust, and now TypeScript (thanks to Vixalien for bringing this to Workbench :) ). The first part of my project was to port around 30 demos to Vala. This required me to learn many functions and signals, and how widgets use them to relay information. Since I ported over 30 demos, I’ll mention just a few that were fun to port and link the full list of ports below; if you’d like a more in-depth review of the process, the Week 3 update is where you should go!

Map (Libshumate)

Maps and CSS gradients don’t just look cool; their code was also fun to port. Support for maps is provided by libshumate, which sources the world view from OSM (OpenStreetMap). The demo supports dragging across the map, showing the location for the latitude and longitude entered, and placing markers at any point on the map.

CSS Gradients

CSS Gradients lets you create custom gradients and generates the corresponding CSS as the parameters are adjusted.

Session Monitor and Inhibit was another interesting demo to port. As the name suggests, it allows you to monitor session changes and inhibit the desktop from changing state, based on the current state of your application.

You could use the demo for some interesting warnings

The Ports

After all the ports were done, I moved on to making changes to the Library.

Part 2:

The second part of this project was to redesign the library and bring about quality-of-life improvements.

Sonny prepared a roadmap, including some changes that had already been made, to help break the project down into actionable targets.

Since we wanted to include some filtering based on both language and category, the first step was to move away from the current implementation of the demo rows based on Adw.PreferencesWindow and related widgets, which are easy to use but don’t provide the necessary flexibility.

So I replaced them with something more universal that would allow us to reimplement how the demos are populated. Adw.PreferencesWindow was replaced with Adw.Window, Adw.PreferencesRow with Adw.ActionRow, and Adw.PreferencesGroup and Adw.PreferencesPage with a simpler Gtk.ScrolledWindow containing nested Gtk.Box and Gtk.Label widgets.

This is how the library looked after these changes

Not much different, right? That's a good sign :)

With these out of the way, we could work on making the search more prominent. Since the search bar was activated by a search button at the top left of the Library, a few people were unaware that search was present at all. To resolve this, I moved the search bar inside the Library, making it directly accessible and quicker to use.

The subsequent code also needed new logic so that only the matching demos were visible. I used hash maps to store the currently visible categories and widgets, depending on the search term.

// set the widget to be visible if it exists in the map
category_map.forEach((category_widget, category_name) => {
  category_widget.visible = visible_categories.has(category_name);
});

Getting the search to function as expected was a relief, as it took a few iterations and changes to polish it enough to merge. I am happy to report that the search now works just as expected.

See the search in action!

With these minor improvements, we were ready to add filtering to the demos based on the language and categories.

The logic for filtering was inspired by Sonny’s previous approach to adding this feature (here, if you want to check it out). We have two dropdowns, one for the category and one for the language. Filtering is based on the input from all three widgets (the search entry, the language dropdown, and the category dropdown): a demo is displayed if and only if it matches all three.

// filtering logic
const is_match =
  category_match &&
  language_match &&
  (search_term === "" || search_match);
// set visibility if the term matches all three
entry_row.visible = is_match;
if (is_match) {
  results_found = true;
  // also add it to the visible categories map
  visible_categories.add(category_check[category_index]);
}
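The snippet above references values (category_match, search_term, and so on) computed earlier in the real code. As a rough, self-contained sketch of the same three-way filter — plain JavaScript with hypothetical demo data, not the actual Workbench code:

```javascript
// Hypothetical demo entries; the real Library builds these from its demo index.
const demos = [
  { title: "Map", category: "User Interface", languages: ["JavaScript", "Vala"] },
  { title: "CSS Gradients", category: "User Interface", languages: ["JavaScript"] },
  { title: "Session Monitor and Inhibit", category: "Platform", languages: ["Python"] },
];

// A demo is visible only if it matches the category dropdown,
// the language dropdown, and the search entry all at once.
function filterDemos(demos, { category = "Any", language = "Any", search = "" } = {}) {
  const term = search.toLowerCase();
  return demos.filter((demo) => {
    const category_match = category === "Any" || demo.category === category;
    const language_match = language === "Any" || demo.languages.includes(language);
    const search_match = term === "" || demo.title.toLowerCase().includes(term);
    return category_match && language_match && search_match;
  });
}
```

The "Any" sentinel for the dropdowns is an assumption made for this sketch; the real UI wires the dropdown selections straight into the match booleans shown above.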

This was super close to how we wanted the filtering to work. Here is the final result :D

It works!! If you’ve reached this far into the post, this cookie is for you

These are the commits for this part of the project for anyone curious

Part 3:

With the filtering for our Library complete, the third part of my project was to implement code search. Since we have a bunch of demos, storing and accessing search terms efficiently is a challenge. Sonny, Angelo, and I had a meeting to discuss code search, which formed the starting point for the feature.

Andy and I looked at a few options that could be used to implement this feature, mainly focusing on tools built for working with large amounts of data. TinySPARQL is one such engine, but it is better suited to indexing files and directories, which is not our goal. We need an API that can interact with an SQLite database and run text searches on it.

There are two major libraries under GNOME, libgom and libgda. libgom is an object-relational mapping library, which allows you to map database tables to GObjects and then run operations on those objects. This is, in hindsight, simpler than libgda, but it doesn't directly provide text-search functionality on its own like libgda does.

As of writing this article, I have ported a demo example that makes use of libgom and performs a simple text/ID-based search on a single table. This can be scaled to bigger databases like our Library itself, but it starts showing limitations when it comes to more advanced search functions.

Here is a screengrab of the demo, ported into Modern Gjs (GNOME JavaScript) :)

The example this demo is based on was written over 7 years ago

Now that we’ve seen the demo, let's have a look at the libgom magic that is happening in the background

First, we create a custom class that represents an object with properties id and url that we want to store in our table

const ItemClass = GObject.registerClass(
  {
    GTypeName: "Item",
    Properties: {
      id: GObject.ParamSpec.int(
        "id",
        "ID",
        "An ID",
        GObject.ParamFlags.READWRITE | GObject.ParamFlags.CONSTRUCT,
        0, // minimum
        GLib.MAXINT32, // maximum
        0, // default value
      ),
      url: GObject.ParamSpec.string(
        //similarly for url
      ),
    },
  },
  class Item extends Gom.Resource {},
);

We then initialize the database using Gom.Adapter, which also opens an SQLite database (for the simplicity of the demo, we’re only storing the contents in memory). A table is set up and mapped to the ItemClass that we previously created, with the id field set as the primary key.

Once all the preliminary setup is done, I added the logic for basic text searching using a handy filter function in Gom

const filter = Gom.Filter.new_glob(ItemClass, "url", `*${filter_text}*`);
const filtered_items = repository.find_sync(ItemClass, filter);

I use this to filter the elements, store them in filtered_items, and display them in the table itself. Voila!
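Gom performs the actual matching inside SQLite, but the glob semantics are easy to illustrate. Here is a rough plain-JavaScript model (not the real Gom internals) of what a pattern like `*${filter_text}*` matches, with hypothetical item data:

```javascript
// Convert a simple glob pattern ("*" = any run of characters, "?" = any
// single character) into an anchored regular expression.
function globToRegExp(glob) {
  // Escape regex metacharacters, then translate the glob wildcards.
  const escaped = glob.replace(/[.+^${}()|[\]\\]/g, "\\$&");
  return new RegExp("^" + escaped.replace(/\*/g, ".*").replace(/\?/g, ".") + "$");
}

// Keep only the items whose given field matches the glob,
// mirroring what Gom.Filter.new_glob + find_sync return.
function globFilter(items, field, glob) {
  const re = globToRegExp(glob);
  return items.filter((item) => re.test(item[field]));
}
```

The field name and data shape here are assumptions for the sketch; in the demo the repository query runs against the mapped table, not an in-memory array.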

The PR is approved but not yet merged, so it will take some time before it reaches your Workbench. But if you would like to tinker around and make improvements to it, this is the PR:

library: Add GOM demo by BharatAtbrat · Pull Request #200 · workbenchdev/demos

The plan right now is to implement search in the Library first using libgom, and then later move to libgda, which is more versatile and provides full-text search through SQL queries without having to route them through GObjects.

Acknowledgments and Learnings

I am very thankful for the insights and guidance of my mentors, Andy and Sonny. They were quick to jump in whenever I encountered a blocker. They are awesome people with a strong passion for what they do, and it’s been an honor to contribute however little I could. I strive to be at their level someday.

This summer has been fruitful and fun for me. The most important thing I learned is to be curious and always ask questions.

A big thank you to Diego and Lorenz for reviewing all of the ports and providing much-needed improvements!

For the readers: I am pleasantly surprised that you reached the end without scrolling away. Thank you so much for tuning in and taking the time to read through. I hope this was just as fun for you to read as it was for me to write :D

I’ll continue to stay in touch with everyone I have met and talked to during these few months because they are simply awesome!

Once again, thank you for sticking around.

Big ending reward

Matthew Garrett: What the fuck is an SBAT and why does everyone suddenly care

Planet GNOME - Enj, 22/08/2024 - 10:52pd
Short version: Secure Boot Advanced Targeting and if that's enough for you you can skip the rest you're welcome.

Long version: When UEFI Secure Boot was specified, everyone involved was, well, a touch naive. The basic security model of Secure Boot is that all the code that ends up running in a kernel-level privileged environment should be validated before execution - the firmware verifies the bootloader, the bootloader verifies the kernel, the kernel verifies any additional runtime loaded kernel code, and now we have a trusted environment to impose any other security policy we want. Obviously people might screw up, but the spec included a way to revoke any signed components that turned out not to be trustworthy: simply add the hash of the untrustworthy code to a variable, and then refuse to load anything with that hash even if it's signed with a trusted key.

Unfortunately, as it turns out, scale. Every Linux distribution that works in the Secure Boot ecosystem generates their own bootloader binaries, and each of them has a different hash. If there's a vulnerability identified in the source code for said bootloader, there's a large number of different binaries that need to be revoked. And, well, the storage available to store the variable containing all these hashes is limited. There's simply not enough space to add a new set of hashes every time it turns out that grub (a bootloader initially written for a simpler time when there was no boot security and which has several separate image parsers and also a font parser and look you know where this is going) has another mechanism for a hostile actor to cause it to execute arbitrary code, so another solution was needed.

And that solution is SBAT. The general concept behind SBAT is pretty straightforward. Every important component in the boot chain declares a security generation that's incorporated into the signed binary. When a vulnerability is identified and fixed, that generation is incremented. An update can then be pushed that defines a minimum generation - boot components will look at the next item in the chain, compare its name and generation number to the ones stored in a firmware variable, and decide whether or not to execute it based on that. Instead of having to revoke a large number of individual hashes, it becomes possible to push one update that simply says "Any version of grub with a security generation below this number is considered untrustworthy".
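The comparison itself is simple enough to sketch. The following is an illustrative JavaScript model, not shim's actual code, and it omits the extra CSV fields (vendor, package, version, URL) that real SBAT entries carry; a policy maps component names to minimum allowed generations, and a binary boots only if every generation it declares meets the policy:

```javascript
// Simplified SBAT policy: component name -> minimum security generation.
// Hypothetical numbers chosen for illustration.
const policy = new Map([
  ["sbat", 1],
  ["grub", 3], // any grub declaring a generation below 3 is refused
]);

// componentEntries: [name, generation] pairs declared in the binary's
// .sbat section. Components absent from the policy are allowed.
function sbatAllows(policy, componentEntries) {
  return componentEntries.every(([name, generation]) => {
    const minimum = policy.get(name);
    return minimum === undefined || generation >= minimum;
  });
}
```

A grub build that declares `["grub", 2]` against this policy is rejected without its individual hash ever being stored — which is exactly the storage win over per-binary hash revocation described above.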

So why is this suddenly relevant? SBAT was developed collaboratively between the Linux community and Microsoft, and Microsoft chose to push a Windows update that told systems not to trust versions of grub with a security generation below a certain level. This was because those versions of grub had genuine security vulnerabilities that would allow an attacker to compromise the Windows secure boot chain, and we've seen real world examples of malware wanting to do that (Black Lotus did so using a vulnerability in the Windows bootloader, but a vulnerability in grub would be just as viable for this). Viewed purely from a security perspective, this was a legitimate thing to want to do.

(An aside: the "Something has gone seriously wrong" message that's associated with people having a bad time as a result of this update? That's a message from shim, not any Microsoft code. Shim pays attention to SBAT updates in order to avoid violating the security assumptions made by other bootloaders on the system, so even though it was Microsoft that pushed the SBAT update, it's the Linux bootloader that refuses to run old versions of grub as a result. This is absolutely working as intended)

The problem we've ended up in is that several Linux distributions had not shipped versions of grub with a newer security generation, and so those versions of grub are assumed to be insecure (it's worth noting that grub is signed by individual distributions, not Microsoft, so there's no externally introduced lag here). Microsoft's stated intention was that Windows Update would only apply the SBAT update to systems that were Windows-only, and any dual-boot setups would instead be left vulnerable to attack until the installed distro updated its grub and shipped an SBAT update itself. Unfortunately, as is now obvious, that didn't work as intended and at least some dual-boot setups applied the update and that distribution's Shim refused to boot that distribution's grub.

What's the summary? Microsoft (understandably) didn't want it to be possible to attack Windows by using a vulnerable version of grub that could be tricked into executing arbitrary code and then introduce a bootkit into the Windows kernel during boot. Microsoft did this by pushing a Windows Update that updated the SBAT variable to indicate that known-vulnerable versions of grub shouldn't be allowed to boot on those systems. The distribution-provided Shim first-stage bootloader read this variable, read the SBAT section from the installed copy of grub, realised these conflicted, and refused to boot grub with the "Something has gone seriously wrong" message. This update was not supposed to apply to dual-boot systems, but did anyway. Basically:

1) Microsoft applied an update to systems where that update shouldn't have been applied
2) Some Linux distros failed to update their grub code and SBAT security generation when exploitable security vulnerabilities were identified in grub

The outcome is that some people can't boot their systems. I think there's plenty of blame here. Microsoft should have done more testing to ensure that dual-boot setups could be identified accurately. But also distributions shipping signed bootloaders should make sure that they're updating those and updating the security generation to match, because otherwise they're shipping a vector that can be used to attack other operating systems and that's kind of a violation of the social contract around all of this.

It's unfortunate that the victims here are largely end users faced with a system that suddenly refuses to boot the OS they want to boot. That should never happen. I don't think asking arbitrary end users whether they want secure boot updates is likely to result in good outcomes, and while I vaguely tend towards UEFI Secure Boot not being something that benefits most end users it's also a thing you really don't want to discover you want after the fact so I have sympathy for it being default on, so I do sympathise with Microsoft's choices here, other than the failed attempt to avoid the update on dual boot systems.

Anyway. I was extremely involved in the implementation of this for Linux back in 2012 and wrote the first prototype of Shim (which is now a massively better bootloader maintained by a wider set of people and that I haven't touched in years), so if you want to blame an individual please do feel free to blame me. This is something that shouldn't have happened, and unless you're either Microsoft or a Linux distribution it's not your fault. I'm sorry.

