Physics | Popular Science
https://www.popsci.com/category/physics/

The ISS’s latest delivery includes space plants and atmospheric lightning monitors
https://www.popsci.com/technology/iss-spacex-experiments-june-2023/ (Tue, 06 Jun 2023)

SpaceX's Dragon craft autonomously docked with the ISS early Tuesday morning.

Computer illustration of the ISS with docked spacecraft
A SpaceX Dragon cargo craft docked with 7,000 pounds of material. NASA

The International Space Station received roughly 7,000 pounds of supplies and scientific experiment materials early Tuesday morning following the successful autonomous docking of a SpaceX Dragon cargo spacecraft. According to NASA, the Dragon will remain attached to the ISS for about three weeks before returning to Earth with research and cargo. In addition to a pair of International Space Station Roll Out Solar Arrays (IROSAs) designed to expand the microgravity complex’s energy-production ability, ISS crew members are receiving materials for a host of new and ongoing experiments.

[Related: Microgravity tomatoes, yogurt bacteria, and plastic eating microbes are headed to the ISS.]

THOR, an aptly named investigation courtesy of the European Space Agency, will observe Earth’s thunderstorms from above the atmosphere to examine and document electrical activity. Researchers plan to specifically analyze the “inception, frequency, and altitude of recently discovered blue discharges,” i.e., lightning occurring within the upper atmosphere. Scientists still know very little about such phenomena’s effects on the planet’s climate and weather, but the upcoming observations could shed more light on the processes.

Meanwhile, researchers are hoping to stretch out telomeres in microgravity via Genes in Space-10, part of an ongoing national contest for students in grades 7 through 12 to develop their own biotech experiments. These genetic structures protect humans’ chromosomes, but generally shorten as people age. Observing telomere lengthening in ISS microgravity will give scientists a chance to determine whether the size change relates to stem cell proliferation. Results could help NASA and other researchers better understand effects on astronauts’ health during long-term missions, a particularly topical subject given the agency’s hopes for upcoming excursions to the moon and Mars.

The ISS will also deploy the Educational Space Science and Engineering CubeSat Experiment (ESSENCE), a tiny satellite housing a wide-angle camera capable of monitoring ice and permafrost thawing within the Canadian Arctic. This satellite comes alongside another student collaboration project called Iris, which is meant to observe geological samples’ weathering upon exposure to direct solar and background cosmic radiation.

[Related: The ISS’s latest arrivals: a 3D printer, seeds, and ovarian cow cells.]

Finally, a set of plants germinated from seeds that were first produced in space and subsequently traveled to Earth is returning to the ISS as part of Plant Habitat-03. According to NASA, plants often adapt to the environmental stresses imposed on them by spaceflight, but it’s still unclear whether these changes are passed on genetically to future generations. PH-03 will hopefully help scientists better understand these issues, which could prove critical to food generation during future space missions and exploration efforts.

Physicists take first-ever X-rays of single atoms
https://www.popsci.com/science/one-atom-x-ray-characterization/ (Fri, 02 Jun 2023)

This technique could help materials scientists control chemical reactions with better precision.

Argonne National Laboratory's Advanced Photon Source.
The particle accelerator at Argonne National Laboratory provided the intense X-rays needed to image single atoms. Argonne National Laboratory/Flickr

Perhaps you think of X-rays as the invisible, penetrating radiation that passes through your body to scan broken bones or teeth. When you get an X-ray image taken, your medical professionals are essentially using it to characterize your body.

Many scientists use X-rays in a very similar role—they just have different targets. Instead of scanning living things (which likely wouldn’t last long when exposed to high-powered research X-rays), they scan molecules or materials. In the past, scientists have X-rayed batches of atoms to understand what they are and predict how those atoms might fare in a particular chemical reaction.

But no one has been able to X-ray an individual atom—until now. Physicists used X-rays to study the insides of two different single atoms, in work published in the journal Nature on Wednesday.

“The X-ray…has been used in so many different ways,” says Saw-Wai Hla, a physicist at Ohio University and Argonne National Laboratory, and an author of the paper. “But it’s amazing what people don’t know. We cannot measure one atom—until now.”

Beyond atomic snapshots

Characterizing an atom doesn’t mean just snapping a picture of it; scientists first did that way back in 1955. Since the 1980s, atom-photographers’ tool of choice has been the scanning tunneling microscope (STM). The key to an STM is its bacterium-sized tip. As scientists move the tip a millionth of a hair’s breadth above the atom’s surface, electrons tunnel through the space in between, creating a current. The tip detects that current, and the microscope transforms it into an image. (An STM can drag and drop atoms, too. In 1989, two scientists at IBM became the first STM artists, spelling the letters “IBM” with xenon atoms.)
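
The article doesn’t spell out the math, but a standard textbook model makes the STM’s sensitivity concrete: the tunneling current falls off exponentially with the width of the gap between tip and sample. In the simplest one-dimensional picture (our gloss, not the researchers’),

```latex
I \;\propto\; V \, e^{-2\kappa d},
\qquad
\kappa = \frac{\sqrt{2 m \phi}}{\hbar}
```

where d is the tip–sample gap, V the bias voltage, m the electron mass, and φ the work function of the surface. For typical work functions of a few electron volts, κ is roughly 1 per ångström, so the current changes by about an order of magnitude for every ångström the tip moves—which is why the instrument can pick out individual atoms.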

But actually characterizing an atom—scanning the lone object, sorting it by its element, decoding its properties, understanding how it will behave in chemical reactions—is a far more complex endeavor. 

X-rays allow scientists to characterize larger batches of atoms. When X-rays strike atoms, they transfer energy into those atoms’ electrons, exciting them. All good things must end, of course, and when those electrons settle back down, they release their newfound energy as, again, X-rays. Scientists can analyze that fresh radiation to deduce the properties of the atoms that emitted it.
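
The element-by-element fingerprint in that re-emitted radiation comes from quantized energy levels: each photon carries the energy difference between two electron levels, and those differences are set by the atomic number Z. As a rough rule (Moseley’s law, standard physics background rather than anything from the paper),

```latex
E_{\text{photon}} = E_{\text{upper}} - E_{\text{lower}},
\qquad
E_{K\alpha} \;\approx\; 10.2\ \text{eV} \times (Z-1)^{2}
```

For iron (Z = 26), that estimate gives about 6.4 keV, matching iron’s measured K-alpha line—so a detector that sees 6.4 keV photons knows it is looking at iron.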

[Related: How scientists managed to store information in a single atom]

That’s a fantastic tool, and it’s been a boon to scientists who need to tinker with molecular structures. X-ray spectroscopy, as the process is called, helped create COVID-19 vaccines, for instance. The technique allows scientists to study a group of atoms—identifying which elements are in a batch and what their electron configurations are in general—but it doesn’t enable scientists to match them up to individual atoms. “We might be able to see, ‘Oh, there’s a whole team of soccer players,’ and ‘There’s a whole team of dancers,’ but we weren’t able to identify a single soccer player or a single dancer,” says Volker Rose, a physicist at Argonne National Laboratory and another of the authors.

Peering with high-power beams

You can’t create a molecule-crunching machine with the X-ray source at your dentist’s office. To reach the technique’s full potential, you need a beam that is far brighter, far more powerful. You’ve got to go to a particle accelerator known as a synchrotron.

The device the Nature authors used, at Argonne National Laboratory, zips electrons around a two-thirds-of-a-mile ring in the plains of Illinois. Rather than crashing particles into each other, however, a synchrotron sends its high-speed electrons through an undulating magnetic gauntlet. As the electrons pass through, they unleash much of their energy as an X-ray beam.
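
The wavelength of the light that comes out is governed by a standard accelerator-physics relation for undulators (general background, not specific to this experiment):

```latex
\lambda \;=\; \frac{\lambda_u}{2\gamma^{2}}\left(1 + \frac{K^{2}}{2}\right)
```

where λ_u is the spacing of the alternating magnets, γ is the electrons’ relativistic Lorentz factor, and K is a dimensionless deflection parameter of order one. Because multi-GeV electrons have γ in the tens of thousands, the 2γ² factor compresses a centimeter-scale magnet period into sub-nanometer, hard-X-ray wavelengths.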

A diagram showing X-rays illuminating a single iron atom (the red ball marked Fe), which provides elemental and chemical information when the tip detects excited electrons. Saw-Wai Hla

The authors combined the power of such an X-ray beam with the precision of an STM. In this case, the X-rays energized the atom’s electrons. The STM, however, pulled some of the electrons out, giving scientists a far closer look. Scientists have given this process a name that wouldn’t feel out of place in a PlayStation 1 snowboarding game: synchrotron X-ray scanning tunneling microscopy (SX-STM).

[Related: How neutral atoms could help power next-gen quantum computers]

Combining X-rays and STM isn’t so simple. It takes more than technical tinkering: they’re two separate technologies used by two largely separate communities of scientists. Getting them to work together took years of work.

Using SX-STM, the authors successfully detected the electron arrangement within two different atoms: one of iron, and one of terbium, a rare-earth element (number 65) that’s often used in electronic devices containing magnets as well as in green fluorescent lamps. “That’s totally new, and wasn’t possible before,” says Rose.

The scientists believe that their technique can find use in a broad array of fields. Quantum computers can store information in atoms’ electron states; researchers could use this technique to read them. If the technique catches on, materials scientists might be able to control chemical reactions with far greater precision.

Hla believes that SX-STM characterization can build upon the work that X-ray science already does. “The X-ray has changed many lives in our civilization,” he says. For instance, knowing what specific atoms do is critical to creating better materials and to studying proteins, perhaps for future immunizations. 

Now that Hla and his colleagues have proven it’s possible to examine one or two atoms at a time, he says the road is clear for scientists to characterize whole batches of them at once. “If you can detect one atom,” Hla says, “you can detect 10 atoms and 20 atoms.”

Danish painters used beer to create masterpieces, but not the way you think
https://www.popsci.com/science/beer-byproducts-danish-art/ (Thu, 25 May 2023)

Nineteenth-century craftspeople made do with what they had. In Denmark, they had beer leftovers.

C.W. Eckersberg's painting "The 84-Gun Danish Warship Dronning Marie in the Sound” contains beer byproducts in its canvas primer.
C.W. Eckersberg's painting "The 84-Gun Danish Warship Dronning Marie in the Sound” contains beer byproducts in its canvas primer. Statens Museum for Kunst

Behind a beautiful oil-on-canvas painting is, well, its canvas. To most art museum visitors, that fabric might be no more than an afterthought. But the canvas and its chemical composition are tremendously important to scientists and conservators who devote their lives to studying and caring for works of art.

When they examine a canvas, sometimes those art specialists are surprised by what they find. For instance, few conservators expected a 200-year-old canvas to contain proteins from yeast and fermented grains: the fingerprints of beer-brewing.

But those very proteins sit in the canvases of paintings from early 19th century Denmark. In a paper published on Wednesday in the journal Science Advances, researchers from across Europe say that Danes may have applied brewing byproducts as a base layer to a canvas before painters had their way with it.

“To find these yeast products—it’s not something that I have come across before,” says Cecil Krarup Andersen, an art conservator at the Royal Danish Academy, and one of the authors. “For us also, as conservators, it was a big surprise.”

The authors did not set out in search of brewing proteins. Instead, they sought traces of animal-based glue, which they knew was used to prepare canvases. Conservators care about animal glue since it reacts poorly with humid air, potentially cracking and deforming paintings over the decades.

[Related: 5 essential apps for brewing your own beer]

The authors chose 10 paintings created between 1828 and 1837 by two Danes: Christoffer Wilhelm Eckersberg, the so-called “Father of Danish Painting,” fond of painting ships and sea life; and Christen Schiellerup Købke, one of Eckersberg’s students at the Royal Danish Academy of Fine Arts, who went on to become a distinguished artist in his own right.

The authors tested the paintings with protein mass spectrometry: a technique that breaks a sample down and catalogs the proteins within. The technique isn’t selective, meaning that the experimenters could find substances they weren’t seeking.

Mass spectrometry destroys its sample. Fortunately, conservators in the 1960s had trimmed the paintings’ edges during a preservation treatment. The National Gallery of Denmark—the country’s largest art museum—had preserved the scraps, allowing the authors to test them without actually touching the original paintings.

Scraps from eight of the 10 paintings contained structural proteins from cows, sheep, or goats, whose body parts might have been reduced into animal glue. But seven paintings also contained something else: proteins from baker’s yeast and from fermented grains—wheat, barley, buckwheat, rye.

[Related: Classic Mexican art stood the test of time with the help of this secret ingredient]

That yeast and those grains feature in the process of brewing beer. While beer does occasionally turn up in recipes for 19th-century house paint, it’s alien to works of fine art.

“We weren’t even sure what they meant,” says study author Fabiana Di Gianvincenzo, a biochemist at the University of Copenhagen in Denmark and the University of Ljubljana in Slovenia.

The authors considered the possibility that stray proteins might have contaminated the canvas from the air. But three of the paintings contained virtually no brewer’s proteins at all, while the other seven contained too much protein for contamination to reasonably explain.

“It was not something random,” says Enrico Cappellini, a biochemist at the University of Copenhagen in Denmark, and another of the authors.

To learn more, the authors whipped up some mock substances containing those ingredients: recipes that 19th-century Danes could have created. The yeast proved an excellent emulsifier, producing a smooth, glue-like paste. Applied to a canvas, that paste would form an even base layer that painters could beautify with oil colors.

A mock primer made in the laboratory.
Making a paint paste in the lab, 19th-century style. Mikkel Scharff

Eckersberg, Købke, and their fellow painters likely didn’t interact with the beer. The Royal Danish Academy of Fine Arts provided its professors and students with pre-prepared art materials. Curiously, the paintings that contained grain proteins all came from earlier in the time period, between 1827 and 1833. Købke then left the Academy and produced the three paintings that didn’t contain grain proteins, suggesting that his new source of canvases didn’t use the same preparation method.

The authors aren’t certain how widespread the brewer’s method might have been. If the technique was localized to early 19th century Denmark or even to the Academy, art historians today could use the knowledge to authenticate a painting from that era, which historians sometimes call the Danish Golden Age. 

This was a time of blossoming in literature, in architecture, in sculpture, and, indeed, in painting. In art historians’ reckoning, it was when Denmark developed its own unique painting tradition, which vividly depicted Norse mythology and the Danish countryside. The authors’ work lets them glimpse lost details of the society under that Golden Age. “Beer is so important in Danish culture,” says Cappellini. “Finding it literally at the base of the artwork that defined the origin of modern painting in Denmark…is very meaningful.” 

[Related: The world’s art is under attack—by microbes]

The work also demonstrates how craftspeople repurposed the materials they had. “Denmark was a very poor country at the time, so everything was reused,” says Andersen. “When you have scraps of something, you could boil it to glue, or you could use it in the grounds, or use it for canvas, to paint on.”

The authors are far from done. For one, they want to study their mock substances as they age. Combing through the historical record—artists’ diaries, letters, books, and other period documents—might also reveal tantalizing details of who used the yeast and how. Their work, then, makes for a rather colorful crossover of science with art conservation. “That has been the beauty of this study,” says Andersen. “We needed each other to get to this result.”

This story has been updated to clarify the source of canvases for Købke’s later works.

How ‘The Legend of Zelda: Tears of the Kingdom’ plays with the rules of physics
https://www.popsci.com/science/legend-of-zelda-physics/ (Sun, 14 May 2023)

Link's world is based on our reality, but its natural laws get bent for magic and fun.

Link falls based on a version of physics in Tears of the Kingdom.
Gravity plays a big role in the new 'Zelda' game, as Link soars and jumps from great heights. Nintendo

Video games are back in a big way. The Legend of Zelda: Tears of the Kingdom is one of the most anticipated games this year, sure to draw in hardcore players and casual fans alike. From the trailers and teasers, you can see how Tears of the Kingdom will feature ways to make flying machines and manipulate time. It’s a natural question, then, to wonder how Zelda’s laws of nature line up with real-world physics.

In one of the trailers, Link takes flight on a paraglider, dropping from any height and exploring ravines and chasms at speeds that could kill a human in real life. Like its predecessor, Breath of the Wild, Tears of the Kingdom offers an immersive, somewhat realistic world that still incorporates plenty of magical and superhuman abilities. Game developers say that this bending of familiar rules adds to the game’s overall fun and to the player’s enjoyment.

Charles Pratt, assistant arts professor at the NYU Game Center, who has used physics when developing games, says the fantastical elements of Zelda still work because they “follow people’s intuitions about physics” and use players’ understanding of real-life rules as a jumping-off point.

“Gravity isn’t exactly gravity, right?” Pratt says. “Gravity gets applied in certain cases, and not in others to make it feel like you’re bounding through the air. Because jumping is really fun.”
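
To make Pratt’s point concrete, here’s a minimal sketch of what “gravity applied in certain cases and not others” can look like inside a game loop. This is illustrative Python, not Nintendo’s actual implementation; the Player class, constants, and flags are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Player:
    y: float = 100.0           # height above the ground
    vy: float = 0.0            # vertical velocity
    on_ground: bool = False
    is_climbing: bool = False
    is_hovering: bool = False  # e.g., a scripted "magic" move

GRAVITY = -30.0            # units/s^2 -- tuned for feel, not physical accuracy
TERMINAL_VELOCITY = -20.0  # clamp so long falls stay readable and fun

def step(player: Player, dt: float) -> None:
    """Advance one frame, applying gravity only when it serves the design."""
    if not (player.on_ground or player.is_climbing or player.is_hovering):
        player.vy += GRAVITY * dt
        player.vy = max(player.vy, TERMINAL_VELOCITY)
    player.y += player.vy * dt

# Simulate one second of free fall at 60 frames per second:
link = Player()
for _ in range(60):
    step(link, 1 / 60)
print(f"height after 1 s: {link.y:.1f}")
```

Flip `is_hovering` to `True` and gravity simply stops applying—the kind of selective rule-bending that makes bounding through the air feel good.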

Breath of the Wild, which came out in 2017, was a smash hit. It sold 29.81 million copies and shaped a whole generation of video games, pushing developers to make more open-world titles—and arguably influencing Pokémon to open up its borders and feature a wild area for players to ride around and explore.

Aspects of Link’s world line up with the Earth that we inhabit and recognize, and, like our reality, it follows the basic rules of physics. Its predecessor, Breath of the Wild, follows the same natural laws as Tears. The general force of gravity still exists. Projectiles fly on curved trajectories and need to be aimed with skill to hit their targets. Items lose durability and break over time.

[Related: Marvel’s Spider-Man PS4 game twists physics to make web-swinging super fun]

Breath of the Wild also played around with the elements. Metal objects conduct electricity and attract lightning during a storm. Link takes damage when entering a cold environment without wearing the right clothes. And setting enemies on fire deals them damage over time and can set the nearby area ablaze.

“If you drop a stone, it falls, and if you drop a piece of wood in water, it floats, but unlike the real world, it looks like you will have access to jetpacks and magical objects that our world doesn’t,” says Lasse Astrup, lead designer on the new Apple Arcade game What the Car?, which features its own unusual physics: players can drive cars that have multiple human legs or propel into the sky as rockets. Astrup, who is no stranger to exploring physics in video games, says he plans to buy the new Zelda game, spend days playing it—and then see what kinds of creations other gamers come up with.

Weird physics—whether in Zelda games or one of Astrup’s creations—adds more fun to the titles, Astrup says. “You never have full control over what happens in the scene or which way a thing flies when it explodes,” he says. “This allows for emergent gameplay where players can explore and find their own solutions.”


Tears of the Kingdom defies our laws of physics in other ways, too. Link can stand on a fast-accelerating platform without falling over, while a regular human in our world would have been knocked off their feet by the accelerating force.

Lindley Winslow, an experimental nuclear and particle physicist at the Massachusetts Institute of Technology, says that, based on the trailer, “It continues to be a beautifully coded game. The details are what make it compelling, the movement of the grass, the air moving off the paraglider.”

[Related: Assassin’s Creed Valhalla avoids Dark Age cliches thanks to intense research (and Google Earth)]

Winslow adds, “The power comes from the fact that the physics are correct until it is fantastical. This allows us to immerse ourselves in the world and believe in the fantastical. My favorite is the floating islands.” Magic also exists in Tears of the Kingdom: Link can use his extraordinary powers to stop time, use magnets, and lift extremely heavy objects.

Alex Rose, an indie game developer who is also a physics programmer and lecturer at the University of Applied Science Vienna, points out that there’s plenty of accurate physics in Tears of the Kingdom, too. Link’s terminal velocity drops after he spreads out his body, and he slows even further when he deploys his paraglider.
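
That detail tracks real aerodynamics. A falling body stops accelerating when drag balances gravity, and the resulting terminal velocity (a standard formula, applied here by us rather than by the game’s developers) is

```latex
v_t \;=\; \sqrt{\frac{2 m g}{\rho \, C_d \, A}}
```

where m is the faller’s mass, g gravitational acceleration, ρ the air density, C_d the drag coefficient, and A the cross-sectional area. Spreading out increases C_d A, cutting v_t—real skydivers slow from roughly 90 m/s in a head-down dive to about 55 m/s belly-down—and deploying a canopy slows the fall far more.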

Tears of the Kingdom introduces a system to concoct fanciful machines: platforms lifted into the air by balloons and jetpack-like rockets affixed to Link’s shield. In our world, a person riding a platform would get sent flying by inertia when the vehicle turned a corner quickly. But Link, being a video game character, is able to stay on the platform, even during quick turns, Rose notes. He’s also somehow able to sling around an arm rocket without losing his limbs.

“In the real world, even the best gymnast would be sent flying to the ground,” Rose says, “like an old firecracker stunt from a certain MTV show.”

How a 14-year-old kid became the youngest person to achieve nuclear fusion
https://www.popsci.com/science/article/2012-02/boy-who-played-fusion/ (Mon, 18 Mar 2019)

Taylor Wilson always dreamed of creating a star. Then he became one.

Taylor Wilson, the boy who built a nuclear reactor as a kid, in his kitchen with his family
Taylor Wilson moved to suburban Reno, Nevada, with his parents, Kenneth and Tiffany, and his brother Joey to attend Davidson Academy, a school for gifted students. Bryce Duffy

This story from the March 2012 issue of Popular Science covered the nuclear fusion experiments of Taylor Wilson, who was then 16. Wilson is currently 28 and a nuclear physicist who’s collaborated with multiple US agencies on developing reactors and defense technology. The author of this profile, Tom Clynes, went on to write a book about Wilson titled The Boy Who Played With Fusion.

“PROPULSION,” the nine-year-old says as he leads his dad through the gates of the U.S. Space and Rocket Center in Huntsville, Alabama. “I just want to see the propulsion stuff.”

A young woman guides their group toward a full-scale replica of the massive Saturn V rocket that brought America to the moon. As they duck under the exhaust nozzles, Kenneth Wilson glances at his awestruck boy and feels his burden beginning to lighten. For a few minutes, at least, someone else will feed his son’s boundless appetite for knowledge.

Then Taylor raises his hand, not with a question but an answer. He knows what makes this thing, the biggest rocket ever launched, go up.

And he wants—no, he obviously needs—to tell everyone about it, about how speed relates to exhaust velocity and dynamic mass, about payload ratios, about the pros and cons of liquid versus solid fuel. The tour guide takes a step back, yielding the floor to this slender kid with a deep-Arkansas drawl, pouring out a torrent of Ph.D.-level concepts as if there might not be enough seconds in the day to blurt it all out. The other adults take a step back too, perhaps jolted off balance by the incongruities of age and audacity, intelligence and exuberance.

As the guide runs off to fetch the center’s director—You gotta see this kid!—Kenneth feels the weight coming down on him again. What he doesn’t understand just yet is that he will come to look back on these days as the uncomplicated ones, when his scary-smart son was into simple things, like rocket science.

This is before Taylor would transform the family’s garage into a mysterious, glow-in-the-dark cache of rocks and metals and liquids with unimaginable powers. Before he would conceive, in a series of unlikely epiphanies, new ways to use neutrons to confront some of the biggest challenges of our time: cancer and nuclear terrorism. Before he would build a reactor that could hurl atoms together in a 500-million-degree plasma core—becoming, at 14, the youngest individual on Earth to achieve nuclear fusion.

WHEN I MEET Taylor Wilson, he is 16 and busy—far too busy, he says, to pursue a driver’s license. And so he rides shotgun as his father zigzags the family’s Land Rover up a steep trail in the Virginia Mountains north of Reno, Nevada, where they’ve come to prospect for uranium.

From the backseat, I can see Taylor’s gull-like profile, his forehead plunging from under his sandy blond bangs and continuing, in an almost unwavering line, along his prominent nose. His thinness gives him a wraithlike appearance, but when he’s lit up about something (as he is most waking moments), he does not seem frail. He has spent the past hour—the past few days, really—talking, analyzing, and breathlessly evangelizing about nuclear energy. We’ve gone back to the big bang and forward to mutually assured destruction and nuclear winter. In between are fission and fusion, Einstein and Oppenheimer, Chernobyl and Fukushima, matter and antimatter.

“Where does it come from?” Kenneth and his wife, Tiffany, have asked themselves many times. Kenneth is a Coca-Cola bottler, a skier, an ex-football player. Tiffany is a yoga instructor. “Neither of us knows a dang thing about science,” Kenneth says.

Almost from the beginning, it was clear that the older of the Wilsons’ two sons would be a difficult child to keep on the ground. It started with his first, and most pedestrian, interest: construction. As a toddler in Texarkana, the family’s hometown, Taylor wanted nothing to do with toys. He played with real traffic cones, real barricades. At age four, he donned a fluorescent orange vest and hard hat and stood in front of the house, directing traffic. For his fifth birthday, he said, he wanted a crane. But when his parents brought him to a toy store, the boy saw it as an act of provocation. “No,” he yelled, stomping his foot. “I want a real one.”

This is about the time any other father might have put his own foot down. But Kenneth called a friend who owns a construction company, and on Taylor’s birthday a six-ton crane pulled up to the party. The kids sat on the operator’s lap and took turns at the controls, guiding the boom as it swung above the rooftops on Northern Hills Drive.

To the assembled parents, dressed in hard hats, the Wilsons’ parenting style must have appeared curiously indulgent. In a few years, as Taylor began to get into some supremely dangerous stuff, it would seem perilously laissez-faire. But their approach to child rearing is, in fact, uncommonly intentional. “We want to help our children figure out who they are,” Kenneth says, “and then do everything we can to help them nurture that.”

At 10, Taylor hung a periodic table of the elements in his room. Within a week he memorized all the atomic numbers, masses and melting points. At the family’s Thanksgiving gathering, the boy appeared wearing a monogrammed lab coat and armed with a handful of medical lancets. He announced that he’d be drawing blood from everyone, for “comparative genetic experiments” in the laboratory he had set up in his maternal grandmother’s garage. Each member of the extended family duly offered a finger to be pricked.

The next summer, Taylor invited everyone out to the backyard, where he dramatically held up a pill bottle packed with a mixture of sugar and stump remover (potassium nitrate) that he’d discovered in the garage. He set the bottle down and, with a showman’s flourish, ignited the fuse that poked out of the top. What happened next was not the firecracker’s bang everyone expected, but a thunderous blast that brought panicked neighbors running from their houses. Looking up, they watched as a small mushroom cloud rose, unsettlingly, over the Wilsons’ yard.

For his 11th birthday, Taylor’s grandmother took him to Books-A-Million, where he picked out The Radioactive Boy Scout, by Ken Silverstein. The book told the disquieting tale of David Hahn, a Michigan teenager who, in the mid-1990s, attempted to build a breeder reactor in a backyard shed. Taylor was so excited by the book that he read much of it aloud: the boy raiding smoke detectors for radioactive americium . . . the cobbled-together reactor . . . the Superfund team in hazmat suits hauling away the family’s contaminated belongings. Kenneth and Tiffany heard Hahn’s story as a cautionary tale. But Taylor, who had recently taken a particular interest in the bottom two rows of the periodic table—the highly radioactive elements—read it as a challenge. “Know what?” he said. “The things that kid was trying to do, I’m pretty sure I can actually do them.”

Taylor Wilson in a red sweater looking to the right of the camera
Both Wilson boys went to a science and mathematics school for gifted students. Bryce Duffy

A rational society would know what to do with a kid like Taylor Wilson, especially now that America’s technical leadership is slipping and scientific talent increasingly has to be imported. But by the time Taylor was 12, both he and his brother, Joey, who is three years younger and gifted in mathematics, had moved far beyond their school’s (and parents’) ability to meaningfully teach them. Both boys were spending most of their school days on autopilot, their minds wandering away from course work they’d long outgrown.

David Hahn had been bored too—and, like Taylor, smart enough to be dangerous. But here is where the two stories begin to diverge. When Hahn’s parents forbade his atomic endeavors, the angry teenager pressed on in secret. But Kenneth and Tiffany resisted their impulse to steer Taylor toward more benign pursuits. That can’t be easy when a child with a demonstrated talent and fondness for blowing things up proposes to dabble in nukes.

Kenneth and Tiffany agreed to let Taylor assemble a “survey of everyday radioactive materials” for his school’s science fair. Kenneth borrowed a Geiger counter from a friend at Texarkana’s emergency-management agency. Over the next few weekends, he and Tiffany shuttled Taylor around to nearby antique stores, where he pointed the clicking detector at old radium-dial alarm clocks, thorium lantern mantles and uranium-glazed Fiesta plates. Taylor spent his allowance money on a radioactive dining set.

Drawn in by what he calls “the surprise properties” of radioactive materials, he wanted to know more. How can a speck of metal the size of a grain of salt put out such tremendous amounts of energy? Why do certain rocks expose film? Why does one isotope decay away in a millionth of a second while another has a half-life of two million years?

As Taylor began to wrap his head around the mind-blowing mysteries at the base of all matter, he could see that atoms, so small but potentially so powerful, offered a lifetime’s worth of secrets to unlock. Whereas Hahn’s resources had been limited, Taylor found that there was almost no end to the information he could find on the Internet, or to the oddities that he could purchase and store in the garage.

On top of tables crowded with chemicals and microscopes and germicidal black lights, an expanding array of nuclear fuel pellets, chunks of uranium and “pigs” (lead-lined containers) began to appear. When his parents pressed him about safety, Taylor responded in the convoluted jargon of inverse-square laws and distance intensities, time doses and roentgen submultiples. With his newfound command of these concepts, he assured them, he could master the furtive energy sneaking away from those rocks and metals and liquids—a strange and ever-multiplying cache that literally cast a glow into the corners of the garage.

Kenneth asked a nuclear-pharmacist friend to come over to check on Taylor’s safety practices. As far as he could tell, the friend said, the boy was getting it right. But he warned that radiation works in quick and complex ways. By the time Taylor learned from a mistake, it might be too late.

Lead pigs and glazed plates were only the beginning. Soon Taylor was getting into more esoteric “naughties”—radium quack cures, depleted uranium, radio-luminescent materials—and collecting mysterious machines, such as the mass spectrometer given to him by a former astronaut in Houston. As visions of Chernobyl haunted his parents, Taylor tried to reassure them. “I’m the responsible radioactive boy scout,” he told them. “I know what I’m doing.”

One afternoon, Tiffany ducked her head out of the door to the garage and spotted Taylor, in his canary yellow nuclear-technician’s coveralls, watching a pool of liquid spreading across the concrete floor. “Tay, it’s time for supper.”
“I think I’m going to have to clean this up first.”
“That’s not the stuff you said would kill us if it broke open, is it?”
“I don’t think so,” he said. “Not instantly.”

THAT SUMMER, Kenneth’s daughter from a previous marriage, Ashlee, then a college student, came to live with the Wilsons. “The explosions in the backyard were getting to be a bit much,” she told me, shortly before my own visit to the family’s home. “I could see everyone getting frustrated. They’d say something and Taylor would argue back, and his argument would be legitimate. He knows how to out-think you. I was saying, ‘You guys need to be parents. He’s ruling the roost.’ “

“What she didn’t understand,” Kenneth says, “is that we didn’t have a choice. Taylor doesn’t understand the meaning of ‘can’t.’ “

“And when he does,” Tiffany adds, “he doesn’t listen.”

“Looking back, I can see that,” Ashlee concedes. “I mean, you can tell Taylor that the world doesn’t revolve around him. But he doesn’t really get that. He’s not being selfish, it’s just that there’s so much going on in his head.”

Tiffany, for her part, could have done with less drama. She had just lost her sister, her only sibling. And her mother’s cancer had recently come out of remission. “Those were some tough times,” Taylor tells me one day, as he uses his mom’s gardening trowel to mix up a batch of yellowcake (the partially processed uranium that’s the stuff of WMD infamy) in a five-gallon bucket. “But as bad as it was with Grandma dying and all, that urine sure was something.”

Taylor looks sheepish. He knows this is weird. “After her PET scan she let me have a sample. It was so hot I had to keep it in a lead pig.

“The other thing is . . .” He pauses, unsure whether to continue but, being Taylor, unable to stop himself. “She had lung cancer, and she’d cough up little bits of tumor for me to dissect. Some people might think that’s gross, but I found it scientifically very interesting.”

What no one understood, at least not at first, was that as his grandmother was withering, Taylor was growing, moving beyond mere self-centeredness. The world that he saw revolving around him, the boy was coming to believe, was one that he could actually change.

The problem, as he saw it, was that isotopes for diagnosing and treating cancer are extremely short-lived. They need to be, so they can get in and kill the targeted tumors and then decay away quickly, sparing healthy cells. Delivering them safely and on time requires expensive handling—including, often, delivery by private jet. But what if there were a way to make those medical isotopes at or near the patients? How many more people could they reach, and how much earlier could they reach them? How many more people like his grandmother could be saved?

As Taylor stirred the toxic urine sample, holding the clicking Geiger counter over it, inspiration took hold. He peered into the swirling yellow center, and the answer shone up at him, bright as the sun. In fact, it was the sun—or, more precisely, nuclear fusion, the process that powers the sun, whose energy release is described by Einstein’s E=mc². By harnessing fusion—the moment when atomic nuclei collide and fuse together, releasing energy in the process—Taylor could produce the high-energy neutrons he would need to irradiate materials for medical isotopes. Instead of creating those isotopes in multimillion-dollar cyclotrons and then rushing them to patients, what if he could build a fusion reactor small enough, cheap enough and safe enough to produce isotopes as needed, in every hospital in the world?
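
Hobbyist fusors like the one Taylor envisioned typically fuse deuterium, and the arithmetic of that reaction shows where the useful neutrons come from. In the neutron-producing branch of deuterium–deuterium fusion (standard nuclear data, filled in here for context):

```latex
{}^{2}\mathrm{H} + {}^{2}\mathrm{H} \;\longrightarrow\; {}^{3}\mathrm{He}\,(0.82\ \mathrm{MeV}) + n\,(2.45\ \mathrm{MeV})
```

The reaction releases 3.27 MeV in total, and conservation of momentum hands the lighter neutron the larger share—2.45 MeV. Those fast neutrons are what a reactor-in-a-hospital could fire into target material to transmute it into short-lived medical isotopes.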

At that point, only 10 individuals had managed to build working fusion reactors. Taylor contacted one of them, Carl Willis, then a 26-year-old Ph.D. candidate living in Albuquerque, and the two hit it off. But Willis, like the other successful fusioneers, had an advanced degree and access to a high-tech lab and precision equipment. How could a middle-school kid living on the Texas/Arkansas border ever hope to make his own star?

Taylor Wilson in a hazmat suit and gas mask in his nuclear lab
The teen set up a nuclear laboratory in the family garage. Occasionally he uses it to process uranium ore into yellowcake. Bryce Duffy

When Taylor was 13, just after his grandmother’s doctor had given her a few weeks to live, Ashlee sent Tiffany and Kenneth an article about a new school in Reno. The Davidson Academy is a subsidized public school for the nation’s smartest and most motivated students, those who score in the top 99.9th percentile on standardized tests. The school, which allows students to pursue advanced research at the adjacent University of Nevada–Reno, was founded in 2006 by software entrepreneurs Janice and Robert Davidson. Since then, the Davidsons have championed the idea that the most underserved students in the country are those at the top.

On the family’s first trip to Reno, even before Taylor and Joey were accepted to the academy, Taylor made an appointment with Friedwardt Winterberg, a celebrated physicist at the University of Nevada who had studied under the Nobel Prize–winning quantum theorist Werner Heisenberg. When Taylor told Winterberg that he wanted to build a fusion reactor, also called a fusor, the notoriously cranky professor erupted: “You’re 13 years old! And you want to play with tens of thousands of electron volts and deadly x-rays?” Such a project would be far too technically challenging and hazardous, Winterberg insisted, even for most doctoral candidates. “First you must master calculus, the language of science,” he boomed. “After that,” Tiffany said, “we didn’t think it would go anywhere. Kenneth and I were a bit relieved.”

But Taylor still hadn’t learned the word “can’t.” In the fall, when he began at Davidson, he found the two advocates he needed, one in the office right next door to Winterberg’s. “He had a depth of understanding I’d never seen in someone that young,” says atomic physicist Ronald Phaneuf. “But he was telling me he wanted to build the reactor in his garage, and I’m thinking, ‘Oh my lord, we can’t let him do that.’ But maybe we can help him try to do it here.”

Phaneuf invited Taylor to sit in on his upper-division nuclear physics class and introduced him to technician Bill Brinsmead. Brinsmead, a Burning Man devotee who often rides a wheeled replica of the Little Boy bomb through the desert, was at first reluctant to get involved in this 13-year-old’s project. But as he and Phaneuf showed Taylor around the department’s equipment room, Brinsmead recalled his own boyhood, when he was bored and unchallenged and aching to build something really cool and difficult (like a laser, which he eventually did build) but dissuaded by most of the adults who might have helped.

Rummaging through storerooms crowded with a geeky abundance of electron microscopes and instrumentation modules, they came across a high-vacuum chamber made of thick-walled stainless steel, capable of withstanding extreme heat and negative pressure. “Think I could use that for my fusor?” Taylor asked Brinsmead. “I can’t think of a more worthy cause,” Brinsmead said.

NOW IT’S TIFFANY who drives, along a dirt road that wends across a vast, open mesa a few miles south of the runways shared by Albuquerque’s airport and Kirtland Air Force Base. Taylor has convinced her to bring him to New Mexico to spend a week with Carl Willis, whom Taylor describes as “my best nuke friend.” Cocking my ear toward the backseat, I catch snippets of Taylor and Willis’s conversation.

“The idea is to make a gamma-ray laser from stimulated decay of dipositronium.”

“I’m thinking about building a portable, beam-on-target neutron source.”

“Need some deuterated polyethylene?”

Willis is now 30: tall and thin and much quieter than Taylor. When he’s interested in something, his face opens up with a blend of amusement and curiosity. When he’s uninterested, he slips into the far-off distractedness that’s common among the super-smart. Taylor and Willis like to get together a few times a year for what they call “nuclear tourism”—they visit research facilities, prospect for uranium, or run experiments.

Earlier in the week, we prospected for uranium in the desert and shopped for secondhand laboratory equipment in Los Alamos. The next day, we wandered through Bayo Canyon, where Manhattan Project engineers set off some of the largest dirty bombs in history in the course of perfecting Fat Man, which leveled Nagasaki.

Today we’re searching for remnants of a “broken arrow,” military lingo for a lost nuclear weapon. While researching declassified military reports, Taylor discovered that a Mark 17 “Peacemaker” hydrogen bomb, which was designed to be 700 times as powerful as the bomb detonated over Hiroshima, was accidentally dropped onto this mesa in May 1957. For the U.S. military, it was an embarrassingly Strangelovian episode; the airman in the bomb bay narrowly avoided his own Slim Pickens moment when the bomb dropped from its gantry and smashed the B-36’s doors open. Although its plutonium core hadn’t been inserted, the bomb’s “spark plug” of conventional explosives and radioactive material detonated on impact, creating a fireball and a massive crater. A grazing steer was the only reported casualty.

Tiffany parks the rented SUV among the mesquite, and we unload metal detectors and Geiger counters and fan out across the field. “This,” says Tiffany, smiling as she follows her son across the scrubland, “is how we spend our vacations.”

Taylor Wilson walking in front of a snowy Nevada mountain range while hunting for radioactive material
Taylor has one of the most extensive collections of radioactive material in the world, much of which he found himself. Bryce Duffy

Willis says that when Taylor first contacted him, he was struck by the 12-year-old’s focus and forwardness—and by the fact that he couldn’t plumb the depth of Taylor’s knowledge with a few difficult technical questions. After checking with Kenneth, Willis sent Taylor some papers on fusion reactors. Then Taylor began acquiring pieces for his new machine.

Through his first year at Davidson, Taylor spent his afternoons in a corner of Phaneuf’s lab that the professor had cleared out for him, designing the reactor, overcoming tricky technical issues, tracking down critical parts. Phaneuf helped him find a surplus high-voltage insulator at Lawrence Berkeley National Laboratory. Willis, then working at a company that builds particle accelerators, talked his boss into parting with an extremely expensive high-voltage power supply.

With Brinsmead and Phaneuf’s help, Taylor stretched himself, applying knowledge from more than 20 technical fields, including nuclear and plasma physics, chemistry, radiation metrology and electrical engineering. Slowly he began to test-assemble the reactor, troubleshooting pesky vacuum leaks, electrical problems and an intermittent plasma field.

Shortly after his 14th birthday, Taylor and Brinsmead loaded deuterium fuel into the machine, brought up the power, and confirmed the presence of neutrons. With that, Taylor became the 32nd individual on the planet to achieve a nuclear-fusion reaction. Yet what would set Taylor apart from the others was not the machine itself but what he decided to do with it.

While still developing his medical isotope application, Taylor came across a report about how the thousands of shipping containers entering the country daily had become the nation’s most vulnerable “soft belly,” the easiest entry point for weapons of mass destruction. Lying in bed one night, he hit on an idea: Why not use a fusion reactor to produce weapons-sniffing neutrons that could scan the contents of containers as they passed through ports? Over the next few weeks, he devised a concept for a drive-through device that would use a small reactor to bombard passing containers with neutrons. If weapons were inside, the neutrons would provoke telltale emissions: gamma radiation from fission in the case of nuclear material, or from activated nitrogen in the case of conventional explosives. A detector, mounted opposite, would pick up the signature and alert the operator.

He entered the reactor, and the design for his bomb-sniffing application, into the Intel International Science and Engineering Fair. The Super Bowl of pre-college science events, the fair attracts 1,500 of the world’s most switched-on kids from some 50 countries. When Intel CEO Paul Otellini heard the buzz that a 14-year-old had built a working nuclear-fusion reactor, he went straight for Taylor’s exhibit. After a 20-minute conversation, Otellini was seen walking away, smiling and shaking his head in what looked like disbelief. Later, I would ask him what he was thinking. “All I could think was, ‘I am so glad that kid is on our side.’ “

For the past three years, Taylor has dominated the international science fair, walking away with nine awards (including first place overall), overseas trips and more than $100,000 in prizes. After the Department of Homeland Security learned of Taylor’s design, he traveled to Washington for a meeting with the DHS’s Domestic Nuclear Detection Office, which invited Taylor to submit a grant proposal to develop the detector. Taylor also met with then–Under Secretary of Energy Kristina Johnson, who says the encounter left her “stunned.”

“I would say someone like him comes along maybe once in a generation,” Johnson says. “He’s not just smart; he’s cool and articulate. I think he may be the most amazing kid I’ve ever met.”

And yet Taylor’s story began much like David Hahn’s, with a brilliant, high-flying child hatching a crazy plan to build a nuclear reactor. Why did one journey end with hazmat teams and an eventual arrest, while the other continues to produce an array of prizes, patents, television appearances, and offers from college recruiters?

The answer is, mostly, support. Hahn, determined to achieve something extraordinary but discouraged by the adults in his life, pressed on without guidance or oversight—and with nearly catastrophic results. Taylor, just as determined but socially gifted, managed to gather into his orbit people who could help him achieve his dreams: the physics professor; the older nuclear prodigy; the eccentric technician; the entrepreneur couple who, instead of retiring, founded a school to nurture genius kids. There were several more, but none so significant as Tiffany and Kenneth, the parents who overcame their reflexive—and undeniably sensible—inclinations to keep their Icarus-like son on the ground. Instead they gave him the wings he sought and encouraged him to fly up to the sun and beyond, high enough to capture a star of his own.

After about an hour of searching across the mesa, our detectors begin to beep. We find bits of charred white plastic and chunks of aluminum—one of which is slightly radioactive. They are remnants of the lost hydrogen bomb. I uncover a broken flange with screws still attached, and Taylor digs up a hunk of lead. “Got a nice shard here,” Taylor yells, finding a gnarled piece of metal. He scans it with his detector. “Unfortunately, it’s not radioactive.”

“That’s the kind I like,” Tiffany says.

Willis picks up a large chunk of the bomb’s outer casing, still painted dull green, and calls Taylor over. “Wow, look at that warp profile!” Taylor says, easing his scintillation detector up to it. The instrument roars its approval. Willis, seeing Taylor ogling the treasure, presents it to him. Taylor is ecstatic. “It’s a field of dreams!” he yells. “This place is loaded!”

Suddenly we’re finding radioactive debris under the surface every five or six feet—even though the military claimed that the site was completely cleaned up. Taylor gets down on his hands and knees, digging, laughing, calling out his discoveries. Tiffany checks her watch. “Tay, we really gotta go or we’ll miss our flight.”

“I’m not even close to being done!” he says, still digging. “This is the best day of my life!” By the time we manage to get Taylor into the car, we’re running seriously late. “Tay,” Tiffany says, “what are we going to do with all this stuff?”

“For $50, you can check it on as excess baggage,” Willis says. “You don’t label it, nobody knows what it is, and it won’t hurt anybody.” A few minutes later, we’re taping an all-too-flimsy box shut and loading it into the trunk. “Let’s see, we’ve got about 60 pounds of uranium, bomb fragments and radioactive shards,” Taylor says. “This thing would make a real good dirty bomb.”

In truth, the radiation levels are low enough that, without prolonged close-range exposure, the cargo poses little danger. Still, we stifle the jokes as we pull up to curbside check-in. “Think it will get through security?” Tiffany asks Taylor.

“There are no radiation detectors in airports,” Taylor says. “Except for one pilot project, and I can’t tell you which airport that’s at.”

As the skycap weighs the box, I scan the “prohibited items” sign. You can’t take paints, flammable materials or water on a commercial airplane. But sure enough, radioactive materials are not listed.

We land in Reno and make our way toward the baggage claim. “I hope that box held up,” Taylor says, as we approach the carousel. “And if it didn’t, I hope they give us back the radioactive goodies scattered all over the airplane.” Soon the box appears, adorned with a bright strip of tape and a note inside explaining that the package has been opened and inspected by the TSA. “They had no idea,” Taylor says, smiling, “what they were looking at.”

APART FROM THE fingerprint scanners at the door, Davidson Academy looks a lot like a typical high school. It’s only when the students open their mouths that you realize that this is an exceptional place, a sort of Hogwarts for brainiacs. As these math whizzes, musical prodigies and chess masters pass in the hallway, the banter flies in witty bursts. Inside humanities classes, discussions spin into intellectual duels.

Although everyone has some kind of advanced obsession, there’s no question that Taylor is a celebrity at the school, where the lobby walls are hung with framed newspaper clippings of his accomplishments. Taylor and I visit with the principal, the school’s founders and a few of Taylor’s friends. Then, after his calculus class, we head over to the university’s physics department, where we meet Phaneuf and Brinsmead.

Taylor’s reactor, adorned with yellow radiation-warning signs, dominates the far corner of Phaneuf’s lab. It looks elegant—a gleaming stainless-steel and glass chamber on top of a cylindrical trunk, connected to an array of sensors and feeder tubes. Peering through the small window into the reaction chamber, I can see the golf-ball-size grid of tungsten fingers that will cradle the plasma, the state of matter in which unbound electrons, ions and photons mix freely with atoms and molecules.

“OK, y’all stand back,” Taylor says. We retreat behind a wall of leaden blocks as he shakes the hair out of his eyes and flips a switch. He turns a knob to bring the voltage up and adds in some gas. “This is exactly how me and Bill did it the first time,” he says. “But now we’ve got it running even better.”

Through a video monitor, I watch the tungsten wires beginning to glow, then brightening to a vivid orange. A blue cloud of plasma appears, rising and hovering, ghostlike, in the center of the reaction chamber. “When the wires disappear,” Phaneuf says, “that’s when you know you have a lethal radiation field.”

I watch the monitor while Taylor concentrates on the controls and gauges, especially the neutron detector they’ve dubbed Snoopy. “I’ve got it up to 25,000 volts now,” Taylor says. “I’m going to out-gas it a little and push it up.”

Willis’s power supply crackles. The reactor is entering “star mode.” Rays of plasma dart between gaps in the now-invisible grid as deuterium atoms, accelerated by the tremendous voltages, begin to collide. Brinsmead keeps his eyes glued to the neutron detector. “We’re getting neutrons,” he shouts. “It’s really jamming!”

Taylor cranks it up to 40,000 volts. “Whoa, look at Snoopy now!” Phaneuf says, grinning. Taylor nudges the power up to 50,000 volts, bringing the temperature of the plasma inside the core to an incomprehensible 580 million degrees—some 40 times as hot as the core of the sun. Brinsmead lets out a whoop as the neutron gauge tops out.

“Snoopy’s pegged!” he yells, doing a little dance. On the video screen, purple sparks fly away from the plasma cloud, illuminating the wonder in the faces of Phaneuf and Brinsmead, who stand in a half-orbit around Taylor. In the glow of the boy’s creation, the men suddenly look years younger.

Taylor keeps his thin fingers on the dial as the atoms collide and fuse and throw off their energy, and the men take a step back, shaking their heads and wearing ear-to-ear grins.

“There it is,” Taylor says, his eyes locked on the machine. “The birth of a star.”

Read more PopSci+ stories.

The post How a 14-year-old kid became the youngest person to achieve nuclear fusion appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
The physics of champagne’s fascinating fizz https://www.popsci.com/science/champagne-bubbles-fluid-dynamics/ Wed, 03 May 2023 17:00:00 +0000 https://www.popsci.com/?p=538697
Champagne being poured into two glasses.
Champagne bubbles are known for their neat lines that travel up the glass. Madeline Federle and Colin Sullivan

Effervescent experiments reveal the fluid dynamics behind bubbly beverages.

The post The physics of champagne’s fascinating fizz appeared first on Popular Science.

]]>
Champagne being poured into two glasses.
Champagne bubbles are known for their neat lines that travel up the glass. Madeline Federle and Colin Sullivan

The pop of the cork, the fizz of the pour, and the clink of champagne flutes toasting are the ingredients for a celebration in many parts of the world. Champagne itself dates back to Ancient Rome, but the biggest advances in the modern form of the beverage came from a savvy trio of women from the Champagne region of northeastern France in the 19th century.

Now, scientists are adding another chapter to champagne’s bubbly history by discovering why the little effervescent bubbles of joy fizz upwards in a straight line.

[Related: Popping a champagne cork creates supersonic shockwaves.]

In a study published May 3 in the journal Physical Review Fluids, a team found that the stable bubble chains in champagne and other sparkling wines occur because of ingredients that act similarly to soap-like compounds called surfactants. These surfactant-like molecules reduce the surface tension between the liquid and the gas bubbles, creating the smooth rise to the top.

Champagne bubbles form neat single file lines. CREDIT: Madeline Federle and Colin Sullivan.

In this new study, the team conducted both numerical and physical experiments on four carbonated drinks to investigate the stability of the bubble chains. Depending on the drink, the fluid mechanics are quite different. For example, champagne and sparkling wine have gas bubbles that appear continuously and rise rapidly to the top of the glass in a single-file line, like little ants—and they keep doing so for some time. In beer and soda, the bubbles veer off to the side and the bubble chains are not as stable.

To observe the bubble chains, the team poured glasses of carbonated beverages including Pellegrino sparkling water, Tecate beer, Charles de Cazanove champagne, and a Spanish-style sparkling wine called brut.

They then filled small rectangular plexiglass containers with liquid and pumped in gas to create different kinds of bubble chains. They gradually added surfactants or increased the bubble size. They found that the larger bubbles could become stable even without the surfactants. When they kept a fixed bubble size with only added surfactants, the chains could go from unstable to stable. 

Beer bubbles are not as tightly bound as champagne bubbles. CREDIT: Madeline Federle and Colin Sullivan.

The authors found that the stability of the bubbles is actually impacted by the size of the bubbles themselves. The chains with large bubbles have a wake similar to that of bubbles with contaminants, which leads to a smooth rise and stable chains.

“The theory is that in Champagne these contaminants that act as surfactants are the good stuff,” co-author and Brown University engineer Roberto Zenit said in a statement. “These protein molecules that give flavor and uniqueness to the liquid are what makes the bubbles chains they produce stable.”

Since bubbles are always pretty small in drinks, surfactants are the key ingredient for producing the straight and stable chains we see in champagne. While beer also contains surfactant-like molecules, its bubbles may or may not rise in straight chains, depending on the type of beer. The bubbles in carbonated water like seltzer are always unstable because there are no contaminants helping the bubbles move smoothly through one another’s wakes.
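For a rough sense of scale, Stokes’ law gives the terminal rise speed of a small bubble whose surface has been rigidified by surfactant-like molecules, so that it drags like a solid sphere. The sketch below is a back-of-envelope illustration of that textbook result, not code from the study, and the liquid properties are approximate values for champagne:

```python
g = 9.81            # gravitational acceleration, m/s^2
rho_liquid = 998.0  # liquid density, kg/m^3 (champagne is close to water)
rho_gas = 1.8       # CO2 gas density, kg/m^3
mu = 1.5e-3         # dynamic viscosity, Pa*s (approximate for champagne)

def stokes_rise_speed(radius_m: float) -> float:
    """Terminal rise speed (m/s) of a small, rigid-surfaced bubble.
    Only valid at low Reynolds number, i.e. for small, slow bubbles."""
    return 2 * (rho_liquid - rho_gas) * g * radius_m**2 / (9 * mu)

# A 0.1-millimeter bubble rises at roughly 1.5 centimeters per second;
# speed grows with the square of the radius, one reason bubble size
# matters so much for how a chain interacts with the wakes above it.
print(stokes_rise_speed(1e-4))
```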

[Related: This pretty blue fog only happens in warm champagne.]

“This wake, this velocity disturbance, causes the bubbles to be knocked out,” said Zenit. “Instead of having one line, the bubbles end up going up in more of a cone.”

The findings could improve our understanding of fluid mechanics, particularly the formation of clusters in bubbly flows, which has economic and societal value. The global carbonated drink market was valued at a whopping $221.6 billion in 2020.

The technologies that use bubble-induced mixing, like aeration tanks at water treatment facilities and in wine making, could benefit greatly from better knowledge of how bubbles cluster, their origins, and how to predict their appearance. Understanding these flows may also help better explain ocean seeps, when methane and carbon dioxide emerge from the bottom of the ocean.

“This is the type of research that I’ve been working out for years,” said Zenit. “Most people have never seen an ocean seep or an aeration tank but most of them have had a soda, a beer or a glass of Champagne. By talking about Champagne and beer, our master plan is to make people understand that fluid mechanics is important in their daily lives.”

The post The physics of champagne’s fascinating fizz appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
This supermassive black hole sucks big time https://www.popsci.com/science/m87-black-hole-jets/ Wed, 26 Apr 2023 22:41:45 +0000 https://www.popsci.com/?p=537095
Closeup of event horizon around M87, a supermassive black hole and the first black hole image
An image of the shadow of the supermassive black hole M87 (inset) and a powerful jet of matter and energy being projected away from it. R.-S. Lu (SHAO) and E. Ros (MPIfR), S.Dagnello (NRAO/AUI/NSF)

We knew M87, the first black hole to be seen by humans, was powerful. But not this powerful.

The post This supermassive black hole sucks big time appeared first on Popular Science.

]]>
Closeup of event horizon around M87, a supermassive black hole and the first black hole image
An image of the shadow of the supermassive black hole M87 (inset) and a powerful jet of matter and energy being projected away from it. R.-S. Lu (SHAO) and E. Ros (MPIfR), S.Dagnello (NRAO/AUI/NSF)

Black holes remain among the most enigmatic objects in the universe, but the past few years have seen astronomers develop techniques to directly image these powerful vacuums. And they keep getting better at it.

The Event Horizon Telescope (EHT) collaboration, the international team that took the first picture of a black hole in 2017, followed up that work with observations highlighting the black hole’s magnetic field. And just this month, another team of astronomers created an AI-sharpened version of the same image.

Now a new study published today in the journal Nature describes how that black hole, named after its galaxy, Messier 87 (M87), has a much larger ring of debris around it than the 2017 observations suggested.

Though long predicted in theory, black holes eluded direct observation for decades, and astronomers could only find indirect evidence of them in the sky. For instance, they would look for signs of the immense gravity of a black hole influencing other objects, such as when stars follow especially tight or fast orbits that imply the presence of a massive but invisible partner.

But that all changed in 2017, when the EHT’s global network of radio telescopes captured the first visible evidence of a black hole: the supermassive black hole at the heart of a galaxy 55 million light-years away from Earth. When the image was released in 2019, the orange ring of fire around a central black void drew comparisons to “The Eye of Sauron” from Lord of the Rings.

EHT would go on to directly image Sagittarius A*, the supermassive black hole at the heart of the Milky Way galaxy, releasing another image of a fiery orange doughnut around a black center in May 2022.

Such supermassive black holes, which are often billions of times more massive than our sun—M87 is estimated at 6.5 billion solar masses and Sagittarius A* at 4 million—are thought to exist at the centers of most galaxies. The intense gravity of all that mass pulls in any gas, dust, and other excess material that comes too close, accelerating it to incredible speeds as it falls toward the lip of the black hole, known as the event horizon.

[Related: What would happen if you fell into a black hole?]

Like water circling a drain, the falling material spirals and is condensed into a flat ring known as an accretion disk. But unlike water around a drain, the incredible speeds and pressures in the accretion disk heat the infalling material to the point where it emits powerful X-ray radiation. The disk propels jets of radiation and gas out and away from the black hole at nearly the speed of light.

The EHT team already knew that M87 produced powerful jets. But the second set of results shows that the ring-like structure of infalling material around the black hole is 50 percent larger than originally estimated.

“This is the first image where we are able to pin down where the ring is, relative to the powerful jet escaping out of the central black hole,” Kazunori Akiyama, an MIT Haystack Observatory research scientist and EHT collaboration member, said in a statement. “Now we can start to address questions such as how particles are accelerated and heated, and many other mysteries around the black hole, more deeply.”

The new observations were made in 2018 using the Global Millimeter VLBI Array, a network of a dozen radio telescopes running east to west across Europe and the US. To get the resolution necessary for more accurate measurements, however, the researchers also included observatories in the North and South: the Greenland Telescope along with the Atacama Large Millimetre/submillimetre Array, which consists of 66 radio telescopes in the Chilean high desert.

“Having these two telescopes [as part of] the global array resulted in a boost in angular resolution by a factor of four in the north-south direction,” Lynn Matthews, an EHT collaboration member at the MIT Haystack Observatory, said in a media statement. “This greatly improves the level of detail we can see. And in this case, a consequence was a dramatic leap in our understanding of the physics operating near the black hole at the center of the M87 galaxy.”

[Related: Construction starts on the world’s biggest radio telescope]

The more recent study focused on radio waves around 3 millimeters in wavelength, compared with the 1.3-millimeter waves used in the original 2017 observations. That may have brought the larger, more distant ring structure into focus in a way the 2017 observations could not.

“That longer wavelength is usually associated with lower energies of the emitting electrons,” says Harvard astrophysicist Avi Loeb, who was not involved with the new study. “It’s possible that you get brighter emission at longer wavelengths farther out from the black hole.”

Going forward, astronomers plan to observe the black hole at other wavelengths to highlight different parts and layers of its structure, and better understand how such cosmic behemoths form at the hearts of galaxies and contribute to galactic evolution.

Just how supermassive black holes generate jets is “not a well-understood process,” Loeb says. “This is the first time we have observations of what may be the base of the jet. It can be used by theoretical physicists to model how the M87 jet is being launched.” 

He adds that he would like to see future observations capture the sequence of events in the accretion disk. That is, to essentially make a movie out of what’s happening at M87.

“There might be a hotspot that we can track that is moving either around or moving towards the jet,” Loeb says, which in turn, could explain how a beast like a black hole gets fed.

The post This supermassive black hole sucks big time appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Alien civilizations could send us messages by 2029 https://www.popsci.com/science/aliens-contact-earth-2029/ Tue, 25 Apr 2023 10:00:00 +0000 https://www.popsci.com/?p=536305
NASA Deep Space Network radiotelescope sending radio waves to spacecraft, stars, and maybe aliens
NASA's Deep Space Network helps Earth make long-distance calls. NASA

NASA sends powerful radio transmissions into space. Who's listening, and when will they respond?

The post Alien civilizations could send us messages by 2029 appeared first on Popular Science.

]]>
NASA Deep Space Network radiotelescope sending radio waves to spacecraft, stars, and maybe aliens
NASA's Deep Space Network helps Earth make long-distance calls. NASA

Humans have used radio waves to communicate across Earth for more than 100 years. Those waves also leak out into space, a fingerprint of our presence propagating through the cosmos. In more recent years, humans have also sent out a stronger signal beyond our planet: communications with our most distant probes, like the famous Voyager spacecraft.

Scientists recently traced the paths of these powerful radio transmissions from Earth to multiple far-away spacecraft and determined which stars—along with any planets with possible alien life around them—are best positioned to intercept those messages. 

The research team created a list of stars that will encounter Earth’s signals within the next century and found that alien civilizations (if they’re out there) could send a return message as soon as 2029. Their results were published on March 20 in the journal Publications of the Astronomical Society of the Pacific.

“This is a famous idea from Carl Sagan, who used it as a plot theme in the movie Contact,” explains Howard Isaacson, a University of California, Berkeley astronomer and co-author of the new work. 

[Related: UFO research is stigmatized. NASA wants to change that.]

However, it’s worth taking any study involving extraterrestrial life with a grain of salt. Kaitlin Rasmussen, an astrobiologist at the University of Washington not affiliated with the paper, calls this study “an interesting exercise, but unlikely to yield results.” The results, in this case, would be aliens contacting Earth within a certain timeframe.

As radio signals travel through space, they spread out and become weaker and harder to detect. Aliens parked around a nearby star probably won’t notice the faint leakage from TVs and other small devices. However, the commands we send to trailblazing probes at the edge of the solar system—Voyager 1, Voyager 2, Pioneer 10, Pioneer 11, and New Horizons—require a much more focused and powerful broadcast from NASA’s Deep Space Network (DSN), a global array of radio dishes designed for space communications.

NASA Deep Space Network radiotelescopes on a grassy hill
The DSN can receive signals if it’s pointed in the right direction. NASA

The DSN signals don’t magically stop at the spacecraft they’re targeting: They continue into interstellar space where they eventually reach other stars. But electromagnetic waves like radio transmissions and light can only travel so fast—that’s why we use light-years to measure distances across the universe. The researchers used this law of physics to estimate how long it will take for DSN signals to reach nearby stars, and for alien life to return the message. 

The process revealed several insights. For example, according to their calculations, a signal sent to Pioneer 10 reached a dead star known as a white dwarf around 27 light-years away in 2002. The study team estimates a return message from any alien life near this dead star could reach us as soon as 2029, but no earlier. 
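The underlying arithmetic is simple: a reply can arrive no earlier than the year our signal reached the star, plus the one-way light travel time back to Earth. Here is a minimal sketch of that calculation using the arrival years and distances quoted in the study (an illustration of the logic, not the researchers’ code):

```python
targets = [
    # (description, year Earth's signal arrived, distance in light-years)
    ("white dwarf reached by Pioneer 10's signal", 2002, 27),
    ("star reached by Voyager 2's signal", 2007, 26),
    ("brown dwarf reached by Voyager 2's signal", 2007, 24),
]

for name, arrival_year, distance_ly in targets:
    # A return message also travels at light speed, so the earliest
    # possible reply year is the arrival year plus the distance.
    print(f"{name}: earliest reply in {arrival_year + distance_ly}")
```

Run on the first entry, this reproduces the paper’s headline figure: 2002 + 27 = 2029.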

[Related: Nothing can break the speed of light]

More opportunities for return messages will pop up in the next decade. Signals sent to Voyager 2 around 1980 and 1983 reached two stars in 2007: one that’s 26 light-years away and a brown dwarf that’s 24 light-years away, respectively. If aliens sent a message right back from either, it could reach Earth in the early 2030s.

This work “gives Search for Extraterrestrial Intelligence researchers a more narrow group of stars to focus on,” says lead author Reilly Derrick, a University of California, Los Angeles engineering student.  

Derrick and Isaacson propose that radio astronomers could use their star lists to listen for return messages at predetermined times. For example, in 2029 they may want to point some of Earth’s major radio telescopes towards the white dwarf that received Pioneer 10’s message.

But other astronomers are skeptical. “If a response were to be sent, our ability to detect it would depend on many factors,” says Macy Huston, an astronomer at Penn State not involved in the new study. These factors include “how long or often we monitor the star for a response, and how long or often the return signal is transmitted.”

There are still many unknowns when considering alien life. In particular, astronomers aren’t certain the stars in this study even have planets—although based on other exoplanet studies, it’s likely that at least a fraction of them do. The signals from the DSN are also still incredibly weak at such large distances, so it’s unclear how plausible it is for other stars to detect our transmissions.

“Our puny and infrequent transmissions are unlikely to yield a detection of humanity by extraterrestrials,” says Jean-Luc Margot, a University of California, Los Angeles radio astronomer who was not involved in the recent paper. He explains that our radio transmissions have only reached one-millionth of the volume of the Milky Way. 

“The probability that another civilization resides in this tiny bubble is extraordinarily small unless there are millions of civilizations in the Milky Way,” he says. But if they’re out there, there might be a time and place to capture the evidence.

The post Alien civilizations could send us messages by 2029 appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Ancient Maya masons had a smart way to make plaster stronger https://www.popsci.com/science/ancient-maya-plaster/ Wed, 19 Apr 2023 18:16:42 +0000 https://www.popsci.com/?p=535272
Ancient Maya idol in Copán, Guatemala
The idols, pyramids, and dwellings in the ancient Maya city of Copán have lasted more than a thousand years. DEA/V. Giannella/Contributor via Getty Images

Up close, the Mayas' timeless recipe from Copán looks similar to mother-of-pearl.

The post Ancient Maya masons had a smart way to make plaster stronger appeared first on Popular Science.

]]>
Ancient Maya idol in Copán, Guatemala
The idols, pyramids, and dwellings in the ancient Maya city of Copán have lasted more than a thousand years. DEA/V. Giannella/Contributor via Getty Images

An ancient Maya city might seem an unlikely place for people to be experimenting with proprietary chemicals. But scientists think that’s exactly what happened at Copán, an archaeological complex nestled in a valley in the mountainous rainforests of what is now western Honduras.

By historians’ reckoning, Copán’s golden age began in 427 CE, when a king named Yax Kʼukʼ Moʼ came to the valley from the northwest. His dynasty built one of the jewels of the Maya world, but abandoned it by the 10th century, leaving its courts and plazas to the mercy of the jungle. More than 1,000 years later, Copán’s buildings have kept remarkably well, despite baking in the tropical sun and humidity for so long. 

The secret may lie in the plaster the Maya used to coat Copán’s walls and ceilings. New research suggests that sap from the bark of local trees, which Maya craftspeople mixed into their plaster, helped reinforce its structures. Whether by accident or on purpose, those Maya builders created a material not unlike mother-of-pearl, a natural component of mollusc shells.

“We finally unveiled the secret of ancient Maya masons,” says Carlos Rodríguez Navarro, a mineralogist at the University of Granada in Spain and the paper’s first author. Rodríguez Navarro and his colleagues published their work in the journal Science Advances today.

[Related: Scientists may have solved an old Puebloan mystery by strapping giant logs to their foreheads]

Plaster makers followed a fairly straightforward recipe. Start with carbonate rock, such as limestone; bake it at over 1,000 degrees Fahrenheit; mix the resulting quicklime with water; then set the concoction out to react with carbon dioxide from the air. The final product is what builders call lime plaster or lime mortar.
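Written out as chemistry, that recipe is the standard lime cycle (textbook reactions, not anything unique to the Maya formula):

```latex
% Calcination: limestone breaks down into quicklime under heat
\mathrm{CaCO_3} \xrightarrow{\text{heat}} \mathrm{CaO} + \mathrm{CO_2}
% Slaking: quicklime reacts with water to form slaked lime
\mathrm{CaO} + \mathrm{H_2O} \rightarrow \mathrm{Ca(OH)_2}
% Carbonation: slaked lime reabsorbs CO2 from the air and hardens
\mathrm{Ca(OH)_2} + \mathrm{CO_2} \rightarrow \mathrm{CaCO_3} + \mathrm{H_2O}
```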

Civilizations across the world discovered this process, often independently. For example, Mesoamericans in Mexico and Central America learned how to do it by around 1,100 BCE. While ancient people found it useful for covering surfaces or holding together bricks, this basic lime plaster isn’t especially durable by modern standards.

Ancient Maya pyramid in Copán, Guatemala, in aerial photo
Copán, with its temples, squares, terraces and other characteristics, is an excellent representation of Classic Mayan civilization. Xin Yuewei/Xinhua via Getty Images

But, just as a dish might differ from town to town, lime plaster recipes varied from place to place. “Some of them perform better than others,” says Admir Masic, a materials scientist at the Massachusetts Institute of Technology who wasn’t part of the study. Maya lime plaster, experts agree, is one of the best.

Rodríguez Navarro and his colleagues wanted to learn why. They found their first clue when they examined brick-sized plaster chunks from Copán’s walls and floors with X-rays and electron microscopes. Inside some pieces, they found traces of organic materials like carbohydrates. 

That made them curious, Rodríguez Navarro says, because it seemed to confirm past archaeological and written records suggesting that ancient Maya masons mixed plant matter into their plaster. The other standard ingredients (lime and water) wouldn’t account for complex carbon chains.

To follow this lead, the authors decided to make the historic plaster themselves. They consulted living masons and Maya descendants near Copán. The locals referred them to the chukum and jiote trees that grow in the surrounding forests—specifically, the sap that came from the trees’ bark.

Jiote or gumbo-limbo tree in the Florida Everglades
Bursera simaruba, sometimes locally known as the jiote tree. Deposit Photos

The authors tested the sap’s reaction when mixed into the plaster. Not only did it toughen the material, it also made the plaster insoluble in water, which partly explains how Copán survived the local climate so well.

The microscopic structure of the plant-enhanced plaster is similar to nacre or mother-of-pearl: the iridescent substance that some molluscs create to coat their shells. We don’t fully understand how molluscs make nacre, but we know that it consists of crystal plates sandwiching elastic proteins. The combination toughens the sea creatures’ exteriors and reinforces them against weathering from waves.

A close study of the ancient plaster samples and the modern analog revealed that they also had layers of rocky calcite plates and organic sappy material, giving the materials the same kind of resilience as nacre. “They were able to reproduce what living organisms do,” says Rodríguez Navarro. 

“This is really exciting,” says Masic. “It looks like it is improving properties [of regular plaster].”

Now, Rodríguez Navarro and his colleagues are trying to answer another question: Could other civilizations that depended on masonry—from Iberia to Persia to China—have stumbled upon the same secret? We know, for instance, that Chinese lime-plaster-makers mixed in a sticky rice soup for added strength.

Plaster isn’t the only age-old material that scientists have reconstructed. Masic and his colleagues found that ancient Roman concrete has the ability to “self-heal.” More than two millennia ago, builders in the empire may have added quicklime to a rocky aggregate, creating microscopic structures within the material that help fill in pores and cracks when it’s hit by seawater.

[Related: Ancient architecture might be key to creating climate-resilient buildings]

If that property sounds useful, modern engineers think so too. There exists a blossoming field devoted to studying—and recreating—materials of the past. Standing structures from archaeological sites already prove they can withstand the test of time. As a bonus, ancient people tended to work with more sustainable methods and use less fuel than their industrial counterparts.

“The Maya paper…is another great example of this [scientific] approach,” Masic says.

Not that Maya plaster will replace the concrete that’s ubiquitous in the modern world—but scientists say it could have its uses in preserving and upgrading the masonry found in pre-industrial buildings. A touch of plant sap could add centuries to a structure’s lifespan.

The post Ancient Maya masons had a smart way to make plaster stronger appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
An Einstein-backed method could help us find smaller exoplanets than ever before https://www.popsci.com/science/exoplanets-gravitational-microlensing/ Tue, 18 Apr 2023 16:34:47 +0000 https://www.popsci.com/?p=534889
Exoplanet KMT-2021-BLG-1898L b is a gas giant that looks like Jupiter but orbits a separate star. Illustration.
KMTNet astronomers identified exoplanet KMT-2021-BLG-1898L b in 2022. An artist's concept of the gas giant shows it completing a 3.8-year-long orbit around its star in a solar system far from ours. NASA/KMTNet

Astronomy is entering the golden age of exoplanet discoveries.

The post An Einstein-backed method could help us find smaller exoplanets than ever before appeared first on Popular Science.

]]>
Exoplanet KMT-2021-BLG-1898L b is a gas giant that looks like Jupiter but orbits a separate star. Illustration.
KMTNet astronomers identified exoplanet KMT-2021-BLG-1898L b in 2022. An artist's concept of the gas giant shows it completing a 3.8-year-long orbit around its star in a solar system far from ours. NASA/KMTNet

Since 1995, scientists have found more than 5,000 exoplanets—other worlds beyond our solar system. But while space researchers have gotten very good at discovering big planets, smaller ones have evaded detection.

However, a novel astronomy detection technique known as microlensing is starting to fill in the gaps. Experts who are a part of the Korea Microlensing Telescope Network (KMTNet) recently used this method to locate three new exoplanets about the same sizes as Jupiter and Saturn. They announced these findings in the journal Astronomy & Astrophysics on April 11. 

How does microlensing work?

Most exoplanets have been found through the transit method. This is when scientists use observatories like the Kepler Space Telescope and the James Webb Space Telescope to look at dips in the amount of light coming from a star. 

Meanwhile, gravitational microlensing (usually just called microlensing) involves searching for increases in brightness in deep space. These brilliant flashes occur when a planet and its star bend the light of a more distant star, magnifying it according to Einstein’s general theory of relativity. You may have heard of gravitational lensing for galaxies, which relies on much the same physics, but on a far bigger scale.
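For a single star acting as a lens, the brightening follows a standard textbook formula: the magnification depends only on u, the apparent lens–source separation in units of the Einstein radius. The snippet below is a generic illustration of that formula, not code from the KMTNet analysis:

```python
import numpy as np

def magnification(u):
    """Point-source, point-lens microlensing magnification, where u is
    the lens-source separation in units of the Einstein radius."""
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

# As the alignment tightens (u shrinks), the source brightens sharply:
# u = 1 gives about 1.34x, while u = 0.1 gives about 10x.
print(magnification(np.array([1.0, 0.5, 0.1])))
```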

Credit: NASA Scientific Visualization Studio

The new discoveries were particularly notable because they were found in partial data, where astronomers observed only half of each event.

“Microlensing events are sort of like supernovae in that we only get one chance to observe them,” says Samson Johnson, an astronomer at the NASA Jet Propulsion Lab who was not affiliated with the study. 

Because astronomers only have one chance and don’t always know when events will happen, they sometimes miss parts of the show. “This is sort of like making a cake with only half of the recipe,” adds Johnson.

[Related: Sorry, Star Trek fans, the real planet Vulcan doesn’t exist]

The three new planets have long serial-number-like strings of letters and numbers for names: KMT-2021-BLG-2010Lb, KMT-2022-BLG-0371Lb, and KMT-2022-BLG-1013Lb. Each of these worlds revolves around a different star. They weigh as much as Jupiter, Saturn, and a little less than Saturn, respectively. 

Even though the researchers only observed part of the microlensing events for each of these planets, they were able to confidently rule out other scenarios that could explain the signals. This work “does show that even with incomplete data, we can learn interesting things about these planets,” says Scott Gaudi, an Ohio State University astronomer who was not involved in the published paper.

The exoplanet search continues

Microlensing is “highly complementary” to other exoplanet-hunting techniques, says Jennifer Yee, a co-author of the new study and researcher at The Center for Astrophysics | Harvard & Smithsonian. It can scope out planets that current technologies can’t, including worlds as small as Jupiter’s moon Ganymede or even a few times the mass of Earth’s moon, according to Gaudi.

The strength of microlensing is that “it’s a demographics machine, so you can detect lots of planets,” says Gaudi. This ability to detect planets of all sizes is crucial for astronomers as they complete their sweeping exoplanet census to determine the most common type of planet and the uniqueness of our own solar system. 

Credit: NASA Scientific Visualization Studio

Astronomers are honing their microlensing skills with new exoplanet discoveries like those from KMTNet, ensuring that they know how to handle this kind of data before new space telescopes come online in the next few years. For example, microlensing will be a large part of the Roman Space Telescope’s planned mission when it launches mid-decade.

“We’ll increase the number of planets we know by several thousand with Roman, maybe even more,” says Gaudi. “We went from Kepler being the star of the show to TESS [NASA’s Transiting Exoplanet Survey Satellite] being the star of the show … For its time period, Roman [and microlensing] will be the star of the show.”

The post An Einstein-backed method could help us find smaller exoplanets than ever before appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
How the Tonga eruption rang Earth ‘like a bell’ https://www.popsci.com/science/tonga-volcano-tsunami-simulation/ Fri, 14 Apr 2023 18:00:00 +0000 https://www.popsci.com/?p=534151
Satellite image of the powerful eruption.
Earth-observing satellites captured the powerful eruption. NASA Earth Observatory

A detailed simulation of underwater shockwaves changes what we know about the Hunga Tonga-Hunga Ha’apai eruption.

The post How the Tonga eruption rang Earth ‘like a bell’ appeared first on Popular Science.

]]>
Satellite image of the powerful eruption.
Earth-observing satellites captured the powerful eruption. NASA Earth Observatory

When the Hunga Tonga–Hunga Haʻapai volcano in Tonga exploded on January 15, 2022—setting off a sonic boom heard as far north as Alaska—scientists instantly knew that they were witnessing history. 

“In the geophysical record, this is the biggest natural explosion ever recorded,” says Ricky Garza-Giron, a geophysicist at the University of California at Santa Cruz. 

It also spawned a tsunami that raced across the Pacific Ocean, killing two people in Peru. Meanwhile, the disaster devastated Tonga and caused four deaths in the archipelago. The toll was tragic, yet far lower than experts would expect from an event of this magnitude. So why wasn’t it worse?

Certainly, the country’s disaster preparations deserve much of the credit. But the nature of the eruption itself, and how the tsunami it spawned spread across Tonga’s islands, also saved the country from a worse outcome, according to research published today in the journal Science Advances. By combining field observations with drone and satellite data, the study team was able to recreate the event through a simulation.

2022 explosion from Hunga-Tonga volcano captured by satellites
Satellites captured the explosive eruption of the Hunga Tonga-Hunga Ha’apai volcano. National Environmental Satellite Data and Information Service

It’s yet another way that scientists have studied how this eruption shook Tonga and the whole world. For a few hours, the volcano’s ash plume bathed the country and its surrounding waters with more lightning than everywhere else on Earth—combined. The eruption spewed enough water vapor into the sky to boost the amount in the stratosphere by around 10 percent. 

[Related: Tonga’s historic volcanic eruption could help predict when tsunamis strike land]

The eruption shot shockwaves into the ground, water, and air. When Garza-Giron and his colleagues measured those waves, they found that the eruption released an order of magnitude more energy than the 1980 eruption of Mount St. Helens.

“It literally rang the Earth like a bell,” says Sam Purkis, a geoscientist at the University of Miami in Florida and the Khaled bin Sultan Living Oceans Foundation. Purkis is the first author of the new paper. 

The aim of the simulation is to present a possible course of events. Purkis and his colleagues began by establishing a timeline. Scientists agree that the volcano erupted in a sequence of multiple bursts, but they don’t agree on when or how many. Corroborating witness statements with measurements from tide gauges, the study team suggests a quintet of blasts, each steadily increasing in strength up to a climactic fifth blast measuring 15 megatons, the equivalent of a hydrogen bomb.
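For context, that yield converts to standard energy units in one line (a generic unit conversion, not a figure from the paper):

```python
MEGATON_TNT_JOULES = 4.184e15  # energy released by one megaton of TNT

# The study's estimated climactic blast: roughly 6.3e16 joules.
print(15 * MEGATON_TNT_JOULES)
```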

Credit: Steven N. Ward Institute of Geophysics and Planetary Physics, University of California Santa Cruz, U.S.A.

Then, the authors simulated what those blasts may have done to the ocean—and how fearsome the waves they spawned were as they battered Tonga’s other islands. The simulation suggests the isle of Tofua, about 55 miles northeast of the eruption, may have fared worst, bearing waves more than 100 feet tall.

But there’s a saving grace: Tofua is uninhabited. The simulation also helps explain why Tonga’s capital and largest city, Nuku’alofa, was able to escape the brunt of the tsunami. It sits just 40 miles south of the eruption, and seemingly experienced much shallower waves. 

[Related: Tonga is fighting multiple disasters after a historic volcanic eruption]

The study team thinks geography is partly responsible. Tofua, a volcanic caldera, sits in deep waters and has sharp, mountainous coasts that offer no protection from an incoming tsunami. Meanwhile, Nuku’alofa is surrounded by shallower waters and a lagoon, giving a tsunami less water to displace. Coral reefs may have also helped protect the city from the tsunami. 

Researchers believed that reefs could cushion tsunamis, Purkis says, but they didn’t have the real-world data to show it. “You don’t have a real-world case study where you have waves which are tens of meters high hitting reefs,” says Purkis.

We do know of volcanic eruptions more violent than Hunga Tonga–Hunga Haʻapai: for instance, Tambora in 1815 (which famously caused a “Year Without a Summer”) and Krakatau in 1883. But those occurred before the 1960s, when geophysicists began deploying the worldwide network of sensors and satellites they use today.

Ultimately, the study authors write that this eruption resulted in a “lucky escape.” It occurred under the most peculiar circumstances: At the time of its eruption, Tonga had shut off its borders due to Covid-19, reducing the number of overseas tourists visiting the islands. Scientists credit this as another reason for the low death toll. But the same closed borders meant scientists had to wait to get data.

Ash cloud from Hunga-Tonga volcano over the Pacific ocean seen from space
Ash over the South Pacific could be seen from space. NASA

That’s part of why this paper came out 15 months after the eruption. Other scientists had been able to simulate the tsunami before, but Purkis and his colleagues bolstered theirs with data from the ground. Not only did this help them reconstruct a timeline, it also helped them to corroborate their simulation with measurements from more than 100 sites along Tonga’s coasts. 

The study team argues that the eruption serves as a “natural laboratory” for the Earth’s activity. Understanding this tsunami can help humans plan how to stay safe from them. There are many other volcanoes like Hunga Tonga–Hunga Haʻapai, and volcanoes located underwater can devastate coastal communities if they erupt at the wrong time.

Garza-Giron is excited about the possibility of comparing the new study’s results with prior studies, such as his own on seismic activity—in addition to other data sources, like the sounds of the ocean—to create a more complete picture of what happened that day.

“It’s not very often that we can see the Earth acting as a whole system, where the atmosphere, the ocean, and the solid earth are definitely interacting,” says Garza-Giron. “That, to me, was one of the most fascinating things about this eruption.”

The post How the Tonga eruption rang Earth ‘like a bell’ appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
You saw the first image of a black hole. Now see it better with AI. https://www.popsci.com/science/first-black-hole-image-ai/ Fri, 14 Apr 2023 17:00:00 +0000 https://www.popsci.com/?p=534170
M87 black hole Event Horizon Telescope image sharpened by AI with PRIMO algorithm. The glowing event horizon is now clearer and thinner and the black hole at the center darker.
AI, enhance. Medeiros et al., 2023

Mix general relativity with machine learning, and an astronomical donut starts to look more like a Cheerio.

The post You saw the first image of a black hole. Now see it better with AI. appeared first on Popular Science.

]]>
M87 black hole Event Horizon Telescope image sharpened by AI with PRIMO algorithm. The glowing event horizon is now clearer and thinner and the black hole at the center darker.
AI, enhance. Medeiros et al., 2023

Astronomy sheds light on the far-off, intangible phenomena that shape our universe and everything within it. Artificial intelligence sifts through tiny, mundane details to help us process important patterns. Put the two together, and you can tackle almost any scientific conundrum—like determining the shape of a black hole.

The Event Horizon Telescope (a network of eight radio observatories placed strategically around the globe) originally captured the first image of a black hole in 2017 in the Messier 87 galaxy. After processing and compressing more than five terabytes of data, the team released a hazy shot in 2019, prompting people to joke that it was actually a fiery donut or a screenshot from Lord of the Rings. At the time, researchers conceded that the image could be improved with more fine-tuned observations or algorithms. 

[Related: How AI can make galactic telescope images ‘sharper’]

In a study published on April 13 in The Astrophysical Journal Letters, physicists from four US institutions used AI to sharpen the iconic image. The group fed the observatories’ raw interferometry data into an algorithm to produce a sharper, more accurate depiction of the black hole. The AI they used, called PRIMO, is an automated analysis tool that reconstructs visual data at higher resolutions and has been used to study gravity, the human genome, and more. In this case, the authors trained the neural network with simulations of accreting black holes—the mass-sucking process that produces thermal energy and radiation. They also relied on the Fourier transform, a mathematical technique that converts frequency-domain signals into an image the eye can see.
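To see why a Fourier transform is central here: a radio interferometer samples the sky image in the frequency domain, and an inverse transform brings those samples back to a picture. The toy example below (a generic sketch, not PRIMO) shows the round trip with complete frequency coverage; real telescope arrays sample only sparsely, which is why reconstruction algorithms must fill in the gaps:

```python
import numpy as np

# A toy 64x64 "sky" with a single bright square source.
image = np.zeros((64, 64))
image[28:36, 28:36] = 1.0

# An idealized interferometer measures the image's 2D Fourier transform
# (its "visibilities"); inverting the transform recovers the picture.
visibilities = np.fft.fft2(image)
recovered = np.fft.ifft2(visibilities).real

print(np.allclose(recovered, image))  # True, given complete coverage
```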

Their edited image shows a thinner “event horizon,” the glowing ring formed as light and accreted gas cross into the gravitational sink. This could have “important implications for measuring the mass of the central black hole in M87 based on the EHT images,” the paper states.

M87 black hole original image next to M87 black hole sharpened image to show AI difference
The original image of M87 from 2019 (left) compared to the PRIMO reconstruction (middle) and the PRIMO reconstruction “blurred” to EHT’s resolution (right). The blurring matches the reconstruction to EHT’s true resolution, so the algorithm doesn’t claim detail beyond what the telescope could actually see when filling in gaps. Medeiros et al., 2023

One thing’s for sure: The subject at the center of the shot is extremely dark, potent, and powerful. It’s even more clearly defined in the AI-enhanced version, backing up the claim that the supermassive black hole is up to 6.5 billion times heftier than our sun. Compare that to Sagittarius A*—the black hole that was recently captured in the Milky Way—which is estimated at 4 million times the sun’s mass.

Sagittarius A* could be another PRIMO target, Lia Medeiros, lead study author and astrophysicist at the Institute for Advanced Study, told the Associated Press. But the group is not in a rush to move on from the more distant black hole located 55 million light-years away in Messier 87. “It feels like we’re really seeing it for the first time,” she added in the AP interview. The image was a feat of astronomy, and now, people can gaze on it with more clarity.

Watch an interview where the researchers discuss their AI methods more in-depth below:

The post You saw the first image of a black hole. Now see it better with AI. appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Quantum computers can’t teleport things—yet https://www.popsci.com/technology/wormhole-teleportation-quantum-computer-simulation/ Fri, 07 Apr 2023 12:28:09 +0000 https://www.popsci.com/?p=532454
Google Sycamore processor for quantum computer hanging from a server room with gold and blue wires
Google’s Sycamore quantum computer processor was recently at the center of a hotly debated wormhole simulation. Rocco Ceselin/Google

It's almost impossible to simulate a good wormhole without more qubits.

The post Quantum computers can’t teleport things—yet appeared first on Popular Science.

]]>
Google Sycamore processor for quantum computer hanging from a server room with gold and blue wires
Google’s Sycamore quantum computer processor was recently at the center of a hotly debated wormhole simulation. Rocco Ceselin/Google

Last November, a group of physicists claimed they’d simulated a wormhole for the first time inside Google’s Sycamore quantum computer. The researchers tossed information into one batch of simulated particles and said they watched that information emerge in a second, separated batch of circuits. 

It was a bold claim. Wormholes—tunnels through space-time—are a very theoretical product of gravity that Albert Einstein helped popularize. It would be a remarkable feat to create even a wormhole facsimile with quantum mechanics, an entirely different branch of physics that has long been at odds with gravity. 

And indeed, three months later, a different group of physicists argued that the results could be explained through alternative, more mundane means. In response, the team behind the Sycamore project doubled down on their results.

Their case highlights a tantalizing dilemma. Successfully simulating a wormhole in a quantum computer could be a boon for solving an old physics conundrum, but so far, quantum hardware hasn’t been powerful or reliable enough to do the complex math. It is getting there very quickly, though.

[Related: Journey to the center of a quantum computer]

The root of the challenge lies in the difference of mathematical systems. “Classical” computers, such as the device you’re using to read this article, store their data and do their computations with “bits,” typically made from silicon. These bits are binary: They can be either zero or one, nothing else. 

For the vast majority of human tasks, that’s no problem. But binary isn’t ideal for crunching the arcana of quantum mechanics—the bizarre rules that guide the universe at the smallest scales—because the system essentially operates in a completely different form of math.

Enter a quantum computer, which swaps out the silicon bits for “qubits” that adhere to quantum mechanics. A qubit can be zero, one—or, due to quantum trickery, some combination of zero and one. Qubits can make certain calculations far more manageable. In 2019, Google operators used Sycamore’s qubits to complete a task in minutes that they said would have taken a classical computer 10,000 years.
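A qubit’s “combination of zero and one” has a concrete mathematical form: a two-component complex vector, manipulated by matrix operations called gates. Here is a minimal state-vector sketch of that idea (a generic illustration, not Google’s code):

```python
import numpy as np

# The basis state |0> as a two-component complex vector.
zero = np.array([1, 0], dtype=complex)

# The Hadamard gate rotates |0> into an equal mix of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi = H @ zero

# Measurement probabilities are the squared amplitudes: a 50/50 split.
print(np.abs(psi) ** 2)  # [0.5 0.5]
```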

There are several ways of simulating wormholes with equations that a computer can solve. The researchers behind the 2022 paper used something called the Sachdev–Ye–Kitaev (SYK) model. A classical computer can crunch the SYK model, but very inefficiently. Not only does the model involve particles interacting at a distance, it also features a good deal of randomness, both of which are tricky for classical computers to process.

Even the wormhole researchers greatly simplified the SYK model for their experiment. “The simulation they did, actually, is very easy to do classically,” says Hrant Gharibyan, a physicist at Caltech, who wasn’t involved in the project. “I can do it in my laptop.”

But simplifying the model opens up new questions. If physicists want to show that they’ve created a wormhole through quantum math, it makes it harder for them to confirm that they’ve actually done it. Furthermore, if physicists want to learn how quantum mechanics interact with gravity, it gives them less information to work with.

Critics have pointed out that the Sycamore experiment didn’t use enough qubits. While the chips in your phone or computer might have billions or trillions of bits, quantum computers are far, far smaller. The wormhole simulation, in particular, used nine.

While the team certainly didn’t need billions of qubits, according to experts, they should have used more than nine. “With a nine-qubit experiment, you’re not going to learn anything whatsoever that you didn’t already know from classically simulating the experiment,” says Scott Aaronson, a computer scientist at the University of Texas at Austin, who wasn’t an author on the paper.

If size is the problem, then current trends give physicists reason to be optimistic that they can simulate a proper wormhole in a quantum computer. Only a decade ago, even getting one qubit to function was an impressive feat. In 2016, the first quantum computer with cloud access had five. Now, quantum computers are in the dozens of qubits. Google Sycamore has a maximum of 53. IBM is planning a line of quantum computers that will surpass 1,000 qubits by the mid-2020s.

Additionally, today’s qubits are extremely fragile. Even small blips of noise or tiny temperature fluctuations—qubits need to be kept at frigid temperatures, just barely above absolute zero—may cause the medium to decohere, snapping the computer out of the quantum world and back into a mundane classical bit. (Newer quantum computers focus on trying to make qubits “cleaner.”)

Some quantum computers use individual particles; others use atomic nuclei. Google’s Sycamore, meanwhile, uses loops of superconducting wire. It all shows that qubits are in their VHS-versus-Betamax era: There are multiple competitors, and it isn’t clear which qubit—if any—will become the equivalent to the ubiquitous classical silicon chip.

“You need to make bigger quantum computers with cleaner qubits,” says Gharibyan, “and that’s when real quantum computing power will come.”

[Related: Scientists eye lab-grown brains to replace silicon-based computer chips]

For many physicists, that’s when great intangible rewards come in. Quantum physics, which guides the universe at its smallest scales, doesn’t have a complete explanation for gravity, which guides the universe at its largest. Showing a quantum wormhole—with qubits effectively teleporting—could bridge that gap.

So, the Google users aren’t the only physicists poring over this problem. Earlier in 2022, a third group of researchers published a paper listing signs of teleportation they’d detected in quantum computers. They didn’t send a qubit through a simulated wormhole—they only sent a classical bit—but it was still a promising step. Better quantum gravity experiments, such as simulating the full SYK model, are about “purely extending our ability to build processors,” Gharibyan explains.

Aaronson is skeptical that a wormhole will ever be modeled in a meaningful form, even in the event that quantum computers do reach thousands of qubits. “There’s at least a chance of learning something relevant to quantum gravity that we didn’t know how to calculate otherwise,” he says. “Even then, I’ve struggled to get the experts to tell me what that thing is.”

The post Quantum computers can’t teleport things—yet appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Hotter weather could be changing baseball https://www.popsci.com/environment/baseball-climate-change-weather/ Fri, 07 Apr 2023 12:00:00 +0000 https://www.popsci.com/?p=532290
Aaron Judge of the New York Yankees hits a home run against the Boston Red Sox during the eighth inning at Fenway Park on September 13, 2022 in Boston, Massachusetts.
Aaron Judge of the New York Yankees hits a home run against the Boston Red Sox during the eighth inning at Fenway Park on September 13, 2022 in Boston, Massachusetts. Maddie Meyer/Getty Images

'Climate ball' isn't necessarily a good thing.

The post Hotter weather could be changing baseball appeared first on Popular Science.

]]>
Aaron Judge of the New York Yankees hits a home run against the Boston Red Sox during the eighth inning at Fenway Park on September 13, 2022 in Boston, Massachusetts.
Aaron Judge of the New York Yankees hits a home run against the Boston Red Sox during the eighth inning at Fenway Park on September 13, 2022 in Boston, Massachusetts. Maddie Meyer/Getty Images

As average global temperatures continue to rise, America’s pastime could be entering the “climate-ball era.” A report published April 7 in the Bulletin of the American Meteorological Society found that since 2010, more than 500 home runs can be attributed to the higher-than-average temperatures caused by human-made global warming.

While the authors of this study only attribute one percent of recent home runs to climate change, their study found that warmer temperatures could account for 10 percent or more of home runs by 2100, if emissions and climate change continue on their current trajectory.

[Related: What’s really behind baseball’s recent home run surge.]

“Global warming is not just a phenomenon that shows up in hurricanes and heat waves—it’s going to alter every aspect of how we live and play,” study co-author and doctoral candidate in geography at Dartmouth College Chris Callahan tells PopSci in an email. “Reducing human emissions of greenhouse gasses is the only way to prevent these effects from accelerating.”

This study primarily arose because Callahan, a huge baseball fan, was interested in any possible connections between climate change and home runs. “This simple physical mechanism—higher temperatures mean reduced air density, which means less air resistance to batted balls—had been proposed previously, but no one had tested whether it shows up in the large-scale data. It turns out that it does!” Callahan says. 
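The density side of that mechanism is ordinary gas physics: at fixed pressure, the ideal gas law says air density falls as temperature rises, so a batted ball meets less drag on a hot day. A quick sketch with textbook constants (our illustration, not the study’s code):

```python
P = 101_325.0  # sea-level air pressure, Pa
R = 287.05     # specific gas constant for dry air, J/(kg*K)

def air_density(temp_c: float) -> float:
    """Dry-air density in kg/m^3 at fixed sea-level pressure."""
    return P / (R * (temp_c + 273.15))

# Roughly a 2 percent density drop between a 68 F (20 C) day
# and a 79 F (26 C) day.
print(air_density(20.0), air_density(26.0))
```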

Callahan and his team analyzed more than 100,000 Major League Baseball (MLB) games and 220,000 individual hits to correlate the number of home runs with the occurrence of unseasonably warm temperatures during games. Next, they estimated how strongly the reduced air density that comes with higher air temperatures drove up home run counts on a given day compared with other games.

Other factors, such as performance-enhancing drugs, bat and ball construction, and technology like launch analytics intended to optimize a batter’s power, were also taken into account. The team does not believe that temperature is the dominant factor in the increase in home runs, particularly because present-day batters are primed to hit the ball at optimal angles and speeds, but temperature does play a role.

Increase in average number of home runs per year for each American major league ballpark with every 2 degree Fahrenheit increase in global average temperature. CREDIT: Christopher Callahan

The team looked in particular at the average number of home runs annually compared to every 2-degree Fahrenheit increase in local average temperature at each MLB ballpark in the US. They found that the open-air Wrigley Field in Chicago would experience the largest spike (more than 15 home runs per season per 2-degree change), while Tampa Bay’s dome-roofed Tropicana Field would stay level at one home run or less regardless of how hot it is outside the stadium.

[Related: Will baseball ever replace umpires with robots?]

Night games lessened temperature and air density’s potential influence on the distance the ball travels, and covered stadiums would nearly eliminate the influence. The study did not name precipitation as a factor; after all, most games in heavy rain are postponed or delayed. The number of runs per season due to temperature could be higher or lower depending on the conditions on each game day.

“I think it was surprising that the [heat’s] effect itself, while intuitive, was so clearly detectable in observations. As a non baseball fan, I was astounded by the data,” study co-author and geographer Justin Mankin tells PopSci. Mankin also noted that next steps for this kind of research could include looking into how wooden bats should change as the climate warms, and how other ballistics-based sports (golf, cricket, etc.) are affected by increased temperatures.

While more home runs arguably make for more exciting games, exposing players and fans to extreme heat is a major risk factor that MLB and its teams will need to consider more frequently as the planet warms.

“A key question for the organization at large is what’s an acceptable level of heat exposure for everybody and what’s the acceptable cost for maximizing home runs,” Mankin said in a statement. “Home runs are one pathway by which temperature is affecting game play, but there are other pathways that are more concerning because they have human risk attached to them.”

Dying plants are ‘screaming’ at you https://www.popsci.com/science/do-plants-makes-sounds-stressed/ Thu, 30 Mar 2023 18:00:00 +0000 https://www.popsci.com/?p=524200
Under that prickly exterior, even a cactus has feelings. Deposit Photos

In the future, farmers might use ultrasound to listen to stressed plants vent.

While plants can’t chat like people, they don’t just sit in restful silence. Under certain conditions—such as a lack of water or physical damage—plants vibrate and emit sound waves. Typically, those waves are too high-pitched for the human ear and go unnoticed.

But biologists can now hear those sound waves from a distance. Lilach Hadany, a biologist at Tel Aviv University in Israel, and her colleagues even managed to record them. They published their work in the journal Cell today.

Hadany and colleagues’ work is part of a niche but budding field called “plant bioacoustics.” While scientists know plants aren’t just inert decorations in the ecological backdrop—they interact with their surroundings, like releasing chemicals as a defense mechanism—researchers don’t exactly know how plants respond to and produce sounds. Not only could solving this mystery give farmers a new way of tending to their plants, but it might also unlock something wondrous: Plants may have senses in a way we never realized.

It’s established that “the sounds emitted by plants are much more prominent after some kind of stress,” says František Baluška, a plant bioacoustics researcher at Bonn University in Germany who wasn’t a part of the new study. But past plant bioacoustics experiments had to listen to plants at a very close distance to measure vibrations. Meanwhile, Hadany and her colleagues managed to pick up plant sounds from across a room.

[Related on PopSci+: Biohacked cyborg plants may help prevent environmental disaster]

The study team first tested out their ideas on tomato and tobacco plants. Some plants were watered regularly, while others were neglected for days—a process that simulated drought-like conditions. Finally, the most unfortunate plants were severed from their roots.

Plants under idyllic conditions seemed to thrive. But the damaged and dehydrated plants did something peculiar: They emitted clicking sounds once every few minutes. 

Of course, if you were to walk through a drought-stricken tomato grove with a machete, chopping every vine you see, you wouldn’t hear a chorus of distressed plants. The plants emit sounds in ultrasound: frequencies too high for the human ear to hear. That’s part of why researchers have only now perceived these clicks.

“Not everybody has the equipment to do ultrasound [or] has the mind to look into these broader frequencies,” says ecologist Daniel Robert, a professor at the University of Bristol in the United Kingdom who wasn’t an author of the paper.

Three tomato plants’ sounds were recorded in a greenhouse. Ohad Lewin-Epstein

The researchers were able to record similar sounds in other plants deprived of water, including wheat, maize, wine grapes, pincushion cactus, and henbit (a common spring weed in the Northern Hemisphere). 

Biologists think the clicks might come from xylem, the “piping” that transports water and nutrients through a plant. Pressure differences cause air bubbles to enter the fluid. The bubbles grow until they pop—and the burst is the noise picked up by scientists. This process is called cavitation. 

Most people who study cavitation aren’t biologists; they’re typically physicists and engineers. For them, cavitation is often a nuisance. Bursting bubbles can damage pumps, propellers, hydraulic turbines, and other devices that do their work underwater. But, on the other hand, we can put cavitation to work for us: for instance, in ultrasound jewelry cleaners.

Although it’s known cavitation occurs in plants under certain conditions, like when they’re dehydrated, scientists aren’t sure that this process can entirely explain the plant sounds they hear. “There might not be only one mechanism,” says Robert.

The authors speculate that their work could eventually help plant growers, who could listen from a distance and monitor the plants in their greenhouse. To support this potential future, Hadany and her colleagues trained a machine learning model to break down the sound waves and discern what stress caused a particular sound. Instead of being surprised by wilted greens, this type of tech could give horticulturists a heads-up.
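
As a rough sketch of what such a system could look like—every name and number below is hypothetical, and the study's actual pipeline is more involved—one could reduce each recorded click to a couple of spectral features and train an off-the-shelf classifier:

```python
# Hypothetical sketch: classify plant stress from ultrasonic clicks.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def click_features(waveform: np.ndarray, sample_rate: int) -> np.ndarray:
    """Two crude features of one click: peak frequency and total energy."""
    spectrum = np.abs(np.fft.rfft(waveform))
    freqs = np.fft.rfftfreq(len(waveform), d=1 / sample_rate)
    return np.array([freqs[spectrum.argmax()], spectrum.sum()])

# Stand-in training data; a real system would use labeled recordings
# (0 = drought-stressed, 1 = cut stem).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = rng.integers(0, 2, size=200)

model = RandomForestClassifier(n_estimators=50).fit(X, y)
print(model.predict(X[:3]))   # stress-class guesses for three new clicks
```

The interesting engineering question isn't the classifier—it's the features: which parts of a click's spectrum reliably separate a thirsty tomato from a cut one.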

[Related: How to water your plants less but still keep them happy]

Robert suspects that—unlike people—animals might already be able to hear plant sounds. Insects searching for landing spots or places to lay their eggs, for instance, might choose among plants by listening in and judging their health.

If there is an observable quality like sound (or light, or electric fields) in the wild, then some organisms will evolve to use it, explains Robert. “This is why we have ears,” he says.

If that’s the case, perhaps it can work the other way—plants may also respond to sounds. Scientists like Baluška have already shown that plants can “hear” external sounds. For example, research suggests some leaf trichomes react to vibrations from worms chewing on them. And in the laboratory, researchers have seen some plants’ root tips grow through the soil in the direction of incoming sounds.

If so, some biologists think plants may have more sophisticated “senses” than we perhaps believed.

“Plants definitely must be aware of what is around because they must react every second because the environment is changing all the time,” says Baluška. “They must be able to, somehow, understand the environment.”

Room-temperature superconductors could zap us into the future https://www.popsci.com/science/room-temperature-superconductor/ Sat, 25 Mar 2023 16:00:00 +0000 https://www.popsci.com/?p=522900
In this image, the superconducting Cooper-pair cuprate is superimposed on a dashed pattern that indicates the static positions of electrons caught in a quantum "traffic jam" at higher energy. US Department of Energy

Superconductors convey powerful currents and intense magnetic fields. But right now, they can only be built at searing temperatures and crushing pressures.

In the future, wires might cross underneath oceans to effortlessly deliver electricity from one continent to another. Those cables would carry currents from giant wind turbines or power the magnets of levitating high-speed trains.

All these technologies rely on a long-sought wonder of the physics world: superconductivity, a remarkable physical state that lets a material carry an electric current without losing any juice.

But superconductivity has only functioned at freezing temperatures that are far too cold for most devices. To make it more useful, scientists have to recreate the same conditions at regular temperatures. And even though physicists have known about superconductivity since 1911, a room-temperature superconductor still evades them, like a mirage in the desert.

What is a superconductor?

All metals have a point called the “critical temperature.” Cool the metal below that temperature, and electrical resistivity all but vanishes, making it extra easy for electrons to flow through. To put it another way, an electric current running through a closed loop of superconducting wire could circulate forever.

Today, anywhere from 8 to 15 percent of mains electricity is lost between the generator and the consumer because the electrical resistivity in standard wires naturally wicks some of it away as heat. Superconducting wires could eliminate all of that waste.

[Related: This one-way superconductor could be a step toward eternal electricity]

There’s another upside, too. When electricity flows through a coiled wire, it produces a magnetic field; superconducting wires intensify that magnetism. Already, superconducting magnets power MRI machines, help particle accelerators guide their quarry around a loop, shape plasma in fusion reactors, and push maglev trains like Japan’s under-construction Chūō Shinkansen.

Turning up the temperature

While superconductivity is a wondrous ability, physics nerfs it with the cold caveat. Most known materials’ critical temperatures are barely above absolute zero (-459 degrees Fahrenheit). Aluminum, for instance, comes in at -457 degrees Fahrenheit; mercury at -452 degrees Fahrenheit; and the ductile metal niobium at a balmy -443 degrees Fahrenheit. Chilling anything to temperatures that frigid is tedious and impractical. 
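
Those Fahrenheit figures are easier to parse against the absolute scale physicists actually use. A quick conversion sketch (textbook critical temperatures, rounded):

```python
# Convert standard critical temperatures from kelvin to Fahrenheit.
critical_temps_k = {"aluminum": 1.2, "mercury": 4.2, "niobium": 9.3}

def kelvin_to_fahrenheit(kelvin: float) -> float:
    return kelvin * 9 / 5 - 459.67

for metal, tc in critical_temps_k.items():
    print(f"{metal}: {tc} K = {kelvin_to_fahrenheit(tc):.1f} F")
# aluminum: -457.5 F, mercury: -452.1 F, niobium: -442.9 F
```

Seen in kelvin, the problem is stark: these metals superconduct only within a few degrees of absolute zero.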

Scientists made it happen—in a limited capacity—by testing it with exotic materials like cuprates, a type of ceramic that contains copper and oxygen. In 1986, two IBM researchers found a cuprate that superconducted at -396 degrees Fahrenheit, a breakthrough that won them the Nobel Prize in Physics. Soon enough, others in the field pushed cuprate superconductors past -321 degrees Fahrenheit, the boiling point of liquid nitrogen—a far more accessible coolant than the liquid hydrogen or helium they’d otherwise need. 

“That was a very exciting time,” says Richard Greene, a physicist at the University of Maryland. “People were thinking, ‘Well, we might be able to get up to room temperature.’”

Now, more than 30 years later, the search for a room-temperature superconductor continues. Equipped with algorithms that can predict what a material’s properties will look like, many researchers feel that they’re closer than ever. But some of their ideas have been controversial.

The replication dilemma

One way the field is making strides is by turning the attention away from cuprates to hydrides, or materials with negatively charged hydrogen atoms. In 2015, researchers in Mainz, Germany, set a new record with a sulfur hydride that superconducted at -94 degrees Fahrenheit. Some of them then quickly broke their own record with a hydride of the rare-earth element lanthanum, pushing the mercury up to around -9 degrees Fahrenheit—about the temperature of a home freezer.

But again, there’s a catch. Critical temperatures shift when the surrounding pressure changes, and hydride superconductors, it seems, require rather inhuman pressures. The lanthanum hydride only achieved superconductivity at pressures above 150 gigapascals—roughly equivalent to conditions in the Earth’s core, and far too high for any practical purpose in the surface world.

[Related: How the small, mighty transistor changed the world]

So imagine the surprise when mechanical engineers at the University of Rochester in upstate New York presented a hydride made from another rare-earth element, lutetium. According to their results, the lutetium hydride superconducts at around 70 degrees Fahrenheit and 1 gigapascal. That’s still 10,000 times Earth’s air pressure at sea level, but low enough to be used for industrial tools.

“It is not a high pressure,” says Eva Zurek, a theoretical chemist at the University at Buffalo. “If it can be replicated, [this method] could be very significant.”

Scientists aren’t cheering just yet, however—they’ve seen this kind of an attempt before. In 2020, the same research group claimed they’d found room-temperature superconductivity in a hydride of carbon and sulfur. After the initial fanfare, many of their peers pointed out that they’d mishandled their data and that their work couldn’t be replicated. Eventually, the University of Rochester engineers caved and retracted their paper.

Now, they’re facing the same questions with their lutetium superconductor. “It’s really got to be verified,” says Greene. The early signs are inauspicious: A team from Nanjing University in China recently tried to replicate the experiment, without success.

“Many groups should be able to reproduce this work,” Greene adds. “I think we’ll know very quickly whether this is correct or not.”

But if the new hydride does mark the first room-temperature superconductor—what next? Will engineers start stringing power lines across the planet tomorrow? Not quite. First, they have to understand how this new material behaves under different temperatures and other conditions, and what it looks like at smaller scales.

“We don’t know what the structure is yet. In my opinion, it’s going to be quite different from a high-pressure hydride,” says Zurek. 

If the superconductor is viable, engineers will have to learn how to make it for everyday uses. But if they succeed, the result could be a gift for world-changing technologies.

Dark energy fills the cosmos. But what is it? https://www.popsci.com/science/what-is-dark-energy/ Mon, 20 Mar 2023 10:00:00 +0000 https://www.popsci.com/?p=520278
A composite image of colliding galaxies, which make up cluster Abell 2744. The blue represents dark matter, a kindred mystery to dark energy. NASA/CXC/ITA/INAF/STScI

We know how dark energy behaves, but its nature is still a mystery.

The universe has a dark side—it’s filled with dark matter and dark energy. Dark matter is the unseen mass floating around galaxies, which physicists have searched for using giant vats of ice, particle colliders, and other sophisticated techniques. But what about dark matter’s stranger sibling, dark energy? 

Dark energy is the term given to something that is causing the universe to expand faster and faster as time goes on. The great puzzle facing cosmologists today is figuring out the identity of that “something.”

“We can tell you a lot about the properties of dark energy and how it behaves,” says astrophysicist Tamara Davis, a professor at the University of Queensland in Australia. “However, we still don’t know what it is. That’s the big question.”

How do we know dark energy exists?

Astronomers have long known that the universe is expanding. In the 1920s, Edwin Hubble observed galaxies in motion and formulated Hubble’s Law, which relates a galaxy’s velocity to its distance from us. At the end of the 20th century, though, new detections of supernovae in far-off galaxies revealed a conundrum: The expansion of the universe isn’t constant, but is instead speeding up.
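
Hubble's Law is compact enough to state outright; the value of the constant below is today's approximate measurement, not a figure from this article:

$$ v = H_0 d, \qquad H_0 \approx 70 \ \mathrm{km\,s^{-1}\,Mpc^{-1}} $$

So a galaxy 100 megaparsecs away recedes at roughly 7,000 kilometers per second—and the late-1990s surprise was that this simple picture, extrapolated over cosmic time, didn't fit the supernova data without acceleration.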

“The fact that the universe is accelerating caught us all by surprise,” says University of Texas at Austin astrophysicist Katherine Freese. Unlike the attractive force of gravity, dark energy must create “some sort of repulsive behavior, driving things apart from one another more and more quickly,” adds Freese.

Many observations since the 1990s have confirmed that the universe is accelerating. Exploding stars in distant galaxies appear fainter than they should in a steadily expanding universe. Even the cosmic microwave background—the remnant light from the first clear moments in the universe’s history—shows fingerprints of dark energy’s effects. To explain the observed universe, dark energy is a necessary component of our mathematical models of cosmology.

[Related: Dark matter has never killed anyone, and scientists want to know why]

The term dark energy was coined in 1998 by astrophysicist Michael Turner to match the nomenclature of dark matter. It also conveys that the universe’s accelerating expansion was a crucial, unsolved problem. Many scientists at the time thought that Albert Einstein’s cosmological constant—a “fudge factor” he included in general relativity to make the math work out, also known as lambda—was the perfect explanation for dark energy, since it fit nicely into their models. 

“It was my belief that it was not that simple,” says Turner, now a visiting professor at UCLA. He views the accelerating universe as “the most profound problem” and “the biggest mystery in all of science.” 

Why does dark energy matter?

The Lambda-CDM model—which says we live in a universe that consists of only 5 percent normal matter (everything you’ve ever seen or touched), plus 27 percent dark matter and a whopping 68 percent dark energy—is “the current paradigm in cosmology,” says Yale astrophysicist Will Tyndall. It “rather ambitiously seeks to incorporate (and explain) all of cosmic history,” he says. But it still leaves a lot unexplained, including the nature of dark energy. “After all, how can we have so little understanding of something that supposedly constitutes 68 percent of the universe we live in?” adds Tyndall.

Dark energy is also a major deciding factor in our universe’s ultimate fate. Will the universe be torn apart in a Big Rip, shredding everything atom by atom? Or will it end in a whimper?

These scenarios depend on whether dark energy changes with time. If dark energy is just the cosmological constant, with no variation, our universe will expand eternally into a very lonely place; in this scenario, all the stars beyond our local cluster of galaxies would be invisible to us, too red to be detected.

If dark energy gets stronger, it might lead to the event known as the Big Rip. Maybe dark energy weakens, and our universe crunches back down, starting the cycle all over with a new big bang. Physicists won’t know which of these scenarios lies ahead until they have a better handle on the nature of dark energy.

What could dark energy actually be? 

Dark energy shows up in the mathematics of the universe as Einstein’s cosmological constant, but that doesn’t explain what physically causes the universe’s expansion to speed up. A leading theory is a funky feature of quantum mechanics known as the vacuum energy. This is created when pairs of particles and their antiparticles quickly pop into and out of existence, which happens pretty much everywhere all the time. 

It sounds like a great explanation for dark energy. But there’s one big issue: The value of the vacuum energy that scientists measure and the one they predict from theories are wildly and inexplicably different. This is known as the cosmological constant problem. Put another way, particle physicists’ models predict that what we think of as “nothing” should have some weight, Turner says. But measurements find it weighs very little, if anything at all. “Maybe nothing weighs nothing,” he says.

[Related: An ambitious dark energy experiment just went live in Arizona]

Cosmologists have raised other explanations for dark energy over the years. One, string theory, claims that the universe is made up of tiny string-like bits, and that the value of dark energy we see just happens to be one possibility among many universes in a multiverse. Many physicists consider this to be pretty human-centric in its logic—we couldn’t exist in a universe with other values of the cosmological constant, so we ended up in this one, even if it’s an outlier compared to the others.

Other physicists have considered changing Einstein’s equations for general relativity altogether, but most of those attempts were ruled out by measurements from LIGO’s pioneering observations of gravitational waves. “In short, we need a brilliant new idea,” says Freese.

How might scientists solve this mystery?

New observations of the cosmos may be able to help astrophysicists measure the properties of dark energy in more detail. For example, astronomers already know the universe’s expansion is accelerating—but has that acceleration always been the same? If the answer to this question is no, then that means dark energy hasn’t been constant, and the lives of physics theorists everywhere will be upended as they scramble to find new explanations.

One project, known as the Dark Energy Spectroscopic Instrument, or DESI, is already underway at Kitt Peak National Observatory in Arizona. This effort searches for signs of varying acceleration in the universe through cosmic cartography. “It is like laying grid-paper over the universe and measuring how it has expanded and accelerated with time,” says Davis.

Even more experiments are upcoming, such as the European Euclid mission launching this summer. Euclid will map galaxies as far as 10 billion light-years away—looking backward in time by 10 billion years. This is “the entire period over which dark energy played a significant role in accelerating the expansion of the universe,” as its mission website states. Radio telescopes such as CHIME will be mapping the universe in a slightly different way, tracing how hydrogen spreads across space.

New observations won’t solve everything, though. “Even if we measure the properties of dark energy to infinite precision, it doesn’t tell us what it is,” Davis adds. “The real breakthrough that is needed is a theoretical one.” Astronomers have a timeline for new experiments, which will keep marching forward, recording better and better measurements. But theoretical breakthroughs are unpredictable—it could take one, ten, or even a hundred-plus years. “In science, there are very few true puzzles. A true puzzle means you don’t really know the answer,” says Turner. “And I think dark energy is one of them.”

Clouds of ancient space water might have filled Earth’s oceans https://www.popsci.com/science/water-origin-theory-space/ Fri, 10 Mar 2023 11:00:00 +0000 https://www.popsci.com/?p=518688
This artist’s impression shows the planet-forming disc around the star V883 Orionis. The inset image shows the two kinds of water molecules studied in this disc: normal water, with one oxygen atom and two hydrogen atoms, and a heavier version where one hydrogen atom is replaced with deuterium, an isotope. ESO/L. Calçada

The molecules that made Earth wet were probably older than our sun.

Water is an essential ingredient for life as we know it, but its origins on Earth, or any other planet, have been a long-standing puzzle. Was most of our planet’s water incorporated in the early Earth as it coalesced out of the material orbiting the young sun? Or was water brought to the surface only later by comet and asteroid bombardments? And where did that water come from originally?

A study published on March 7 in the journal Nature provides new evidence to bolster a theory about the ultimate origins of water—namely, that it predates the sun and solar system, forming slowly over time in vast clouds of gas and dust between stars.

“We now have a clear link in the evolution of water. It actually seems to be directly inherited, all the way back from the cold interstellar medium before a star ever formed,” says John Tobin, an astronomer studying star formation at the National Radio Astronomy Observatory and lead author of the paper. The water, unchanged, was incorporated from the protoplanetary disk, a dense, round layer of dust and gas that forms in orbit around newborn stars and from which planets and small space bodies like comets emerge. Tobin says the water gets drawn into comets “relatively unchanged as well.”

Astronomers have proposed different origin stories for water in solar systems. In the hot nebular theory, Tobin says, the heat in a protoplanetary disk around a natal star will break down water and other molecules, which form afresh as things start to cool.

The problem with that theory, according to Tobin, is that when water emerges at relatively warm temperatures in a protoplanetary disk, it won’t look like the water found on comets and asteroids. We know what those molecules look like: Space rocks, such as asteroids and comets, act as time capsules, preserving the state of matter in the early solar system. Specifically, water made in the disk wouldn’t have enough deuterium—the hydrogen isotope that contains one neutron and one proton in its nucleus, rather than a single proton as in typical hydrogen.

[Related: Meteorites older than the solar system contain key ingredients for life]

An alternative to the hot nebular theory is that water forms at cold temperatures on the surface of dust grains in vast clouds in the interstellar medium. This deep chill changes the dynamics of water formation, so that more deuterium is incorporated in place of typical hydrogen atoms in H2O molecules, more closely resembling the hydrogen-to-deuterium ratio seen in asteroids and comets.  
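
The deuterium bookkeeping can be made concrete with standard reference values (these numbers are not from the paper): Earth's seawater standard carries far more deuterium than the gas the sun formed from.

```python
# Standard reference D/H ratios (not from the paper discussed here).
protosolar_dh = 2.1e-5    # approximate D/H of the sun's natal gas
earth_ocean_dh = 1.56e-4  # VSMOW, the seawater standard

enrichment = earth_ocean_dh / protosolar_dh
print(f"Earth's water is ~{enrichment:.0f}x deuterium-enriched")  # ~7x
```

That several-fold enrichment is what cold, grain-surface chemistry can produce and hot in-disk chemistry can't—hence its value as a tracer.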

“The surface of dust grains is the only place where you can efficiently form large amounts of water with deuterium in it,” Tobin says. “The other routes of forming water with deuterium and gas just don’t work.” 

While this explanation worked in theory, the new paper is the first time scientists have found evidence that water from the interstellar medium can survive the intense heat during the formation of a protoplanetary disk. 

The researchers used the European Southern Observatory’s Atacama Large Millimeter/submillimeter Array, a radio telescope in Chile, to observe the protoplanetary disk around the young star V883 Orionis, about 1,300 light-years away from Earth in the constellation Orion. 

Radio telescopes such as this one can detect the signal of water molecules in the gas phase. But in a typical young system, most of a disk’s water is frozen onto dust grains, where telescopes cannot pick it up; only a small region very close to the star is warm enough to keep water as gas.

But V883 Orionis is not a typical young star—it’s been shining brighter than normal due to material from the protoplanetary disk falling onto the star. This increased intensity warmed ice on dust grains farther out than usual, allowing Tobin and his colleagues to detect the signal of deuterium-enriched water in the disk. 

“That’s why it was unique to be able to observe this particular system, and get a direct confirmation of the water composition,” Tobin explains. “That signature of that level of deuterium gives you your smoking gun.” This suggests Earth’s oceans and rivers are, at a molecular level, older than the sun itself.

[Related: Here’s how life on Earth might have formed out of thin air and water]

“We obviously will want to do this for more systems to make sure this wasn’t just a fluke,” Tobin adds. It’s possible, for instance, that water chemistry is somehow altered later in the development of planets, comets, and asteroids, as they smash together in a protoplanetary disk.

But as an astronomer studying star formation, Tobin already has some follow-up candidates in mind. “There are several other good candidates that are in the Orion star-forming region,” he says. “You just need to find something that has a disk around it.”

We might soon lose a full second of our lives https://www.popsci.com/science/negative-leap-second/ Mon, 20 Feb 2023 11:00:00 +0000 https://www.popsci.com/?p=513420
Some tech companies think a negative leap second would turn the world upside down. But it probably won't be that bad. Deposit Photos

The Earth is spinning faster. A negative leap second could help the world's clocks catch up.

The leap second’s days, so to speak, are numbered. Late last year, the world’s timekeepers announced they would abandon the punctual convention in 2035.

That still gives timekeepers a chance to invoke the leap second before its scheduled end—in a more unconventional way. Ever since its creation, they’ve only used positive leap seconds, adding a second to slow down the world’s clocks when they get too far ahead of Earth’s rotation.

As it happens, the world’s clocks aren’t ahead right now; in fact, they’ve fallen behind. If this trend holds up, it’s possible that the next leap second may be a negative one, removing a second to speed the human measure of time back up. That’s uncharted territory.

The majority of humans won’t notice a missing second, just as they wouldn’t with an extra one. Computers and their networks, however, already have problems with positive leap seconds. While their operators can practice for when the world’s clocks skip a second, they won’t know what a negative leap second can do until the big day happens (if it ever does).

“Nobody knows how software systems will react to it,” says Marina Gertsvolf, a researcher at the National Research Council, which is responsible for Canada’s timekeeping.

The second is defined by a process inside the cesium-133 atom—a flip between two energy levels of its ground state—that atomic clocks can count with stunning accuracy: 9,192,631,770 cycles of the associated radiation make exactly one second. But a day is based on how long the Earth takes to finish one full spin, which takes 24 hours, or 86,400 of those seconds.

Except a day isn’t always precisely 86,400 seconds, because the planet’s rotation isn’t a constant. Everything from the mantle churning to the atmosphere moving to the moon’s gravity pulling can play with it, adding or subtracting a few milliseconds every day. Over time, those differences add up.

[Related: What would happen if the Earth started to spin faster]

An organization called the International Earth Rotation and Reference Systems Service (IERS) is responsible for tracking and adjusting for the changes. When the gulf widens by enough, they decree that the final minute of June 30 or December 31—whichever comes next—should be modified with a leap second. Since 1972, these judges of time have added 27 positive leap seconds.

But for the past several months, Earth’s rotation has been pacing ahead of the world’s clocks. If this continues, then it’s possible the next leap second might be negative. At some point in the late 2020s or early 2030s, IERS might decide to peel away the last second of the last minute of June 30 or December 31, resulting in a minute that’s 59 seconds long. Clocks would skip from 23:59:58 right to 00:00:00. And we don’t know what that will do. 

What we do know is the chaos that past positive leap seconds have caused. It’s nothing like the apocalyptic collapse that Y2K preppers feared, but the time tweaks have given systems administrators their fair share of headaches. In 2012, a leap second glitched a server’s Linux operating system and knocked out Reddit at midnight. In 2017, Cloudflare—a major web service provider—experienced a significant outage due to a leap second. Problems invariably arise when one computer or server talks to another computer or server that still might not have accounted for a leap second.

As a result, some of the leap second’s biggest critics have been tech companies who have to deal with the consequences. And at least one of them is not excited about the possibility of a negative leap second. In 2022, two Facebook engineers wrote: “The impact of a negative leap second has never been tested on a large scale; it could have a devastating effect on the software relying on timers or schedulers.”

Timekeepers, however, aren’t expecting a meltdown. “Negative leap seconds aren’t quite as nasty as positive leap seconds,” says Michael Wouters, a researcher at the National Measurement Institute, Australia’s peak measurement body.

[Related: Daylight saving can mess with circadian rhythm]

Still, some organizations have already made emergency plans. Google, for instance, uses a process it calls a “smear.” Rather than adding a whole second at once, it spreads a positive leap second over the course of a day, making every second slightly longer to make up the difference. According to the company, it tested the process for a negative leap second by making every second slightly shorter, amounting to a lost second over the course of the day.
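
The arithmetic of a linear smear is straightforward (a simplified sketch; production implementations such as Google's handle the details differently):

```python
# Hide one leap second by stretching every second of a 24-hour window.
SECONDS_PER_DAY = 86_400

stretch = 1 / SECONDS_PER_DAY   # extra time per smeared second
print(f"Each smeared second lasts {1 + stretch:.8f} s")                   # 1.00001157 s
print(f"Drift vs. atomic time per hour: {3600 * stretch * 1000:.1f} ms")  # ~41.7 ms

# A negative smear just flips the sign: every second gets slightly shorter.
print(f"Negative smear second: {1 - stretch:.8f} s")
```

Spread that thin, the adjustment stays far below the tolerances of most distributed systems—which is the whole point of smearing.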

Many servers get their time from constellations of navigation satellites like America’s GPS, Europe’s Galileo, and China’s BeiDou. To read satellite data, servers typically rely on specialized receivers that translate signals into information—including the time. According to Wouters, many of those receivers’ manufacturers have tested that their devices can handle negative leap seconds. “I think that there is a lot more awareness of leap seconds than in the past,” says Wouters.

At the end of the day, the leap second is just an awkward, artificial construct. Human timekeepers use it to force the astronomical cycles that once defined our time back into lockstep with the atomic physics that have replaced the stars. “It removes this idea that time belongs to no country … and no particular industrial interest,” says Gertsvolf.

So, with the blessing of the world’s timekeepers, the leap second is on its way out. If that goes according to plan, then we can let the Earth spin as it wants without having to skip a beat.

Why is space cold if the sun is hot? https://www.popsci.com/why-is-space-cold-sun-hot/ Tue, 31 Aug 2021 13:04:12 +0000 https://www.popsci.com/uncategorized/why-is-space-cold-sun-hot/
On July 23, 2012, a massive cloud of solar material erupted off the sun's right side, zooming out into space. NASA/STEREO

We live in a universe of extremes.

How cold is space? And how hot is the sun? These are both excellent questions. Unlike our mild habitat here on Earth, our solar system is full of temperature extremes. The sun is a bolus of gas and fire measuring around 27 million degrees Fahrenheit at its core and 10,000 degrees at its surface. Meanwhile, the cosmic background temperature—the temperature of space once you get far enough away to escape Earth’s balmy atmosphere—hovers at -455 F.

But how can one part of our galactic neighborhood be freezing when another is searing? Scholars (and NFL players) have puzzled over this paradox for time eternal.

Well, there’s a reasonable explanation. Heat travels through the cosmos as radiation—electromagnetic waves, largely infrared, that carry energy from hotter objects to cooler ones. The radiation waves excite molecules they come in contact with, causing them to heat up. This is how heat travels from the sun to Earth, but the catch is that radiation only heats molecules and matter that are directly in its path. Everything else stays chilly. Take Mercury: the nighttime temperature of the planet can be 1,000 degrees Fahrenheit lower than the radiation-exposed day side, according to NASA.
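
A standard back-of-the-envelope calculation (not from this article) shows how much work radiation alone does. Balance the sunlight a body absorbs against the heat it re-radiates and you get its equilibrium temperature:

$$ T_{\mathrm{eq}} = \left( \frac{S\,(1-a)}{4\sigma} \right)^{1/4} $$

where $S \approx 1361\ \mathrm{W/m^2}$ is the intensity of sunlight at Earth's distance, $a \approx 0.3$ is the fraction Earth reflects, and $\sigma$ is the Stefan-Boltzmann constant. That works out to about 255 K, or roughly -1 degree Fahrenheit—everything milder than that down here is our atmosphere's doing.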

Compare that to Earth, where the air around you stays warm even if you’re in the shade—and even, in some seasons, in the dark of night. That’s because heat travels throughout our beautiful blue planet by three methods instead of just one: conduction, convection, and radiation. When the sun’s radiation hits and warms up molecules in our atmosphere, they pass that extra energy to the molecules around them. Those molecules then bump into and heat up their own neighbors. This heat transfer from molecule to molecule is called conduction, and it’s a chain reaction that warms areas outside of the sun’s path.

[Related: What happens to your body when you die in space?]

Space, however, is a vacuum—meaning it’s basically empty. Gas molecules in space are too few and far apart to regularly collide with one another. So even when the sun heats them with infrared waves, transferring that heat via conduction isn’t possible. Similarly, convection—a form of heat transfer that happens in the presence of gravity—is important in dispersing warmth across the Earth, but doesn’t happen in zero-g space.

These are things Elisabeth Abel, a thermal engineer on NASA’s DART project, thinks about as she prepares vehicles and devices for long-term voyages through space. This is especially true when she was working on the Parker Solar Probe, she says.

As you can probably tell by its name, the Parker Solar Probe is part of NASA’s mission to study the sun. It zooms through the outermost layer of the star’s atmosphere, called the corona, collecting data. In April 2021, the probe got within 6.5 million miles of the inferno, the closest a spacecraft has ever been to the sun. The heat shield mounted on one side of the probe makes this possible.

“The job of that heat shield,” Abel says, is to make sure “none of the solar radiation [will] touch anything on the spacecraft.” So, while the heat shield is experiencing the extreme heat (around 250 degrees F) of our host star, the spacecraft itself is much colder—around -238 degrees F, she says.

[Related: How worried should we be about solar flares and space weather?]

As the lead thermal engineer for DART—a small spacecraft designed to collide with an asteroid and nudge it off course—Abel takes practical steps to manage the temperatures of deep space. The extreme variation in temperature between the icy void and the boiling heat of the sun poses unique challenges. Some parts of the spacecraft needed help staying cool enough to avoid shorting out, while others required heating elements to keep them warm enough to function.

Preparing for temperature shifts of hundreds of degrees might sound wild, but it’s just how things are out in space. The real oddity is Earth: Amidst the extreme cold and fiery hot, our atmosphere keeps things surprisingly mild—at least for now.

This story has been updated. It was originally published on July 24, 2019.

Let’s talk about how planes fly https://www.popsci.com/how-do-planes-fly/ Fri, 02 Nov 2018 19:00:00 +0000 https://www.popsci.com/uncategorized/how-do-planes-fly/
Flight isn't magic, it's physics. Josue Isai Ramos Figueroa / Unsplash

How does an aircraft stay in the sky, and how do wings work? Fasten your seatbelts—let's explore.

How does an airplane stay in the air? Whether you’ve pondered the question while flying or not, it remains a fascinating, complex topic. Here’s a quick look at the physics involved in an airplane’s flight, as well as a glimpse at a common misconception surrounding the subject.

First, picture an aircraft—a commercial airliner, such as a Boeing or Airbus transport jet—cruising in steady flight through the sky. That flight involves a delicate balance of opposing forces. “Wings produce lift, and lift counters the weight of the aircraft,” says Holger Babinsky, a professor of aerodynamics at the University of Cambridge. 

“That lift [or upward] force has to be equal to, or greater than, the weight of the airplane—that’s what keeps it in the air,” says William Crossley, the head of the School of Aeronautics and Astronautics at Purdue University. 

Meanwhile, the aircraft’s engines are giving it the thrust it needs to counter the drag it experiences from the friction of the air around it. “As you’re flying forward, you have to have enough thrust to at least equal the drag—it can be higher than the drag if you’re accelerating; it can be lower than the drag if you’re slowing down—but in steady, level flight, the thrust equals drag,” Crossley notes.

[Related: How high do planes fly?]

Understanding just how the airplane’s wings produce the lift in the first place is a bit more complicated. “The media, in general, are always after a quick and simple explanation,” Babinsky reflects. “I think that’s gotten us into hot water.” One popular explanation, which is wrong, goes like this: Air moving over the curved top of a wing has to travel a longer distance than air moving below it, and because of that, it speeds up to try to keep abreast of the air on the bottom—as if two air particles, one going over the top of the wing and one going under, need to stay magically connected. NASA even has a webpage dedicated to this idea, labeling it as an “incorrect airfoil theory.”

So what’s the correct way to think about it? 

Lend a hand

One very simple way to start thinking about the topic is to imagine that you’re riding in the passenger seat of a car. Stick your arm out sideways, into the incoming wind, with your palm down, thumb forward, and hand basically parallel to the ground. (If you do this in real life, please be careful.) Now, angle your hand upward a little at the front, so that the wind catches the underside of your hand; that process of tilting your hand upward approximates an important concept with wings called their angle of attack.

“You can clearly feel the lift force,” Babinsky says. In this straightforward scenario, the air is hitting the bottom of your hand, being deflected downward, and in a Newtonian sense (see law three), your hand is being pushed upward. 

Follow the curve 

But a wing, of course, is not shaped like your hand, and there are additional factors to consider. Two key points to keep in mind with wings are that the front of a wing—the leading edge—is curved, and overall, they also take on a shape called an airfoil when you look at them in cross-section. 

[Related: How pilots land their planes in powerful crosswinds]

The curved leading edge of a wing is important because airflow tends to “follow a curved surface,” Babinsky says. He says he likes to demonstrate this concept by pointing a hair dryer at the rounded edge of a bucket. The airflow will attach to the bucket’s curved surface and make a turn, potentially even snuffing out a candle on the other side that’s blocked by the bucket. Here’s a charming old video that appears to demonstrate the same idea. “Once the flow attaches itself to the curved surface, it likes to stay attached—[although] it will not stay attached forever,” he notes.

With a wing—and picture it angled up somewhat, like your hand out the window of the car—what happens is that the air encounters the rounded leading edge. “On the upper surface, the air will attach itself, and bend round, and actually follow that incidence, that angle of attack, very nicely,” he says. 

Keep things low-pressure

Ultimately, what happens is that the air moving over the top of the wing attaches to the curved surface and turns, or flows downward somewhat: a low-pressure area forms, and the air also travels faster. Meanwhile, the air is hitting the underside of the wing, like the wind hits your hand as it sticks out the car window, creating a high-pressure area. Voila: the wing has a low-pressure area above it, and higher pressure below. “The difference between those two pressures gives us lift,” Babinsky says. 
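
Engineers roll all of this pressure accounting into the standard lift equation. A ballpark calculation (all numbers are illustrative guesses for a mid-size airliner at takeoff, not official specs):

```python
# Standard lift equation: L = 0.5 * rho * v^2 * S * CL.
rho = 1.225   # kg/m^3, sea-level air density
v = 75.0      # m/s, roughly 145 knots at rotation
S = 122.0     # m^2, wing area
CL = 1.8      # lift coefficient with flaps extended

lift = 0.5 * rho * v**2 * S * CL
print(f"Lift: {lift / 1000:.0f} kN, enough to support ~{lift / 9.81 / 1000:.0f} tonnes")
```

Doubling the speed quadruples the lift—that v-squared term is why the slow phases of flight, takeoff and landing, are what size a wing and its flaps.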

Babinsky notes that more work is being done by that lower pressure area above the wing than the higher pressure one below the wing. You can think of the wing as deflecting the air flow downwards on both the top and bottom. On the lower surface of the wing, the deflection of the flow “is actually smaller than the flow deflection on the upper surface,” he notes. “Most airfoils, a very, very crude rule of thumb would be that two-thirds of the lift is generated there [on the top surface], sometimes even more,” Babinsky says.

Can you bring it all together for me one last time?

Sure! Gloria Yamauchi, an aerospace engineer at NASA’s Ames Research Center, puts it this way. “So we have an airplane, flying through the air; the air approaches the wing; it is turned by the wing at the leading edge,” she says. (By “turned,” she means that it changes direction, like the way a car plowing down the road forces the air to change its direction to go around it.) “The velocity of the air changes as it goes over the wing’s surface, above and below.” 

“The velocity over the top of the wing is, in general, greater than the velocity below the wing,” she continues, “and that means the pressure above the wing is lower than the pressure below the wing, and that difference in pressure generates an upward lifting force.”

Is your head constantly spinning with outlandish, mind-burning questions? If you’ve ever wondered what the universe is made of, what would happen if you fell into a black hole, or even why not everyone can touch their toes, then you should be sure to listen and subscribe to Ask Us Anything, a podcast from the editors of Popular Science. Ask Us Anything hits Apple, Anchor, Spotify, and everywhere else you listen to podcasts every Tuesday and Thursday. Each episode takes a deep dive into a single query we know you’ll want to stick around for.

This story has been updated. It was originally published in July 2022.

Throwing the perfect football spiral is a feat in science https://www.popsci.com/science/how-to-throw-a-football-spiral/ Mon, 06 Feb 2023 18:38:32 +0000 https://www.popsci.com/?p=510229
While the basic mechanics of throwing a perfect football spiral are the same, some quarterbacks, like Philadelphia Eagles' Jalen Hurts, put their own spin on it. Mitchell Leff/Getty Images

Football players don’t break the laws of physics—they take advantage of them. And you can too.

It’s Super Bowl LVII time, and this year the Philadelphia Eagles are squaring off against the Kansas City Chiefs for the championship title. While the Chiefs are returning for their third final in four years, betting odds slightly favor the Eagles, who have kept a strong and consistent offensive line all season, led by quarterback Jalen Hurts. But the Chiefs could defy the odds if quarterback Patrick Mahomes fully recovers from an ankle sprain he sustained more than a week ago against the Cincinnati Bengals.

[Related: We calculated how much sweat will come out of the Super Bowl]

Ultimately, the game could come down to every single throw. Mahomes has already proven he can hit his mark in most circumstances: His football spirals are the “closest we’ll see to breaking the law of physics,” says Chad Orzel, an associate professor of physics and astronomy at Union College in New York. “He manages to make some amazing passes from bizarre positions that wouldn’t look like they would produce anything good.” Hurts has also leveled up his game this season through “meteoric improvements” in his throws.

Throwing the perfect football spiral might seem like something reserved for Super Bowl quarterbacks. But with some practice and science know-how, you too can chuck up the perfect spiral.

Why do football players throw spirals?

Unlike baseball or basketball, American football relies on a spiral rotation because of the ball’s prolate spheroid shape. If you make the ball spin fast enough, it will keep pointing along the same axis and hit the intended target straight-on, Orzel says. This follows the conservation of angular momentum: a spinning object preserves its rotational state unless an external torque acts on it.

Think of a spinning top. When you twist the toy and release, it will rotate in the same direction that you wound it up in, and will continue to stay upright in that angle until another external force (like your hand) causes it to stop. “It’s the same idea with football,” explains Orzel. “If you get the ball spinning rapidly around its axis, it’s a little more likely to hold its orientation and fly through [the air] in an aerodynamic shape.” 

[Related: Hitting a baseball is the hardest skill to pull off in sports. Here’s why.]

In a game where you have seconds to pass before you get tackled or intercepted, the biggest priority is to flick the ball with its nose pointed toward you. This confers less air resistance, meaning the ball can travel farther in a straight path (as long as it doesn’t meet outside forces like strong winds), explains John Eric Goff, a professor of physics at the University of Lynchburg in Virginia and author of Gold Medal Physics: The Science of Sports. A wobbly pass will result in more air drag and take longer to reach its destination, he adds. If you have to duck a defender and then pass the ball off quickly, you will get erratic air drag, which also hurts the accuracy of the throw.

How to throw a football spiral

To get a great spiral, you need to master angular momentum, which involves a few key physical factors. First, a person’s grip on the laces of the ball applies torque—a measure of the twisting force that sets an object rotating about its axis. In other words, the friction from the fingers gives the ball traction to spin.

Second, you need to perfectly balance the frictional force on the ball and the forward force needed to give the ball velocity. This requires strong core muscles to rotate the body all the way through the shoulder and increase throwing power. “Tom Brady used to practice drills where he would rotate his torso quickly to help develop fast-twitching muscles in his core,” says Goff. 

Third, the hand must also be on the back of the ball to give it forward velocity, but not too far back to prevent the necessary torque for the spin. “A typical NFL spiral rotates at around 600 rotations per minute, which is the low end of a washing machine’s rotational rate and about 30 percent greater rotation rate than that of a helicopter’s rotor blades,” adds Goff. “Pass speeds are typically in the range 45 to 60 mph—the same range for cars entering and driving on highways.” For maximum force, pull the ball back to your ear just above your armpit, then release it with your elbow fully extended. Your wrist should point down at the end of the pass.
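
To put Goff's figures in other units (illustrative arithmetic, not from the article):

```python
# Translate the quoted numbers: a 600 rpm spiral thrown at about 50 mph.
import math

spin_rpm = 600
spin_rad_s = spin_rpm * 2 * math.pi / 60
print(f"Spin: {spin_rad_s:.0f} rad/s ({spin_rpm / 60:.0f} revolutions per second)")

speed_mph = 50
speed_ms = speed_mph * 0.44704   # mph to m/s
hang_time = 2.0                  # s, a plausible mid-range pass
print(f"A {speed_mph} mph pass covers ~{speed_ms * hang_time:.0f} m "
      f"and spins ~{spin_rpm / 60 * hang_time:.0f} times in flight")
```

Ten full rotations per second is what keeps the ball's nose locked on its axis for the whole flight.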

Knowing the physics behind a football spiral is only half of the battle. Both physicists emphasize the importance of practice. Practice can be as simple as watching videos of pro footballers, studying their technique using computer simulations, and playing a game of catch at the park with friends. 

Achieving a perfect spiral is challenging but doable. Even your favorite NFL quarterback might have started with a clumsy first toss. But with practice, they’ve become the ideal throwing machines we cheer for every year. 

Why shooting cosmic rays at nuclear reactors is actually a good idea https://www.popsci.com/science/nuclear-reactor-3d-imaging/ Fri, 03 Feb 2023 19:00:00 +0000 https://www.popsci.com/?p=509775
The Marcoule Nuclear Power Plant in France was decommissioned in the 1980s. The French government has been trying to take down the structures since, including the G2 reactor. Patrick Robert/Sygma/CORBIS/Sygma via Getty Images

Muons, common and mysterious particles that beam down from space, can go where humans can't. That can be useful for nuclear power plants.

The electron is one of the most common bits of matter around us—every complete atom in the known universe has at least one. But the electron has far rarer and shadier counterparts, one of them being the muon. We may not think much about muons, but they’re constantly hailing down on Earth’s surface from the edge of the atmosphere

Muons can pass through vast spans of bedrock that electrons can’t cross. That’s good luck for scientists, who can detect the more elusive particles to paint X-ray-like images of large objects. In the last several decades, they’ve used muons to pierce the veils of erupting volcanoes and peer into ancient tombs, but only in two dimensions. The few three-dimensional images have been limited to small objects.

That’s changing. In a paper published in the journal Science Advances today, researchers have created a fully 3D muon image of a nuclear reactor the size of a large building. The achievement could give experts new, safer ways of inspecting old reactors or checking in on nuclear waste.

“I think, for such large objects, it’s the first time that it’s purely muon imaging in 3D,” says Sébastien Procureur, a nuclear physicist at the Université Paris-Saclay in France and one of the study authors.

[Related: This camera can snap atoms better than a smartphone]

Muon imaging is only possible with the help of cosmic rays. Despite their sunny name, most cosmic rays are the nuclei of hydrogen or helium atoms, descended to Earth from distant galaxies. When they strike our atmosphere, they burst into an incessant rainstorm of radiation and subatomic particles.

Inside the rain is a muon shower. Muons are heavier—about 206 times more massive—than their electron siblings. They’re also highly unstable: On average, each muon lasts for about a millionth of a second. That’s still long enough for around 10,000 of the particles to strike every square meter of Earth per minute.
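
Those two figures, a roughly microsecond lifetime but a steady downpour, translate into simple detector arithmetic. Here is a minimal sketch; the one-square-meter detector is an assumed size, purely for illustration.

```python
# Muons striking a detector, using the surface flux quoted above.
flux_per_m2_per_min = 10_000
detector_area_m2 = 1.0            # assumed detector size

per_hour = flux_per_m2_per_min * detector_area_m2 * 60
per_day = per_hour * 24
print(f"{per_hour:,.0f} muons per hour; {per_day:,.0f} per day")
# -> 600,000 per hour and 14,400,000 per day, though only a small
#    fraction arrive from any one direction or survive thick rock,
#    which is why single images take days to expose
```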

Because muons are heavier than electrons, they’re also more energetic. They can penetrate the seemingly impenetrable, such as rock more than half a mile deep. Scientists can catch those muons with specially designed detectors and count them. More muons striking from a certain direction might indicate a hollow space lying that way. 

In doing so, they can gather data on spaces where humans cannot tread. In 2017, for instance, researchers discovered a hidden hollow deep inside Khufu’s Great Pyramid in Giza, Egypt. After a tsunami ravaged the Fukushima Daiichi nuclear power station in 2011, muons allowed scientists to gauge the damage from a safe distance. Physicists have also used muons to check nuclear waste casks without risking leakage while opening them up.

However, taking a muon image comes with some downsides. For one, physicists have no control over how many muons drizzle down from the sky, and the millions that hit Earth each day aren’t actually very many in the grand scheme of things. “It can take several days to get a single image in muography,” says Procureur. “You have to wait until you have enough.”

Typically, muon imagers take their snapshots with a detector that counts how many muons are striking it from what directions. But with a single machine, you can only tell that a hollow space exists—not how far away it lies. This limitation leaves most muon images trapped in two dimensions. That means if you scan a building’s facade, you might see the individual rooms, but not the layout. If you want to explore a space in great detail, the lack of a third dimension is a major hurdle.

In theory, by taking muon images from different perspectives, you can stitch them together into a 3D reconstruction. This is what radiologists do with X-rays. But while it’s easy to take hundreds of X-ray images from different angles, it’s far more tedious and time-consuming to do so with muons. 
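
To see why multiple viewpoints buy you the third dimension, consider a toy reconstruction in Python. It performs an unfiltered back-projection on a made-up 2D block with a hidden cavity; this is a deliberately simplified stand-in, not the medical-imaging algorithm the team actually adapted.

```python
import numpy as np
from scipy.ndimage import rotate

# Toy 2D "muography": a uniform block with a hidden cavity, viewed from
# 12 directions, then rebuilt by smearing each 1D view back across the
# grid (unfiltered back-projection).
density = np.ones((64, 64))
density[24:40, 28:44] = 0.1                  # the hidden cavity

recon = np.zeros_like(density)
for angle in range(0, 180, 15):
    view = rotate(density, angle, reshape=False, order=1, mode="nearest")
    thickness = view.sum(axis=0)             # material along each vertical ray
    log_counts = -0.1 * thickness            # more muons survive thinner paths
    recon += rotate(np.tile(log_counts, (64, 1)), -angle,
                    reshape=False, order=1, mode="nearest")

print("cavity:", recon[32, 36].round(1), "| solid:", recon[32, 10].round(1))
# The cavity scores noticeably higher (less attenuation) than solid material.
```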

Muon detectors around G2 nuclear reactor in France. Two facility photos and four diagrams.
The 3D muon images of the G2 nuclear reactor. Procureur et al., Sci. Adv. 9, eabq8431 (2023)

Still, Procureur and his colleagues gave it a go. The site in question was an old reactor at Marcoule, a nuclear power plant and research facility in the south of France. G2, as it’s called, was built in the 1950s. In 1980, the reactor shut down for good; since then, French nuclear authorities have slowly removed components from the building. Now, preparing to terminally decommission G2, they wanted to conduct another safety check of the structures inside. “So they contacted us,” says Procureur.

Scientists had taken 3D muon images of small objects like tanks before, but G2—located inside a concrete cylinder the size of a small submarine and fitted inside a metal-walled building the size of an aircraft hangar—required penetrating a lot more layers and area.

Fortunately, this cylinder left enough space for Procureur and his colleagues to set up four gas-filled detectors at strategic points around and below the reactor. Moving the detectors around, they were able to essentially snap a total of 27 long-exposure muon images, each one taking days on end to capture.

[Related: Nuclear power’s biggest problem could have a small solution]

But the tricky part, Procureur says, wasn’t actually setting up the muon detectors or even letting them run: It was piecing together the image afterward. To get the process started, the team adapted an algorithm used for stitching together anatomical images in a medical clinic. Though the process was painstaking, they succeeded. In their final images, they could pluck out objects as small as cooling pipes about two-and-a-half feet in diameter.

“What’s significant is they did it,” says Alan Bross, a physicist at Fermilab in suburban Chicago, who wasn’t involved with this research. “They built the detectors, they went to the site, and they took the data … which is really involved.”

The effort, Procureur says, was only a proof of concept. Now that they know what can be accomplished, they’ve decided to move on to a new challenge: imaging nuclear containers at other locations. “The accuracy will be significantly better,” Procureur notes.

Even larger targets may soon be on the horizon. Back in Giza, Bross and some of his colleagues are working to scan the Great Pyramid in three dimensions. “We’re basically doing the same technique,” he explains, but on a far more spectacular scale.

The post Why shooting cosmic rays at nuclear reactors is actually a good idea appeared first on Popular Science.

Cosmic cartographers release a more accurate map of the universe’s matter https://www.popsci.com/science/universe-matter-map/ Wed, 01 Feb 2023 14:00:00 +0000 https://www.popsci.com/?p=509007
Two giant, circular ground telescopes with an overlay of a starry night sky.
Scientists have released a new survey of all the matter in the universe, using data taken by the Dark Energy Survey in Chile and the South Pole Telescope. Andreas Papadopoulos

It’s another step in understanding our 13-billion-year-old universe.


When the universe began about 13 billion years ago, all of the matter that would eventually form today’s galaxies, stars, and planets was flung around like paint splattering from a paintbrush.

Now, an international group of over 150 scientists and researchers has released some of the most precise measurements ever made of how all of this matter is distributed across the universe. With a map of that matter in the present, scientists can try to understand the forces that shaped the evolution of the universe.

[Related: A key part of the Big Bang remains troublingly elusive.]

The team combined data from two major telescope surveys of the present-day universe: the Dark Energy Survey (DES) and the South Pole Telescope. The analysis was published in the journal Physical Review D as three articles on January 31.

In the analysis, the team found that matter isn’t as “clumpy” as previously believed, adding to a body of evidence that something might be missing from the existing standard model of the universe.

By tracing the path of this matter to see where everything ended up, scientists can try to recreate what happened during the Big Bang and what forces were needed for such a massive explosion. 

To create this map, the team analyzed an enormous amount of data from the DES and the South Pole Telescope. The DES surveyed the night sky for six years from atop a mountain in Chile, while the South Pole Telescope scoured the sky for faint traces of traveling radiation dating back to the first moments of our universe.

By overlaying maps of the sky from the Dark Energy Survey telescope (at left) and the South Pole Telescope (at right), the team could assemble a map of how the matter is distributed—crucial to understand the forces that shape the universe. CREDIT: Yuuki Omori

By rigorously analyzing both data sets, scientists were able to infer where all of the universe’s matter ended up, yielding a more accurate matter map. “It is more precise than previous measurements—that is, it narrows down the possibilities for where this matter wound up—compared to previous analyses,” the authors said.

Combining two different skygazing methods reduced the chance of a measurement error throwing off the results. “It functions like a cross-check, so it becomes a much more robust measurement than if you just used one or the other,” said co-author Chihway Chang, an astrophysicist from the University of Chicago, in a statement.

The analyses looked at gravitational lensing, in which light traveling across the universe is slightly bent as it passes massive objects, such as galaxies, whose gravity warps its path.
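
The effect itself is textbook general relativity: a point mass deflects passing light by an angle of 4GM / (c^2 b), where b is how closely the ray passes. Here is a quick sketch that reproduces the classic solar value Arthur Eddington tested in 1919; it is just the basic formula, not the surveys’ weak-lensing pipeline.

```python
import math

# Light bending by a point mass: alpha = 4*G*M / (c^2 * b).
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg
b = 6.957e8         # impact parameter: one solar radius, m

alpha = 4 * G * M_sun / (c**2 * b)     # deflection angle, radians
arcsec = math.degrees(alpha) * 3600
print(f"{arcsec:.2f} arcseconds")      # ~1.75, the famous 1919 result
```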

Regular matter and dark matter can both be detected with this method. Dark matter is an invisible form of matter that makes up most of the universe’s mass, but it is so mysterious that scientists know more about what it isn’t than what it is. It doesn’t emit light, so it can’t be planets or stars, but it also isn’t a bunch of black holes.


While most of the results fit perfectly with the currently accepted best theory of the universe, there are some signs of a crack in the theory.

“It seems like there are slightly less fluctuations in the current universe, than we would predict assuming our standard cosmological model anchored to the early universe,” said analysis coauthor and University of Hawaii astrophysicist Eric Baxter, in a statement.

Even if something is missing from today’s matter models, the team believes that using information from two different telescope surveys is a promising strategy for the future of astrophysics.

“I think this exercise showed both the challenges and benefits of doing these kinds of analyses,” Chang said. “There’s a lot of new things you can do when you combine these different angles of looking at the universe.”

The post Cosmic cartographers release a more accurate map of the universe’s matter appeared first on Popular Science.

Watch this metallic material move like the T-1000 from ‘Terminator 2’ https://www.popsci.com/technology/magnetoactive-liquid-metal-demo/ Wed, 25 Jan 2023 22:00:00 +0000 https://www.popsci.com/?p=507689
Lego man liquid-metal model standing in mock jail cell
Hmm. This scene looks very familiar. Wang and Pan, et al.

A tiny figure made from the magnetoactive substance can jailbreak by shifting phases.


Sci-fi film fans are likely very familiar with that scene in Terminator 2 when Robert Patrick’s slick, liquid metal T-1000 robot easily congeals itself through the metal bars of a security door. It’s an iconic set piece that relied on then-cutting-edge computer visual effects—that’s sort of director James Cameron’s thing, after all. But researchers recently developed a novel substance capable of recreating a variation on that ability. With more experimentation and fine-tuning, this new “magnetoactive solid-liquid phase transitional machine” could provide a host of tools for everything from construction repair to medical procedures.

[Related: ‘Avatar 2’s high-speed frame rates are so fast that some movie theaters can’t keep up.]

So far, researchers have been able to make their substance “jump” over moats, climb walls, and even split into two cooperative halves to move around an object before reforming back into a single entity, as detailed in a new study published on Wednesday in Matter. In a cheeky video featuring some strong T2 callbacks, a Lego man-shaped mold of the magnetoactive solid-liquid can even be seen liquefying and moving through tiny jail cell bars before reforming into its original structure. If that last part seems a bit impossible, well, it is. For now.

“There is some context to the video. It [looks] like magic,” Carmel Majidi, a senior author and mechanical engineer at Carnegie Mellon, explains to PopSci with a laugh. According to Majidi, everything leading up to the model’s reformation is as it appears—the shape does liquefy before being drawn through the mesh barrier via alternating electromagnetic currents. From there, however, someone pauses the camera to recast the mold into its original shape.

But even without the little cinema history gag, Majidi explains that he and his colleagues’ new material could have major benefits in a host of situations. The team, made up of experts from The Chinese University of Hong Kong and Carnegie Mellon University, created a “phase-shifting” material by embedding magnetic particles within gallium, a metal featuring an extremely low melting point of just 29.8C, or roughly 85F. To accomplish the shift, the magnetically infused gallium is exposed to an alternating magnetic field, which generates enough heat through induction to melt it. Changing the electromagnet’s path can then direct the liquefied form, all while it retains a far less viscous state than similar phase-changing materials.
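
That melting point is the heart of the trick: it sits below human body temperature, which is part of what makes biomedical uses plausible. A two-line check of the conversion quoted above:

```python
# Gallium's melting point, converted from Celsius.
melt_c = 29.8
melt_f = melt_c * 9 / 5 + 32
print(f"{melt_f:.1f} F")   # 85.6 F, comfortably below 98.6 F body temperature
```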

[Related: Acrobatic beetle bots could inspire the latest ‘leap’ in agriculture.]

“There’s been a tremendous amount of work on these soft magnetic devices that could be used for biomedical applications,” says Majidi. “Increasingly, those materials [could be] used for diagnostics, drug delivery… [and] recovering or removing foreign objects.”

Majidi and his colleagues’ latest variation, however, stands apart from the amorphous blobs of similar substances. “What this endows those systems with is their ability to now change stiffness and change shape, so they can now have even greater mobility within that context.”

[Related: Boston Dynamics’s bipedal robots can throw heavy objects now.]

Majidi cautions, however, that any deployment in doctors’ offices is still far down the road. In the meantime, it’s much closer to being deployed in situations such as circuit assembly and repair, where the material could ooze into hard-to-reach areas before congealing to act as both conductor and solder.

Further testing is needed to determine the substance’s biocompatibility in humans, but Majidi argues that it’s not hard to imagine patients one day entering an MRI-like machine that can guide ingested versions of the material for medical procedures. For now, however, it looks like modern technology is at least one step closer to catching up with Terminator 2’s visual effects wizardry from over 30 years ago.

The post Watch this metallic material move like the T-1000 from ‘Terminator 2’ appeared first on Popular Science.

The Earth’s inner core could be slowing its spin—but don’t panic https://www.popsci.com/science/earth-core-spin/ Tue, 24 Jan 2023 16:00:00 +0000 https://www.popsci.com/?p=507356
The planet's innermost core has a rhythm of its own.
The planet's innermost core has a rhythm of its own. NASA

We could be in the middle of a big shift in how the center of the Earth rotates.


In elementary school science class, we learned that the Earth has three main layers: the crust, the mantle, and the core. In reality, the core—which is over 4,000 miles wide—has two layers of its own: a liquid outer core and a dense, solid inner core, made mostly of iron, that actually rotates.

A study published January 23 in the journal Nature Geoscience finds that this rotation may have paused recently—and could possibly be reversing. The team from Peking University in China believes these findings indicate that changes in the rotation occur on a decadal scale, and that they can help us understand more about how what goes on deep beneath the Earth affects the surface.

[Related: A rare gas is leaking from Earth’s core. Could it be a clue to the planet’s creation?]

The Earth’s inner core is separated from the rest of the solid Earth by its liquid outer core, so it can rotate at a different pace and in a different direction than the planet itself. A magnetic field created by the outer core drives the spin, while the mantle’s gravitational effects balance it out. Understanding how the inner core rotates could shed light on how all of the Earth’s layers interact.

In this study, seismologists Yi Yang and Xiaodong Song looked at seismic waves. They analyzed the difference in the waveform and travel time of the waves created during near-identical earthquakes that have passed along similar paths through the Earth’s inner core since the 1960s. They particularly studied the earthquakes that struck between 1995 and 2021.

Before 2009, the inner core appeared to be rotating slightly faster than the surface and mantle, but the rotation began slowing down and paused around 2009. Looking down at the core now wouldn’t reveal any relative spinning, since the inner core and surface are rotating at roughly the same rate.

“That means it’s not a steady rotation as was originally reported some 20 years ago, but it’s actually more complicated,” Bruce Buffett, a professor of earth and planetary science at the University of California, Berkeley, told New Scientist.

[Related: Scientists wielded giant lasers to simulate an exoplanet’s super-hot core.]

Additionally, the team believes that this could be associated with a reversal of the inner core rotation on a seven-decade schedule. They say a previous turning point occurred in the early 1970s, and that this variation correlates with small changes in geophysical observations at the Earth’s surface, such as the length of a day or changes in magnetic fields.
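
The arithmetic behind that seven-decade schedule is straightforward: consecutive turning points should sit half a cycle apart. A rough sketch, treating the early-1970s turning point as 1972 (an assumed date within the range the authors describe):

```python
# Two reported turning points imply the oscillation period.
t1, t2 = 1972, 2009     # early-1970s date is assumed; 2009 comes from the study
period = 2 * (t2 - t1)  # consecutive turning points are half a cycle apart
print(f"implied period: ~{period} years")        # ~74, about seven decades
print(f"next turning point: ~{t2 + (t2 - t1)}")  # mid-2040s, if the cycle holds
```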

The authors conclude that this fluctuation in the inner core’s rotation, coinciding as it does with periodic changes in the Earth’s surface system, demonstrates the interactions occurring between Earth’s different layers.

However, scientists are debating the speed of the rotation and whether it varies. This new theory is just one of several models explaining the rotation. “It’s weird that there’s a solid iron ball kind of floating in the middle of the Earth,” John Vidale, a seismologist at the University of Southern California who was not involved with the study, told The New York Times. “No matter which model you like, there’s some data that disagrees with it.”

Since studying the inner core is very difficult and physically going there is almost impossible (unless you’re famed sci-fi author Jules Verne), what’s really going on in the Earth’s core could always remain a mystery.

The post The Earth’s inner core could be slowing its spin—but don’t panic appeared first on Popular Science.

The best—and worst—places to shelter after a nuclear blast https://www.popsci.com/science/how-to-survive-a-nuclear-bomb-shockwave/ Fri, 20 Jan 2023 16:53:24 +0000 https://www.popsci.com/?p=506575
Nuclear shelter basement sign on brick building to represent survival tips for a nuclear blast
Basements work well as nuclear shelters as long as they don't have many external openings. Deposit Photos

Avoid windows, doors, and long hallways at all costs.


In the nightmare scenario of a nuclear bomb blast, you might picture a catastrophic fireball, a mushroom cloud rising into an alien sky overhead, and a pestilent rain of toxic fallout in the days to come. All of these are real, and all of them can kill.

But just as real, and every bit as deadly, is the air blast that comes just instants after. When a nuke goes off, it creates a shockwave. That front tears through the air at supersonic speed, shattering windows, demolishing buildings, and causing untold damage to human bodies—even miles from the point of impact.

[Related: How to protect yourself from nuclear radiation]

So, you’ve just seen the nuclear flash, and know that an air blast is soon to follow. You’ve only got seconds to hide. Where do you go?

To help you find the safest spot in your home, two engineers from Cyprus simulated which spaces made winds from a shockwave move more violently—and which spaces slowed them down. Their results were published on January 17 in the journal Physics of Fluids.

During the feverish nuclear paranoia of the Cold War, plenty of scientists studied what nuclear war would do to a city or the world. But most of their research focused on factors like the fireball, the radiation, or simulations of nuclear winter, rather than an individual air blast. Moreover, 20th-century experts lacked the sophisticated computational capabilities that their modern counterparts can use.

“Very little is known about what is happening when you are inside a concrete building that has not collapsed,” says Dimitris Drikakis, an engineer at the University of Nicosia and co-author of the new paper.

[Related: A brief but terrifying history of nuclear weapons]

The advice that he and his colleague Ioannis W. Kokkinakis came up with doesn’t apply to the immediate vicinity of a nuclear blast. If you’re within a shout of ground zero, there’s no avoiding it—you’re dead. Even some distance away, the nuke will bombard you with a bright flash of thermal radiation: a torrent of light, infrared, and ultraviolet that could blind you or cause second- or third-degree burns.

But as you move farther away from ground zero, far enough that the thermal radiation might leave you with minor injuries at most, the airburst will leave most structures standing. The winds will only be equivalent to a very strong hurricane. That’s still deadly, but with preparation, you might just make it.

Drikakis and Kokkinakis constructed a one-story virtual house and simulated striking winds from two different shockwave scenarios—one well above standard air pressure, and one even stronger. Based on their simulations, here are the best—and worst—places to go during a nuclear war.

Worst: by a window

If you catch a glimpse of a nuclear flash, your first instinct might be to run to the nearest window to see what’s just happened. That would be a mistake, as you’d be in the prime place to be hit by the ensuing air blast.

If you stand right in a window facing the blast, the authors found, you might face winds over 300 miles per hour—enough to pick the average human off the ground. Depending on the exact strength of the nuke, you might then strike the wall with enough force to kill you.
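
Some rough aerodynamics shows why a wind like that is so dangerous. Dynamic pressure grows with the square of wind speed; in the sketch below, the frontal area is our own assumed figure, and none of the numbers come from the study’s simulations.

```python
# Rough force of a 300 mph wind on a standing person: q = 0.5 * rho * v^2.
rho = 1.225              # sea-level air density, kg/m^3
v = 300 * 0.44704        # 300 mph converted to m/s
area = 0.7               # frontal area of an adult, m^2 (assumed)

pressure = 0.5 * rho * v**2     # dynamic pressure, pascals
force = pressure * area         # newtons
print(f"{force / 1000:.1f} kN, about {force / 4.448:.0f} pounds of force")
# -> ~7.7 kN, roughly 1,700 lbf, several times an adult's body weight
```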

Surprisingly, there are more dangerous places in the house when it comes to top wind speed (more on that later). But what really helps make a window deadly is the glass. As it shatters, you’ll be sprayed in the face by high-velocity shards.

Bad: a hallway

You might imagine that you can escape the airblast by retreating deeper into your building. But that’s not necessarily true. A window can act as a funnel for rushing air, turning a long hallway into something like a wind tunnel. Doors can do the same. 

The authors found that winds would throw an average-sized human standing in the corridor nearly as far as they would throw one standing by the front window. Intense winds can also pick up glass shards and loose objects from the floor or furniture and send them hurtling as fast as a shot from a musket, the simulations showed.

Better: a corner

Not everywhere in the house is equally deadly. The authors found that, as the nuclear shockwave passed through a room, the highest winds tended to miss the room’s edges and corners. 

Therefore, even if you’re in an otherwise dangerous room, you can protect yourself from the worst of the impact by finding a corner and bracing yourself in it. The key, again, is to avoid doors and windows.

“Wherever there are no openings, you have better chances to survive,” says Drikakis. “Essentially, run away from the openings.”

Best: a corner of an interior room

The best place to hide out is in the corner of a small room as far inside the building as possible. For example, a closet that lacks any openings is ideal.

The “good” news is that the peak of the blast lasts just a moment. The most furious winds will pass in less than a second. If you can survive that, you’ll probably stay alive—as long as you’re not in the path of the radioactive fallout.

These tips for sheltering can be useful in high-wind disasters across the board. (The US Centers for Disease Control currently advises those who cannot evacuate before a hurricane to avoid windows and find a closet.) But the authors stress that the risk of nuclear war, while low, has certainly not disappeared. “I think we have to raise awareness to the international community … to understand that this is not just a joke,” says Drikakis. “It’s not a Hollywood movie.”

The post The best—and worst—places to shelter after a nuclear blast appeared first on Popular Science.

Physicists figured out a recipe to make titanium stardust on Earth https://www.popsci.com/science/stardust-titanium-tools/ Fri, 13 Jan 2023 19:00:00 +0000 https://www.popsci.com/?p=505062
Cosmic dust on display in Messier 98 galaxy.
Spiral galaxy Messier 98 showcases its cosmic dust in this Hubble Space Telescope image. NASA / ESA / Hubble / V. Rubin et al

The essential ingredients are carbon atoms, titanium, and a good coating of graphite.


Long ago—before humans, before Earth, before even the sun—there was stardust.

In time, the young worlds of the solar system would eat up much of that dust as those bodies ballooned into the sun, planets, and moons we know today. But some of the dust survived, pristine, in its original form, locked in places like ancient meteorites.

Scientists call this presolar dust, since it formed before the sun. Some grains of presolar dust contain tiny bits of carbon, like diamond or graphite; others contain a host of other elements such as silicon or titanium. One form contains a curious and particularly hardy material called titanium carbide, used in machine tools on Earth. 

Now, physicists and engineers think they have an idea of how those particular dust grains formed. In a study published today in the journal Science Advances, researchers describe that process, and they believe the knowledge could be used to build better materials here on Earth.

These dust grains are extremely rare and extremely minuscule, often smaller than the width of a human hair. “They were present when the solar system formed, survived this process, and can now be found in primitive solar system materials,” such as meteorites, says Jens Barosch, an astrophysicist at the Carnegie Institution for Science in Washington, DC, who was not an author of the study.

[Related: See a spiral galaxy’s haunting ‘skeleton’ in a chilly new space telescope image]

The study authors peered into a unique kind of dust grain with a core of titanium carbide—titanium and carbon, combined into a durable, ceramic-like material that’s nearly as hard as diamond—wrapped in a shell of graphite. Sometimes, tens or even hundreds of these carbon-coated cores clump together into larger grains.

But how did titanium carbide dust motes form in the first place? So far, scientists haven’t quite known for sure. Testing the process on Earth is hard, because would-be dust builders have to deal with gravity—something that these grains didn’t have to contend with. But scientists can now go to a place where gravity is no object.

On June 24, 2019, a sounding rocket launched from Kiruna, a frigid Swedish town north of the Arctic circle. This rocket didn’t reach orbit. Like many rockets before and since, it streaked in an arc across the sky, peaking at an altitude of about 150 miles, before coming back down.

Still, that brief flight was enough for the rocket’s components to gain more than a taste of the microgravity that astronauts experience in orbit. One of those components was a contraption inside which scientists could incubate dust grains and record the process. 

“Microgravity experiments are essential to understanding dust formation,” says Yuki Kimura, a physicist at Hokkaido University in Japan, and one of the paper’s authors.

Titanium carbide grains, seen here magnified at a scale of several hundred nanometers. Yuki Kimura

Just over three hours after launch, including six and a half minutes of microgravity, the rocket landed about 46 miles away from its launch site. Kimura and his colleagues had the recovered dust grains sent back to Japan for analysis. From this shot and follow-up tests in an Earthbound lab, the group pieced together a recipe for a titanium carbide dust grain.

[Related: Black holes have a reputation as devourers. But they can help spawn stars, too.]

That recipe might look something like this: first, start with a core of carbon atoms, in graphite form; second, sprinkle the carbon core with titanium until the two sorts of atoms start to mix and create titanium carbide; third, fuse many of these cores together and drape them with graphite until you get a good-sized grain.

It’s interesting to get a glimpse of how such ancient things formed, but astronomers aren’t the only people who care. Kimura and his colleagues also believe that understanding the process could help engineers and builders craft better materials on Earth—because we already build particles not entirely unlike dust grains.

They’re called nanoparticles, and they’ve been around for decades. Scientists can insert them into polymers like plastic to strengthen them. Road-builders can use them to reinforce the asphalt under their feet. Doctors can even insert them into the human body to deliver drugs or help image hard-to-see body parts.

Typically, engineers craft nanoparticles by growing them within a liquid solution. “The large environmental impact of this method, such as liquid waste, has become an issue,” says Kimura. Stardust, then, could help reduce that waste.

Machinists already use tools strengthened by a coat of titanium carbide nanoparticles. Like diamond, the titanium carbide helps those tools—often used to forge things like spacecraft—cut harder. One day, stardust-inspired machine coatings might help build the very vessels humans send to space.

The post Physicists figured out a recipe to make titanium stardust on Earth appeared first on Popular Science.

Meet Golfi, the robot that plays putt-putt https://www.popsci.com/technology/robot-golf-neural-network-machine-learning/ Tue, 03 Jan 2023 21:00:00 +0000 https://www.popsci.com/?p=502766
Robot putting golf ball across indoor field into hole
But can Golfi sink a putt through one of those windmill obstacles? YouTube

The tiny bot can scoot on a green and hit a golf ball with impressive accuracy.


The first robot to sink an impressive hole-in-one pulled off its fairway feat back in 2016. But the newest automated golfer looks like it’s coming for the short game.

First presented at the IEEE International Conference on Robotic Computing last month and subsequently highlighted by New Scientist on Tuesday, “Golfi” is the modest-sized creation from a research team at Germany’s Paderborn University, capable of autonomously locating a ball on a green, traveling to it, and successfully sinking a putt around 60 percent of the time.

To pull off its relatively accurate par, Golfi utilizes an overhead 3D camera to scan an indoor, two-square-meter artificial putting green to find its desired golf ball target. It can then scoot over to the ball and use a neural network algorithm to quickly analyze approximately 3,000 potential golf swings from random points while accounting for physics variables like mass, speed, and ground friction. From there, its arm offers a modest putt that sinks the ball roughly 6 or 7 times out of 10. Although not quite as good as standard human players, it’s still a sizable feat for the machine.
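
Strip away the hardware and the strategy boils down to sample, score, select. Here is a heavily simplified Python sketch of that loop; the scoring function below is a made-up placeholder, not the team’s physics-trained neural network.

```python
import random

# Toy version of Golfi's shot selection: draw ~3,000 candidate swings,
# score each with a predictive model, and keep the best one.
def predicted_miss(speed, angle):
    # Placeholder model: pretend 1.8 m/s, dead straight, is the perfect putt.
    return abs(speed - 1.8) + 2.0 * abs(angle)

candidates = [(random.uniform(0.5, 3.0), random.uniform(-0.3, 0.3))
              for _ in range(3000)]
speed, angle = min(candidates, key=lambda shot: predicted_miss(*shot))
print(f"chosen putt: {speed:.2f} m/s at {angle:+.3f} rad")
```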

[Related: Reverse-engineered hummingbird wings could inspire new drone designs.]

Golfi isn’t going to show up at minigolf parks anytime soon, however. The robot’s creators at Paderborn University designed their prototype to solely work in a small indoor area while connected to a wired power source. Golfi’s necessary overhead 3D camera mount also ensures it won’t make an outdoor tee time, either. That’s because, despite its name, Golfi isn’t actually designed to revolutionize the golf game. Instead, the little robot was built to showcase the benefits of combining physics-based models with machine learning programs.

It’s interesting to see Golfi’s talent in comparison to other recent robotic advancements, which have often drawn from inspirations within the animal kingdom—from hummingbirds, to spiders, to dogs that just so happen to also climb up walls and across ceilings.

The post Meet Golfi, the robot that plays putt-putt appeared first on Popular Science.

ISS astronauts are building objects that couldn’t exist on Earth https://www.popsci.com/science/iss-resin-manufacture-new-shapes/ Tue, 03 Jan 2023 17:00:00 +0000 https://www.popsci.com/?p=502628
A test device aboard the ISS is making new shapes beyond gravity's reach.
A test device aboard the ISS is making new shapes beyond gravity's reach. NASA

Gravity-defying spare parts are created by filling silicone skins with resin.


Until now, virtually everything the human race has ever built—from rudimentary tools to one-story houses to the tallest skyscrapers—has had one key restriction: Earth’s gravity. Yet, if some scientists have their way, that could soon change.

Aboard the International Space Station (ISS) right now is a metal box, the size of a desktop PC tower. Inside, a nozzle is helping build little test parts that aren’t possible to make on Earth; if engineers tried, the structures would fail under the planet’s gravity.

“These are going to be our first results for a really novel process in microgravity,” says Ariel Ekblaw, a space architect who founded MIT’s Space Exploration Initiative and one of the researchers (on Earth) behind the project.

The MIT group’s process involves taking a flexible silicone skin, shaped like the part it will eventually create, and filling it with a liquid resin. “You can think of them as balloons,” says Martin Nisser, an engineer at MIT, and another of the researchers behind the project. “Instead of injecting them with air, inject them with resin.” Both the skin and the resin are commercially available, off-the-shelf products.

The resin is sensitive to ultraviolet light. When the balloons experience an ultraviolet flash, the light percolates through the skin and washes over the resin, which cures and stiffens, hardening into a solid structure. Once it’s cured, astronauts can cut away the skin and reveal the part inside.

All of this happens inside the box that launched on November 23 and is scheduled to spend 45 days aboard the ISS. If everything is successful, the ISS will ship some experimental parts back to Earth, where the MIT researchers will have to ensure that the parts they’ve made are structurally sound. After that, more tests. “The second step would be, probably, to repeat the experiment inside the International Space Station,” says Ekblaw, “and maybe to try slightly more complicated shapes, or a tuning of a resin formulation.” After that, they’d want to try making parts outside, in the vacuum of space itself.

The benefit of building parts like this in orbit is that Earth’s single most fundamental stressor—the planet’s gravity—is no longer a limiting factor. Say you tried to make particularly long beams with this method. “Gravity would make them sag,” says Ekblaw.

[Related: The ISS gets an extension to 2030 to wrap up unfinished business]

In the microgravity of the ISS? Not so much. If the experiment is successful, their box would be able to produce test parts that are too long to make on Earth.

The researchers imagine a near future where, if an astronaut needed to replace a mass-produced part—say, a nut or a bolt—they wouldn’t need to order one up from Earth. Instead, they could just fit a nut- or a bolt-shaped skin into a box like this and fill it up with resin.

But the researchers are also thinking long-term. If they can make very long parts in space, they think, those pieces could speed up large construction projects, such as the structures of space habitats. They might also be used to form the structural frames for solar panels that power a habitat or radiators that keep the habitat from getting too warm.

A silicone skin that will be filled to make a truss. Rapid Liquid Printing

Building stuff in space has a few key advantages, too. If you’ve ever seen a rocket in person, you’ll know that—as impressive as they are—they aren’t particularly wide. It’s one reason that large structures such as the ISS or China’s Tiangong go up piecemeal, assembled one module at a time over years.

Mission planners today often have to spend a great deal of effort trying to squeeze telescopes and other craft into that small cargo space. The James Webb Space Telescope, for instance, has a sprawling tennis-court-sized sunshield. To fit it into its rocket, engineers had to delicately fold it up and plan an elaborate unfurling process once JWST reached its destination. Every solar panel you can assemble in Earth orbit is one less solar panel you have to stuff into a rocket. 

[Related: Have we been measuring gravity wrong this whole time?]

Another key advantage is cost. The cost of space launches, adjusted for inflation, has fallen more than 20-fold since the first Space Shuttle went up in 1981, but every pound of cargo can still cost over $1,000 to put into space. Space is now within reach of small companies and modest academic research groups, but every last ounce makes a significant price difference.

When it comes to other worlds like the moon and Mars, thinkers and planners have long thought about using the material that’s already there: lunar regolith or Martian soil, not to mention the water that’s found frozen on both worlds. In Earth’s orbit, that’s not quite as straightforward. (Architects can’t exactly turn the Van Allen radiation belts into building material.)

That’s where Ekblaw, Nisser, and their colleagues hope their resin-squirting approach might excel. It won’t create intricate components or complex circuitry in space, but every little part is one less that astronauts have to take up themselves.

“Ultimately, the purpose of this is to make this manufacturing process available and accessible to other researchers,” says Nisser.

The post ISS astronauts are building objects that couldn’t exist on Earth appeared first on Popular Science.

Time doesn’t have to be exact—here’s why https://www.popsci.com/science/leap-second-day-length/ Sat, 31 Dec 2022 16:00:00 +0000 https://www.popsci.com/?p=501341
Gold clock with blue arms for minutes and seconds
Starting in 2035, we'll be shaving a second off our New Year's countdowns. Hector Achautla

The recent decision to axe the leap second shouldn't affect your countdowns or timekeeping too much.


It’s official: The leap second’s time is numbered.

By 2035, computers around the world will have one less cause for glitching based on human time. Schoolchildren will have one less confusing calculation to learn when memorizing the calendar.

Our days are continually changing: Tiny differences in the Earth’s rotation build up over months or years. To compensate, every so often, authorities of world time insert an extra second to bring the day back in line. Since 1972, when the system was introduced, we’ve experienced 27 such leap seconds.

But the leap second has always represented a deeper discrepancy. Our idea of a day is based on how fast the Earth spins; yet we define the second—the actual base unit of time as far as scientists, computers, and the like are concerned—with the help of atoms. It’s a definitional gap that puts astronomy and atomic physics at odds with each other.

[Related: Refining the clock’s second takes time—and lasers]

Last month, the guardians of global standard time chose atomic physics over astronomy—and according to experts, that’s fine.

“We will never abandon the idea that timekeeping is regulated by the Earth’s rotation. [But] the fact is we don’t want it to be strictly regulated by the Earth’s rotation,” says Patrizia Tavella, a timekeeper at the International Bureau of Weights and Measures (BIPM) in Paris, a multigovernmental agency that, amongst other things, binds together nations’ official clocks.

The day is a rather odd unit of time. We usually think about it as the duration the Earth takes to complete one rotation about its axis: a number from astronomy. The problem is that the world’s most basic unit of time is not the day, but the second, which is measured by something far more minuscule: the cesium-133 atom, an isotope of the 55th element.

As the cesium-133 atom flips between two closely spaced energy states, it absorbs and releases radiation with very predictable timing. Since 1967, atomic clocks have counted precisely 9,192,631,770 of these oscillations in every second. So, as far as metrologists (people who study measurement itself) are concerned, a single day is 86,400 of those seconds.
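
Counted out in full, the bookkeeping looks like this:

```python
# The SI second, by the numbers.
oscillations_per_second = 9_192_631_770   # cesium-133 transitions per second
seconds_per_day = 86_400

per_day = oscillations_per_second * seconds_per_day
print(f"{per_day:,} oscillations in a standard day")
# -> 794,243,384,928,000, nearly 800 trillion
```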

Except a day isn’t always exactly 86,400 seconds, because the world’s revolutions aren’t constant.

Subtle motions, such as the moon’s tidal pull or the planet’s mass distribution shifting as its melty innards churn about, affect Earth’s spin. Some scientists even believe that a warming climate could shuffle heated air and melted water closer to the poles, which might speed up the rotation. Whatever the cause, it leads to millisecond differences in day length over the year that are unacceptable for today’s ultra-punctual timekeepers. Which is why they try to adjust for it.

The International Earth Rotation and Reference Systems Service (IERS), a scientific nonprofit responsible for setting global time standards, publishes regular counts of just how large the difference is for the benefit of the world’s timekeepers. For most of December, the accumulated gap between Earth-rotation time and atomic time has hovered between 15 and 20 milliseconds.


Whenever that gap has gotten too large, IERS invokes the commandment of the leap second. Every January and July, the organization publishes a judgement on whether a leap second is in order. If one is necessary, the world’s timekeepers tack a 61st second onto the last minute of June 30 or December 31, depending on whichever comes next. But this November, the BIPM ruled that by 2035, the masters of the world’s clocks will shelve the leap second in favor of a still-undecided approach.

That means the Royal Observatory in Greenwich, London—the baseline for Greenwich Mean Time (GMT) and its modern successor, Coordinated Universal Time (UTC)—will drift out of sync with the days it once defined. Amateur astronomers might complain, too, as without the leap second, star sightings could become less predictable in the night sky.

But for most people, the leap second is an insignificant curiosity—especially compared to the maze of time zones that long-distance travelers face, or the shifts that humans must observe twice a year if they live in countries that observe daylight saving time or summer time.

On the other hand, adding a subtle second to shove the day into perfect alignment comes at a cost: technical glitches and nightmares for programmers who must already deal with different countries’ hodgepodge of timekeeping. “The absence of leap seconds will make things a little easier by removing the need for the occasional adjustment, but the difference will not be noticed by everyday users,” says Judah Levine, a timekeeper at the National Institute of Standards and Technology (NIST) in Boulder, Colorado, the US government agency that sets the country’s official clocks.

[Related: It’s never too late to learn to be on time]

The new plan stipulates that in 2026, BIPM and related groups will meet again to determine how much they can let the discrepancy grow before the guardians of time need to take action. “We will have to propose the new tolerance, which could be one minute, one hour, or infinite,” says Tavella. They’ll also propose how often they (or their successors) will revise the number.

It’s not a decision that needs to be made right away. “It’s probably not necessary” to reconcile atomic time with astronomical time, says Elizabeth Donley, a timekeeper at NIST. “User groups that need to know time for astronomy and navigation can already look up the difference.”

We can’t currently predict the vagaries of Earth’s rotation, but scientists think it will take about a century for the difference to build up to a minute. “Hardly anyone will notice,” says Donley. It will take about five millennia for it to build up to an hour. 
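
Those time scales fall out of simple arithmetic. Assume the day runs about 1.6 milliseconds long on average; that is an illustrative figure, since the real offset wanders and can even flip sign.

```python
# How long until a millisecond-a-day mismatch adds up?
drift_s_per_day = 0.0016                        # assumed average mismatch

to_a_minute = 60 / drift_s_per_day / 365.25     # years to accumulate 60 s
to_an_hour = 3600 / drift_s_per_day / 365.25
print(f"~{to_a_minute:.0f} years to a minute")  # ~103: 'about a century'
print(f"~{to_an_hour:,.0f} years to an hour")   # ~6,160: millennia, as promised
```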

In other words, we could just kick the conundrum of counting time down the road for our grandchildren or great-grandchildren to solve. “Maybe in the future, there will be better knowledge of the Earth’s movement,” says Tavella, “And maybe, another better solution will be proposed.”

The post Time doesn’t have to be exact—here’s why appeared first on Popular Science.

What the Energy Department’s laser breakthrough means for nuclear fusion https://www.popsci.com/science/nuclear-fusion-laser-net-gain/ Tue, 13 Dec 2022 18:33:00 +0000 https://www.popsci.com/?p=498247
Target fusion chamber of the National Ignition Facility
The National Ignition Facility's target chamber at Lawrence Livermore National Laboratory, where fusion experiments take place, with technicians inside. LLNL/Flickr

Nearly 200 lasers fired at a tiny bit of fuel to create a gain in energy, mimicking the power of the stars.


Since the 1950s, scientists have quested to bring nuclear fusion—the sort of reaction that powers the sun—down to Earth.

Just after 1 a.m. on December 5, scientists at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory (LLNL) in California finally reached a major milestone in the history of nuclear fusion: achieving a reaction that creates more energy than scientists put in.

This moment won’t bring a fusion power plant to your city just yet—but it is an important step toward that goal, one which scientists have sought from the start of their quest.

“This lays the groundwork,” says Tammy Ma, a scientist at LLNL, in a US Department of Energy press conference today. “It demonstrates the basic scientific feasibility.”

On the outside, NIF is a nondescript industrial building in a semi-arid valley east of San Francisco. On the inside, scientists have quite literally been tinkering with the energy of the stars (alternating with NIF’s other major task, nuclear weapons research).

[Related: Physicists want to create energy like stars do. These two ways are their best shot.]

Nuclear fusion is how the sun generates the heat and light that warm and illuminate the Earth to sustain life. It involves crushing hydrogen atoms together. The resulting reaction creates helium and energy—quite a bit of energy. You’re alive today because of it, and the sun doesn’t produce a wisp of greenhouse gas in the process.

But to turn fusion into anything resembling an Earthling’s energy source, you need conditions that match the heart of the sun: millions of degrees in temperature. Creating a facsimile of that environment on Earth takes an immense amount of power—far eclipsing the amount of energy researchers usually end up producing.

Lasers aimed at a tiny target

For decades, scientists have struggled to answer one fundamental question: How do you fine-tune a fusion experiment to create the right conditions to actually gain energy?

NIF’s answer involves an arsenal of high-powered laser beams. First, experts stuff a peanut-sized, gold-plated, open-ended cylinder (known as a hohlraum) with a peppercorn-sized pellet containing deuterium and tritium, forms of hydrogen atoms that come with extra neutrons. 

Then, they fire a laser—which splits into 192 finely tuned beams that, in turn, enter the hohlraum from both ends and strike its inside wall. 

“We don’t just smack the target with all of the laser energy all at once,” says Annie Kritcher, a scientist at NIF, at the press conference. “We divide very specific powers at very specific times to achieve the desired conditions.”

As the chamber heats up to millions of degrees under the laser barrage, it starts producing a cascade of X-rays that violently wash over the fuel pellet. They shear off the pellet’s carbon outer shell and begin to compress the hydrogen inside—heating it to hundreds of millions of degrees—squeezing and crushing the atoms to pressures and densities higher than those at the center of the sun.

If all goes well, that kick-starts fusion.

Nuclear fusion energy experiment fuel source in a tiny metal capsule
This metal case, called a hohlraum, holds a tiny bit of fusion fuel. Eduard Dewald/LLNL

A new world record

When NIF launched in 2009, the fusion world record belonged to the Joint European Torus (JET) in the United Kingdom. In 1997, using a magnet-based method known as a tokamak, scientists at JET produced 67 percent of the energy they put in. 

That record stood for over two decades until late 2021, when NIF scientists bested it, reaching 70 percent. In its wake, many laser-watchers whispered the obvious question: Could NIF reach 100 percent? 

[Related: In 5 seconds, this fusion reactor made enough energy to power a home for a day]

But fusion is a notoriously delicate science, and the results of a given fusion experiment are difficult to predict. Any object that’s this hot will want to cool off against scientists’ wishes. Tiny, accidental differences in the setup—from the angles of the laser beams to slight flaws in the pellet shape—can make immense differences in a reaction’s outcome.

It’s for that reason that each NIF test, which takes about a billionth of a second, involves months of meticulous planning.

“All that work led up to a moment just after 1 a.m. last Monday, when we took a shot … and as the data started to come in, we saw the first indications that we’d produced more fusion energy than the laser input,” says Alex Zylstra, a scientist at NIF, at the press conference.

This time, the NIF’s laser pumped 2.05 megajoules into the pellet—and the pellet burst out 3.15 megajoules (enough to power the average American home for about 43 minutes). Not only have NIF scientists achieved that 100-percent ignition milestone, they’ve gone farther, reaching more than 150 percent.
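
The headline figures are easy to verify. In the sketch below, the 1.2-kilowatt average household draw is our own assumption, chosen to be consistent with the 43-minute estimate.

```python
# Target gain and household-scale context for the December 5 shot.
e_in, e_out = 2.05, 3.15            # megajoules: laser in vs. fusion out
print(f"gain: {e_out / e_in:.2f}")  # ~1.54, i.e. more than 150 percent

kwh = e_out / 3.6                   # 1 kWh = 3.6 MJ
minutes = kwh / 1.2 * 60            # at an assumed 1.2 kW average draw
print(f"~{minutes:.0f} minutes of average-home power")   # ~44
```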

“To be honest…we’re not surprised,” says Mike Donaldson, a systems engineer at General Fusion, a Vancouver, Canada-based private firm that aims to build a commercially viable fusion plant by the 2030s, who was not involved with the NIF experiment. “I’d say this is right on track. It’s really a culmination of lots of years of incremental progress, and I think it’s fantastic.”

But there’s a catch

These numbers only account for the energy delivered by the laser—omitting the fact that this laser, one of the largest and most intricate on the planet, needed about 300 megajoules from California’s electric grid to power on in the first place.
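
Fold that grid draw into the ledger and the bottom line changes dramatically; a quick sketch of the wall-plug arithmetic:

```python
# Wall-plug bookkeeping: fusion output vs. electricity fed to the laser.
grid_mj, fusion_mj = 300, 3.15
print(f"wall-plug gain: {fusion_mj / grid_mj:.1%}")   # roughly 1 percent
```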

“The laser wasn’t designed to be efficient,” says Mark Herrmann, a scientist at LLNL, at the press conference. “The laser was designed to give as much juice as possible.” Balancing that energy-hungry laser may seem daunting, but researchers are optimistic. The laser was built on late-20th-century technology, and NIF leaders say they do see a pathway to making it more efficient and even more powerful. 

Even if they do that, experts need to figure out how to fire repeated shots that gain energy. That’s another massive challenge, but it’s a key step toward making this a viable base for a power plant.

[Related: Inside France’s super-cooled, laser-powered nuclear test lab]

“Scientific results like today’s are fantastic,” says Donaldson. “We also need to focus on all the other challenges that are required to make fusion commercializable.”

A fusion power plant may very well involve a different technique. Many experimental reactors like JET and the under-construction ITER in southern France, in lieu of lasers, try to recreate the sun by using powerful magnets to shape and sculpt super-hot plasma within a specially designed chamber. Most of the private-sector fusion efforts that have mushroomed of late are keying their efforts toward magnetic methods, too.

In any event, it will be a long time before you read an article like this on a device powered by cheap fusion energy—but that day has likely come an important milestone closer.

“It’s been 60 years since ignition was first dreamed of using lasers,” Ma said at the press conference. “It’s really a testament to the perseverance and dedication of the folks that made this happen. It also means we have the perseverance to get to fusion energy on the grid.”

The post What the Energy Department’s laser breakthrough means for nuclear fusion appeared first on Popular Science.

What would happen if the Earth started to spin faster? https://www.popsci.com/earth-spin-faster/ Tue, 01 Jun 2021 22:59:42 +0000 https://www.popsci.com/uncategorized/earth-spin-faster/
Earth against moon and spun to show fast the planet spins. Realistic illustration.
Since the formation of the moon, the rate that Earth spins has been slowing down by about 3.8 mph every 10 million years. Deposit Photos

Even a mile-per-hour speed boost would make things pretty weird.


There are enough things in this life to worry about. Like nuclear war, climate change, and whether or not you’re brushing your teeth correctly. The Earth spinning too fast should not be high up on your list, simply because it’s not very likely to happen anytime soon—and if it does, you’ll probably be too dead to worry about it. Nevertheless, we talked to some experts to see how it would all go down.

Let’s start with the basics, like: How fast does the Earth spin now? That depends on where you are, because the planet moves fastest around its waistline. Earth’s circumference is widest at the equator, so as the planet twirls around its axis, a spot on the equator has to travel a lot farther in 24 hours to loop around to its starting position than, say, Chicago, which sits on a narrower cross-section of Earth. To make up for the extra distance, the equator spins at 1,037 mph, whereas Chicago takes a more leisurely 750 mph pace. (This calculator will tell you the exact speed based on your latitude.)
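
Such a calculator boils down to one line of trigonometry: your ground speed is the equatorial speed scaled by the cosine of your latitude. A minimal version follows; the latitudes are approximate, and the simple cosine rule puts Chicago slightly above the rounded 750 mph figure.

```python
import math

# Ground speed from Earth's rotation, as a function of latitude.
def spin_mph(latitude_deg, equator_mph=1037):
    return equator_mph * math.cos(math.radians(latitude_deg))

for place, lat in [("Equator", 0.0), ("Chicago", 41.9), ("Arctic Circle", 66.6)]:
    print(f"{place}: {spin_mph(lat):.0f} mph")
# -> Equator: 1037, Chicago: ~772, Arctic Circle: ~412
```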

The Earth does change pace every now and then, but only incrementally. This summer, for instance, it skimmed 1.59 milliseconds off its typical rotation time, making June 29 the shortest day on record. One hypothesis is that changes in pressure actually shift the planet’s axis of rotation, though not to the extent where regular human beings could feel the difference.

Little bumps aside, if the Earth were to suddenly spin much faster, there would be some drastic changes in store. Speeding up its rotation by one mile per hour, let’s say, would cause water to migrate from the poles and raise levels around the equator by a few inches. “It might take a few years to notice it,” says Witold Fraczek, an analyst at ESRI, a company that makes geographic information system (GIS) software.

[Related: If the Earth is spinning, why can’t I feel it?]

What might be much more noticeable is that some of our satellites would be off-track. Satellites set to geosynchronous orbit fly around our planet at a speed that matches the Earth’s rotation, so that they can stay positioned over the same spot all the time. If the planet speeds up by 1 mph, then the satellites will no longer be in their proper positions, meaning satellite communications, television broadcasting, and military and intelligence operations could be interrupted, at least temporarily. Some satellites carry fuel and may be able to adjust their positions and speeds accordingly, but others might have to be replaced, and that’s expensive. “These could disturb the life and comfort of some people,” says Fraczek, “but should not be catastrophic to anybody.”

But things would get more catastrophic the faster we spin.

You would lose weight, but not mass

Centrifugal force from the Earth’s spin is constantly trying to fling you off the planet, sort of like a kid on the edge of a fast merry-go-round. For now, gravity is stronger and it keeps you grounded. But if Earth were to spin faster, the centrifugal force would get a boost, says NASA astronomer Sten Odenwald.

Currently, if you weigh about 150 pounds in the Arctic Circle, you might weigh 149 pounds at the equator. That’s because the extra centrifugal force generated by the equator’s faster spin counteracts gravity. Press fast-forward on that, and your weight would drop even further.

Odenwald calculates that eventually, if the equator revved up to 17,641 mph, the centrifugal force would be great enough that you would be essentially weightless. (That is, if you’re still alive. More on that later.)
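
That figure is easy to sanity-check: weightlessness arrives when the centrifugal acceleration v^2/R at the equator equals gravity g. A rough version of the arithmetic, again assuming a spherical Earth:

```python
import math

G = 9.81             # m/s^2, surface gravity
R_EQUATOR = 6.378e6  # m, Earth's equatorial radius

# Weightless when centrifugal acceleration v^2 / R matches gravity g.
v_ms = math.sqrt(G * R_EQUATOR)
print(f"{v_ms * 2.23694:,.0f} mph")  # ~17,700 mph, close to Odenwald's 17,641
```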

Everyone would be constantly jet-lagged

The faster the Earth spins, the shorter our days would become. With a 1 mph speed increase, the day would only get about a minute and a half shorter and our internal body clocks, which stick to a pretty strict 24-hour schedule, probably wouldn’t notice.

But if we were rotating 100 mph faster than usual, a day would be about 22 hours long. For our bodies, that would be like daylight savings time on boosters. Instead of setting the clocks back by an hour, you’d be setting them back by two hours every single day, without a chance for your body to adjust. And the changing day length would probably mess up plants and animals too.
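
The day-length arithmetic is just distance over speed: each rotation covers the same lap around the planet, only faster. A minimal sketch:

```python
EQUATOR_MPH = 1_037  # current rotation speed at the equator
DAY_HOURS = 24

def day_length_hours(extra_mph: float) -> float:
    # The same lap around the planet, covered at a higher speed.
    return DAY_HOURS * EQUATOR_MPH / (EQUATOR_MPH + extra_mph)

print(f"{day_length_hours(1):.3f} hours")    # 23.977, about 83 seconds shorter
print(f"{day_length_hours(100):.1f} hours")  # 21.9, roughly a 22-hour day
```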

But all this is only if Earth speeds up all of a sudden. “If it gradually speeds up over millions of years, we would adapt to deal with that,” says Odenwald.

Hurricanes would get stronger

If Earth’s rotation picked up slowly, it would carry the atmosphere with it—and we wouldn’t necessarily notice a big difference in the day-to-day winds and weather patterns. “Temperature difference is still going to be the main driver of winds,” says Odenwald. However, extreme weather could become more destructive. “Hurricanes will spin faster,” he says, “and there will be more energy in them.”

The reason why goes back to that weird phenomenon we mentioned earlier: the Earth spins faster around the equator.

If the Earth wasn’t spinning at all, winds from the north pole would blow in a straight line to the equator, and vice versa. But because we are spinning, the pathway of the winds gets deflected—to the right in the Northern Hemisphere and to the left in the Southern. This curvature of the winds is called the Coriolis effect, and it’s what gives a hurricane its spin. And if the Earth spun faster, the winds would be deflected further. “That effectively makes the rotation more severe,” says Odenwald.

Water would cover the world

Extra speed at the equator means the water in the oceans would start to amass there. At 1 mph faster, the water around the equator would get a few inches deeper within just a few days.

At 100 mph faster, the equator would start to drown. “I think the Amazon Basin, Northern Australia, and not to mention the islands in the equatorial region, they would all go under water,” says Fraczek. “How deep underwater, I’m not sure, but I’d estimate about 30 to 65 feet.”

If we double the speed at the equator, so that Earth spins 1,000 mph faster, “it would clearly be a disaster,” says Fraczek. The centrifugal force would pull hundreds of feet of water toward the Earth’s waistline. “Except for the highest mountains, such as Kilimanjaro or the highest summits of the Andes, I think everything in the equatorial region would be covered with water.” That extra water would be pulled out of the polar regions, where centrifugal force is lower, so the Arctic Ocean would be a lot shallower.

Meanwhile, the added centrifugal force from spinning 1,000 mph faster means water at the equator would have an easier time combating gravity. The air would be heavy with moisture in these regions, Fraczek predicts. Shrouded in a dense fog and heavy clouds, these regions might experience constant rain—as if they’d need any more water.

Finally, at about 17,000 miles per hour, the centrifugal force at the equator would match the force of gravity. After that, “we might experience reverse rain,” Fraczek speculates. “Droplets of water could start moving up in the atmosphere.” At that point, the Earth would be spinning more than 16 times faster than it is now, and there probably wouldn’t be many humans left in the equatorial region to marvel at the phenomenon.

“If those few miserable humans would still be alive after most of Earth’s water had been transferred to the atmosphere and beyond, they would clearly want to run out of the equator area as soon as possible,” says Fraczek, “meaning that they should already be at the Polar regions, or at least middle latitudes.”

Seismic activity would rock the planet

At very fast speeds—like, about 24,000 mph—and over thousands of years, eventually the Earth’s crust would shift too, flattening out at the poles and bulging around the equator.

“We would have enormous earthquakes,” says Fraczek. “The tectonic plates would move quickly and that would be disastrous to life on the globe.”

How fast would the Earth spin in the future?

Believe it or not, Earth’s speed is constantly fluctuating, says Odenwald. Earthquakes, tsunamis, large air masses, and melting ice sheets can all change the spin rate at the millisecond level. If an earthquake swallows a bit of the ground, reducing the planet’s circumference ever so slightly, it effectively speeds up how quickly Earth completes its rotation. A large air mass can have the opposite effect, slowing our spins a smidgen like an ice skater who leaves her arms out instead of drawing them in.
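
The skater analogy can be made concrete with a toy model: treat Earth as a uniform sphere and hold angular momentum fixed, and the day length scales with the square of the radius. (Real earthquakes redistribute mass far more subtly than a uniform shrink, so the number below is purely illustrative.)

```python
DAY_S = 86_400       # seconds in a day
R_EARTH_M = 6.371e6  # m, mean radius

def day_change_seconds(delta_r_m: float) -> float:
    # L = I * omega is conserved; for a uniform sphere I ~ R^2,
    # so T ~ R^2 and dT/T ~ 2 * dR/R to first order.
    return DAY_S * 2 * delta_r_m / R_EARTH_M

# Shrink the whole planet's radius by 1 millimeter:
print(f"{day_change_seconds(-0.001) * 1e6:+.0f} microseconds")  # about -27
```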

The Earth’s rotation speed changes over time, too. About 4.4 billion years ago, the moon formed after something huge crashed into Earth. At that time, Odenwald calculates our planet was probably shaped like a flattened football and spinning so rapidly that each day might have been only about four hours long.

“This event dramatically distorted Earth’s shape and almost fragmented Earth completely,” says Odenwald. “Will this ever happen again? We had better hope not!”

[Related: 10 easy ways you can tell for yourself that the Earth is not flat]

Since the formation of the moon, Earth’s spin has been slowing down by about 3.8 mph every 10 million years, mostly due to the moon’s gravitational pull on our planet. So it’s a lot more likely that Earth’s spin will continue to slow down in the future, not speed up.
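
To put that glacial slowdown in perspective, a naive linear extrapolation (tidal braking isn’t perfectly steady, so treat this as ballpark only) says a 25-hour day is roughly 100 million years away:

```python
EQUATOR_MPH = 1_037
SLOWDOWN_MPH_PER_10_MYR = 3.8

def years_until_day_lasts(hours: float) -> float:
    # Find the equatorial speed that yields the target day length,
    # then see how long the steady slowdown takes to reach it.
    target_mph = EQUATOR_MPH * 24 / hours
    return (EQUATOR_MPH - target_mph) / SLOWDOWN_MPH_PER_10_MYR * 1e7

print(f"{years_until_day_lasts(25):,.0f} years")  # ~109 million
```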

“There’s no conceivable way that the Earth could spin up so dramatically,” says Odenwald. “To spin faster it would have to be hit just right by the right object, and that would liquify the crust so we’d be dead anyway.”

This post has been updated. It was originally published on May 17, 2017.

The small, mighty, world-changing transistor turns 75 https://www.popsci.com/science/transistors-changed-the-world/ Sun, 04 Dec 2022 18:00:00 +0000 https://www.popsci.com/?p=493705
Japanese woman holding gold Sharp calculator with transistor
Sharp employee Ema Tanaka displays a gold-colored electronic calculator “EL-BN691,” which is the Japanese electronics company’s commemoration model, next to the world’s first all-transistor/diode desktop calculator, the CS-10A, introduced in 1964. YOSHIKAZU TSUNO/AFP via Getty Images

Without this universal technology, our computers would probably be bulky vacuum-tube machines.

The post The small, mighty, world-changing transistor turns 75 appeared first on Popular Science.


It’s not an exaggeration to say that the modern world began 75 years ago in a nondescript office park in New Jersey.

This was the heyday of Bell Labs. Established as the research arm of a telephone company, it had become a playground for scientists and engineers by the 1940s. This office complex was the forge of innovation after innovation: radio telescopes, lasers, solar cells, and multiple programming languages. But none were as consequential as the transistor.

Some historians of technology have argued that the transistor, first crafted at Bell Labs in late 1947, is the most important invention in human history. Whether that’s true or not, what is without question is that the transistor helped trigger a revolution that digitized the world. Without the transistor, electronics as we know them could not exist. Almost everyone on Earth would be experiencing a vastly different day-to-day.

“Transistors have had a considerable impact in countries at all income levels,” says Manoj Saxena, senior member of the New Jersey-based Institute of Electrical and Electronics Engineers. “It is hard to overestimate the impact they have had on the lives of nearly every person on the planet,” Tod Sizer, a vice president at modern Nokia Bell Labs, writes in an email.

What is a transistor, anyway?

A transistor is, to put it simply, a device that can switch an electric current on or off. Think of it as an electric gate that can open and shut thousands upon thousands of times every second. Additionally, a transistor can boost current passing through it. Those abilities are fundamental for building all sorts of electronics, computers included.
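
A loose software cartoon of why that switching ability matters: model each transistor as an on/off switch, wire two in series, and you get a NAND gate. From NAND gates alone you can, in principle, build every other logic circuit in a computer. (This illustrates the idea; it is not how any particular chip is laid out.)

```python
def switch(gate_on: bool) -> bool:
    """Cartoon transistor: conducts only while its gate is switched on."""
    return gate_on

def nand(a: bool, b: bool) -> bool:
    # Two switches in series pull the output low only when both conduct.
    return not (switch(a) and switch(b))

# NAND is "functionally complete": NOT, AND, and OR can all be built from it.
print([nand(a, b) for a in (False, True) for b in (False, True)])
# [True, True, True, False]
```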

Within the first decade of the transistor era, these powers were recognized when the three Bell Labs scientists who built that first transistor—William Shockley, John Bardeen, and Walter Brattain—won the 1956 Nobel Prize in Physics. (In later decades, much of the scientific community would condemn Shockley for his support of eugenics and racist ideas about IQ.)

Transistors are typically made from certain elements called semiconductors, which are useful for manipulating current. The first transistor, the size of a human palm, was fashioned from a metalloid, germanium. By the mid-1960s, most transistors were being made from silicon—the element just above germanium in the periodic table—and engineers were packing transistors together into complex integrated circuits: the foundation of computer chips.

[Related: Here’s the simple law behind your shrinking gadgets]

For decades, the development of transistors has stuck to a rule of thumb known as Moore’s law: The number of transistors you can pack into a state-of-the-art circuit doubles roughly every two years. Moore’s law, a buzzword in the computer chip world, has long been a cliché among engineers, though it still abides today.
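
Moore’s law is easy to state as arithmetic. A naive extrapolation from Intel’s 4004, which packed about 2,300 transistors in 1971, lands in the right ballpark for today’s largest chips:

```python
def moores_law_estimate(start_count: int, start_year: int, year: int,
                        doubling_period_years: float = 2.0) -> float:
    """Transistor count after repeated doublings every `doubling_period_years`."""
    return start_count * 2 ** ((year - start_year) / doubling_period_years)

print(f"{moores_law_estimate(2_300, 1971, 2022):,.0f}")
# ~109,000,000,000 -- and the biggest 2022-era chips really do pack
# over 100 billion transistors
```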

Modern transistors are just a few nanometers in size. The typical processor in the device you’re using to read this probably packs billions of transistors onto a chip smaller than a human fingernail. 

What would a world without the transistor be like?

To answer that question, we have to look at what the transistor replaced—it wasn’t the only device that could amplify current. 

Before its dominance, electronics relied on vacuum tubes: bulbs, typically made of glass, that held charged plates inside an airless interior. Vacuum tubes have a few advantages over transistors: they can generate more power. Decades after the technology became obsolete, some audiophiles swore that vacuum tube music players sounded better than their transistor counterparts. 

But vacuum tubes are very bulky and delicate (they tend to burn out quickly, just like incandescent light bulbs). Moreover, they often need time to “warm up,” making vacuum tube gadgets a bit like creaky old radiators. 

The transistor seemed to be a convenient replacement. “The inventors of the transistors themselves believed that the transistor might be used in some special instruments and possibly in military radio equipment,” says Ravi Todi, current president of the IEEE Electron Devices Society.

The earliest transistor gadget to hit the market was a hearing aid released in 1953. Soon after came the transistor radio, which became emblematic of the 1960s. Portable vacuum tube radios did exist, but without the transistor, handheld radios likely wouldn’t have become the ubiquitous device that kick-started the ability to listen to music out and about.

Martin Luther King Jr listens to a transistor radio.
Civil rights activist Martin Luther King Jr listens to a transistor radio during the third march from Selma to Montgomery, Alabama, in 1965. William Lovelace/Daily Express/Hulton Archive/Getty Images

But even in the early years of the transistor era, these devices started to skyrocket in number—and in some cases, literally. The Apollo program’s onboard computer, which helped astronauts orient their ship through maneuvers in space, was built with transistors. Without it, engineers would either have had to fit a bulky vacuum tube device onto a cramped spacecraft, or astronauts would have had to rely on tedious commands from the ground.

Transistors had already begun revolutionizing computers themselves. A computer built just before the start of the transistor era—ENIAC, designed to conduct research for the US military—used 18,000 vacuum tubes and filled up a space the size of a ballroom.

Vacuum tube computers squeezed into smaller rooms over time. Even then, 1951’s UNIVAC I cost over a million dollars (not accounting for inflation), and its customers were large businesses or data-heavy government agencies like the Census Bureau. It wouldn’t be until the 1970s and 1980s when personal computers, powered by transistors, started to enter middle-class homes.

Without transistors, we might live in a world where a computer is something you’d use at work—not at home. Forget smartphones, handheld navigation, flatscreen displays, electronic timing screens in train stations, or even humble digital watches. All of those need transistors to work.

“The transistor is fundamental for all modern technology, including telecommunications, data communications, aviation, and audio and video recording equipment,” says Todi.

What do the next 75 years of transistor technologies look like?

It’s hard to deny that the world of 2022 looks vastly different from the world of 1947, largely thanks to transistors. So what should we expect from transistors 75 years in the future, in the world of 2097?

It’s hard to say with any amount of certainty. Almost all transistors today are made with silicon—which is how Silicon Valley got its name. But how long will that last? 

[Related: The trick to a more powerful computer chip? Going vertical.]

Silicon transistors are now small enough that engineers aren’t sure how much smaller they can get, indicating Moore’s law may have a finite end. And energy-conscious researchers want to make computer chips that use less power, partly in hopes of reducing the carbon footprint from data centers and other large facilities.

A growing number of researchers are thinking up alternatives to silicon. They’re thinking of computer chips that harness weird quantum effects and tiny bits of magnets. They’re looking at alternative materials, from germanium to exotic forms of carbon. Which of these, if any, may one day replace the silicon transistor? That isn’t certain yet.

“No one technology can meet all needs,” says Saxena. And it’s very possible that the defining technology of the 2090s hasn’t been invented yet.

Scientists modeled a tiny wormhole on a quantum computer https://www.popsci.com/technology/wormhole-quantum-computer/ Thu, 01 Dec 2022 22:30:00 +0000 https://www.popsci.com/?p=493923
traversable wormhole illustration
inqnet/A. Mueller (Caltech)

It’s not a real rip in spacetime, but it’s still cool.

The post Scientists modeled a tiny wormhole on a quantum computer appeared first on Popular Science.


Physicists, mathematicians, astronomers, and even filmmakers have long been fascinated by the concept of a wormhole: a hypothetical, oftentimes volatile phenomenon thought to create tunnels (and shortcuts between two distant locations) across spacetime. One theory holds that if you link up two black holes in the right way, you can create a wormhole. 

Studying wormholes is like piecing together an incomplete puzzle without knowing what the final picture is supposed to look like. You can roughly deduce what’s supposed to go in the gaps based on the completed images around it, but you can’t know for sure. That’s because there has not yet been definitive proof that wormholes are in fact out there. However, some of the solutions to fundamental equations and theories in physics suggest that such an entity exists. 

In order to understand the properties of this cosmic phantom based on what has been deduced so far, researchers from Caltech, Harvard, MIT, Fermilab, and Google created a small “wormhole” effect between two quantum systems sitting on the same processor. What’s more, the team was able to send a signal through it. 

According to Quanta, this edges the Caltech-Google team ahead of an IBM-Quantinuum team that also sought to establish wormhole teleportation. 

While what they created is unfortunately not a real crack through the fabric of spacetime, the system does mimic the known dynamics of wormholes. In terms of the properties that physicists usually consider, like positive or negative energy, gravity, and particle behavior, the computer simulation effectively looks and works like a tiny wormhole. This model, the team said in a press conference, is a way to study the fundamental problems of the universe in a laboratory setting. A paper describing this system was published this week in the journal Nature.

“We found a quantum system that exhibits key properties of a gravitational wormhole yet is sufficiently small to implement on today’s quantum hardware,” Maria Spiropulu, a professor of physics at Caltech, said in a press release. “This work constitutes a step toward a larger program of testing quantum gravity physics using a quantum computer.” 

[Related: Chicago now has a 124-mile quantum network. This is what it’s for.]

Quantum gravity is a set of theories that posits how the rules governing gravity (which describes how matter and energy behave) and quantum mechanics (which describes how atoms and particles behave) fit together. Researchers don’t have the exact equation to describe quantum gravity in our universe yet. 

Although scientists have been mulling over the relationship between gravity and wormholes for around 100 years, it wasn’t until 2013 that entanglement (a quantum physics phenomenon) was thought to factor into the link. And in 2017, another group of scientists suggested that traversable wormholes worked kind of like quantum teleportation (in which information is transported across space using principles of entanglement). 

In the latest experiment, run on just 9 qubits (the quantum equivalent of binary bits in classical computing) in Google’s Sycamore quantum processor, the team used machine learning to set up a simplified version of the wormhole system “that could be encoded in the current quantum architectures and that would preserve the gravitational properties,” Spiropulu explained. During the experiment, they showed that information (in the form of qubits), could be sent through one system and reappear on the other system in the right order—a behavior that is wormhole-like. 

[Related: In photos: Journey to the center of a quantum computer]

So how do researchers go about setting up a little universe in a box with its own special rules and geometry in place? According to Google, a special type of correspondence (technically known as AdS/CFT) between different physical theories allowed the scientists to construct a hologram-like universe where they can “connect objects in space with specific ensembles of interacting qubits on the surface,” researchers wrote in a blog post. “This allows quantum processors to work directly with qubits while providing insights into spacetime physics. By carefully defining the parameters of the quantum computer to emulate a given model, we can look at black holes, or even go further and look at two black holes connected to each other — a configuration known as a wormhole.”

The researchers used machine learning to find the perfect quantum system that would preserve some key gravitational properties and maintain the energy dynamics that they wanted the model to portray. Plus, they had to simulate particles called fermions.

The team noted in the press conference that there is strong evidence that our universe operates by similar rules as the hologram universe observed on the quantum chip. The researchers wrote in the Google blog item: “Gravity is only one example of the unique ability of quantum computers to probe complex physical theories: quantum processors can provide insight into time crystals, quantum chaos, and chemistry.” 

Gravity could be bringing you down with IBS https://www.popsci.com/health/ibs-causes-gravity/ Thu, 01 Dec 2022 19:23:53 +0000 https://www.popsci.com/?p=493752
Toilet paper rolls on a yellow background to represent IBS causes
There are two prevailing ideas behind what causes IBS, and now, there might be a third. Deposit Photos

It's your GI tract versus the forces of nature.

The post Gravity could be bringing you down with IBS appeared first on Popular Science.


There is no established cause of irritable bowel syndrome (IBS), but for one gastroenterologist, the answer has been right under our noses. A review published today in The American Journal of Gastroenterology posits that the all-too-common ailment is caused by the force of gravity. The unconventional hypothesis suggests the human body has evolved to live with this universal force, and when our ability to manage gravity falters, it can have dire ramifications on our health.

“Our relationship to gravity is a little bit like a fish’s relationship to water,” says Brennan Spiegel, a gastroenterologist at Cedars-Sinai Medical Center in Los Angeles and author of the new paper. “The fish evolved to have a body that survives and thrives in water, even if it may not know it’s in water to begin with.” Similarly, Spiegel explains that while we aren’t always conscious of gravity, it’s a constant influence on our lives. For example, our early human ancestors evolved to become bipedal organisms, spending two-thirds of their lives in an upright position. But standing erect would cause gravity to constantly pull our body system down toward the ground, meaning organs and other bodily systems must have a plan in place to manage and resist gravitational forces. (For example, mammalian brains have evolved ways to sense altered gravity conditions.)

[Related: What to do when you’re trying not to poop]

Spiegel says he first thought about the gravity hypothesis in relation to IBS when visiting a sick family member at an assisted living center. As she lay in bed for most of the day, he noticed an increase in her GI problems, including constipation, bloating, and abdominal pain, making him wonder whether lying down all day changes a person’s relationship to the force of gravity. “Why is it that she’s not able to move her intestines as well as she could before?” he questioned. 

Think of the GI tract as a sack of potatoes. Humans internally lug around this sack their whole lives, though Spiegel argues that some people’s body compositions are better suited to carrying it than others. But according to Newton’s third law (for every force in nature there is an equal and opposite reaction), because gravity is pulling our body down, our bodies must have “antigravity” mechanisms in place to stabilize organs. This support comes from musculoskeletal structures like the spine and the mesentery, which works as an internal suspension system to hold the intestines in place. What’s more, the rib cage, along with the spine, helps to secure the position of the diaphragm, which acts as a ceiling mount to suspend organs in the upright abdominal cavity. Together, these structures work as a crane to stabilize the organs and keep them in place.

But what happens when the antigravity mechanisms in our body fail? You’ll see symptoms very similar to those of IBS, according to the research paper. When the musculoskeletal system is not aligned with gravity, it’s not capable of completely resisting this force of attraction. The mismatched strain between attractive and repulsive forces would theoretically cause tension in the body, resulting in muscle cramping and pain from being unable to properly support the contents of the abdomen. Additionally, excess pressure on the spine from trying to stabilize sagging structures would cause intense back pain. Finally, if the abdominal crane starts to sag and loosens its hold, the pull of gravity would cause organs to move out of place, pushing the GI tract forward and leaving little space for food to move in and out of the tract. All of these changes may compound into IBS symptoms.

One point Spiegel emphasizes is that the gravity hypothesis is not meant to disprove other ideas—two popular ones being that IBS is caused by changes in the gut microbiome or by elevated serotonin levels—but rather to tie them all into one concise explanation. 

“Intestines fall under the force of gravity, and they can develop a problem where they kink up almost like a twisted garden hose that makes it hard for water to get through,” Spiegel says. “As a result, they get bacterial overgrowth, and they get abdominal pain and gassiness.”

[Related: What happens if you get diarrhea in space?]

Julie Khlevner, a gastroenterologist at the NewYork-Presbyterian Morgan Stanley Children’s Hospital who was not affiliated with the research, says that while the gravity hypothesis is less conventional than other prevailing theories for IBS, it has been previously used to explain other diseases like amyotrophic lateral sclerosis. “Although [it’s] thought provoking and theoretically compatible with the clinical manifestations of IBS, it remains in its hypothetical stage and requires further research,” she cautions. “For now, the currently accepted concepts in pathophysiology of IBS [alterations in the bidirectional brain-gut-microbiome interaction] will remain the pillars for development of targeted therapies.”

If Spiegel is right about his rationale, he could be onto something bigger. Understanding how gravity alters our bodily functions could help explain why certain exercises, such as yoga and tai chi, can relieve GI symptoms by strengthening musculoskeletal muscles and the anterior abdominal wall. Or why people experience more stomach problems at high altitudes, like when climbing mountains, or, more generally, why women are disproportionately affected by IBS. Spiegel already has an explanation for the last issue (he says women have more elastic internal structures than men, including floppier and longer colons that are more susceptible to the pull of gravity), but he’s hoping others will pursue the same line of work and help bring relief to the millions of people living with IBS every day.

What this jellyfish-like sea creature can teach us about underwater vehicles of the future https://www.popsci.com/technology/marine-animal-siphonophore-design/ Tue, 29 Nov 2022 20:00:00 +0000 https://www.popsci.com/?p=492881
A jellyfish-like sea creature that's classified as a siphonophore
NOAA Photo Library / Flickr

Nanomia bijuga is built like bubble wrap, and it's a master of multi-jet propulsion.

The post What this jellyfish-like sea creature can teach us about underwater vehicles of the future appeared first on Popular Science.


Sea creatures have developed many creative ways of getting around their watery worlds. Some have tails for swimming, some have flippers for gliding, and others propel themselves using jets. That last transportation mode is commonly associated with squids, octopuses, and jellyfish. For years, researchers have been interested in trying to transfer this type of movement to soft robots, although it’s been challenging. 

A team led by researchers from the University of Oregon has sought a closer understanding of how these gelatinous organisms steer themselves about their underwater domains, in order to brainstorm better ways of designing underwater vehicles of the future. Their findings were published this week in the journal PNAS. The creature they focused on was Nanomia bijuga, a close relative of jellyfish, which looks largely like two rows of bubble wrap with some ribbons attached on one end. 

This bubble wrap body is known as the nectosome, and each individual bubble is called a nectophore. All of the nectophores can produce jets of water independently by expanding and contracting to direct currents of seawater through a flexible opening. Technically speaking, each nectophore is an organism in and of itself, and they’re bundled together into a colony. The Monterey Bay Aquarium Research Institute describes these animals as “living commuter trains.” 

The bubble units can coordinate to swim together as one, produce jets in sequence, or do their own thing if they want. Importantly, a few patterns of firing the jets produce the major movements. Firing pairs of nectophores in sequence from the tip to the ribbon tail enables Nanomia to swim forward or in reverse. Firing all the nectophores on one side, or firing some individual nectophores, turns and rotates its body. Using these commands for the multiple jets, Nanomia can migrate hundreds of yards twice a day, down to depths of 2,300 feet (which includes the twilight zone). 
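
Those firing patterns are simple enough to write down as a toy scheduler. The sketch below is purely illustrative (the mode names and structure are invented for this example, not taken from the paper), but it captures the idea that a few timing rules over the same set of jets yield distinct maneuvers:

```python
def firing_schedule(mode: str, n_pairs: int = 6) -> list[set[str]]:
    """Which nectophores fire at each time step, for a few illustrative modes."""
    left = [f"L{i}" for i in range(n_pairs)]
    right = [f"R{i}" for i in range(n_pairs)]
    if mode == "swim":   # pairs fire in sequence from the tip toward the tail
        return [{left[i], right[i]} for i in range(n_pairs)]
    if mode == "turn":   # every jet on one side fires at once
        return [set(right)]
    if mode == "nudge":  # a single nectophore fires on its own
        return [{left[0]}]
    raise ValueError(f"unknown mode: {mode}")

print(firing_schedule("swim")[:2])  # e.g. [{'L0', 'R0'}, {'L1', 'R1'}]
```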

For Nanomia, the number of nectophores can vary from animal to animal. So, to take this examination further, the team wanted to see whether this variation impacted swimming speed or efficiency. Both efficiency and speed appear to increase with more nectophores, but seem to hit a plateau at around 12.  

This system of propulsion lets Nanomia go about the ocean at rates similar to many fish (judged by speed relative to body length), but without the high metabolic cost of operating a neuromuscular system. 

[Related: This tiny AI-powered robot is learning to explore the ocean on its own]

So, how could this sea creature help inform the design of vehicles that travel beneath the waves? Caltech’s John Dabiri, one of the authors on the paper, has long been a proponent of taking inspiration from the fluid dynamics of critters like jellyfish to fashion aquatic vessels. And while the researchers in this paper do not suggest a specific design for a propulsion system for underwater vehicles, they do note that the behavior of these animals may offer helpful guidelines for engines that operate through multiple jets. “Analogously to [Nanomia] bijuga, a single underwater vehicle with multiple propulsors could use different modes to adapt to context,” the researchers wrote in the paper. 

Simple changes in the timing of how the jets fire, or which jets fire together, can have a big impact on the energy efficiency and speed of a vehicle. For example, if engineers wanted to make a system that doesn’t need a lot of power, then it might be helpful to have jets that could be controlled independently. If the vehicle needs to be fast, then there needs to be a function that can operate all engines from one side at the same time.

“For underwater vehicles with few propulsors, adding propulsors may provide large performance benefits,” the researchers noted, “but when the number of propulsors is high, the increase in complexity from adding propulsors may outweigh the incremental performance gains.”

The leap second’s time will be up in 2035—and tech companies are thrilled https://www.popsci.com/technology/bipm-abandon-leap-second/ Sat, 26 Nov 2022 15:00:00 +0000 https://www.popsci.com/?p=490660
people walking in front of clock
Stijn te Strake / Unsplash

Y2Yay?

The post The leap second’s time will be up in 2035—and tech companies are thrilled appeared first on Popular Science.


It’s the final countdown for the leap second, a janky way of aligning the atomic clock with the natural variation in the Earth’s rotation—but we’ll get to that. At a meeting last week in Versailles, the International Bureau of Weights and Measures (BIPM) voted nearly unanimously to abandon the controversial convention in 2035 for at least 100 years. Basically, the world’s metrologists (people who study measurement) are crossing their fingers and hoping that someone will come up with a better solution for syncing human timekeeping with nature. Here’s why it matters. 

Unfortunately for us humans, the universe is a messy place. Approximate values work well for day-to-day life but aren’t sufficient for scientific measurements or advanced technology. Take years: Each one is 365 days long, right? Well, not quite. It actually takes the Earth something like 365.25 days to orbit the sun. That’s why approximately every fourth year (except for years evenly divisible by 100 but not by 400) is 366 days long. The extra leap day keeps our calendar roughly aligned with the Earth’s actual orbit. 
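
That calendar rule reads more clearly as code than as prose. Here is the Gregorian leap-year test in miniature:

```python
def is_leap_year(year: int) -> bool:
    # Divisible by 4, except century years, unless also divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print([y for y in (1900, 2000, 2023, 2024) if is_leap_year(y)])  # [2000, 2024]
```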

Things get more frustrating the more accurately you try to measure things. A day is 86,400 seconds long—give or take a few milliseconds. The Earth’s rotation is actually slowing down due to lots of complicated factors including the ocean tides and shifts in how the Earth’s mass is distributed. All this means that days are getting ever so slightly longer, a few milliseconds at a time. If we continued to assume that all days are exactly 86,400 seconds long, our clocks would drift out of alignment with the sun. Wait long enough and it would start rising at midnight. 

In 1972, BIPM (the acronym comes from the French name, Bureau International des Poids et Mesures) agreed to a simple fix: leap seconds. Like leap days, leap seconds would be inserted into the year so as to align Coordinated Universal Time (UTC) with the Earth-tracking Universal Time (UT1). Leap seconds aren’t needed predictably or very often. So, instead of having a regular pattern for adding them, BIPM would tally up all the extra milliseconds and, when necessary, tell everyone to add one whole second to the clock. Between 1972 and now, 27 leap seconds have been inserted into UTC. 
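
As a rough sanity check on that tally, 27 leap seconds spread over the roughly 50 years since 1972 works out to an average mismatch of about a millisecond and a half per day between the 86,400-second clock day and the real one:

```python
LEAP_SECONDS = 27
DAYS_SINCE_1972 = 50 * 365.25  # roughly, 1972 through 2022

print(f"{LEAP_SECONDS / DAYS_SINCE_1972 * 1000:.1f} ms per day")  # ~1.5
```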

While probably not the best idea even back in the ’70s, the leap second has become a progressively worse one as computers have made precision timekeeping more widespread. When the leap second was created, accurate clocks were the preserve of research laboratories and military installations. Now, every smartphone can get the exact UTC time, accurate to within 100 billionths of a second, from GPS and other navigation satellites in orbit. 

The problem is that all the interlinked computers on the internet use UTC to function, not just let you know that it’s time for lunch. When files are saved to a database, they’re time stamped with UTC; when you play an online video game, it relies on UTC to work out who shot first; if you post a Tweet, UTC is in the mix. Keeping everything on track is a major headache for large tech companies like Meta—which recently published a blog post calling for the abolition of the leap second—that rely on UTC to keep their servers in sync and operational.

That’s because the process of adding leap seconds—or possibly removing one, as the Earth appears to be speeding up again for some reason—breaks key assumptions that computers have about how time works. These are simple rules: Minutes have 60 seconds, time always goes forward, it doesn’t repeat, it doesn’t stop, and so on. Inserting and removing leap seconds makes it very easy for two computers that are meant to be in sync to fall out of sync—and when that happens, things break. 

When a leap second was added in 2012, Reddit went down for 40 minutes. DNS provider Cloudflare had an outage on New Year’s Day in 2017 when the most recent leap second was added. And these happened despite the best efforts of the companies involved to account for the leap second and mitigate any adverse effects.

Large companies have developed workarounds like “smearing,” where the leap second is added over a number of hours rather than all at once. Still, it would make things a lot easier if they didn’t have to bother at all. 
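
Smearing is conceptually simple: instead of a one-second jump, clocks run fractionally slow (or fast) until the extra second has been absorbed. Below is a minimal sketch of a linear 24-hour smear, loosely modeled on the approach Google has described publicly; real implementations differ in window length and curve shape:

```python
from datetime import datetime, timedelta, timezone

def smeared_fraction(now: datetime, smear_start: datetime,
                     window: timedelta = timedelta(hours=24)) -> float:
    """Fraction of the leap second applied so far, spread linearly over `window`."""
    elapsed = (now - smear_start) / window
    return min(max(elapsed, 0.0), 1.0)  # 0.0 before the smear, 1.0 once complete

# Halfway through a smear straddling the 2016/2017 leap second,
# smeared clocks sat half a second behind "true" UTC:
start = datetime(2016, 12, 31, 12, 0, tzinfo=timezone.utc)
print(smeared_fraction(datetime(2017, 1, 1, 0, 0, tzinfo=timezone.utc), start))  # 0.5
```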

Of course, that brings us back to last Friday’s important decision. From 2035, leap seconds are no longer going to matter. BIPM is going to allow UTC and UT1 to drift apart until at least 2135, hoping that scientists can come up with a better system of accounting for lost time—or that computers can get smarter about handling clock changes. It’s not a perfect fix, but like many modern problems, it might be easier to kick it down the line.

Astronomers now know how supermassive black holes blast us with energy https://www.popsci.com/science/black-hole-light-energy-x-ray/ Wed, 23 Nov 2022 18:54:41 +0000 https://www.popsci.com/?p=490856
Black hole shooting beam of energy out speed of light and being caught by a space telescope in an illustration. There's an inset showing blue and purple electromagnetic waves,
This illustration shows the IXPE spacecraft, at right, observing blazar Markarian 501, at left. A blazar is a black hole surrounded by a disk of gas and dust with a bright jet of high-energy particles pointed toward Earth. The inset illustration shows high-energy particles in the jet (blue). Pablo Garcia (NASA/MSFC)

An extreme particle accelerator millions of light-years away is directing immensely fast electromagnetic waves at Earth.

The post Astronomers now know how supermassive black holes blast us with energy appeared first on Popular Science.


Some 450 million light-years away from Earth in the constellation Hercules lies a galaxy named Markarian 501. In the visible-light images we have of it, Markarian 501 looks like a simple, uninteresting blob.

But looks can be deceiving, especially in space. Markarian 501 is a launchpad for charged particles traveling near the speed of light. From the galaxy’s heart erupts a bright jet of high-energy particles and radiation, rushing right in Earth’s direction. That makes it a perfect natural laboratory to study those accelerating particles—if only scientists could understand what causes them.

In a paper published in the journal Nature today, astronomers have been able to take a never-before-seen look deep into the heart of one of those jets and see what drives those particles out in the first place. “This is the first time we are able to directly test models of particle acceleration,” says Yannis Liodakis, an astronomer at the University of Turku in Finland and the paper’s lead author.

Markarian 501 is a literally shining example of a special class of galaxy called a blazar. What makes this galaxy so bright is the supermassive black hole at its center. The gravity-dense region spews a colossal wellspring of high-energy particles, forming a jet that travels very near the speed of light and stretches over hundreds of millions of light-years.

Many galaxies have supermassive black holes that spew out jets like this—they’re what astronomers call active galactic nuclei. But blazars like Markarian 501 are defined by the fact that their jets point right in Earth’s general direction. Astronomers can train telescopes at one to look upstream and get a clear view of a constant torrent of particles radiating across every part of the electromagnetic spectrum, from bright radio waves to visible light to blazing gamma rays.

[Related: You’ve probably never heard of terahertz waves, but they could change your life]

A blazar can spread its influence far beyond its own corner of the universe. For instance, a detector buried under the Antarctic ice caught a neutrino—a ghostly, low-mass particle that does its best to elude physicists—coming from a blazar called TXS 0506+56. It was the first time researchers had ever picked up a neutrino alighting on Earth from a point of origin outside the solar system (and from 5 billion light-years away, at that).

But what actually causes a supermassive black hole’s jet to give off light and other electromagnetic waves? What happens inside that jet? If you were surfing inside of it, what exactly would you feel and see?

Scientists want to know these answers, too, and not just because they make for a fun, extreme thought experiment. Blazars are natural particle accelerators, and they’re far larger and more powerful than any accelerator we can currently hope to build on Earth. By analyzing the dynamics of a blazar jet, they can learn what natural processes can accelerate matter to near the speed of light. What’s more, Markarian 501 is one of the more desirable blazars to study, given that it’s relatively close to the Earth, at least compared to other blazars that can be many billions of light-years farther still.

[Related: What would happen if you fell into a black hole?]

So, Liodakis and dozens of colleagues from around the world took to observing it. They used the Imaging X-ray Polarization Explorer (IXPE), a jellyfish-like telescope launched by NASA in December 2021, to look down the length of that jet. In particular, IXPE studied whether the distant X-rays were polarized—that is, how their electromagnetic waves are oriented in space. The waves from a light bulb, for instance, aren’t polarized—they wiggle every which way. The waves from an LCD screen, on the other hand, are polarized and only wiggle in one direction, which is why you can pull tricks like making your screen invisible to everyone else. 

Back to the sky, if astronomers know the polarization of a source like a black hole, they might be able to reconstruct what happened at it. Liodakis and his colleagues had some idea of what to expect, because experts in their field had previously spent years modeling and simulating jets on their computers. “This was the first time we were able to directly test the predictions from those models,” he explains.

They found that the culprits were shockwaves: fronts of fast-moving particles crashing into slower-moving particles, speeding them along like flotsam pushed by rushing water. The violent crashes created the X-rays that the astronomers saw in IXPE’s readings.

It’s the first time that astronomers have used the X-ray polarization method to get results like these. “This is really a breakthrough in our understanding of these sources,” says Liodakis.

In an accompanying perspective in Nature, Lea Marcotulli, an astrophysicist at Yale University who wasn’t an author on the paper, called the result “dazzling.” “This huge leap forward brings us yet another step closer to understanding these extreme particle accelerators,” she wrote.

Of course, there are still many unanswered questions surrounding the jets. Do these shockwaves account for all the particles accelerating from Markarian 501’s black hole? And do other blazars and galaxies have shockwaves like them?

Liodakis says his group will continue to study the X-rays from Markarian 501, at least into 2023. With an object this dazzling, it’s hard to look away.

Magnets might be the future of nuclear fusion https://www.popsci.com/science/nuclear-fusion-magnet-nif/ Fri, 11 Nov 2022 11:00:00 +0000 https://www.popsci.com/?p=486077
A hohlraum at the Lawrence Livermore National Laboratory.
A target at the NIF, pictured here in 2008, includes the cylindrical fuel container called a hohlraum. Lawrence Livermore National Laboratory

When shooting lasers at a nuclear fusion target, magnets give you a major energy increase.

The post Magnets might be the future of nuclear fusion appeared first on Popular Science.


For scientists and dreamers alike, one of the greatest hopes for a future of bountiful energy is nestled in a winery-coated vale east of San Francisco. 

Here lies the National Ignition Facility (NIF) in California’s Lawrence Livermore National Laboratory. Inside NIF’s boxy walls, scientists are working to create nuclear fusion, the same physics that powers the sun. About a year ago, NIF scientists came closer than anyone to a key checkpoint in the quest for fusion: creating more energy than was put in.

Unfortunately—but in an outcome familiar to anyone who follows fusion—that world would have to wait. In the months after the achievement, NIF scientists weren’t able to replicate their feat. 

But they haven’t given up. And a recent paper, published in the journal Physical Review Letters on November 4, might bring them one step closer to cracking a problem that has confounded energy-seekers for decades. Their latest trick: lighting up fusion within the flux of a strong magnetic field. 

Fusion power, to put it simply, aims to ape the sun’s interior. By smashing certain hydrogen atoms together and making them stick, you get helium and a lot of energy. The catch is that actually making atoms stick together requires very high temperatures—which, in turn, requires fusion-operators to spend incredible amounts of energy in the first place. 

[Related: In 5 seconds, this fusion reactor made enough energy to power a home for a day]

Before you can even think about making a feasible fusion power plant, you need to somehow create more energy than you put in. That tipping point—a point that plasma physicists call ignition—has been fusion’s longest-sought goal.

The NIF’s container of choice is a gold-plated cylinder, smaller than a human fingernail. Scientists call that cylinder a hohlraum; it houses a peppercorn-sized pellet of hydrogen fuel.

At fusion time, scientists fire finely tuned laser beams at the hohlraum—in NIF’s case, 192 beams in all—energizing the cylinder enough to evoke violent X-rays within. In turn, those X-rays wash over the pellet, squeezing and battering it into an implosion that fuses hydrogen atoms together. That, at least, is the hope.

NIF used this method to achieve its smashing result in late 2021: creating some 70 percent of the energy put in, far and away the record at the time. For plasma physicists, it was a siren call. “It has breathed new enthusiasm into the community,” says Matt Zepf, a physicist at the Helmholtz Institute Jena in Germany. Fusion-folk wondered: Could NIF do it again?
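
“Gain” here has a crisp definition: the fusion energy released divided by the laser energy delivered, with ignition meaning that ratio crosses 1. The widely reported numbers from that 2021 shot, roughly 1.35 megajoules out for 1.9 megajoules of laser light in, are where the “some 70 percent” comes from:

```python
def target_gain(fusion_yield_mj: float, laser_energy_mj: float) -> float:
    """Q = fusion energy out / laser energy in; ignition means Q > 1."""
    return fusion_yield_mj / laser_energy_mj

print(f"Q = {target_gain(1.35, 1.9):.2f}")  # ~0.71, i.e. about 70 percent
```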

As it happens, they would have to wait. Subsequent laser shots didn’t come close to matching the original. Part of the problem is that, even with all the knowledge and capabilities they have, scientists have a very hard time predicting exactly what a given shot will do.

[Related: Nuclear power’s biggest problem could have a small solution]

“NIF implosions are currently showing significant fluctuations in their performance, which is caused by slight variations in the target quality and laser quality,” says John Moody, a physicist at NIF. “The targets are very, very good, but slight imperfections can have a big effect.”

Physicists could continue fine-tuning their laser or tinkering with their fuel pellet. But there might be a third way to improve performance: bathing the hohlraum and its fuel pellet in a magnetic field.

Tests with other lasers—like OMEGA in Rochester, New York, and the Z machine at Sandia National Laboratories in New Mexico—had shown that this method could prove fruitful. Moreover, computer simulations of NIF’s own laser suggested that a magnetic field could double the energy of NIF’s best-performing shots. 

“Pre-magnetized fuel will allow us to get good performance even with targets or laser delivery that is a little off of what we want,” says Moody, one of the paper’s authors.

So NIF scientists decided to try it out themselves.

They had to swap out the hohlraum first. Pure gold wouldn’t do well—putting the metal under a magnetic field like theirs would create electric currents in the cylinder walls, tearing it apart. So the scientists crafted a new cylinder, forged from an alloy of gold and tantalum, a rare metal found in some electronics.

Then, the scientists stuffed their new hohlraum with a hydrogen pellet, switched on the magnetic field, and lined up a shot.

As it happened, the magnetic field indeed made a difference. Compared to similar magnetless shots, the energy increased threefold. It was a low-powered test shot, to be sure, but the results give scientists a new glimmer of hope. “The paper marks a major achievement,” says Zepf, who was not an author of the report.

Still, the results are early days, “essentially learning to walk before we run,” Moody cautions. Next, the NIF scientists will try to replicate the experiment with other laser setups. If they can do that, they’ll know they can add a magnetic field to a wide range of shots.

As with anything in this misty plane of physics, this alone won’t be enough to solve all of fusion’s problems. Even if NIF does achieve ignition, afterward comes phase two: being able to create significantly more energy than you put in, something that physicists call “gain.” Especially for a laser of NIF’s limited size, says Zepf, that is an even more foreboding quest.

Nonetheless, the eyes of the fusion world will be watching. Zepf says that NIF’s results can teach similar facilities around the world how to get the most from their laser shots.

Achieving a high enough gain is a prerequisite for a phase that’s even further into the future: actually turning the heat of fusion power into a feasible power plant design. That’s still another step for plasma physicists—and it’s a project that the fusion community is already working on.

Bees can sense a flower’s electric field—unless fertilizer messes with the buzz https://www.popsci.com/science/bumblebees-flowers-cues-electric-fields/ Wed, 09 Nov 2022 22:00:00 +0000 https://www.popsci.com/?p=485757
a fuzzy bumblebee settles on a pink flower
Pollinators, like this bumblebee (Bombus terrestris), can detect all kinds of sensory cues from flowers. Deposit Photos

Bumblebees are really good at picking up on cues from flowers, even electrical signals.

The post Bees can sense a flower’s electric field—unless fertilizer messes with the buzz appeared first on Popular Science.


Bees are well-versed in the unspoken language of flowers. These buzzing pollinators are in tune with many features of flowering plants—the shape of the bulbs, the diversity of colors, and their alluring scents—which bees rely on to tell whether a reward of nectar and pollen is near. But bees can also detect signals that go beyond sight and smell. The tiny hairs covering their bodies, for instance, are ultra-sensitive to electric fields that help bees identify flowers. These electric fields can influence how bees forage—or, if those fields are artificially changed, even disrupt that behavior.

Today in the journal PNAS Nexus, biologists found that synthetic spray fertilizers can temporarily alter electric cues of flowers, a shift that causes bumblebees to land less frequently on plants. The team also tested a type of neonicotinoid pesticide—known to be toxic and detrimental to honeybee health—called imidacloprid, and detected changes to the electric field around flowers. Interestingly, the chemicals did not seem to impact vision and smell cues, hinting that this lesser-known signal is playing a greater role in communication. 

“Everything has an electric field,” says Ellard Hunting, lead study author and sensory biophysicist at the University of Bristol. “If you are really small, small weak electric fields become very profound, especially if you have lots of hairs, like bees and insects.” 

[Related: A swarm of honeybees can have the same electrical charge as a storm cloud]

Biologists are just beginning to understand how important electric signals are in the world of floral cues. To distinguish between more and less resource-rich flowers within a species, bees, for instance, can recognize specific visual patterns on petals, like spots on the surface, and remember them for future visits. Shape of the bloom also matters—larger, more open flowers might be an easier landing pad for less agile beetles, while narrow tube-shaped bulbs are hotspots for butterflies with long mouthparts that can reach nectar. Changes in humidity around a flower have also been found to influence hawkmoths, as newly opened flowers typically have higher humidity levels.   

An electrical cue, though, is “a pretty recent thing that we found out about,” says Carla Essenberg, a biologist studying pollination ecology at Bates College in Maine who was not involved in the study. A 2016 study found that foraging bumblebees change a flower’s electric field for about 1 to 2 minutes. The study authors suggested that even this short change might be detectable by other passerby bees, informing them the flower was recently visited—and has less nectar and pollen to offer. 

A flower’s natural electric field is largely created by its bioelectric potential—the flow of charge produced by or occurring within living organisms.  But electric fields are a dynamic phenomenon, explains Hunting. “Flowers typically have a negative potential and bees have a positive potential,” Hunting says. “Once bees approach, they can sense a field.” The wind, a bee’s landing, or other interactions will trigger immediate changes in a flower’s bioelectric potential and its surrounding field. Knowing this, Hunting had the idea to investigate any electric field changes caused by chemical applications, and if they deterred bee visits. 
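
The magnitudes involved are worth a quick back-of-the-envelope check. The sketch below uses rough, illustrative numbers from the broader literature (a charge of tens of picocoulombs on a bee, a field of order 100 volts per meter near a flower); they are assumptions for scale, not measurements from this study.

```python
# Back-of-the-envelope: the electric force on a charged bumblebee near a
# flower. All inputs are rough, illustrative magnitudes.

BEE_CHARGE = 30e-12        # coulombs; bees pick up tens of picocoulombs in flight
FIELD_NEAR_FLOWER = 100.0  # volts per meter; ballpark field strength at a flower
BEE_MASS = 0.2e-3          # kilograms
G = 9.81                   # m/s^2

force = BEE_CHARGE * FIELD_NEAR_FLOWER   # F = qE
weight = BEE_MASS * G

print(f"electric force on the bee: {force:.1e} N")   # ~3e-9 N
print(f"bee's weight:              {weight:.1e} N")  # ~2e-3 N
```

The push on the whole bee works out to roughly a millionth of its weight, which is why the sensing runs through featherweight mechanosensory hairs that deflect in the field rather than through any tug on the bee’s body.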

He first started out with pesticides because of the well-studied impacts they can have on insects. “But then I figured, fertilizer also has a charge, and they are also applied and it is way more relevant on a larger-scale,” he says. These chemical mixtures used in agriculture and gardens often contain various levels of nitrogen, phosphorus, and potassium. “Everyone uses [fertilizers], and they’re claimed to be non-toxic.”  

First, to assess bumblebee foraging behavior, Hunting and his colleagues set up an experiment in a rural field site at the University of Bristol campus using two potted lavender plants. They sprayed a commercially available fertilizer mixture on one of the potted plants while spraying the other with demineralized water. Then, the team watched as bumblebees bypassed the fertilizer-covered lavender. Sprays that contained the pesticide or fertilizer changed the bioelectric potential of the flower for up to 25 minutes—much longer than shifts caused by wind or a bee landing. 

[Related: Arachnids may sense electrical fields to gain a true spidey sense]

But to confirm that the bees were avoiding the fertilizer because of a change in electric field—and not because of the chemical compounds or other factors—the researchers needed to recreate the electric shift in the flower, without actually spraying. In his soccer-pitch-sized backyard, a natural area free of other sources of electricity, Hunting manipulated the bioelectrical potential of lavender plants in order to mimic the change. He placed the stems in water, wired them with electrodes, and ran a current through them from a DC power bank. This created an electric field around the plant in the same way the fertilizer did.

He observed that while the bees approached the electrically manipulated flowers, they did not land on them. They also approached the flowers significantly less than the control flowers, Hunting says. “This shows that the electrics alone already elicit avoidance behavior.”

Hunting suggests that the plant’s defense mechanism might be at the root of the electrical change. “What actually happens if you apply chemicals to plant cells, it triggers a chemical stress response in the plant, similar to a wounding response,” he explains. The plant sends metabolites—which have ionic charge—to start to fix the tissue. This flux of ions generates an electric current, which the bees detect. 

The researchers also noted that the chemicals didn’t seem to impact vision or smell, and that, interestingly, the plants sprayed with pesticide and fertilizers seemed to experience a shift in electric field again after it rained. This could indicate that the effect persists beyond just one spray. The new findings could have implications for casual gardeners and major agricultural industries, the researchers note. 

“Ideally, you would apply fertilizer to the soil [instead of spraying directly on the plant],” Hunting says. But that would require more labor than the approach used by many in US agriculture, in which airplanes spray massive fields. 

[Related: Build a garden that’ll have pollinators buzzin’]

Essenberg says that luckily the electric field changes are relatively short lived, making it a bit easier for farmers to find workarounds. For instance, they could spray agricultural chemicals during the middle of the day, when pollinators forage less frequently because many flowers open in the morning and typically run out of pollen by then. 

The toxicity of chemical sprays is probably a bigger influence “at the population level” on bee decline, Essenberg says. But this study offers a new idea: that changes in electric potential might need to be taken into account to spray plants effectively. “It raises questions about what other kinds of things might influence that potential,” she adds, such as contaminants in the air or pollution that falls with the rain.

Essenberg says it would be helpful to look at the impacts of electric field changes in more realistic foraging settings over longer periods of time. Hunting agrees. “Whether the phenomenon is really relevant in the long run, it might be, but we need to uncover more about this new mechanism.” 

The post Bees can sense a flower’s electric field—unless fertilizer messes with the buzz appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
This far-off galaxy is probably shooting us with oodles of ghostly particles https://www.popsci.com/science/icecube-neutrino-spiral-galaxy/ Thu, 03 Nov 2022 18:00:00 +0000 https://www.popsci.com/?p=483938
The center of Messier 77's spiral galaxy.
The center of the galaxy NGC 1068 (also known as Messier 77) where neutrinos may originate, as captured by the Hubble Space Telescope. NASA, ESA & A. van der Hoeven

A sophisticated experiment buried under Antarctica is tracing neutrinos to their extraterrestrial origin.

The post This far-off galaxy is probably shooting us with oodles of ghostly particles appeared first on Popular Science.

]]>
The center of Messier 77's spiral galaxy.
The center of the galaxy NGC 1068 (also known as Messier 77) where neutrinos may originate, as captured by the Hubble Space Telescope. NASA, ESA & A. van der Hoeven

Deep under the South Pole sits an icebound forest of wiring called IceCube. It’s no cube: IceCube is a hexagonal formation of kilometer-deep holes in the ice, drilled with hot water and filled with electronics. Its purpose is to pick up flickers of neutrinos—ghostly little particles that often come from space and phase through Earth with hardly a trace. 

Four years ago, IceCube helped scientists find their first hint of a neutrino source outside our solar system. Now, for the second time, IceCube scientists have pinpointed a fountain of far-traveler neutrinos, originating from NGC 1068, a galaxy about 47 million light-years away.

Their results, published in the journal Science on November 3, need further confirmation. But if these observations are correct, they’re a key step in helping astronomers understand where in the universe those neutrinos originate. And, curiously, NGC 1068 is very different from IceCube’s first suspect.

Neutrinos are little phantoms. By some estimates, 100 trillion pass through your body every single second—and virtually none of them interact with your body’s atoms. Unlike charged particles such as protons and electrons, neutrinos are immune to the pulling and pushing of electromagnetism. Neutrinos have so little mass that, for many years, physicists thought they had no mass at all.
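
That head-spinning count is easy to sanity-check. The standard solar neutrino flux at Earth is about 60 billion per square centimeter per second; the body area below is a rough assumption.

```python
# Order-of-magnitude check on the "trillions per second" figure.
# Both inputs are ballpark values.

SOLAR_NU_FLUX = 6e10   # neutrinos per square centimeter per second at Earth
BODY_AREA_CM2 = 7_000  # roughly 0.7 square meters of you facing the sun

print(f"{SOLAR_NU_FLUX * BODY_AREA_CM2:.0e} neutrinos per second")  # ~4e14
```

That lands at a few hundred trillion per second, the same order of magnitude as the figure above (estimates vary with the assumed cross-section).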

Most neutrinos that we see from space spew out from the sun. But scientists are especially interested in the even more elusive breed of neutrinos that come from outside the solar system. For astronomers, IceCube represents a wish: If researchers can observe far-flung neutrinos, they can use them to see through gas and dust clouds, which light typically doesn’t pass through.

[Related: We may finally know where the ‘ghost particles’ that surround us come from]

IceCube’s mission is to find those neutrinos, which reach Earth with far more energy than any solar neutrino. Although it’s at the South Pole, IceCube actually focuses on neutrinos striking the northern hemisphere. IceCube’s detectors try to discern the direction a neutrino is traveling. If IceCube detects particles pointing downwards, scientists struggle to distinguish them from the raging static of cosmic radiation that constantly batters Earth’s atmosphere. If IceCube detects particles pointing upwards, on the other hand, scientists know that they’ve come from the north, having passed through the bulk of our planet before striking icebound detectors.
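
That up-versus-down cut is simple enough to sketch in code. The event format and the plain 90-degree horizon threshold below are hypothetical stand-ins; real IceCube event selection involves far more elaborate reconstruction.

```python
# Toy version of the up-going event selection described above.
# Event format and threshold are hypothetical illustrations.

events = [
    {"id": 1, "zenith_deg": 170.0},  # steeply up-going: came through the Earth
    {"id": 2, "zenith_deg": 95.0},   # just below the horizon
    {"id": 3, "zenith_deg": 20.0},   # down-going: likely atmospheric background
]

def is_upgoing(event, horizon_deg=90.0):
    """Up-going tracks (zenith > 90 degrees) entered through the planet,
    which filters out most of the cosmic-ray noise raining down from above."""
    return event["zenith_deg"] > horizon_deg

candidates = [e for e in events if is_upgoing(e)]
print([e["id"] for e in candidates])  # -> [1, 2]
```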

“We discovered neutrinos reaching us from the cosmos in 2013,” says Francis Halzen, a physicist at the University of Wisconsin-Madison and part of the IceCube collaboration who authored the paper, “which raised the question of where they originate.”

Finding neutrinos is already hard; finding where they come from is orders of magnitude harder. Identifying a neutrino origin involves painstaking data analysis that can take years to complete.

Crucially, this isn’t IceCube’s first identification. In 2018, scientists comparing IceCube data to observations from traditional telescopes pinpointed one possible neutrino source, more than 5 billion light-years away: TXS 0506+056. TXS 0506+056 is an example of what astronomers call a blazar: a distant, high-energy galaxy with a central black hole that spews out a jet directly in Earth’s direction. It’s loud, bright, and the exact sort of object that astronomers thought created neutrinos.

But not everybody was convinced they had the whole picture.

“The interpretation has been under debate,” says Kohta Murase, a physicist at Pennsylvania State University, who wasn’t an author of the new paper. “Many researchers think that other source classes are necessary to explain the origin of high-energy neutrinos coming from different directions over the sky.”

So IceCube scientists got to work. They combed through nine years’ worth of IceCube observations, from 2011 to 2020. Since blazars such as TXS 0506+56 tend to spew out torrents of gamma rays, the researchers tried to match the neutrinos with known gamma-ray sources.

As it happened, the source they found wasn’t the gamma-ray source they expected.

[Related: This ghostly particle may be why dark matter keeps eluding us]

NGC 1068 (also known as M77), located some 47 million light-years from us, is not unlike our own galaxy. Like the Milky Way, it’s shaped like a spiral. Like the Milky Way, it has a supermassive black hole at its heart. Some astronomers had suspected it as a neutrino source, but any proof remained elusive.

That black hole produces a torrent of what astrophysicists call cosmic rays. Despite their name (the scientists who first discovered them thought they were rays), cosmic rays are actually ultra-energized protons and atomic nuclei hurtling through the universe at nearly the speed of light. 

But, unlike its counterpart at the center of the Milky Way, NGC 1068’s black hole is shrouded behind a thick veil of gas and dust, which blocks many of the gamma rays that would otherwise emerge. That, astronomers say, complicates the old picture of where neutrinos came from. “This is the key issue,” says Halzen. “The sources we detect are not seen in high energy gamma rays.”

As cosmic rays crash into that veil, they cause a cascade of nuclear reactions that spew out neutrinos. (In fact, cosmic rays do the same when they strike Earth’s atmosphere). One reason why the NGC 1068 discovery is so exciting, then, is that the ensuing neutrinos might give astronomers clues about those cosmic rays.

It’s not final proof; there’s not enough data quite yet to be certain. That will take more observations, more years of painstaking data analysis. Even so, Murase says, other astronomers might search the sky for galaxies like NGC 1068, galaxies whose central black holes are occluded.

Meanwhile, other astronomers believe that there are even more places high-energy neutrinos could flow from. If a star passes too close to a supermassive black hole, for instance, the black hole’s gravity might rip the star apart and unleash neutrinos in the process. As astronomers prepare to look for neutrinos, they’ll want to look for new, more diverse points in the sky, too.

They’ll soon have more than just IceCube to work with. Astronomers are laying the groundwork—or seawork—for additional high-sensitivity neutrino detectors: one at the bottom of Siberia’s Lake Baikal and another on the Mediterranean Sea floor. Soon, those may join the hunt for distant, far-traveler neutrinos.

The post This far-off galaxy is probably shooting us with oodles of ghostly particles appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Here’s what the Earth’s magnetic field would sound like if we could hear it https://www.popsci.com/science/earth-magnetic-field-sounds/ Fri, 28 Oct 2022 15:44:47 +0000 https://www.popsci.com/?p=481875
Earth's magnetic field in a black, red, and white illustration
Our magnetic field protects us from cosmic radiation and charged particles that bombard Earth in solar winds. ESA/ATG medialab

Listen to magnetic signals battle with a solar storm.

The post Here’s what the Earth’s magnetic field would sound like if we could hear it appeared first on Popular Science.

]]>
Earth's magnetic field in a black, red, and white illustration
Our magnetic field protects us from cosmic radiation and charged particles that bombard Earth in solar winds. ESA/ATG medialab

As the end of spooky season approaches, the icy cold and darkness of space has some extra scary vibes to offer to anyone willing to listen very closely. Researchers at the Technical University of Denmark in Lyngby have converted signals from Earth’s magnetic field into sounds. The signals were measured by the European Space Agency’s (ESA) Swarm satellite mission. Take a listen.


The moody sound bite comes from the magnetic field generated by Earth’s core and its interaction with a solar storm. Earth’s magnetic field is essential to life on the planet, but it isn’t something that we can typically see or hear. It’s basically a shield that protects the planet from the cosmic radiation and charged particles coming from solar winds.

When the colorful aurora borealis (or northern lights) dances across the night sky in the upper latitudes, it’s a visual example of charged particles from the sun interacting with Earth’s magnetic field. However, hearing the sounds that occur when the magnetic field interacts with charged particles from the sun is a bit trickier.

[Related: Will Earth’s shifting magnetic poles push the Northern Lights too?]

According to the ESA, “Our magnetic field is largely generated by an ocean of superheated, swirling liquid iron that makes up the outer core around 1,864 miles beneath our feet. Acting as a spinning conductor in a bicycle dynamo, it creates electrical currents, which in turn, generate our continuously changing electromagnetic field.”

Strength of the magnetic field at Earth’s surface. CREDIT: DTU/ESA

In 2013, the ESA launched three Swarm satellites for a mission to help decode how Earth’s magnetic field is generated by precisely measuring the magnetic signals coming from the planet’s core, mantle, crust, and oceans, as well as from the ionosphere and magnetosphere. Swarm is also helping scientists better understand space weather.

[Related: Astronomers used telescopic ‘sunglasses’ to photograph a black hole’s magnetic field.]

“The team used data from ESA’s Swarm satellites, as well as other sources, and used these magnetic signals to manipulate and control a sonic representation of the core field,” explained musician and project supporter Klaus Nielsen, from the Technical University of Denmark, in a statement. “The project has certainly been a rewarding exercise in bringing art and science together.”
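
As a rough illustration of how a sonification like this can be built (a generic sketch, not the Danish team’s actual pipeline), one can map a magnetic-field time series onto pitch and render the result as audio:

```python
import numpy as np
import wave

# Generic sonification sketch: map a field time series to pitch, write a WAV.
rate = 44_100                                   # audio sample rate, Hz
rng = np.random.default_rng(7)
field = np.sin(np.linspace(0, 20, 200)) + 0.3 * rng.standard_normal(200)  # stand-in data

lo, hi = field.min(), field.max()
freqs = 120 + (field - lo) / (hi - lo) * 480    # scale readings into 120-600 Hz

samples, phase = [], 0.0
step = np.arange(int(0.05 * rate)) / rate       # 50 ms of tone per reading
for f in freqs:
    samples.append(np.sin(phase + 2 * np.pi * f * step))
    phase += 2 * np.pi * f * step[-1]           # keep the phase roughly continuous
audio = np.concatenate(samples)
pcm = (audio / np.abs(audio).max() * 32767).astype(np.int16)

with wave.open("field_sonification.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)                           # 16-bit samples
    w.setframerate(rate)
    w.writeframes(pcm.tobytes())
```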

The team set up 30 loudspeakers to play the sounds in Solbjerg Square in Copenhagen until October 30, with each speaker representing a different spot on Earth to demonstrate how the planet’s magnetic field has fluctuated over the past 100,000 years.

“The rumbling of Earth’s magnetic field is accompanied by a representation of a geomagnetic storm that resulted from a solar flare on November 3, 2011, and indeed it sounds pretty scary,” added Nielsen.

According to the scientists, the intention isn’t to spook people, but to use the sounds as a clever way to remind us that Earth’s magnetic field exists and has a pull on our lives.

The post Here’s what the Earth’s magnetic field would sound like if we could hear it appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
To set the record straight: Nothing can break the speed of light https://www.popsci.com/science/whats-faster-than-the-speed-of-light/ Mon, 24 Oct 2022 12:35:47 +0000 https://www.popsci.com/?p=480200
Gamma-ray burst from exploding galaxy in NASA Hubble telescope rendition
Gamma-ray bursts (like the one in this illustration) from distant exploding galaxies transmit more powerful light than the visible wavelengths we see. But that doesn't mean they're faster. NASA, ESA and M. Kornmesser

Objects may not be as fast as they appear with this universal illusion.

The post To set the record straight: Nothing can break the speed of light appeared first on Popular Science.

]]>
Gamma-ray burst from exploding galaxy in NASA Hubble telescope rendition
Gamma-ray bursts (like the one in this illustration) from distant exploding galaxies transmit more powerful light than the visible wavelengths we see. But that doesn't mean they're faster. NASA, ESA and M. Kornmesser

Back in 2018, astronomers examining the ruins of two collided neutron stars in Hubble Space Telescope images noticed something peculiar: a stream of bright high-energy ions, jetting away from the merger in Earth’s direction at seven times the speed of light.

That didn’t seem right, so the team recalculated with observations from a different radio telescope. In those observations, the stream was flying past at only four times the speed of light.

That still didn’t seem right. Nothing in the universe can go faster than the speed of light. As it happens, it was an illusion, a study published in the journal Nature explained earlier this month.

[Related: Have we been measuring gravity wrong this whole time?]

The phenomenon that makes particles in space appear to travel faster than light is called superluminal motion. The phrase fits the illusion: It means “more than light,” but actually describes a trick where an object moving toward you appears much faster than its actual speed. There are high-energy streams out in space that can appear to move faster than light—and today, astronomers are seeing a growing number of them.

“They look like they’re moving across the sky, crazy fast, but it’s just that they’re moving toward you and across the sky at the same time,” says Jay Anderson, an astronomer at the Space Telescope Science Institute in Maryland who has worked extensively with Hubble and helped author the Nature paper.

To get their jet’s true speed, Anderson and his collaborators compared Hubble and radio telescope observations. Ultimately, they estimated that the jet was zooming directly at Earth at around 99.95 percent the speed of light. That’s extremely close to the speed of light, but still below it.

Indeed, to our knowledge so far, nothing on or off our planet can travel faster than the speed of light. This has been proven time and time again through the laws of special relativity, put on paper by Albert Einstein more than a century ago. Light, which moves at about 670 million miles per hour, is the ultimate cosmic speed limit. Not only that, special relativity holds that the speed of light is a constant no matter who or what is observing it.
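
That mileage figure checks out directly from the defined metric value of the speed of light:

```python
# Quick unit check on the "about 670 million miles per hour" figure.
C = 299_792_458            # speed of light in m/s (exact by definition)
METERS_PER_MILE = 1609.344

mph = C * 3600 / METERS_PER_MILE
print(f"{mph:,.0f} mph")   # -> 670,616,629 mph
```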

But special relativity doesn’t stop things from traveling super close to the speed of light (cosmic rays and the particles from solar flares are some examples). That’s where superluminal motion kicks in. As something moves toward you, the distance that its light needs to cover to reach you decreases. In everyday life, that’s not really a factor: Even seemingly speedy things, like a plane moving through the sky above you, don’t move anywhere near the speed of light.

[Related: Check out the latest version of Boom’s supersonic plane]

But when something is moving at hundreds of millions of miles per hour in the right direction, the distance between the object and the perceiver (whether it be a person or a camera lens) drops very quickly. This gives the illusion that the object is approaching more rapidly than it actually is. Neither our eyes nor our telescopes can tell the difference, which means astronomers have to calculate an object’s actual speed from data collected in images.
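
Special relativity gives a compact formula for the illusion: a jet moving at true speed beta (as a fraction of the speed of light) at angle theta to our line of sight shows an apparent sideways speed of beta * sin(theta) / (1 - beta * cos(theta)). The sketch below plugs in the roughly 0.9995c speed quoted above as an illustration:

```python
import math

# Standard apparent-speed formula for a relativistic jet:
#   beta_apparent = beta * sin(theta) / (1 - beta * cos(theta))
# beta is the true speed in units of c; theta is the angle to our line of sight.

def apparent_speed(beta, theta_deg):
    th = math.radians(theta_deg)
    return beta * math.sin(th) / (1 - beta * math.cos(th))

beta = 0.9995                        # the speed estimated for the neutron-star jet
for theta in (1, 5, 15, 45, 90):
    print(f"theta = {theta:2d} deg -> {apparent_speed(beta, theta):6.1f} c")
# Small viewing angles yield apparent speeds many times c.
```

At a viewing angle near 15 degrees, the apparent speed comes out around seven times the speed of light, in line with the Hubble measurement that opened this story.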

The researchers behind the new Nature paper weren’t the first to grapple with superluminal motion. In fact, they’re more than a century late. In 1901, astronomers scanning the night sky caught a glimpse of a nova in the direction of the constellation Perseus. It was a white dwarf that siphoned the outer layers of a nearby giant star, briefly lighting up bright enough to see with the naked eye. Astronomers caught a bubble inflating from the nova at breakneck speed. But because there was no theory of special relativity at the time, the event quickly faded from memory.

The phenomenon gained buzz again by the 1970s and 1980s. By then, astronomers were finding all sorts of odd high-energy objects in distant corners of the universe: quasars and active galaxies, all of which could shoot out jets of material. Most of the time, these objects were powered by black holes that spewed out high-energy jets moving at almost the speed of light. Depending on the mass and strength of the black hole they come from, the jets could stretch for thousands, hundreds of thousands, or even millions of light-years.

As distant objects close in, neither our eyes nor our telescopes can tell the difference, giving us the illusion that they’re moving faster and faster.

Around the same time, scientists studying radio waves began seeing enough faux-speeders to raise eyebrows. They even found a jet from one distant galaxy that appeared to be racing at nearly 10 times the speed of light. The observations caused a stir among astronomers, though by then the underlying mechanism was well understood.

In the decades since, observations of superluminal motion have added up. Astronomers are seeing an ever-increasing number of jets through telescopes, particularly ones that are floating through space like Hubble or the James Webb Space Telescope. When light doesn’t have to pass through Earth’s atmosphere, the resulting images can be much higher in resolution. This helps teams find more jets that are farther away (such as from ancient, distant galaxies), and it helps them view closer jets in more detail. “Things stand out much better in Hubble images than they do in ground-based images,” says Anderson.

[Related: This image wiggles when you scroll—or does it?]

Take, for instance, the distant galaxy M87, whose gargantuan central black hole launched a jet that apparently clocked in at between 4 and 6 times the speed of light. By the 1990s, Hubble could actually peer into the stream of energy and reveal that parts of it were traveling at different speeds. “You could actually see features in the jet moving, and you could measure the locations of those features,” Anderson explains.

There are good reasons for astronomers to be interested in such breakneck jets, especially now. In the case of the smashing neutron stars from the Nature study, the crash caused a gamma-ray burst, a type of high-energy explosion that remains poorly understood. The event also stirred up a storm of gravitational waves, ripples in space-time that researchers can now pick up and observe. But until they uncover some strange new physics in the matter flying through space, the speed of light remains the hard limit.

The post To set the record straight: Nothing can break the speed of light appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Could quantum physics unlock teleportation? https://www.popsci.com/science/quantum-teleportation-history/ Thu, 20 Oct 2022 15:30:00 +0000 https://www.popsci.com/?p=479596
illustrations of a person being teleported in a 1960s style
The article 'Teleportation: Beam Me Up, Bob' appeared in the November 1993 issue of Popular Science. Popular Science

Physicists are making leaps in quantum teleportation, but it's still a long ways from 'Star Trek.'

The post Could quantum physics unlock teleportation? appeared first on Popular Science.

]]>
illustrations of a person being teleported in a 1960s style
The article 'Teleportation: Beam Me Up, Bob' appeared in the November 1993 issue of Popular Science. Popular Science

From cities in the sky to robot butlers, futuristic visions fill the history of PopSci. In the Are we there yet? column we check in on progress towards our most ambitious promises. Read the series and explore all our 150th anniversary coverage here.

Jetpacks, flying cars, hoverboards, bullet trains—inventors have dreamt up all kinds of creative ways, from science fiction to science fact, to get from point A to point B. But when it comes to transportation nirvana, nothing beats teleportation—vehicle-free, instantaneous travel. If beam-me-up-Scotty technology has gotten less attention than other transportation tropes—Popular Science ran short explainers in November 1993 and September 2004—it’s not because the idea isn’t appealing. Regrettably, over the decades there just hasn’t been much progress in teleportation science to report. However, since the 2010s, new discoveries on the subatomic level are shaking up the playing field: specifically, quantum teleportation.

Just this month, the 2022 Nobel Prize in Physics was awarded to three scientists “for experiments with entangled photons,” according to the Royal Swedish Academy of Sciences, which selects the winners. The recipients’ work demonstrated that teleportation is possible—well, at least between photons (and with some serious caveats on what could be teleported). The physicists—Alain Aspect, John Clauser, and Anton Zeilinger—had independent breakthroughs over the last several decades. The result of their work not only demonstrated quantum entanglement in action but also showed how the arcane property could be a channel to teleport quantum information from one photon to another. While their findings are not anywhere close to transforming airports and train stations into Star Trek-style transporters, they have been making their way into promising applications, including quantum computing, quantum networks, and quantum encryption.

“Teleportation is a very inspiring word,” says Maria Spiropulu, the Shang-Yi Ch’en professor of physics at the California Institute of Technology, and director of the INQNET quantum network program. “It evokes our senses and suggests that a weird phenomenon is taking place. But nothing weird is taking place in quantum teleportation.”

When quantum mechanics was being hashed out in the early 20th century between physicists like Max Planck, Albert Einstein, Niels Bohr, and Erwin Schrödinger, it was becoming clear that at the subatomic particle level, nature appeared to have its own hidden communication channel, called quantum entanglement. Einstein described the phenomenon scientifically in a paper published in 1935, but famously called it “spooky action at a distance” because it appeared to defy the normal rules of physics. At the time, it seemed as fantastical as teleportation, a phrase first coined by writer Charles Fort just four years earlier to describe unexplainable spectacles like UFOs and poltergeists.

“Fifty years ago, when scientists started doing [quantum] experiments,” says Spiropulu, “it was still considered quite esoteric.” As if in tribute to those scientists, Spiropulu has  a print honoring physicist Richard Feynman in her office. Feynman won the Nobel Prize in 1965 for his Feynman diagrams, a graphical interpretation of quantum mechanics.

Spiropulu equates quantum entanglement with shared memories. “Once you marry, it doesn’t matter how many divorces you may have,” she explains. Because you’ve made memories together, “you are connected forever.” At a subatomic level, the “shared memories” between particles enable instantaneous transfer of information about quantum states—like atomic spin and photon polarization—between distant particles. These bits of information are called quantum bits, or qubits. Classical digital bits are binary, meaning that they can only hold the value of 1 or 0, but qubits can represent any blend of 0 and 1 in a superposition, meaning there’s a certain probability of being 0 and a certain probability of being 1 at the same time. Qubits’ ability to take on an infinite number of potential values simultaneously allows them to process information much faster—and that’s just what physicists are looking for in a system that leverages quantum teleportation.
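
For readers who like notation, the standard textbook way to write that superposition is below; this is generic quantum mechanics, not anything specific to Spiropulu’s program:

```latex
% A qubit is a superposition of the basis states |0> and |1>:
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle ,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1 ,
\]
% where a measurement yields 0 with probability |alpha|^2 and
% 1 with probability |beta|^2.
```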

[Related: Quantum teleportation is real, but it’s not what you think]

But for qubits to work as information processors, they need to share information the way classical computer chips share information. Enter entanglement and teleportation. By entangling subatomic particles, like photons or electrons—the qubits—and then separating them, operations can be performed on one that generates an instantaneous response in its entangled twin. 

The farthest distance to date that qubits have been separated was set by Chinese scientists, who used quantum entanglement to send information from Tibet to a satellite in orbit 870 miles away. On terra firma, the record is just tens of miles, traveling through fiber optic connections and through air (line of sight lasers).

Qubits’ strange behavior—acting like they’re still together no matter how far apart they’ve been separated—continues to puzzle and amaze physicists. “It does appear magical,” Spiropulu admits. “The effect appears very, ‘wow!’ But once you break it down, then it’s engineering.” And in just the past five years, great strides have been made in quantum engineering to apply the mysterious but predictable characteristics of qubits. Besides quantum computing advances made by tech giants like Google, IBM, and Microsoft, Spiropulu has been spearheading a government- and privately funded program to build out a quantum internet that leverages quantum teleportation.

With some guidance from Spiropulu’s postdoctoral researchers at Caltech, Venkata R. (Raju) Valivarthi and Neil Sinclair, this is how state-of-the-art quantum teleportation would work (you might want to strap yourself in):

Step 1: Entangle

[Diagram: a photon passes through a crystal and is split into two entangled photons, labeled one and two]

Using a laser, a stream of photons shoots through a special optical crystal that can split photons into pairs. The pair of photons are now entangled, meaning they share information. When one changes, the other will, too.

Step 2: Open a quantum teleportation channel

[Diagram: photons one and two, now in two different locations, connected by a dotted line representing the quantum channel]

Then, one of the two photons is sent over a fiber optic cable (or another medium capable of transmitting light, such as air or space) to a distant location. This opens a quantum channel for teleportation. The distant photon (labeled photon one above) becomes the receiver, while the photon that remains behind (labeled photon two) is the transmitter. This channel does not necessarily indicate the direction of information flow, as the photons could be distributed in roundabout ways.

Step 3: Prepare a message for teleportation

[Diagram: a message, represented by dots and lines to show encoding, is loaded onto a third photon]

A third photon is added to the mix, and is encoded with the information to be teleported. This third photon is the message carrier. The types of information transmitted could be encoded into what’s called the photon’s properties, or state, such as its position, polarization, and momentum. (This is where qubits come in, if you think of the encoded message in terms of 0s, 1s, and their superpositions.)

Step 4: Teleport the encoded message

[Diagram: in step four, the photons change states]

One of the curious properties of quantum physics is that a particle’s state, or properties, such as its spin or position, cannot be known until it is measured. You can think of it like dice. A single die can hold up to six values, but its value isn’t known until it’s rolled. Measuring a particle is like rolling the dice: it locks in a specific value. In teleportation, once the third photon is encoded, a joint measurement is taken of the second and third photons’ properties, which means their states are measured at the same time and their values are locked in (like viewing the value of a pair of dice). The act of measuring changes the state of the second photon to match the state of the third photon. As soon as the second photon changes, the first photon, on the receiving end of the quantum channel, snaps into a matching state.

Now the information lies with photon one—the receiver. However, even though the information has been teleported to the distant location, it’s still encoded, which means that, like an unrolled die, it’s indeterminate until it can be decoded, or measured. The measurement of photon one needs to match the joint measurement taken on photons two and three. So the outcome of the joint measurement taken on photons two and three is recorded and sent to photon one’s location so it can be repeated to unlock the information. At this point, photons two and three are gone because the act of measuring photons destroys them. Photons are absorbed by whatever is used to measure them, like our eyes.

Step 5: Complete the teleportation

[Diagram: photons three and two are whited out, meaning they are gone, while photon one holds the decoded message]

To decode the state of photon one and complete the teleportation, photon one must be manipulated based on the outcome of the joint measurement, also called rotating it, which is like rolling the dice the same way they were rolled before for photons two and three. This decodes the message—similar to how binary 1s and 0s are translated into text or numeric values. The teleportation may seem instantaneous on the surface, but because the decoding instructions from the joint measurement can only be sent using light (in this scenario over a fiber optic cable), the photons only transfer the information at the speed of light. That’s important because teleportation would otherwise violate Einstein’s relativity principle, which states that nothing travels faster than the speed of light—if it did, this would lead to all sorts of bizarre implications and possibly upend physics. Now, the encoded information in photon three (the messenger) has been teleported from photon two’s position (transmitter) to photon one’s position (receiver) and decoded.

Whew! Quantum teleportation complete. 
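
The five steps can also be checked numerically. Below is a minimal state-vector simulation of the textbook protocol in Python with NumPy, using the article’s numbering (photon three carries the message, photon two transmits, photon one receives). It treats the photons as generic simulated qubits, so it illustrates the math rather than any real optical hardware:

```python
import numpy as np

# Minimal state-vector sketch of the textbook teleportation protocol.
# Register order: (three: message, two: transmitter, one: receiver).
rng = np.random.default_rng()
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)             # bit-flip rotation
Z = np.array([[1, 0], [0, -1]], dtype=complex)            # phase-flip rotation
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def tensor(*ops):
    """Combine single- and two-qubit operators into one 8x8 operator."""
    out = np.eye(1, dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

message = np.array([0.6, 0.8j])     # step 3: the state to be teleported
state = np.zeros(8, dtype=complex)
state[0], state[4] = message        # register starts as |message> (x) |0> (x) |0>

# Steps 1-2: entangle the transmitter with the receiver (the shared channel).
state = tensor(I, H, I) @ state
state = tensor(I, CNOT) @ state     # control: photon two, target: photon one

# Step 4: joint (Bell-basis) measurement of the message and the transmitter.
state = tensor(CNOT, I) @ state     # control: photon three, target: photon two
state = tensor(H, I, I) @ state
probs = np.array([abs(state[2 * k]) ** 2 + abs(state[2 * k + 1]) ** 2
                  for k in range(4)])
k = rng.choice(4, p=probs / probs.sum())
m3, m2 = k >> 1, k & 1              # the two classical bits to send along

receiver = state[2 * k: 2 * k + 2]  # receiver's amplitudes after the measurement
receiver = receiver / np.linalg.norm(receiver)

# Step 5: decode by rotating the receiver according to the classical bits.
if m2:
    receiver = X @ receiver
if m3:
    receiver = Z @ receiver

print(np.allclose(receiver, message))   # True: the message state arrived
```

Note where the speed limit enters: the two classical bits from the joint measurement have to travel by ordinary means before the final rotations can be applied, just as described in step five.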

Since we transmit digital bits today using light, it might seem like quantum teleportation and quantum networks offer no inherent advantage. But the difference is significant. Qubits can convey much more information than bits. Plus, quantum networks are more secure, since attempts to interfere with quantum entanglement would destroy the open quantum channel.

Researchers have discovered many different ways to entangle, transmit, and measure subatomic information. Plus, they’re upgrading from teleporting information about photons, to teleporting information about larger-sized particles like electrons, and even atoms.

[Related: Warp speed space travel just got a tiny bit more realistic]

But it’s still just information being transmitted, not matter—the stuff that humans are made of. While the ultimate dream may be human teleportation, it actually might be a good thing we’re not there yet. 

The Star Trek television and film franchise not only helped popularize teleportation but also glamorized it with a glittery dissolve effect and catchy transporter-tone. The Fly, on the other hand, a movie about teleportation gone wrong, painted a much darker, but possibly scientifically truer picture of teleportation. That’s because teleportation is really an act of reincarnation. Teleportation of living matter is risky business: It would require scanning the traveler’s information at the point of departure, transmitting that information to the desired coordinates, and deconstructing them at the point of departure while simultaneously reconstructing the traveler at the point of arrival—we wouldn’t want errant copies of ourselves on the loose. Nor would we want to arrive as a lifeless copy of ourselves. We would have to arrive with all our beating, breathing, blinking systems intact in order for the process to be a success. Teleporting living beings, at its core, is a matter of life and death.

Or not.

Formidable minds, such as Stephen Hawking, have proposed that the information, or vector state, that is teleported over quantum entanglement channels does not have to be confined to subatomic particle properties. In fact, entire black holes’ worth of trapped information could be teleported, according to this theory. It gets weird, but by entangling two black holes and connecting them with a wormhole (a space-time shortcut), information that disappears into one black hole might emerge from the other as a hologram. Under this reasoning, the vector states of molecules, humans, and even entire planets could theoretically be teleported as holograms.

Kip Thorne, a Caltech physicist who won the 2017 Nobel Prize in Physics for gravitational wave detection, may have best explained the possibilities of teleportation and time travel as far back as 1988: “One can imagine an advanced civilization pulling a wormhole out of the quantum foam and enlarging it to classical size. This might be analyzed by techniques now being developed for computation of spontaneous wormhole production by quantum tunneling.”

For now, Spiropulu remains focused on the immediate promise of quantum teleportation. But it won’t look anything like Star Trek. “‘Beam me up, Scotty?’ No such things,” she says. “But yes, a lot of progress. And it’s transformative.”

The post Could quantum physics unlock teleportation? appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Refining the clock’s second takes time—and lasers https://www.popsci.com/technology/measure-second-optical-clocks-laser/ Wed, 19 Oct 2022 19:30:00 +0000 https://www.popsci.com/?p=479419
Gold pocket watch and hourglass
Metrologists hope optical clocks will provide a redefined 'second' by 2030. Deposit Photos

A Chinese research team set a new distance record for syncing two timepieces thanks to some very precise lasers.

The post Refining the clock’s second takes time—and lasers appeared first on Popular Science.

]]>
Gold pocket watch and hourglass
Metrologists hope optical clocks will provide a redefined 'second' by 2030. Deposit Photos

As technology progresses, so does our ability to more precisely delineate the passage of time. Since their introduction in 1967, the standard bearers of accurate timekeeping have been atomic clocks, whose caesium-133 atoms’ oscillations serve as a reference point for a single “second.” But as a new paper published in Nature earlier this month reminds us, atomic clocks are so literally and metaphorically yesterday’s news.

According to the publication’s writeup yesterday, a group of researchers at the University of Science and Technology of China in Hefei recently synced optical clocks located 113 kilometers (about 70.2 miles) apart using precise pulses of laser light, roughly seven times farther than the previous record. The milestone represents a significant step forward for metrologists, scientists who study measurement, in their push to “redefine” the second by the end of the decade. Once successful, the new standard could be an estimated 100 times more accurate than one based on existing atomic clock readings.

[Related: What the heck is a time crystal?]

Unlike atomic clocks’ caesium microwaves, optical clocks rely on the much higher-frequency oscillations of atoms such as strontium and ytterbium to measure time. To redefine the second, metrologists need to transmit and compare clocks’ readings on different continents, and since satellites are required to accomplish this, our atmosphere’s occluding effects need to be addressed to ensure as accurate a measurement as possible. These latest advances using lasers offer a major step toward bypassing these hurdles.
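
Some quick arithmetic shows why the higher frequency matters. The current second is defined as 9,192,631,770 cycles of caesium’s microwave transition, while a strontium optical clock locks onto a transition near 429 terahertz:

```python
# Why optical clocks are finer rulers: ticks per second of the transition
# each clock is locked to. The caesium value defines the SI second; the
# strontium figure is the approximate optical clock transition frequency.

CAESIUM_HZ = 9_192_631_770   # microwave transition that defines the second
STRONTIUM_HZ = 4.29e14       # ~429 THz optical transition

print(f"strontium ticks {STRONTIUM_HZ / CAESIUM_HZ:,.0f}x faster")  # ~47,000x
```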

There are a bunch of potential optical clock benefits apart from simply getting an even more nitty-gritty second. According to Nature, researchers will be able to more accurately test the general theory of relativity, which states that time passes slower in regions with higher gravitational pull, i.e. lower altitudes. Optical clocks’ ticking “could even reveal subtle changes in gravitational fields caused by the movement of masses — for example by shifting tectonic plates,” explains the writeup.
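
The altitude effect can be made concrete with the standard weak-field approximation, under which a clock raised by a height h runs fast by a fraction of roughly g*h/c^2 (a textbook formula, not one taken from the paper):

```python
# Weak-field gravitational time dilation near Earth's surface:
# a clock raised by height h runs fast by a fraction of about g*h/c^2.

G_SURFACE = 9.81          # m/s^2
C = 299_792_458           # m/s

def fractional_shift(height_m):
    return G_SURFACE * height_m / C**2

for h in (0.01, 1.0, 1000.0):          # 1 cm, 1 m, 1 km
    print(f"h = {h:7.2f} m -> df/f ~ {fractional_shift(h):.1e}")
```

A one-centimeter lift shifts the rate at the 10^-18 level, right where the best optical clocks now operate.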

[Related: Best sunrise clocks of 2022.]

Researchers still have a lot of work ahead of them before they can confidently reboot the second. In particular, sending signals to orbiting satellites—while roughly the same distance as what the Chinese team just pulled off—must account for other factors, notably the satellites’ orbital speeds. For this, metrologists will turn to recent advances in a whole other field—quantum-communications satellites.

The post Refining the clock’s second takes time—and lasers appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Geologists are searching for when the Earth took its first breath https://www.popsci.com/science/earths-first-breath/ Fri, 14 Oct 2022 20:21:42 +0000 https://www.popsci.com/?p=478159
Volcano belching lava and gas above ocean to represent Great Oxygenation Event
At first the Earth's atmosphere was filled with helium and volcanic emissions. Then it slowly got doses of oxygen from the oceans and tectonic activity. Deposit Photos

The planet's early oxygenation events were more like rollercoaster rides than spikes.

The post Geologists are searching for when the Earth took its first breath appeared first on Popular Science.

]]>
Volcano belching lava and gas above ocean to represent Great Oxygenation Event
At first the Earth's atmosphere was filled with helium and volcanic emissions. Then it slowly got doses of oxygen from the oceans and tectonic activity. Deposit Photos

Many eons ago, the Earth was a vastly different place from our home. A great supercontinent called Rodinia was fragmenting into shards with faintly familiar names like Laurentia, Baltica, and Gondwanaland. For a time, Earth was covered, in its entirety, by sheets of ice. Life was barely clinging onto this drastically changing world.

All this came from the chapter of our planet’s history that scientists today have titled the Neoproterozoic Era, which lasted from roughly 1 billion to 540 million years ago. The long stretches of time within its stony pages were a very distant prelude to our world today: a time when the first animals stirred to life, evolving from protists in ancient seas.

Just as humans and their fellow animals do today, these ancient precursors would have needed oxygen to live. But where did it come from, and when? We still don’t have firm answers. But experts have developed a blurry snapshot of how oxygen built up in the Neoproterozoic, published today in the journal Science Advances. And that picture is a bumpy ride, filled with periods of oxygen entering the atmosphere before disappearing again, on repeat, in cycles that lasted tens of millions of years.

To look that far back, you have to throw much of what we take for granted about the modern world right out the window. “As you go further back in time, the more alien of a world Earth becomes,” says Alexander Krause, a geologist at University College London in the United Kingdom, and one of the paper’s authors.

[Related: Here’s how life on Earth might have formed out of thin air and water]

Indeed, after the Earth formed, its early atmosphere was a medley of gases burped out by volcanic eruptions. Over several billion years, they coated our planet with a stew of noxious methane, hydrogen sulfide, carbon dioxide, and water vapor.

That would change in time. We know that some 2.3 billion years ago, microorganisms called cyanobacteria created a windfall of oxygen through photosynthesis. Scientists call these first drops of the gas, creatively, the Great Oxygenation Event. But despite its grandiose name, the juncture only brought our atmosphere’s oxygen to at most a small fraction of today’s levels. 

What happened between then and now is still a murky question. Many experts think that there was another oxygenation event about 400 million years ago in the Paleozoic Era, just as animals were starting to crawl out of the ocean and onto land. Another camp, including the authors of this new research, think there was a third event, sometime around 700 million years ago in the Neoproterozoic. But no one knows for sure if oxygen gradually increased over time, or if it fluctuated wildly. 

That’s important for geologists to know, because atmospheric oxygen is involved in virtually every process on Earth’s surface. Even if early life mostly lived in the sea, the upper levels of the ocean and the atmosphere constantly exchange gases.

To learn more, Krause and his collaborators simulated the atmosphere from 1.5 billion years ago until today—and how oxygen levels in the air fluctuated over that span. Though they didn’t have the technology to take a whiff of billion-year-old air, there are a few fingerprints geologists can use to reconstruct what the ancient atmosphere might have looked like. By probing sedimentary rocks from that era, they’re able to measure the carbon and sulfur isotopes within, which rely on oxygen in the atmosphere to form.

Additionally, as the planet’s tectonic plates move, oxygen buried deep within the mantle can emerge and bubble up into the air through a process known as tectonic degassing. Using information on tectonic activity from the relevant eras, Krause and his colleagues previously estimated the history of degassing over time.

By putting those scraps of evidence together, the team came up with a projection of how oxygen levels wavered in the air until the present day. It’s not the first time scientists have tried to make such a model, but according to Krause, it’s the first time anyone has tried it over a billion-year timescale. “Others have only reconstructed it for a few tens of millions of years,” Krause says.

He and his colleagues found that atmospheric oxygen levels didn’t follow a straight line over the Earth’s history. Instead, imagine it like an oxygen roller coaster. Across 100-million-year stretches or so, oxygen levels rose to around 50 percent of modern levels, and then plummeted again. The Neoproterozoic alone saw five such peaks.

Only after 540 million years ago, in the Paleozoic Era, did the atmosphere really start to fill up. Finally, close to 350 million years ago, oxygen reached something close to current-day levels. That increase coincided with the great burst of life’s diversity known as the Cambrian Explosion. Since then, while oxygen levels have continued to fluctuate, they’ve never dropped below around 60 percent of the present.

“It’s an interesting paper,” says Maxwell Lechte, a geologist at McGill University in Montréal, who wasn’t involved in the research. “It’s probably one of the big contentious discussion points of the last 10 years or so” in the study of Earth’s distant past.

[Related: Enjoy breathing oxygen? Thank the moon.]

It’s important to note, however, that the data set used for the simulation was incomplete. “There’s still a lot of rock out there that hasn’t been looked at,” says Lechte. “As more studies come out, they can probably update the model, and it would potentially change the outputs significantly.”

The obvious question, then, is how these oxygen swings rippled through the evolution of life. After all, it’s during that third possible oxygenation event that protists began to diversify and fan out into the very first animals—multicellular creatures that required oxygen to live. Paleontologists have found an abundance of fossils that date to the very end of the era, including a contested 890-million-year-old sponge.

Those animals might have developed and thrived in periods when oxygen levels were sufficiently high, like the flourishing Cambrian Explosion. Meanwhile, drops in oxygen levels might have coincided with great die-offs. 

Astronomers might take note of this work, too. Any oxygenation answers have serious implications for what we might find on distant Earth-like exoplanets. If these geologists are correct, then it’s evidence that Earth’s history is not linear, but rather bumpy, twisted, and sometimes violent. “These questions that this paper deals with represent a fundamental gap in our understanding of how our planet actually operates,” says Lechte.

The post Geologists are searching for when the Earth took its first breath appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
These islanders live and thrive alongside lava https://www.popsci.com/science/living-alongside-volcanoes-cris-toala-olivares/ Thu, 06 Oct 2022 11:00:00 +0000 https://www.popsci.com/?p=475478
People taking photos of glowing lava on Cape Verde
The 2014 eruption of the Fogo volcano ate up 75 percent of the surrounding villages and 25 percent of the farmland. Still, residents returned. Lannoo Publishers/Cris Toala Olivares

Photographer Cris Toala Olivares visits communities who've built a relationship with one of nature's most terrifying forces.

The post These islanders live and thrive alongside lava appeared first on Popular Science.

]]>
People taking photos of glowing lava on Cape Verde
The 2014 eruption of the Fogo volcano ate up 75 percent of the surrounding villages and 25 percent of the farmland. Still, residents returned. Lannoo Publishers/Cris Toala Olivares

Excerpt and photographs from Living with Volcanoes by Cris Toala Olivares. Copyright © 2022 by Cris Toala Olivares. Reprinted by permission of Lannoo Publishers.

Visiting the island of Fogo in Cape Verde in 2014 was a rare chance to meet people who actually live in the crater of a volcano, side by side with lava. I wanted to see this with my own eyes, despite the difficulties I had reaching the island, including a six-hour ride in a cargo boat that left many fellow travellers seasick.

While I was there, the Cha das Caldeiras community in the crater was being evacuated by authorities as the lava flow from an eruption engulfed their land and houses. Despite the dangers, the people were passionately trying to return to their homes due to the connection they feel with the place they are from. They were attempting to force their way back in, saying: “I was born with lava, and I will die with lava.”

Living With Volcanoes book cover
Courtesy of Lannoo Publishers

I met and travelled with volcano guide Manuel during my visit. Like many residents of the crater, he was a confident character and strongly attached to his way of life. Most people would move away from the lava, but he was desperate to remain and help his friends and neighbors. While I was with him, I felt safe: he understood the lava tracks, and he knew where to walk and what to avoid.

When I was in the crater, I experienced how it is to live in this environment: it is like being in the volcano’s womb. You feel heat all around you, as if you are in an oven, and there is a comforting circulation of warm and dry winds. This was also the first time I saw flowing rivers of bright red lava. The people here have everything they need, and they know how to work with the nature surrounding them. They also produce food for all of Cape Verde on the volcanic soils of their farms, including beans, fruit, and wine. Everyone lives close by each other, many in the same house. Due to the risks of the lava, they know the importance of cooperation and solidarity.

I was struck by the loyalty and the bond these people have to their traditions and the life they know. In a world where many others like to move around and lifestyles change so fast, it was inspiring to see people wanting to hold onto their way of doing things no matter what.

Lava flowing from Fogo volcano through villages at night
River of lava flowing between the houses from the upper village of Portela towards the lower village of Bangaeira in Cha das Caldeiras, Ilha do Fogo, Cape Verde. The main cone last erupted in 1675, causing mass emigration from the island. In 1847 an eruption followed by earthquakes killed several people. The third eruption came in 1951. Forty-four years later, on the night of April 2, 1995, another vent erupted, and residents of Cha das Caldeiras were evacuated. Picture taken near the vents at Pico do Fogo on December 8, 2014. Lannoo Publishers/Cris Toala Olivares
Black lava and ash covering an island village at sunrise
Sunrise overlooking the massive destruction the volcano has caused 16 days after the first eruption, burying the second village of Bangaeira in lava. Lannoo Publishers/Cris Toala Olivares
Two Fogo family members moving their belongings as a volcanic cone erupts in the distance
Overall, the island has a population of about 48,000 people. The name Fogo itself means “fire” in Portuguese. Lannoo Publishers/Cris Toala Olivares
Tree with red leaves growing in ash of Fogo volcano
Like the people, many of the plants on Fogo find benefits in the volcanic ash. Trees, shrubs, and grape vines all grow beautifully in the aftermath of eruptions. Lannoo Publishers/Cris Toala Olivares

Buy Living With Volcanoes by Cris Toala Olivares here.

The post These islanders live and thrive alongside lava appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Quantum entanglement theorists win Nobel Prize for loophole-busting experiments https://www.popsci.com/science/nobel-prize-physics-2022-winners/ Tue, 04 Oct 2022 18:00:00 +0000 https://www.popsci.com/?p=474805
Nobel Prize Physics 2022 winners Alain Aspect, John F. Clauser and Anton Zeilinger in gold and black illustration
(From left) Alain Aspect, John F. Clauser, and Anton Zeilinger. Ill. Niklas Elmehed © Nobel Prize Outreach

A concept Einstein once called 'spooky action at a distance' earns a major scientific distinction.

The post Quantum entanglement theorists win Nobel Prize for loophole-busting experiments appeared first on Popular Science.

]]>
Nobel Prize Physics 2022 winners Alain Aspect, John F. Clauser and Anton Zeilinger in gold and black illustration
(From left) Alain Aspect, John F. Clauser, and Anton Zeilinger. Ill. Niklas Elmehed © Nobel Prize Outreach

After awarding the physics prize to climate modelers last year, the Nobel Committee recognized another trio of physicists this year. Earlier today, it announced John F. Clauser, Alain Aspect, and Anton Zeilinger as the winners of the 2022 Nobel Prize in Physics for their independent contributions to understanding quantum entanglement.

Quantum mechanics represents a relatively new arena of physics focused on the mysterious atomic and subatomic properties of particles. Much of the research dwells on individual particles and their reactions; however, quantum theory also holds that two or more particles, say photons, can share a single joint state while keeping their distance from each other. If so, measuring the first particle immediately tells an experimenter what the second, third, or fourth ones will look like.

[Related: Nobel Prize in Medicine awarded to scientist who sequenced the Neanderthal genome.]

The phenomenon, called quantum entanglement, could hold answers to how energy flows through the universe and how information can travel over isolated networks. But some detractors wondered whether the similarities in states were simply coincidental, or born of other hidden physical variables. Albert Einstein himself was skeptical of the explanation, calling it “spooky action at a distance” and a paradox in a letter to a colleague.

That’s where Clauser, Aspect, and Zeilinger come in. All three designed experiments that test quantum entanglement through so-called Bell inequalities and close potential loopholes in those tests. Clauser, an independent research physicist based in California, tested the polarization of photons emitted by lit-up calcium atoms with the help of a graduate student in 1972. His measurements matched those predicted by earlier physics formulas, but he worried that the way he produced the particles still left room for other correlations.

In response, French physicist Alain Aspect recreated the experiment in a way that detected the photons and their shared states much better. His results, the Nobel Committee stated, “closed an important loophole and provided a very clear result: quantum mechanics is correct and there are no hidden variables.”
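
To make the idea concrete, here is a minimal sketch of the CHSH form of a Bell test, assuming ideal polarization-entangled photon pairs (the cosine correlation and the angles below are textbook values, not details drawn from the laureates’ experiments). Any “hidden variable” story caps the quantity S at 2; quantum mechanics predicts roughly 2.83.

```python
import numpy as np

def E(a_deg, b_deg):
    """Ideal quantum correlation for polarization-entangled photons,
    measured with polarizers set at angles a and b (in degrees)."""
    return np.cos(2 * np.radians(a_deg - b_deg))

# Polarizer settings that maximize the quantum violation
a, a_alt = 0.0, 45.0
b, b_alt = 22.5, 67.5

S = E(a, b) - E(a, b_alt) + E(a_alt, b) + E(a_alt, b_alt)
print(f"S = {S:.3f}")  # ~2.828, comfortably above the classical limit of 2
```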

[Related: NASA is launching a new quantum entanglement experiment in space.]

While Clauser and Aspect looked at entanglement in pure particle physics, Zeilinger expanded on it with the emerging fields of computation and encryption. The professor emeritus at the University of Vienna fired lasers at crystals to create mirroring photons, and measured the pairs in various configurations to compare their properties. He also tied in data from cosmic radiation to ensure that signals from outer space weren’t influencing the particles. His work set the stage for technology’s adoption of quantum mechanics, and has now been applied to transistors, satellites, optical fibers, and IBM computers.

The Institute of Science and Technology Austria issued a statement this morning congratulating Zeilinger, a former vice president in the group, and his fellow Nobel Prize recipients for their advancements. “It was the extraordinary work of Aspect, Clauser, and Zeilinger that translated the revolutionary theory of quantum physics into experiments,” they wrote. “Their demonstrations uncovered profound and mind-boggling properties of our natural world. Violations of the so-called Bell inequality continue to challenge our most profound intuitions about reality and causality. By exploring quantum states experimentally, driven only by curiosity, a range of new phenomena was discovered: quantum teleportation, many-particle and higher-order entanglements, and the technological prospects for quantum cryptography and quantum computation.”

Here’s how life on Earth might have formed out of thin air and water https://www.popsci.com/science/water-peptide-life-earth/ Tue, 04 Oct 2022 15:30:00 +0000 https://www.popsci.com/?p=474634
Water droplets rising from Iceland's Skogafoss waterfall.
This is the first display of simple amino acids forming peptides in droplets of water. Deposit Photos

When droplets of water react with the air, life-starting things may happen.

How life on Earth arose remains a deep existential and scientific mystery. It’s long been theorized that our planet’s plentiful oceans could hold the key to the secret. A new study from scientists at Purdue University could advance that idea one step further.

The paper, published on October 3 in the Proceedings of the National Academy of Sciences (PNAS), looks at peptides: strings of amino acids that are tiny but important building blocks of proteins and life itself. The authors found that peptides can spontaneously form in droplets of water during the quick reactions that happen when water meets the atmosphere, such as when a waterfall crashes down onto a rock and the spray is lifted into the air. It’s possible that this action happened when the Earth was a lifeless, volcanic, watery, molten-rock-filled planet about four billion years ago, when life first began.

“This is essentially the chemistry behind the origin of life,” Graham Cooks, an author of the study and professor of analytical chemistry at Purdue, said in a press release. “This is the first demonstration that primordial molecules, simple amino acids, spontaneously form peptides, the building blocks of life, in droplets of pure water. This is a dramatic discovery.”

[Related: A primer on the primal origins of humans on Earth.]

In the study, the authors write that this discovery provides “a plausible route for the formation of the first biopolymers,” or the complex structures produced by living things. Scientists have been chipping away at the goal of understanding how this works for decades, since decoding the secret of how (and even why) life arose on Earth can help scientists better search for life on other planets, or even moons in our galaxy and beyond.

Understanding this water-based chemistry circles back to the proteins that created life on Earth itself. Billions of years ago, the raw amino acids that built life are believed to have been delivered to Earth by meteorites. These amino acids reacted and clung together to form peptides, the building blocks of proteins and eventually life itself. However, a water molecule must be lost when amino acids cling together for a peptide to form. That’s not easy to do on a planet that is mostly covered in water. Basically, for life to form, it needs water, but also the loss of some water.
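
Written out for the simplest amino acid, glycine (our illustrative example, not a reaction singled out in the paper), the condensation step and the water molecule it must shed look like this:

glycine + glycine → glycylglycine (a dipeptide) + H₂O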

Cooks explained this “water paradox” to VICE. “The water paradox is the contradiction between (i) the very considerable evidence that the chemical reactions leading to life occurred in the prebiotic ocean and (ii) the thermodynamic constraint against exactly these (water loss) reactions occurring in water. Proteins are formed from amino acids by loss of water” and “loss of water in water will not occur because the process will be reversed by the water (thermodynamically forbidden).”

The new study has taken a rare glimpse into the Earth’s early years, when nonliving compounds suddenly combined to form living things. This process of nonliving things giving rise to life is called abiogenesis, and it is still not completely clear how it works. Since peptides form the basis of proteins (and other biomolecules that can self-replicate), the creation of peptides is a crucial step in abiogenesis.

[Related: Comets Could Have Kickstarted Life On Earth And Other Worlds.]

Cooks and his team demonstrated that peptides can readily form in the kinds of chemical environments that were present on Earth billions of years ago. A key aspect, however, is the size of the tiny droplets flying through the air or sliding down rocks, interacting with the air and forming quick chemical reactions. “The rates of reactions in droplets are anywhere from a hundred to a million times faster than the same chemicals reacting in bulk solution,” said Cooks.
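
One back-of-the-envelope way to see why tiny droplets are so reactive (our toy arithmetic, not the study’s model): a sphere’s surface-to-volume ratio scales as 3/r, so shrinking a body of water from centimeters to micrometers multiplies the air-water interface available per unit of liquid a thousandfold.

```python
# Toy arithmetic: surface area per unit volume of a sphere is 3/r,
# so smaller droplets expose far more air-water interface.
def surface_to_volume(radius_m):
    return 3.0 / radius_m  # (4*pi*r**2) / ((4/3)*pi*r**3)

droplet = surface_to_volume(10e-6)  # a 10-micrometer spray droplet
pool = surface_to_volume(1e-2)      # a 1-centimeter blob of bulk water
print(f"{droplet / pool:.0f}x more interface per volume")  # 1000x
```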

These speedy chemical reactions do not require a catalyst, which made the evolution of life on Earth possible. The team used “droplet fusion” experiments, which simulate how water droplets collide in the air, to reconstruct the possible formation of peptides. Understanding the chemical synthesis at play when amino acids build themselves into proteins could help synthetic chemists speed up the reactions critical to creating new drugs and therapeutic treatments for diseases.

“If you walk through an academic campus at night, the buildings with the lights on are where synthetic chemists are working,” Cooks said. “Their experiments are so slow that they run for days or weeks at a time. This isn’t necessary, and using droplet chemistry, we have built an apparatus, which is being used at Purdue now, to speed up the synthesis of novel chemicals and potential new drugs.” 

Europe’s energy crisis could shut down the Large Hadron Collider https://www.popsci.com/science/europe-gas-crisis-cern/ Mon, 26 Sep 2022 21:00:00 +0000 https://www.popsci.com/?p=472868
Large Hadron Collider experiment view with CERN staff in a hard hat standing near during the Europe gas crisis
Large Hadron Collider experiments like LHCb, the “beauty” experiment, might be put on ice for a few months, or even a year. CERN

In light of Russia's war in Ukraine, CERN officials are considering the energy costs of particle physics experiments.

Europe is now suffering an energy crisis. The fallout from the invasion of Ukraine, with the Russian government choking gas supplies, has pushed the continent’s heating and electricity prices sharply higher.

In the heart of Europe, along the French-Swiss border, the particle physics laboratory at CERN is facing the same plight. This month, it’s been reported that CERN officials are drawing up plans to limit or even shut down the recently rebooted Large Hadron Collider (LHC).

If the LHC, the largest and most expensive collider in the world, does shut down for a short stint, it wouldn’t be out of the ordinary for particle accelerator research. But if it has to go into hibernation for a longer period, complications might arise.

[Related: The green revolution is coming for power-hungry particle accelerators]

Some say that CERN uses as much electricity as a small city, and there’s some truth in that. By the group’s own admission, its facility consumes in a year about one-third as much electricity as nearby Geneva, Switzerland. The exact numbers vary from month to month and year to year, but the lab’s particle accelerators can account for around 90 percent of CERN’s electric bill.

For an observer on the ground, it’s very easy to wonder why so much energy is going into arcane physics experiments involving subatomic particles, plasma, and dark matter. “Given the current context and as part of its social responsibility, CERN is drawing up a plan to reduce its energy consumption this winter,” Maïlys Nicolet, a spokesperson for the group, wrote in a press statement.

That said, CERN doesn’t have the same utility concerns as the everyday European, as its energy strategy is already somewhat sustainable. The facility draws its power from the French grid, which sources more than two-thirds of its juice from nuclear fission, the highest share of any country in the world. Not only does that drastically reduce the LHC’s carbon footprint, it also makes it far less reliant on imported fossil fuels.

But the French grid has another quirk: Unlike much of Europe, which relies on gas for heating, French homes often use electric heaters. As a result, local power bills can double during the cold months. Right now, 32 of the country’s 56 nuclear reactors are down for maintenance or repairs. The French government plans to bolster its grid against the energy crisis by switching most of them back on by winter.

[Related: Can Europe swap Russian energy with nuclear power?]

But if that doesn’t happen, CERN might be facing a power supply shortage. Even if the research giant stretched its budget to pay for power, there just might not be enough of it, depending on how France’s reactors fare. “For this autumn, it is not a price issue, it’s an availability issue,” Serge Claudet, chair of CERN’s energy management panel, told Science.

Hibernation isn’t exactly out of the ordinary for LHC, though. In the past, CERN has shut down the particle accelerator for maintenance during the winter. This year is no exception: The collider’s stewards plan to mothball it from November until March. If Europe’s energy crisis continues into 2023, the LHC pause could last well into the warmer months, if not longer.

CERN managers are exploring their options, according to the facility’s spokesperson. The French government might order the LHC not to run at times of peak electric demand, such as mornings or evenings. Alternatively, to keep its flagship running, CERN might try to shut off some of the smaller accelerators that share the site.

But not all particle physicists are on board with prioritizing energy for a single machine. “I don’t think you could justify running it but switching off everything else,” says Kristin Lohwasser, a particle physicist at the University of Sheffield in the United Kingdom and a collaborator on ATLAS, one of the LHC’s experiments.

On the other hand, the LHC has more to lose by going dark for an indefinite amount of time. If it has to power down for a year or more, the collider’s equipment, such as the detectors used to watch collisions at very small scales, might start to degrade. “This is why no one would blankly advertise to switch off and just wait five years,” says Lohwasser. It also takes a fair amount of energy to keep the LHC in a dormant state.

Even if CERN’s accelerators aren’t running, the particle physicists around the world sifting through the data will still have plenty to work on. Experiments in the field produce tons of results: positions, velocities, and countless mysterious bits of matter from thousands of collisions. Experts can still find subatomic artifacts hidden in the measurements as much as a decade after they’re logged. The flow of physics studies almost certainly won’t cease on account of an energy crisis.

For now, the decision to power the LHC’s third run of experiments remains up in the air. This week CERN officials will present a plan to the agency’s governing authority on how to proceed. That solution will, in turn, be presented to the French and Swiss governments for consultation. Only then will the final decision be made public.

“So far, I do not necessarily see a big concern from [physicists] about these plans,” says Lohwasser. If CERN must take a back seat to larger concerns, then many in the scientific community will accept that.

After the big bang, light and electricity shaped the early universe https://www.popsci.com/science/big-bang-galaxy-formation-james-webb-space-telescope/ Tue, 20 Sep 2022 16:18:00 +0000 https://www.popsci.com/?p=471170
Deepest image of space with twinkling stars captured by James Webb Space Telescope
As the James Webb Space Telescope peers far into space, it could dredge up clues to how the early universe was shaped by atomic interactions. NASA, ESA, CSA, STScI

Free-roaming atoms charged across newly formed galaxies, bringing us from cosmic dark to dawn.

When the first stars and galaxies formed, they didn’t just illuminate the cosmos. These bright structures also fundamentally changed the chemistry of the universe. 

During that time, the hydrogen gas that makes up most of the material in the space between galaxies today became electrically charged. That epoch of reionization, as it’s called, was “one of the last major changes in the universe,” says Brant Robertson, who leads the Computational Astrophysics Research Group at the University of California, Santa Cruz. It was the dawn of the universe as we know it.

But scientists haven’t been able to observe in detail what occurred during the epoch of reionization—until now. NASA’s newly active James Webb Space Telescope offers eyes that can pierce the veil on this formative time. Astrophysicists like Robertson are already poring over JWST data looking for answers to fundamental questions about that electric cosmic dawn, and what it can tell us about the dynamics that shape the universe today.

What happened after the big bang?

The epoch of reionization wasn’t the first time that the universe was filled with electric charge. Right after the big bang, the cosmos was dark and hot; there were no stars, galaxies, or planets. Instead, electrons and protons roamed free, as it was too hot for them to pair up.

But as the universe cooled down, the protons began to capture the electrons to form the first atoms—hydrogen, specifically—in a period called “recombination,” explains Anne Hutter, a postdoctoral researcher at the Cosmic Dawn Center, a research collaboration between the University of Copenhagen and the National Space Institute at the Technical University of Denmark. That process neutralized the charged material.

Any material held in the universe was spread out relatively evenly at that time, and there was very little structure. But there were small fluctuations in density, and over millions of years, the changes drew early atoms together to eventually form stars. The gravity of early stars drew more gases, particles, and other components to coalesce into more stars and then galaxies. 

[Related: How old is the universe? Our answer keeps getting better.]

Once the beginnings of galaxies lit up, the cosmic dark age, as astrophysicists call it, was over. These stellar bodies were especially bright, Robertson says: They were more massive than our sun and burned hot, shining in the ultraviolet spectrum.

“Ultraviolet light, if it’s energetic enough, can actually ionize hydrogen,” Robertson says. All it takes is a single, especially energetic particle of light, called a photon, to strip away the electron on a hydrogen atom and leave it with a positive electrical charge. 
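
As a rough gauge of what “energetic enough” means (standard physical constants; the arithmetic is ours, not Robertson’s): ionizing a ground-state hydrogen atom takes 13.6 electronvolts, which corresponds to ultraviolet light with a wavelength shorter than about 91 nanometers.

```python
# Longest photon wavelength that can ionize ground-state hydrogen (13.6 eV)
h = 6.626e-34   # Planck constant, joule-seconds
c = 2.998e8     # speed of light, meters per second
eV = 1.602e-19  # joules per electronvolt

wavelength = h * c / (13.6 * eV)
print(f"{wavelength * 1e9:.1f} nm")  # ~91.2 nm, deep in the ultraviolet
```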

As the galaxies started coming together, they would first ionize the regions around them, leaving bubbles of charged hydrogen gas across the universe. As the light-emitting clusters grew, more stars formed to make them even brighter and full of photons. Additional new galaxies began to develop, too. As they became luminous, the ionized bubbles began to overlap. That allowed a photon from one galaxy to “travel a much larger distance because it didn’t run into a hydrogen atom as it crossed through this network,” Robertson explains.

At that point, the rest of the intergalactic medium in the universe—even in regions far from galaxies—quickly becomes ionized. That’s when the epoch of reionization ended and the universe as we know it began.

“This was the last time whole properties of the universe were changed,” Robertson says. “It also was the first time that galaxies actually had an impact beyond their local region.”

The James Webb Space Telescope’s hunt for ionized clues

With all of the hydrogen between galaxies charged, the universe entered a new phase of formation. This ionization had a ripple effect on galaxy formation: Any star-studded structures that formed after the cosmic dawn were likely affected.

“If you ionize a gas, you also heat it up,” explains Hutter. Remember, high temperatures make it difficult for material to coalesce and form new stars and planets, and can even destroy gases that are already present. As a result, small galaxies forming in an ionized region might have trouble gaining enough gas to make more stars. “That really has an impact on how many stars the galaxies are forming,” Hutter says. “It affects their entire history.”

Although scientists have a sense of the broad strokes of the story of reionization, some big questions remain. For instance, while they know roughly that the epoch ended about a billion years after the big bang, they’re not quite sure when reionization—and therefore the first galaxy formation—began. 

That’s where JWST comes in. The new space telescope is designed to search out the oldest parts of the universe, invisible to human eyes, and gather data on the first glimmers of starlight that ionized the intergalactic medium. Astronomers largely detect celestial objects by the radiation they emit. The farthest ones tend to appear in the infrared, because the expansion of the universe stretches their light to longer wavelengths over journeys that can take billions of years to reach JWST’s detectors.
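
A quick sketch of that stretching (the redshift relation is standard; the emission line and redshifts are our illustrative picks): light emitted at wavelength λ by a galaxy at redshift z arrives at λ × (1 + z), which pushes early galaxies’ ultraviolet glow into JWST’s infrared range.

```python
# Observed wavelength of a redshifted line: lambda_obs = lambda_rest * (1 + z)
lyman_alpha_nm = 121.6  # hydrogen's Lyman-alpha line, emitted in the ultraviolet

for z in (6, 8, 10):  # higher z means earlier in cosmic history
    observed_um = lyman_alpha_nm * (1 + z) / 1000
    print(f"z = {z}: {observed_um:.2f} micrometers")  # infrared, visible to JWST
```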

[Related: Astronomers are already using James Webb Space Telescope data to hunt down cryptic galaxies]

That, in a nutshell, is how scientists are using JWST to peer at the first galaxies in the process of ionizing the universe. While older tools like the Hubble Space Telescope could spot the occasional early galaxy, the new space observatory can gather finer details to place the groups of stars in time.

“Now, we can very precisely work out how many galaxies were around, you know, 900 million years after the big bang, 800, 700, 600, all the way back to 300 million years after the big bang,” Robertson says. Using that information, astrophysicists can calculate how many ionizing photons were around at each age, and how the particles might have affected their surroundings.

Painting a picture of the cosmic dawn isn’t just about understanding the large-scale structure in the universe: It also explains when the elements that made us, like carbon and oxygen, became available as they formed inside the first stars. “[The question] really is,” Hutter says, “where do we come from?” 

Correction (September 21, 2022): The fluctuations in the early universe’s density took place over millions of years, not billions as previously written. This was an editing error.

Farmers accidentally created a flood-resistant ‘machine’ across Bangladesh https://www.popsci.com/environment/bangladesh-farmers-seasonal-floods/ Thu, 15 Sep 2022 18:00:00 +0000 https://www.popsci.com/?p=470227
Groundwater pumps like this one deliver water from below to farms in Bangladesh.
A groundwater pump delivers water from below a farm during the dry season in Bangladesh. M. Shamsudduha

Pumping water in the dry months makes the ground sponge-like for the wet season, a system called the Bengal Water Machine.

To control unpredictable water and stop floods, you might build a dam. To build a dam, you generally need hills and dales—geographic features to hold water in a reservoir. Which is why dams don’t fare well in Bangladesh, most of which is a flat floodplain that’s just a few feet above sea level.

Instead, in a happy accident, millions of Bangladeshi farmers have managed to create a flood control system of their very own, taking advantage of the region’s wet-and-dry seasonal climate. As farmers pump water from the ground in the dry season, they free up space for water to flood in during the wet season, hydrogeologists found. 

Researchers described the system they’d uncovered in the journal Science on September 15. And authorities could use the findings to make farming more sustainable, writes Aditi Mukherji, a researcher in Delhi for the International Water Management Institute who wasn’t involved in the paper, in a companion article in Science.

“No one really intended this to happen, because farmers didn’t have the knowledge when they started pumping,” says Mohammad Shamsudduha, a geoscientist at University College London in the UK and one of the paper’s authors.

[Related: What is a flash flood?]

Most of Bangladesh lies in the largest river delta on the planet, where the Rivers Ganges and Brahmaputra fan out into the Bay of Bengal. It’s an expanse of lush floodplains and emerald forests, blanketing some of the most fertile soil in the world. Indeed, that soil supports a population density nearly thrice that of New Jersey, the densest US state.

Like much of South Asia, Bangladesh’s climate revolves around the yearly monsoon. The monsoon rains support local animal and plant life and are vital to agriculture, too. But a heavy monsoon can cause devastating floods, as residents of northern Bangladesh experienced in June.

Yet Bangladesh’s warm climate means that farmers can grow crops, especially rice, in the dry season. To do so, farmers often irrigate their fields with water they draw up from the ground. Many small-scale farmers started doing so in the 1990s, when the Bangladeshi government loosened restrictions on importing diesel-powered pumps and made them more affordable. 

The authors of the new study wanted to examine whether pumping was depriving the ground of its water. That’s generally not very good, resulting in strained water supplies and the ground literally sinking (just ask Jakarta). They examined data from 465 government-controlled stations that monitor Bangladesh’s irrigation efforts across the country.

[Related: How climate change fed Pakistan’s devastating floods]

The situation was not so simple: In many parts of the country, groundwater wasn’t depleting at all.

It’s thanks to how rivers craft the delta. The Ganges and the Brahmaputra carry a wealth of silt and sediment from as far away as the Himalayas. As they fan out through the delta, they deposit those fine particles into the surrounding land. These sediments help make the delta’s soil as fertile as it is. 

This accumulation also results in loads of little pores in the ground. When the heavy rains come, instead of running off into the ocean or adding to runaway flooding, all that water can soak into the ground, where farmers can use it.

Where a dam’s reservoir is more like a bucket, Bangladesh is more like a sponge. During the dry season, farmers dry out the sponge. That gives it more room to absorb more water in the monsoon. And so forth, in an—ideally—self-sustaining cycle. Researchers call it the Bengal Water Machine. 
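
A deliberately simplified water-balance sketch of that cycle (every number here is invented for illustration, not taken from the Science paper): pumping in the dry season empties pore space that would otherwise already be full when the monsoon arrives, so less of the rain runs off as flood water.

```python
# Toy seasonal water balance for a sponge-like aquifer (arbitrary units).
CAPACITY = 100.0   # total pore space in the ground
PUMPED = 20.0      # dry-season irrigation draw
MONSOON = 35.0     # wet-season water available to soak in

for pumping in (False, True):
    storage = CAPACITY                      # aquifer starts full
    storage -= PUMPED if pumping else 0.0   # dry season
    absorbed = min(MONSOON, CAPACITY - storage)
    runoff = MONSOON - absorbed             # water the ground can't hold
    print(f"pumping={pumping}: flood runoff = {runoff:.0f}")
# Without pumping, all 35 units run off; with pumping, only 15 do.
```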

“The operation of the [Bengal Water Machine] was suspected by a small number of hydrogeologists within our research network but essentially unknown prior to this paper,” says Richard Taylor, a hydrogeologist at University College London in the UK, and another of the paper’s authors.

“If there was no pumping, then this would not have happened,” says Kazi Matin Uddin Ahmed, a hydrogeologist at the University of Dhaka in Bangladesh, and another of the paper’s authors. 

Storing water underground rather than behind a dam has a few advantages, Ahmed adds. The subsurface liquid is at less risk of evaporating into useless vapor. It doesn’t rewrite the region’s geography, and farmers can draw water from their own land, rather than relying on water shuttled in through irrigation channels.

The researchers believe that other “water machines” might fill fertile deltas elsewhere in the tropics with similar wet-and-dry climates. Southeast Asia might host a few, at the mouths of the Red River, the Mekong, and the Irrawaddy.

But an ominous question looms over the Bengal Water Machine: What happens as climate change reshapes the delta? Most crucially, a warming climate might intensify monsoons and change where they deliver their rains. “This is something we need to look into,” says Shamsudduha.

The Bengal Water Machine faces several other immediate challenges. In 2019, in response to overpumping concerns, the Bangladeshi government reintroduced restrictions on which farmers get to install a pump, which could make groundwater pumping more inaccessible. Additionally, many farmers use dirty diesel-powered pumps. (The government’s now encouraging farmers to switch to solar power.)

Also, keeping the Bengal Water Machine ship-shape means not using too much groundwater. Unfortunately, that’s already happening. Bangladesh’s west generally gets less rainfall than its east, and the results reflect that. The researchers noticed groundwater depletion in the west that wasn’t happening out east.

“There is a limit,” says Ahmed. “There has to be close monitoring of the system.”

Space diamonds sparkle from the wreckage of a crushed dwarf planet https://www.popsci.com/science/diamond-meteorite-crystal-structure/ Thu, 15 Sep 2022 12:34:20 +0000 https://www.popsci.com/?p=470022
Lonsdaleite diamond crystal structure under a microscope
A closeup of the lonsdaleite diamond's complex folded structure, which may add to its toughness. Andy Tomkins

Mysterious meteorite gems from the solar system's early days could help us design harder lab-grown diamonds.

Today, our solar system is fairly stable. There are eight planets (sorry, Pluto) that keep constant orbit around the sun, with little risk of being crushed by asteroids. But it wasn’t always that way.

Some 4.5 billion years ago, as the solar system was just forming, large chunks of rocks frequently collided with slow-growing dwarf planets. The results were often cataclysmic for both bodies, reducing them to debris that still pummels Earth today. But sometimes those violent collisions yielded the creation of something shiny and new—perhaps even diamonds.

That’s likely what happened when an asteroid smashed a dwarf planet into smithereens in those earliest days of the solar system, according to a new paper published this week in the journal Proceedings of the National Academy of Sciences. The collision was so violent, the authors say, that it triggered a chain of events that transformed graphite from the dwarf planet’s mantle into diamonds now found in meteorites.

The explosive process through which these space gems formed, the researchers say, might even inspire a method to make lab-grown diamonds that are tougher than the ones people mine.

“We always say a diamond is the hardest material. It’s natural, nothing we’ve been able to make in the lab is harder than diamond. Yet, there have been hints of research over the years that there are forms of diamond that appear to actually be harder than single-crystal diamonds. And that would be immensely useful,” says Laurence Garvie, a research scientist in the Center for Meteorite Studies at Arizona State University, who was not involved in the new research. “Here’s a hypothesis that may add a new understanding of how these materials are formed.” And such a possibility, he adds, is tantalizing for all kinds of industrial and consumer uses.  

[Related: Meteorites older than the solar system contain key ingredients for life]

On Earth, diamonds emerge when carbon deposits are subjected to high pressures and high temperatures, typically from the geologic processes rumbling deep under the planet’s crust. But that explanation never made sense for a carbon-rich type of meteorite, called a ureilite, that’s mysteriously filled with space diamonds. It takes a fair amount of mass to exert enough pressure on the carbon, Garvie explains, much more than the dwarf planet that these ancient rocks probably came from. Instead, some meteoriticists have proposed that shock from an impact triggered the transformation.

But shock alone doesn’t completely explain the crystals in the ureilites, says Alan Salek, a physics researcher at the Royal Melbourne Institute of Technology and one of the authors on the new paper. For example, the meteorites’ diamonds are much larger than any created in laboratory experiments that mimicked the proposed conditions, he says. 

Furthermore, scientists have found inconsistencies in the ureilites’ composition. Some don’t appear to have any hints of diamonds. Others contain carbon crystals that look notably different from engagement ring stones: The structures have more folds, with atoms that appear to be hexagonal rather than cubic. Those extra sides are thought to make the material harder.

But as Andrew Tomkins, a geoscientist at Monash University who led the latest research, writes in an email to Popular Science, “of course everyone knows that diamond is very hard, so it should be impossible to fold.” 

After studying the atomic properties of the carbon in ureilites, Tomkins, Salek, and their colleagues devised a scenario they say can explain all of the gems’ quirks. The story goes that when an asteroid slammed into a dwarf planet in the active early solar system, it barreled deep into the ureilite parent body and triggered a sequence of events. 

Two meteorite researchers holding up a diamond sample on a slide tray in a lab
Andy Tomkins (left) and Alan Salek hold up a ureilite sample. RMIT University

The dwarf planet’s mantle contained folded graphite. Once the asteroid hit, the violent collision released pressure from the mantle, much like when you twist the lid off a soda bottle, Tomkins explains. This rapid decompression caused a bit of the mantle to melt and release fluids and gases, which then reacted with minerals. The activity forced the folded graphite in the planet to transform into the hexagonal crystals. Later, as the pressure and temperature dropped, regular cubic diamonds formed, too.

The hexagonal structure of the crystals is still the subject of some controversy. Some scientists argue that the shape makes them a different kind of diamond known as lonsdaleite. The gem was first identified in 1967 in fragments of the Canyon Diablo meteorite from Arizona, and has since been found at other impact sites around the world. Others have suggested that the material is something like a snapshot of disordered diamond formation. Garvie and his colleagues have given alternate explanations, such as diamonds with graphene-like intergrowths. But Salek and Tomkins say their new research definitively proves that the ureilite-based gems are indeed hexagonal diamonds, and therefore should be classified as lonsdaleite.

[Related: Earth has more than 10,000 kinds of minerals]

However it’s defined, scientists tend to agree that this substance could have valuable properties. One attempt to recreate lonsdaleite indirectly measured the material to be 58 percent stronger than its cubic counterpart.  If the substance can be made artificially in a laboratory, Garvie says “the possibilities are endless,” describing potential uses for protective coatings on, say, an iPhone or a camera lens. Salek suggests creating saws and other cutting implements with blades so hard that they can’t get dull. 

The crystals Salek and Tomkins found, however, are just about 2 percent of the size of a human hair. So don’t expect to profess your love with a rare ureilite diamond anytime soon. But, Salek adds, “the hope is to mimic the process [from space] and make bigger ones.”

Scientists used lasers to make the coldest matter in the universe https://www.popsci.com/science/create-the-coldest-matter-in-the-universe/ Fri, 09 Sep 2022 17:00:00 +0000 https://www.popsci.com/?p=468770
The simulator uses up to 300,000 atoms, allowing physicists to directly observe how particles interact in quantum magnets whose complexity is beyond the reach of even the most powerful supercomputer. Image by Ella Maru Studio/Courtesy of K. Hazzard/Rice University

The atoms were chilled to within a billionth of a degree of absolute zero.

In a laboratory in Kyoto, Japan, researchers are working on some very cool experiments. A team of scientists from Kyoto University and Rice University in Houston, Texas, has cooled matter to within a billionth of a degree of absolute zero (the temperature at which all motion stops), making it the coldest matter in the entire universe. The study was published in the September issue of Nature Physics, and “opens a portal to an unexplored realm of quantum magnetism,” according to Rice University.

“Unless an alien civilization is doing experiments like these right now, anytime this experiment is running at Kyoto University it is making the coldest fermions in the universe,” said Rice University professor Kaden Hazzard, corresponding theory author of the study, and member of the Rice Quantum Initiative, in a press release. “Fermions are not rare particles. They include things like electrons and are one of two types of particles that all matter is made of.”

Different colors represent the six possible spin states of each atom. Image by Ella Maru Studio/Courtesy of K. Hazzard/Rice University

The Kyoto team, led by study author Yoshiro Takahashi, used lasers to cool the fermions (particles like protons, neutrons, and electrons whose spin quantum number is an odd half-integer such as 1/2 or 3/2) of ytterbium atoms to within about one-billionth of a degree of absolute zero. That’s roughly 3 billion times colder than interstellar space, which is still warmed by the cosmic microwave background (CMB), the afterglow of radiation from the big bang some 13.7 billion years ago. The coldest known region of space is the Boomerang Nebula, which has a temperature of one degree above absolute zero and lies 3,000 light-years from Earth.
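
The “3 billion times colder” comparison is straightforward division (a sanity check using the standard CMB temperature, not a figure from the paper):

```python
T_cmb = 2.73    # kelvin: interstellar space, warmed by the cosmic microwave background
T_atoms = 1e-9  # kelvin: roughly a billionth of a degree above absolute zero
print(f"{T_cmb / T_atoms:.1e}x colder")  # ~2.7e9, about 3 billion times
```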

[Related: How the most distant object ever made by humans is spending its dying days.]

Just like electrons and photons, atoms are subject to the laws of quantum mechanics, but their quantum behaviors only become noticeable when they are cooled to within a fraction of a degree of absolute zero. Physicists have used lasers for more than 25 years to chill atoms and study their ultracold quantum properties.

“The payoff of getting this cold is that the physics really changes,” Hazzard said. “The physics starts to become more quantum mechanical, and it lets you see new phenomena.”

In this experiment, lasers were used to cool the matter by stopping the movement of 300,000 ytterbium atoms within an optical lattice. The setup simulates the Hubbard model, a quantum physics model first proposed by theoretical physicist John Hubbard in 1963. Physicists use Hubbard models to investigate the magnetic and superconducting behavior of materials, especially those where interactions between electrons produce collective behavior.

This model allows atoms to show off their unusual quantum properties, which include collective behavior between electrons (a bit like a group of fans performing “the wave” at a football or soccer game) and superconductivity, an object’s ability to conduct electricity without losing energy.
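
For readers who want the explicit form, the standard two-component Hubbard Hamiltonian looks like this (textbook notation, not reproduced from the Nature Physics paper):

$$H = -t \sum_{\langle i,j \rangle, \sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \text{h.c.} \right) + U \sum_{i} n_{i\uparrow} n_{i\downarrow}$$

Here t sets how readily atoms hop between neighboring lattice sites and U is the energy cost of two atoms sharing a site; in the SU(N) generalization, the spin label σ runs over N states (six for ytterbium) rather than just up and down.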

“The thermometer they use in Kyoto is one of the important things provided by our theory,” said Hazzard. “Comparing their measurements to our calculations, we can determine the temperature. The record-setting temperature is achieved thanks to fun new physics that has to do with the very high symmetry of the system.”

[Related: Chicago now has a 124-mile quantum network. This is what it’s for.]

The Hubbard model simulated in Kyoto has special symmetry known as SU(N). The SU stands for special unitary group, which is a mathematical way of describing the symmetry. The N denotes the possible spin states of particles within the model.

The greater the value of N, the greater the model’s symmetry and the complexity of the magnetic behaviors it describes. Ytterbium atoms have six possible spin states, and the simulator in Kyoto is the first to reveal magnetic correlations in an SU(6) Hubbard model. Such calculations are impossible to perform on a classical computer, according to the study.
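
A quick sense of why (our illustration, not a calculation from the study): with N possible spin states per atom, the number of quantum configurations grows as N raised to the number of atoms, which outruns any conceivable computer almost immediately.

```python
import math

# Basis states for N spin states on n atoms: N ** n (shown here by digit count)
N = 6  # ytterbium's six possible spin states
for n_atoms in (10, 20, 300_000):
    digits = n_atoms * math.log10(N)  # order of magnitude of N ** n_atoms
    print(f"{n_atoms:>7} atoms: ~10^{digits:,.0f} basis states")
```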

“That’s the real reason to do this experiment,” Hazzard said. “Because we’re dying to know the physics of this SU(N) Hubbard model.”

Eduardo Ibarra-García-Padilla, a graduate student in Hazzard’s research group and study co-author, added that the Hubbard model aims to capture the very basic ingredients needed for what makes a solid material a metal, insulator, magnet, or superconductor. “One of the fascinating questions that experiments can explore is the role of symmetry,” said Ibarra-García-Padilla. “To have the capability to engineer it in a laboratory is extraordinary. If we can understand this, it may guide us to making real materials with new, desired properties.”

The team is currently working on developing the first tools capable of measuring the behavior that arises a billionth of a degree above absolute zero.

“These systems are pretty exotic and special, but the hope is that by studying and understanding them, we can identify the key ingredients that need to be there in real materials,” concluded Hazzard.

Sustainable batteries could one day be made from crab shells https://www.popsci.com/science/crab-shell-green-batteries/ Thu, 01 Sep 2022 19:30:00 +0000 https://www.popsci.com/?p=467040
A bucket of crabs, who have a multi-purpose material called chitosan in their shells.
Crabs that we eat contain chitosan in their shells, which scientists are using to make batteries. Mark Stebnick via Pexels

A material in crab shells has been used to brew booze, dress wounds, and store energy.

There are those who say ours is the age of the battery. New and improved batteries, perhaps more than anything else, have made possible a world of mobile phones, smart devices, and blossoming electric vehicle fleets. Electrical grids powered by clean energy may soon depend on server-farm-sized battery projects with massive storage capacity.

But our batteries aren’t perfect. Even if they’ll one day underpin a world that’s sustainable, today they’re made from materials that aren’t. They rely on heavy metals or non-organic polymers that might take hundreds of years to degrade. That’s why battery disposal is such a tricky task.

Enter researchers from the University of Maryland and the University of Houston, who have made a battery from a promising alternative: crustacean shells. They’ve taken a biological material, easily sourced from the same crabs and squids you can eat, and crafted it into a partly biodegradable battery. They published their results in the journal Matter on September 1.

It’s not the first time batteries have been made from this stuff. But what makes the researchers’ work new is the design, according to Liangbing Hu, a materials scientist at the University of Maryland and one of the paper’s authors. 

A battery has three key components: two ends and a conductive filling, called an electrolyte. In short, charged particles crossing the electrolyte put out a steady flow of electric current. Without an electrolyte, a battery would just be a sitting husk of electric charge.

Today’s batteries use a whole rainbow of electrolytes, and few are things you’d particularly want to put in your mouth. A standard AA battery uses a paste of potassium hydroxide, a dangerously corrosive substance that makes throwing batteries in the trash a very bad idea. 

[Related: This lithium-ion battery kept going (and going and going) in the extreme cold]

The rechargeable batteries in your phone are a completely different sort of battery: lithium-ion batteries. Those batteries can power on for many years and usually rely on plastic-polymer-based electrolytes that aren’t quite as toxic, but they can still take centuries or even millennia to break down.

Batteries themselves, full of environmentally unfriendly materials, aren’t the greenest. They’re rarely sustainably made, either, relying as they do on rare earth mining. Even if batteries can last thousands of discharges and recharges, thousands more get binned every day.

So researchers are trawling through oceans of materials for a better alternative. In that, they’ve started to dredge up crustacean parts. From crabs and prawns and lobsters, battery-crafters can extract a material called chitosan. It’s a derivative of chitin, which makes up the hardened exoskeletons of crustaceans and insects, too. There’s plenty of chitin to go around, and a relatively simple chemical process is all that’s necessary to convert it into chitosan.

We already use chitosan for quite a few applications, most of which have little to do with batteries. Since the 1980s, farmers have sprinkled chitosan over their crops. It can boost plant growth and harden their defenses against fungal infestation. 

[Related: The race to close the EV battery recycling loop]

Away from the fields, chitosan can remove particles from liquids: Water purification plants use it to remove sediment and impurities from drinking water, and alcohol-makers use it to clarify their brew. Some bandages come dressed with chitosan that helps seal wounds.

You can sculpt things from chitosan gel, too. Because chitosan is biodegradable and non-toxic, it’s especially good for making things that must go into the human body. It’s entirely possible that hospitals of the future might use specialized 3D printers to carve chitosan into tissues and organs for transplants.

Now, researchers are seeking to put chitosan into batteries whose ends are made from zinc. Largely experimental today, these rechargeable batteries could one day form the backbone of an energy storage system.

The researchers at Maryland and Houston weren’t the first to think about making chitosan into batteries. Scientists around the world, from China to Italy to Malaysia to Iraqi Kurdistan, have been playing with the crab-derived stuff for about a decade, spinning it into intricate webwork that charged particles can traverse like adventurers.

The authors of the new work added zinc ions to that chitosan structure, which bolstered its physical strength. Combined with the zinc ends, the addition also boosted the battery’s effectiveness.

This design means that two-thirds of the battery is biodegradable; the researchers found that the electrolyte broke down completely within around five months. Compared to conventional electrolytes and their thousand-year lifespans in the landfill, Hu says, these have little downside. 

And although this design was made for those experimental zinc batteries, Hu sees no reason researchers can’t extend it to other sorts of batteries—including the one in your phone.

Now, Hu and his colleagues are pressing ahead with their work. One of their next steps, Hu says, is to expand their focus beyond the confines of the electrolyte—to the other parts of a battery. “We will put more attention to the design of a fully biodegradable battery,” he says.

Atoms are famously camera-shy. This dazzling custom rig can catch them. https://www.popsci.com/science/particle-physics-custom-camera/ Sun, 28 Aug 2022 17:00:00 +0000 https://www.popsci.com/?p=465661
MAGIS-100 vacuum for a Fermilab quantum physics experiment
When built, the MAGIS-100 atom interferometer will be the largest in the world. But it's still missing a key component: a detailed camera. Stanford University

The mirror-studded camera is designed to take glamor shots of quantum physics experiments.

In suburban Chicago, about 34 miles west of Lake Michigan, sits a hole in the ground that goes about 330 feet straight down. Long ago, scientists had the shaft drilled for a particle physics experiment that’s long vanished from this world. Now, in a few short years, they will reuse the shaft for a new project with the mystical name MAGIS-100.

When MAGIS-100 is complete, physicists plan to use it to detect hidden treasures: dark matter, the mysterious invisible something that’s thought to make up much of the universe; and gravitational waves, ripples in space-time caused by cosmic shocks like black hole collisions. They hope to find traces of those elusive phenomena by watching the quantum signatures they leave behind on raindrop-sized clouds of strontium atoms.

But actually observing those atoms is trickier than you might expect. To pull off similar experiments, physicists have so far relied on cameras comparable to the ones on a smartphone. And while the technology might work fine for a sunset or a tasty-looking food shot, it limits what physicists can see on the atomic level.

[Related: It’s pretty hard to measure nothing, but these engineers are getting close]

Fortunately, some physicists may have an upgrade. A research team drawn from several groups in Stanford, California, has created a unique camera contraption that relies on a dome of mirrors. The extra reflections help them see what light is entering the lens and tell what angle a certain patch of light is coming from. That, they hope, will let them peer into an atom cloud like never before.

Your mobile phone camera or DSLR doesn’t care where light travels from: It captures the intensity of the photons and the colors corresponding to their wavelengths, little more. For taking photographs of your family, a city skyline, or the Grand Canyon, that’s all well and good. But for studying atoms, it leaves quite a bit to be desired. “You’re throwing away a lot of light,” says Murtaza Safdari, a physics graduate student at Stanford University and one of the creators.

Physicists want to preserve that information because it lets them paint a more complex, 3D picture of the object (or objects) they’re studying. And when it comes to the finicky analyses physicists like to do, the more information they can get in one shot, the quicker and better. 

One way to get that information is to set up multiple cameras, allowing them to snap pictures from multiple angles and stitch them together for a more detailed view. That can work great with, say, five cameras. But some physics experiments require such precise measurements that even a thousand cameras might not do the trick.

Stanford atom camera mirror array shown in the lab
The 3D-printed, laser-cut camera. Sanha Cheong/Stanford University

So, in a Stanford basement, researchers decided to set out on making their own system to get around that problem. “Our thinking…was basically: Can we try and completely capture as much information as we can, and can we preserve directional information?” says Safdari.

Their resulting prototype—made from off-the-shelf and 3D-printed components—looks like a shallow dome, spangled with an array of little mirror-like dots on the inside. The pattern seems to form a fun optical illusion of concentric circles, but it’s carefully calculated to maximize the light striking the camera.

For the MAGIS-100 project, the subject of the shot, the cloud of strontium atoms, would sit within the dome. A brief light flash from an external laser beam would then scatter off the mirror-dots and through the cloud at myriad angles. The lens would pick up the resulting reflections, how they’ve interacted with the molecules, and which dots they’ve bounced off.

Then, from that information, machine learning algorithms can piece the three-dimensional structure of the cloud back together. Currently, this reconstruction takes many seconds; in an ideal world, it would take milliseconds, or even less. But, like the algorithms used to train self-driving cars to adjust to the surrounding world, the researchers think their computer code’s performance will improve.

While the creators haven’t gotten around to testing the camera on atoms just yet, they did try it out by scanning some suitably sized sample parts: 3D-printed letter-shaped pieces the size of the strontium droplets they intend to use. The photo they took was so clear, they could find defects where the little letters D, O, and E varied from their intended design. 

3D-printed letters photographed and 3D modeled on a grid
Reconstructions of the test letters from a number of angles. Sanha Cheong/SLAC National Accelerator Laboratory

For atom experiments like MAGIS-100, this equipment is distinct from anything else on the market. “The state of the art are just cameras, commercial cameras, and lenses,” says Ariel Schwartzman, a physicist at SLAC National Accelerator Laboratory in California and co-creator of the Stanford setup. They scoured photo-equipment catalogs for something that could see into an atom cloud from multiple angles at once. “Nothing was available,” says Schwartzman.

Complicating matters is that many experiments require atoms to rest in extremely cold temperatures, barely above absolute zero. This means they require low-light conditions—shining any bright light source for too long could heat them up too fast. Setting a longer exposure time on a camera could help, but it also means sacrificing some of the detail and information needed in the final image. “You are allowing the atom cloud to diffuse,” says Sanha Cheong, a physics graduate student at Stanford University and member of the camera-building crew. The mirror dome, on the other hand, aims to use only a brief laser-flash with an exposure of microseconds. 

[Related: Stanford researchers want to give digital cameras better depth perception]

The creators’ next challenge is to actually place the camera in MAGIS-100, which will take a lot of tinkering to fit the camera to a much larger shaft and in a vacuum. But physicists are hopeful: A camera like this might go a lot further than detecting obscure effects around atoms. Its designers plan to use it for everything from tracking particles in plasma to measuring quality control of small parts in the factory.

“To be able to capture as much light and information in a single shot in the shortest exposure possible—it opens up new doors,” says Cheong.

The post Atoms are famously camera-shy. This dazzling custom rig can catch them. appeared first on Popular Science.

Need more air in space? Magnets could yank it out of water. https://www.popsci.com/science/magnets-oxygen-international-space-station/ Thu, 18 Aug 2022 10:00:00 +0000 https://www.popsci.com/?p=463203
The International Space Station, seen from a Dragon Capsule in November 2021.
The International Space Station makes its own oxygen through electrolysis, an energy-intensive process. NASA

Water is magnetic—a property that could help astronauts breathe a little easier.

Humans tend to take a lot for granted, even something as simple as a breath of fresh air. It’s easy to forget how much our bodies depend on oxygen—until it becomes an invaluable resource, such as aboard the International Space Station. 

Although astronauts are typically sent to space with stores of necessary supplies, it’d be too costly to keep sending tanks of breathable air up to the station. Instead, the oxygen that astronauts rely on for primary life support is created through a process called electrolysis, wherein electricity is used to split water into hydrogen gas and oxygen gas. On Earth, plants pull off a similar feat naturally during photosynthesis, splitting water and using the hydrogen to make sugars for food while releasing oxygen into the atmosphere.
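
As a rough back-of-the-envelope check, the reaction 2H₂O → 2H₂ + O₂ fixes how much oxygen a given mass of water can release. A minimal sketch in Python (plain stoichiometry, not a model of the station’s actual hardware):

```python
# Stoichiometry of water electrolysis: 2 H2O -> 2 H2 + O2.
# Molar masses in g/mol.
M_H2O = 18.015
M_O2 = 31.998

def oxygen_from_water(water_kg):
    """Mass of O2 (kg) released by fully electrolyzing `water_kg` of water."""
    mol_h2o = water_kg * 1000 / M_H2O  # moles of water
    mol_o2 = mol_h2o / 2               # two waters yield one O2
    return mol_o2 * M_O2 / 1000        # back to kilograms

print(f"{oxygen_from_water(1.0):.2f} kg of O2 per kg of water")  # ~0.89 kg
```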

Yet because the system on the ISS requires massive amounts of energy and upkeep, scientists have been looking for alternative ways to sustainably create air in space. One such solution was recently published in npj Microgravity, in which researchers found a way to pull gases from liquids using magnets.

“Not a lot of people [are] aware that water and other liquids are also magnetic to some extent,” says Álvaro Romero-Calvo, currently an assistant professor at the Guggenheim School of Aerospace Engineering at Georgia Tech and lead author of the study.

“The physical principle is pretty well known in the physics community [but] the application in space is barely explored at this point,” he says. “When a space engineer is designing a space system involving fluids, they do not even consider the possibility of using magnets to induce phase separation.”

[Related: Lunar soil could help us make oxygen in space]

At the Center for Applied Space Technology and Microgravity (ZARM) at the University of Bremen in Germany, Romero-Calvo’s team was able to study the phenomenon of “magnetically-induced buoyancy.” The idea is easier to explain by visualizing a can of soda: On Earth, because the liquid is denser than carbon dioxide gas, soda bubbles separate and float to the top of the drink when subjected to the planet’s gravity. In space, where continuous freefall produces microgravity and removes the effect of buoyancy, the substances inside become harder to separate, and the bubbles are simply left suspended in the liquid.
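
That soda-can picture comes down to Archimedes’ principle: the buoyant force on a bubble scales with the local gravitational acceleration, so it all but vanishes in orbit. A small illustrative sketch, with assumed round-number densities:

```python
import math

# Buoyant force on a gas bubble via Archimedes' principle. The densities
# below are assumed round numbers for illustration.
RHO_SODA = 1040.0  # kg/m^3, sugary water
RHO_CO2 = 1.8      # kg/m^3, carbon dioxide gas

def buoyant_force(bubble_radius_m, g):
    volume = (4 / 3) * math.pi * bubble_radius_m ** 3
    return (RHO_SODA - RHO_CO2) * volume * g

r = 0.5e-3  # a half-millimeter bubble
print(buoyant_force(r, g=9.81))         # ~5e-6 N on Earth
print(buoyant_force(r, g=9.81 * 1e-6))  # ~5e-12 N in microgravity
```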

To test whether magnets could make a difference, the team took their research to ZARM’s drop tower, where an experiment, once placed in an airtight drop capsule, can achieve weightlessness for a few seconds. By injecting air bubbles into syringes filled with different carrier liquids, the team was able to use the power of magnetism to successfully detach gas bubbles in microgravity. This proved that the bubbles can be both attracted to and repelled by a neodymium magnet from within various substances. 

Additionally, the researchers found that through the inherent magnetic properties of the various liquids they tested (like purified water and olive oil), it’s possible to direct air bubbles to different locations within the liquid. Essentially, it’d become easier to collect or send air through a vessel. Besides being used to create an abundance of oxygen for the crew, Romero-Calvo says the study’s results show that developing microgravity magnetic phase separators could lead to more reliable and lightweight space systems, like better propellant management devices or wastewater recycling technologies.

To demonstrate the magnets’ potential use for research purposes, the team also experimented with Lysogeny Broth, a medium used to grow bacteria for ISS experiments. As it turns out, both the broth and the olive oil were “significantly affected” by the magnetic force exerted on them. “Every bit of effort that we devote to this problem is effort well spent, because it will affect many other products in space,” Romero-Calvo says.

[Related: How the ISS recycles its air and water]

If the next generation of space engineers does decide to apply magnets to future space stations, the new method could generate more efficient, breathable atmospheres to support human travel to other extraterrestrial environments, including the moon and, most especially, Mars. For a human mission to the Red Planet, the ISS’s current oxygenation system is too complex to be completely reliable during the long journey. Simplifying it with magnets would lower overall mission costs and help ensure that oxygen stays abundant.

Although Romero-Calvo says their breakthrough could ultimately help us touch down on Mars, other scientists are working on ways to manufacture oxygen using plasma—a state of matter that contains free charged particles like electrons which are easily excited by powerful electric fields—for fuels, fertilizers, and other materials that could help colonize the planet. And while neither project is up to scale just yet, these emerging advances represent the amazing feats humans are capable of as we keep moving forward, striving to reach beyond our familiar horizons.

The post Need more air in space? Magnets could yank it out of water. appeared first on Popular Science.

It’s pretty hard to measure nothing, but these engineers are getting close https://www.popsci.com/science/vacuum-measurements-manufacturing-new-method/ Mon, 08 Aug 2022 22:30:00 +0000 https://www.popsci.com/?p=461034
NIST computer room with small glass vacuum chamber
The National Institute of Standards and Technology sets the bar for precise vacuum measurements. NIST

The US still uses Cold War-era tech to calibrate vacuums for manufacturing. Is there a more precise (and fun) option out there?

Outer space is a vast nothingness. It’s not a perfect vacuum—as far as astronomers know, that concept only exists in theoretical calculations and Hollywood thrillers. But aside from the occasional remnant hydrogen atom floating about, it is a vacuum.

That’s important because here on Earth, much of the modern world quietly relies on partial vacuums. More than just a place for physicists to do fun experiments, these machine-made environments are critical for crafting many of the electronic components in cutting-edge phones and computers. But to actually measure a vacuum—and understand how well it will serve manufacturing—engineers rely on relatively basic tech left over from the days of old-school vacuum tubes.

[Related: What happens to your body when you die in space?]

Now, some teams are working on an upgrade. Recent research has brought a novel technique—one that relies on the coolest new atomic physics (as cool as -459 degrees Fahrenheit)—one step closer to being used as a standardized method.

“It’s a new way of measuring vacuum, and I think it’s really revolutionary,” says Kirk Madison, a physicist at the University of British Columbia in Vancouver.

NIST circular metal vacuum chamber with blue lights
The NIST mass-in-vacuum precision mass comparator. NIST

What’s inside a vacuum

It might seem hard to quantify nothing, but what you’re actually doing is reading the gas pressure inside a vacuum—in other words, the force that any remaining atoms exert on the chamber wall. So, measuring vacuums is really about calculating pressures with far more precision than your local meteorologist can manage.
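
To get a feel for what such pressures mean, the ideal gas law in the form n = p/(k_B·T) converts a pressure reading into a count of leftover molecules per cubic meter:

```python
# The ideal gas law, n = p / (k_B * T), turns a pressure reading into a
# number density of leftover gas molecules.
K_B = 1.380649e-23  # Boltzmann constant, J/K

def number_density(pressure_pa, temperature_k=293.0):
    return pressure_pa / (K_B * temperature_k)

# One billionth of a pascal at room temperature:
print(f"{number_density(1e-9):.2e} molecules per cubic meter")  # ~2.5e11
```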

Today, engineers might do that with a tool called an ion gauge. It consists of a spiraling wire that pops out electrons when inserted into a vacuum chamber; the electrons collide with any gas atoms within the spiral, turning them into charged ions. The gauge then reads the number of ions left in the chamber. But to interpret that figure, you need to know the composition of the different gases you’re measuring, which isn’t always simple.

Ion gauges are technological cousins of vacuum tubes, the components that powered antique radios and the colossal computers that filled rooms and pulp science fiction stories before the development of the silicon transistor. “They are very unreliable,” says Stephen Eckel, a physicist at the National Institute of Standards and Technology (NIST). “They require constant recalibration.”

Other vacuum measuring tools do exist, but ion gauges are the best at getting pressure readings down to billionths of a pascal (the standard unit of pressure). While this might seem unnecessarily precise, many high-tech manufacturers want to read nothingness as accurately as possible. A couple of common techniques to fabricate electronic components and gadgets like lasers and nanoparticles rely on delicately layering materials inside vacuum chambers. Those techniques need pure voids of matter to work well.

The purer the void, the harder it is to identify the leftover atoms, making ion gauges even more unreliable. That’s where deep-frozen atoms come in.

Playing snooker with atoms

For decades physicists have taken atoms, pulsed them with a finely tuned laser, and confined them in a magnetic cage, all to keep them trapped at temperatures just fractions of a degree above absolute zero. The frigidness forces atoms, otherwise wont to fly about, to effectively sit still so that physicists can watch how they behave.

In 2009, Madison and other physicists at several institutions in British Columbia were watching trapped atoms of chilled rubidium—an element with psychrophilic properties—when a new arrangement dawned on them.

Suppose you put a trap full of ultracold atoms in a vacuum chamber at room temperature. They would face a constant barrage of whatever hotter, higher-energy atoms were left in the vacuum. Most of the frenzied particles would slip through the magnetic trap without notice, but some would collide with the trapped atoms and snooker them out of the trap.

It isn’t a perfect measurement—not all collisions would successfully kick an atom out of the trap. But if you know the trap’s “depth” (or temperature) and a number called the atomic cross-section (essentially, a measure of the probability of a collision), you can find out fairly quickly how many atoms are entering the trap. Based on that, you can work out the pressure, along with how much matter is left in the vacuum, Madison explains.
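
In its simplest form, the idea boils down to one relation: if background-gas collisions eject trapped atoms at a measured rate Γ, and the thermally averaged rate coefficient ⟨σv⟩ for those collisions is known, the pressure follows as p = Γ·k_B·T/⟨σv⟩. The sketch below uses an assumed, illustrative rate coefficient and ignores the incomplete-ejection correction described above:

```python
# Sketch of the cold-atom gauge idea: trapped atoms lost at rate Gamma
# (per second) imply a background pressure p = Gamma * k_B * T / <sigma*v>,
# assuming every collision ejects an atom. The rate coefficient below is
# an assumed, illustrative value, not measured data.
K_B = 1.380649e-23  # J/K

def pressure_from_loss_rate(gamma_per_s, sigma_v_m3_per_s, temp_k=293.0):
    return gamma_per_s * K_B * temp_k / sigma_v_m3_per_s

# e.g. a loss rate of 0.01 per second with an assumed <sigma*v> of 3e-15 m^3/s:
print(pressure_from_loss_rate(0.01, 3e-15))  # ~1.3e-8 Pa
```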

Such a method could have a few advantages over ion gauges. For one, it would work for all types of gases present in the vacuum, as there are no chemical reactions happening. Most of all, because the calculations come straight from the atoms’ behavior, nothing needs to be calibrated.

At first, few people in the physics community noticed the breakthrough by Madison and his collaborators. “Nobody believed that the work we were doing was impactful,” he says. But in the 13 years since, other groups have taken up the technology themselves. In China, the Lanzhou Institute of Physics has begun building their own version. So has an agency in the German government.

NIST is the latest on the list. It’s the US agency responsible for deciding the country’s official weights and measures, like the official kilogram (yes, even the US government uses the SI system). One of NIST’s tasks for decades has been to calibrate those persnickety ion gauges as manufacturers kept sending them in. The British Columbia researchers’ new method presented an appealing shortcut.

NIST engineer in red polo and glasses testing silver cold-atom vacuum chamber
As part of a project testing the ultra-cold atom method of vacuum measurement, NIST scientist Stephen Eckel behind a pCAVS unit (silver-colored cube left of center) that is connected to a chamber (cylinder at right). C. Suplee/NIST

A new standard for nothing

NIST’s system isn’t exactly like the one Madison’s group devised. For one, the agency uses lithium atoms, which are much smaller and lighter than rubidium. Eckel, who was involved in the NIST project, says that these atoms are far less likely to stay in the trap after a collision. But it uses the same underlying principles as the original experiment, which reduces labor because it doesn’t need to be calibrated over and over.

“If I go out and I build one of these things, it had better measure the pressure correctly,” says Eckel. “Otherwise, it’s not a standard.”

NIST put their system to the test in the last two years. To make sure it worked, they built two identical cold-atom devices and ran them in the same vacuum chamber. When they turned the devices on, they were dismayed to find that the two produced different measurements. As it turned out, the vacuum chamber had developed a leak, allowing atmospheric gases to trickle in. “Once we fixed the leak, they agreed with each other,” says Eckel.

Now that their system agrees with itself, NIST researchers want to compare the ultra-chilled atoms against ion gauges and other old-fashioned techniques. If these, too, produce the same measurement, then engineers might soon be able to close in on nothingness by themselves.

The post It’s pretty hard to measure nothing, but these engineers are getting close appeared first on Popular Science.

Nuclear power’s biggest problem could have a small solution https://www.popsci.com/science/nuclear-fusion-less-energy/ Sun, 07 Aug 2022 23:07:36 +0000 https://www.popsci.com/?p=460468
Spherical fusion energy reactor in gold, copper, and silver seen from above
In 2015 the fusion reactor at the Princeton Plasma Physics Laboratory got a spherical upgrade for an energy-efficiency boost. Some physicists think this sort of design might be the future of the field. US Department of Energy

Most fusion experiments take place in giant doughnut-shaped reactors. Physicists want to test a smaller peanut-like one instead.

For decades, if you asked a fusion scientist to picture a fusion reactor, they’d probably tell you about a tokamak. It’s a chamber about the size of a large room, shaped like a hollow doughnut. Physicists fill its insides with a not-so-tasty jam of superheated plasma. Then they surround it with magnets in the hopes of crushing atoms together to create energy, just as the sun does.

But experts think you can make tokamaks in other shapes. Some believe that making tokamaks smaller and leaner could make them better at handling plasma. If the fusion scientists proposing it are right, then it could be a long-awaited upgrade for nuclear energy. Thanks to recent research and a newly proposed reactor project, the field is seriously thinking about generating electricity with a “spherical tokamak.”

“The indication from experiments up to now is that [spherical tokamaks] may, pound for pound, confine plasmas better and therefore make better fusion reactors,” says Steven Cowley, director of Princeton Plasma Physics Laboratory.

[Related: Physicists want to create energy like stars do. These two ways are their best shot.]

If you’re wondering how fusion power works, it’s the same process that the sun uses to generate heat and light. If you can push certain types of hydrogen atoms past the electromagnetic forces keeping them apart and crush them together, you get helium and a lot of energy—with virtually no pollution or carbon emissions.

It does sound wonderful. The problem is that, to force atoms together and make said reaction happen, you need to achieve celestial temperatures of millions of degrees for sustained periods of time. That’s a difficult benchmark, and it’s one reason that fusion’s holy grail—a reaction that generates more energy than you put into it, also known as breakeven and gain—remains elusive.

The tokamak, in theory, is one way to reach it. The idea is that by carefully sculpting the plasma with powerful electromagnets that line the doughnut’s shell, fusion scientists can keep that superhot reaction going. But tokamaks have been used since the 1950s, and despite continuous optimism, they’ve never been able to mold the plasma the way they need to deliver on their promise.

But there’s another way to create fusion outside of a tokamak, called inertial confinement fusion (ICF). For this, you take a sand-grain-sized pellet of hydrogen, place it inside a special container, blast it with laser beams, and let the resulting shockwaves compress the pellet’s interior into jump-starting fusion. Last year, an ICF reactor in California came closer than anyone’s gotten to that energy milestone. Unfortunately, in the year since, physicists haven’t been able to make the flash happen again.

Stories like this show that if there’s an alternative method, researchers won’t hesitate to jump on it.

The idea of trimming down the tokamak emerged in the 1980s, when theoretical physicists—followed by computer simulations—proposed that a more compact shape could handle the plasma more effectively than a traditional tokamak.

Not long after, groups at the Culham Center for Fusion Energy in the UK and Princeton University in New Jersey began testing the design. “The results were almost instantaneously very good,” says Cowley. That’s not something physicists can say with every new chamber design.

Round fusion reactor with silver lithium sides and a core
A more classic-shaped lithium tokamak at the Plasma Physics Laboratory. US Department of Energy

Despite the name, a spherical tokamak isn’t a true sphere: It’s more like an unshelled peanut. This shape, proponents think, gives it a few key advantages. The smaller size allows the magnets to be placed closer to the plasma, reducing the energy (and cost) needed to actually power them. Plasma also tends to act more stably in a spherical tokamak throughout the reaction.

But there are disadvantages, too. In a standard tokamak, the doughnut hole in the middle of the chamber contains some of those important electromagnets, along with the wiring and components needed to power the magnets up and support them. Downsizing the tokamak reduces that space into something like an apple core, which means the accessories need to be miniaturized to match. “The technology of being able to get everything down the narrow hole in the middle is quite hard work,” says Cowley. “We’ve had some false starts on that.”

On top of the fitting issues, placing those components closer to the celestially hot plasma tends to wear them out more quickly. In the background, researchers are making new components to solve these problems. At Princeton, one group has shrunk those magnets and wrapped them with special wires that don’t have conventional insulation, which would need to be specially treated in an expensive and error-prone process to fit in fusion reactors’ harsh conditions. This development doesn’t solve all of the problems, but it’s an incremental step.

[Related: At NYC’s biggest power plant, a switch to clean energy will help a neighborhood breathe easier]

Others are dreaming of going even further. The world of experimental tokamaks is currently preparing for ITER, a record-capacity test reactor that’s been underway since the 1980s and will finally finish construction in southern France this decade. It will hopefully pave the way for viable fusion power by the 2040s. 

Meanwhile, fusion scientists in Britain are already designing a similarly ambitious machine around the slimmed-down geometry: the Spherical Tokamak for Energy Production, or STEP. The chamber is nowhere near completion—even the most optimistic plans don’t have construction beginning until the mid-2030s or power generation starting until about 2040—but it’s an indication that engineers are taking the spherical tokamak design quite seriously.

“One of the things we always have to keep doing is asking ourselves: ‘If I were to build a reactor today, what would I build?’” says Cowley. Spherical tokamaks, he thinks, are beginning to enter that equation.

The post Nuclear power’s biggest problem could have a small solution appeared first on Popular Science.

See the first video of solitary solid atoms playing with liquid https://www.popsci.com/science/video-solid-in-liquid/ Thu, 04 Aug 2022 12:00:00 +0000 https://www.popsci.com/?p=460136
Platinum atoms and liquid graphene seen in red and purple under a microscope next to a graphic of material particle locations
Left to right: Platinum atoms in liquid graphene under a transmission electron microscope in a colorized image; platinum atom trajectories are shown with a color scale from blue (start) to green, yellow, orange, then red. Clark et al (2022)

To catch "swimming" platinum atoms, materials scientists made a graphene sandwich.

It’s summer, it’s hot, and these atoms are going for a swim.

For the first time ever, materials scientists recorded individual solid atoms moving through a liquid solution. A team of engineers from the National Graphene Institute at the University of Manchester and the University of Cambridge, both in the UK, used a transmission electron microscope to pull off the delicate feat. The technique lets researchers view and take images of minuscule things in extraordinary detail. Typically, however, the subject has to be immobile and held in a high-vacuum system to allow the electrons to scan properly. This limits the microscope’s use at the atomic level.

The engineers got around this by tapping a newer form of the instrument that works on contained liquid and gaseous environments. To set up the experiment, they created a “pool” from a nanometers-thin, double-layer graphene cell. Their “swimmers” were platinum atoms in a salty solution; because the atoms sit on the surface of mineral crystals, they are known as adatoms.

Once it was in the liquid-filled graphene cell, the solid platinum moved quickly. (For context, the looped video below is shown at real speed.) The team tested the same reaction with a vacuum in place of the graphene cell. They saw that the platinum atoms didn’t react as naturally in the traditional setup.

Credit: Adi Gal-Greenwood

After recreating the motion in liquid more than 70,000 times, the team deemed their methods successful. They published their work in the journal Nature on July 27.

The results could make waves for a few different reasons. One, it “paves the way” for transmission electron microscopes to be used widely to study “chemical processes with single-atom precision,” the scientists wrote in the paper. Two, “given the widespread industrial and scientific importance of such behavior [of solids], it is truly surprising how much we still have to learn about the fundamentals of how atoms behave on surfaces in contact with liquids,” materials scientist and co-author Sarah Haigh said in a press release.

[Related: These levitating beads can teach physicists about spinning celestial objects]

A growing number of technologies depend on the interplay between solid particles and liquid cells. Graphene, which was discovered by researchers at the University of Manchester in the early 2000s, is a key component in battery electrodes, computer circuitry, and a new technique for green hydrogen production. Meanwhile, platinum gets made into LCD screens, cathode ray tubes, sensors, and much more. Seeing how these two materials pair together at the nanometer level opens up a more precise, efficient, and inventive world.

The post See the first video of solitary solid atoms playing with liquid appeared first on Popular Science.

What engineers learned about diving injuries by throwing dummies into a pool https://www.popsci.com/science/physics-how-to-dive-safely/ Wed, 27 Jul 2022 21:00:00 +0000 https://www.popsci.com/?p=458564
Diving mannequins enter a pool so researchers can measure what forces affect them.
Two 3D-printed mannequins plunge into a pool. Anupam Pandey, Jisoo Yuk, and Sunghwan Jung

Pointier poses slipped into the water more easily than rounded ones.

The next time you’re about to jump off a diving board to escape the summer heat, consider this: There are denizens of the animal kingdom who can put even the flashiest of Olympic divers to shame. Take, for instance, the gannet. In search of fresh fish to eat, this white seabird can plunge into water at 55 miles per hour. That’s almost double the speed of elite human athletes who leap from 10-meter platforms.

Engineers can now measure for themselves what diving does to the human body, without any actual humans being harmed in the process. They created mannequins, like crash-test dummies, fitted them with force sensors, and dropped them into water. Their results, published in the journal Science Advances on July 27, show just how unnatural it is for a human body to plummet headlong into the drink.

“Humans are not designed to dive into water,” says Sunghwan Jung, a biological engineer at Cornell University, and one of the researchers behind the study.

Jung’s group has spent the past several years studying how various living things crash into water. Initially, they focused on animals: the gannet, for one; the porpoise; and the basilisk lizard, famed for running on the water’s surface before gravity forces its feet under.

Those animals’ bodies likely evolved and adapted to their aquatic environments. They might need to dive under the water to find food or to avoid predators swooping down from above. Humans, who evolved in drier, terrestrial environments, have no such biological need. For us, that tendency makes diving much more dangerous.

“Humans are different,” says Jung. “Humans dive into water for fun—and likely get injured frequently.”

[Related: Swimming is the ultimate brain exercise. Here’s why.]

Jung and his colleagues wanted to measure the force the human body experienced when it crashed into the water surface. To do this, they 3D-printed mannequins and fitted them with sensors. The sensors recorded the force the dummy diver was experiencing and, in turn, how that force changed over the course of a splash.

They measured three different poses, each mimicking one of their diving animals. To emulate the rounded head of a porpoise, a mannequin dropped into water head-first. To emulate the pointed beak of a bird, the second pose had the mannequin’s hands joined in a V-shape beyond its head. And to copy how a lizard falls in, a third pose had the mannequin plunge feet-first.

As the bodies experienced the force of the impact, the researchers found that the rate of change in the force varied depending on the shape. A rounded shape, like a human head, underwent a more brutal jolt than a pointier shape.

From this, they estimated a few heights above which diving in a particular posture would be dangerous. Diving feet-first from above around 50 feet would put you at risk of knee injury, they say. Diving hands-first from above roughly 40 feet could subject you to enough force to hurt your collarbone. And diving from just 27 feet, head-first, might cause spinal cord injuries, the researchers believe.
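
Those thresholds translate directly into impact speeds through the free-fall relation v = √(2gh), ignoring air resistance. A quick sketch using the heights above:

```python
import math

# Impact speed from free-fall height, ignoring air resistance: v = sqrt(2*g*h).
# Heights are the article's estimated injury thresholds.
G_EARTH = 9.81  # m/s^2

def impact_speed_mph(height_ft):
    h = height_ft * 0.3048          # feet -> meters
    v = math.sqrt(2 * G_EARTH * h)  # m/s at the water surface
    return v * 2.23694              # m/s -> mph

for height, risk in [(50, "knees"), (40, "collarbone"), (27, "spine")]:
    print(f"{height} ft -> {impact_speed_mph(height):.0f} mph ({risk})")
```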

You likely won’t encounter diving boards that high at your local pool, but it’s not inconceivable that you’d jump from that high when, say, diving from a cliff.

“The modelling is really solid, and it’s very interesting to look at the different impacts,” said Chet Moritz, who studies how people recover from spinal cord injuries at the University of Washington and wasn’t involved with the paper.

Spinal cord injuries aren’t common, but poolside warnings beg you not to dive into shallow water for very good reason: The trauma can be utterly debilitating. A 2013 study found that most spinal cord injuries were due to falls or vehicle crashes—and diving accounted for 5 percent of them. 

But Moritz points out that the spinal cord injuries he is aware of come from striking the bottom of a pool, rather than the surface that these engineers are scrutinizing. “From my experience, I don’t know of anyone who’s had a spinal cord injury from just hitting the water itself,” he says.

Nonetheless, Jung believes that if people can’t stop diving, then his research may at least make the activity safer. “If you really need to dive, then it’s good to follow these kind of suggestions,” he says. That is to say: Try not to hit the water head-first.

Jung’s group aren’t just doing this research to improve diving safety warnings. They’re trying to make a projectile—one with a pointed front, inspired by a bird’s beak—that can better arc through deep water.

Correction (July 28, 2022): Sunghwan Jung’s last name was previously misspelled. It has been corrected.

The post What engineers learned about diving injuries by throwing dummies into a pool appeared first on Popular Science.

Have we been measuring gravity wrong this whole time? https://www.popsci.com/science/gravitational-constant-measurement/ Mon, 18 Jul 2022 11:45:50 +0000 https://www.popsci.com/?p=456843
Swimmer in a black speedo soaring off a green diving board
What goes up must come down, but how quickly is still a small mystery. Deposit Photos

A Swiss experiment using vibrations and vacuum chambers could help firm up the gravitational constant.

Gravity is everywhere. It’s the force that anchors the Earth in its orbit around the sun, stops trees from growing up forever, and keeps our breakfast cereal in its bowl. It’s also an essential component in our understanding of the universe.

But just how strong is the force? We know that gravity acts the same whether an object is as light as a feather or as heavy as a stone, but otherwise, scientists don’t have a precise answer to that question, despite studying gravity in the cosmos for centuries. 

According to Isaac Newton’s law of universal gravitation, the gravitational force drawing two objects (or particles) together gets stronger the more massive those objects are and the closer they get to each other. For example, the gravity between two feathers that are five inches apart is weaker than that between two apples the same distance from one another. However, the exact calculation of the force relies on a universal variable called the gravitational constant, which is represented by “G” in equations.
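
In code, Newton’s law is a one-liner, F = G·m1·m2/r². The sketch below uses assumed masses (5-gram feathers, 200-gram apples) to make that comparison concrete:

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r^2.
# Masses are assumed for illustration: 5 g feathers, 200 g apples.
G = 6.6743e-11  # m^3 kg^-1 s^-2

def gravitational_force(m1, m2, r):
    return G * m1 * m2 / r ** 2

r = 5 * 0.0254  # five inches in meters
print(gravitational_force(0.005, 0.005, r))  # feathers: ~1e-13 N
print(gravitational_force(0.2, 0.2, r))      # apples:  ~1.7e-10 N
```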

[Related: The standard model of particle physics might be broken]

Physicists don’t know exactly what value to assign to “G.” But a new approach from Switzerland might bring fresh insights on how to test better for gravity in the first place.

“These fundamental constants, they are basically baked into the fabric of the universe,” says Stephan Schlamminger, a physicist in the Physical Measurement Laboratory at the National Institute of Standards and Technology. “Humans can do experiments to find out their value, but we will never know the true value. We can get closer and closer to the truth, the experiments can get better and better, and we approximate the true value in the end.”

Why is “G” so difficult to measure?

Unlike counting, measuring is inherently imprecise, says Schlamminger, who serves as chair of the Working Group on the Newtonian Constant of Gravitation of the International Union of Pure and Applied Physics.

“If you take a tape measure and measure the length of a table, let’s say it falls between two ticks. Now you have to use your eye and figure out where [the number] is,” he says. “Maybe you can use a microscope or something, and the more advanced the measurement technique is, the smaller and smaller your uncertainty will become. But there’s always uncertainty.”

It’s the same challenge with the gravitational constant, Schlamminger says, as researchers will always be measuring the force between two objects in some form of increments, which requires them to include some uncertainty in their results.

On top of that, the gravitational force that can be tested between objects in a lab will always be limited by the size of the facility. So that makes it even trickier to measure a diversity of masses with sophisticated tools.

Finally, there can always be interference in readings, says Jürg Dual, a professor of mechanics and experimental dynamics at ETH Zurich, who has conducted a new experiment to redetermine the gravitational constant. That’s because any object with mass will exert a gravitational pull on everything else with mass in its vicinity, so experimenters need to be able to remove the external influence of Earth’s gravity, their own, and all other presences that hold weight from the test results.

What experiments have physicists tried?

In 1798, Henry Cavendish set the standard for laboratory experiments to measure the gravitational constant using a technique called the torsion balance.

That technique relies on a sort of modified pendulum. A bar with a test mass on each end is suspended from its midpoint on a thin wire hanging down. Because the bar is horizontal to the Earth’s gravitational field, Cavendish was able to remove much of the planetary force from the measurements.

Cavendish used two small lead spheres two inches in diameter as his test masses. Then he added a second set of masses, larger lead balls with a 12-inch diameter, which were hung separately from the test masses but near to each other. These are called the “source” masses. The pull of these larger lead balls causes the wire to twist. From the angle of that twist, Cavendish and his successors have been able to calculate the gravitational force acting between the test and the source masses. And because they know the mass of each object, they are able to calculate “G.” 
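
In idealized form, that extraction of “G” reduces to balancing the wire’s restoring torque against the gravitational torque: κθ = (GMm/r²)·L, so G = κθr²/(MmL). The numbers below are illustrative, Cavendish-scale assumptions rather than his actual data:

```python
# Idealized Cavendish-style extraction of G: the wire's restoring torque
# kappa * theta balances the gravitational torque (G*M*m/r^2) * L, giving
# G = kappa * theta * r^2 / (M * m * L). All inputs are illustrative,
# Cavendish-scale assumptions, not his actual measurements.
def big_g(kappa, theta, r, M, m, L):
    """kappa in N*m/rad, theta in radians, distances in m, masses in kg."""
    return kappa * theta * r ** 2 / (M * m * L)

print(big_g(kappa=7.2e-5, theta=4.0e-3, r=0.22, M=158.0, m=0.73, L=1.8))
# ~6.7e-11 m^3 kg^-1 s^-2
```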

Similar methods have been used by experimenters in the centuries since Cavendish, but they haven’t always found the same value for “G” or the same range of uncertainty, Schlamminger says. And the disagreement in the uncertainty of the calculations is a “big enigma.”

So physicists have continued to devise new methods for measuring “G” that might one day be able to reach a more precise result. 

[Related: From the archives: The Theory of Relativity gains speed]

Just this month, a team from Switzerland, led by Dual, published a new technique in the journal Nature Physics, which may cut out noise from surroundings and produce more accurate results.

The experimental setup included two meter-long beams suspended in vacuum chambers. The researchers caused one beam to vibrate at a particular frequency; due to the gravitational force between the two beams, the other beam would then begin to move as well. Using laser sensors, the team measured the motion of the two beams and then calculated the gravitational constant based on the effect that one had on the other. 

Their initial results yielded a value for “G” that is about 2.2 percent higher than the official value recommended by the Committee on Data for Science and Technology (which is 6.67430 × 10⁻¹¹ m³⋅kg⁻¹⋅s⁻²), and holds a relatively large window of uncertainty.

“Our results are more or less in line with previous experimental determinations of ‘G.’ This means that Newton’s law is also valid for our situation, even though Newton didn’t ever think of a situation like the one we have presented,” Dual says. “In the future, we will be more precise. But right now, it’s a new measurement.”

This is a slow-moving but globally collaborative endeavor, says Schlamminger, who was not involved in the new research. “It’s very rare to get a paper on big ‘G,’” so while their results may not be the most precise measurement of the gravitational constant, “it’s exciting” to have a new approach and another measurement added to one of the universe’s most weighty mathematical constants.

The post Have we been measuring gravity wrong this whole time? appeared first on Popular Science.

Let’s talk about how planes fly https://www.popsci.com/technology/how-do-planes-fly-physics/ Thu, 14 Jul 2022 15:01:00 +0000 https://www.popsci.com/?p=455581
An airplane taking off toward the camera at dusk, with lights along the runway and on the front of the plane, against a cloudy reddish sunset.
Flight isn't magic, it's physics. Josue Isai Ramos Figueroa / Unsplash

How does an aircraft stay in the sky, and how do wings work? Fasten your seatbelts, and let's explore.

How does an airplane stay in the air? Whether you’ve pondered the question while flying or not, it remains a fascinating, complex topic. Here’s a quick look at the physics involved with an airplane’s flight, as well as a glimpse at a misconception surrounding the subject, too. 

First, picture an aircraft—a commercial airliner, such as a Boeing or Airbus transport jet—cruising in steady flight through the sky. That flight involves a delicate balance of opposing forces. “Wings produce lift, and lift counters the weight of the aircraft,” says Holger Babinsky, a professor of aerodynamics at the University of Cambridge. 

“That lift [or upward] force has to be equal to, or greater than, the weight of the airplane—that’s what keeps it in the air,” says William Crossley, the head of the School of Aeronautics and Astronautics at Purdue University. 

Meanwhile, the aircraft’s engines are giving it the thrust it needs to counter the drag it experiences from the friction of the air around it. “As you’re flying forward, you have to have enough thrust to at least equal the drag—it can be higher than the drag if you’re accelerating; it can be lower than the drag if you’re slowing down—but in steady, level flight, the thrust equals drag,” Crossley notes.

Understanding just how the airplane’s wings produce the lift in the first place is a bit more complicated. “The media, in general, are always after a quick and simple explanation,” Babinsky reflects. “I think that’s gotten us into hot water.” One popular explanation, which is wrong, goes like this: Air moving over the curved top of a wing has to travel a longer distance than air moving below it, and because of that, it speeds up to try to keep abreast of the air on the bottom—as if two air particles, one going over the top of the wing and one going under, need to stay magically connected. NASA even has a webpage dedicated to this idea, labeling it as an “Incorrect Theory.” 

So what’s the correct way to think about it? 

Lend a hand

One very simple way to start thinking about the topic is to imagine that you’re riding in the passenger seat of a car. Stick your arm out sideways, into the incoming wind, with your palm down, thumb forward, and hand basically parallel to the ground. (If you do this in real life, please be careful.) Now, angle your hand upwards a little at the front, so that the wind catches the underside of your hand; that process of tilting your hand upwards approximates an important concept with wings called their angle of attack.

“You can clearly feel the lift force,” Babinsky says. In this straightforward scenario, the air is hitting the bottom of your hand, being deflected downwards, and in a Newtonian sense (see law three), your hand is being pushed upwards. 

Follow the curve 

But a wing, of course, is not shaped like your hand, and there are additional factors to consider. Two key points to keep in mind with wings are that the front of the wings, also known as the leading edge, is curved, and overall, they also take on a shape called an airfoil when you look at them in cross-section. 

[Related: How pilots land their planes in powerful crosswinds]

The curved leading edge of a wing is important because airflow tends to “follow a curved surface,” Babinsky says. He says he likes to demonstrate this concept by pointing a hair dryer at the rounded edge of a bucket. The airflow will attach to the bucket’s curved surface, and make a turn, and could even snuff out a candle on the other side that’s blocked by the bucket. Here’s a charming old video that appears to demonstrate the same idea. “Once the flow attaches itself to the curved surface, it likes to stay attached—[although] it will not stay attached forever,” he notes.

With a wing—and picture it angled up somewhat, like your hand out the window of the car—what happens is that the air encounters the rounded leading edge. “On the upper surface, the air will attach itself, and bend round, and actually follow that incidence, that angle of attack, very nicely,” he says. 

Keep things low pressure

Ultimately, what happens is that the air moving over the top of the wing attaches to the curved surface, and turns, or flows downwards somewhat: a low-pressure area forms, and the air also travels faster. Meanwhile, the air is hitting the underside of the wing, like the wind hits your hand as it sticks out the car window, creating a high-pressure area. Voila: the wing has a low-pressure area above it, and higher pressure below. “The difference between those two pressures gives us lift,” Babinsky says.

This video depicts the general process well:

Babinsky notes that more work is being done by that lower pressure area above the wing than the higher pressure one below the wing. You can think of the wing as deflecting the air flow downwards on both the top and bottom. On the lower surface of the wing, the deflection of the flow “is actually smaller than the flow deflection on the upper surface,” he notes. “Most airfoils, a very, very crude rule of thumb would be that two-thirds of the lift is generated there [on the top surface], sometimes even more,” Babinsky says.

Can you bring it all together for me one last time?

Sure! Gloria Yamauchi, an aerospace engineer at NASA’s Ames Research Center, puts it this way. “So we have an airplane, flying through the air; the air approaches the wing; it is turned by the wing at the leading edge,” she says. (By “turned,” she means that it changes direction, like the way a car plowing down the road forces the air to change its direction to go around it.) “The velocity of the air changes as it goes over the wing’s surface, above and below.” 

“The velocity over the top of the wing is, in general, greater than the velocity below the wing,” she continues, “and that means the pressure above the wing is lower than the pressure below the wing, and that difference in pressure generates an upward lifting force.”
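
Engineers roll that whole pressure story into the standard lift equation, L = ½·ρ·v²·S·C_L, where ρ is air density, v is airspeed, S is wing area, and C_L is the lift coefficient. The sketch below plugs in rough, assumed cruise numbers for a generic mid-size airliner, not any particular aircraft:

```python
# Standard lift equation: L = 0.5 * rho * v^2 * S * C_L.
# All numbers are rough, assumed values for a generic airliner in cruise.
def lift_newtons(rho, v, wing_area, c_l):
    return 0.5 * rho * v ** 2 * wing_area * c_l

rho_cruise = 0.38  # kg/m^3, air density near 11 km altitude
v_cruise = 230.0   # m/s, roughly Mach 0.78
area = 125.0       # m^2, wing area (assumed)
c_l = 0.5          # lift coefficient (assumed)

lift = lift_newtons(rho_cruise, v_cruise, area, c_l)
print(f"{lift / 9.81 / 1000:.0f} tonnes supported")  # ~64 tonnes
```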

Is your head constantly spinning with outlandish, mind-burning questions? If you’ve ever wondered what the universe is made of, what would happen if you fell into a black hole, or even why not everyone can touch their toes, then you should be sure to listen and subscribe to Ask Us Anything, a podcast from the editors of Popular Science. Ask Us Anything hits Apple, Anchor, Spotify, and everywhere else you listen to podcasts every Tuesday and Thursday. Each episode takes a deep dive into a single query we know you’ll want to stick around for.

The post Let’s talk about how planes fly appeared first on Popular Science.

The world’s best dark matter detector whiffed on its first try https://www.popsci.com/science/find-dark-matter-detector-results/ Fri, 08 Jul 2022 18:00:00 +0000 https://www.popsci.com/?p=455091
White cylinder with wires, part of the LZ dark matter detector run the US Department of Energy
The LUX-ZEPLIN central detector in the clean room at Sanford Underground Research Facility after assembly, before beginning its journey underground. Matthew Kapust/Sanford Underground Research Facility

The LUX-ZEPLIN is now on the hunt for WIMPs and other hints of dark matter across the universe.

The hunt for dark matter is underway underground. In a first run with LUX-ZEPLIN (LZ) that lasted for more than three months and ended this April, the world’s most sensitive dark matter detector found no signs of the hypothetical dark matter bits called Weakly Interacting Massive Particles (WIMPs) arriving from space. Despite the null result, scientists confirmed the US Department of Energy-led experiment is working as planned, leaving open the possibility of finding dark matter in future rounds.

“For now it’s kind of a weird thing, we’re saying that we’re the best in the world at finding nothing, but the prospect of finding new physics a few years from now is entirely feasible,” Chamkaur Ghag, an astroparticle physicist and professor at University College London and a member of the LZ team, told New Scientist.

Dark matter is thought to make up 27 percent of the universe (the visible matter in stars and galaxies might only comprise 5 percent of it). That said, no one has ever detected it. That’s because dark matter contains particles that do not emit, absorb, or reflect light, making it difficult to even measure with electromagnetic radiation. But physicists and astronomers know dark matter exists because of the gravitational effects it has on visible objects, like keeping stars from slingshotting around space and preventing galaxies from collapsing. It is hypothesized to be the invisible glue holding the universe together.

[Related: Meet the mysterious particle that’s the dark horse in dark matter]

WIMPs have been scientists’ best bet at detecting dark matter. Other hypothesized dark matter candidates, such as dark photons or axions, are very light and behave like waves. WIMPs, by contrast, have substantial mass but rarely interact with visible matter. Billions of WIMPs also pass through us every second. By studying dark matter, experts will have a better understanding of what the true base of the universe is—and what we can expect to happen to it in the future.

The key to unlocking the secrets of the universe is buried a mile below the Black Hills of South Dakota. The LZ experiment is composed of two nested titanium tanks filled with 10 tons of pure liquid xenon. It also contains two arrays of photomultiplier tubes that can detect the faintest light. If dark matter in the form of a WIMP collides with a xenon atom, it will knock electrons loose. The particle collision produces a brief shimmer of luminosity that the LZ experiment picks up.

The experiment sits underground because cosmic radiation and the radiation from human bodies could muffle dark matter signals. So, burying the detector helps increase its sensitivity and chances of finding a sign of dark matter. “You’re trying to hear a whisper. You do it in the middle of New York City, you’re not going to hear it, there’s just too much noise. You want to get away from our backgrounds—the cosmic rays and junk we’re bombarded by would hide the very rare signals we’re looking for,” Kevin Lesko, a senior physicist at the Lawrence Berkeley National Laboratory, who coordinates the LZ project, told Popular Science in 2020.

[Related: What we learned from the Large Hadron Collider on its first day back in business]

While the first round of results did not find dark matter, it did show the machine is working well and performing as expected. “Considering we just turned it on a few months ago and during COVID restrictions, it is impressive we have such significant results already,” said Aaron Manalaysay, a physics coordinator from the Berkeley Lab who led the effort for the experiment’s initial run, in a Berkeley Lab press release.

“We are now out of the starting gate,” said Harry Nelson, a professor of physics at the University of California, Santa Barbara and former spokesperson of LZ in a second press release. “LZ is a far more powerful detector of dark matter than any ever built before, and is uniquely capable of making a discovery in the next few years.”

The post The world’s best dark matter detector whiffed on its first try appeared first on Popular Science.

What we learned from the Large Hadron Collider on its first day back in business https://www.popsci.com/science/large-hadron-collider-restart/ Wed, 06 Jul 2022 17:47:00 +0000 https://www.popsci.com/?p=454723
CERN's LHC resumed work on Tuesday.
The LHC will smash particles together with more energy than ever. Deposit Photos

Three new exotic particles expand the roster of subatomic characters.

After three years of upgrades and maintenance, the world’s largest and most powerful particle accelerator, the Large Hadron Collider (LHC) has fired up for a third run. On Tuesday at 10:47 a.m. EDT, the atom smasher shot beams of protons through a 16.7-mile ring of superconducting magnets in Switzerland. New upgrades will allow the LHC to achieve an increased collision energy of 13.6 trillion electron volts (previous runs were at 8 trillion and 13 trillion electron volts). Physicists project the machine will run for almost four years at this intensity—opening up new insights into the field of particle physics.

“This is a significant increase, paving the way for new discoveries,” said Mike Lamont, director for Accelerators and Technology at the European Organization for Nuclear Research (CERN), in a press release.

One goal of the new LHC era is to better understand the structure of the Higgs boson, a subatomic particle the collider uncovered a decade ago. The Higgs boson, which scientists theorize gives other particles such as electrons and quarks their mass, first appeared about 10⁻¹² seconds (a trillionth of a second) after the big bang that created the universe billions of years ago.

[Related: The souped-up Large Hadron Collider is back to take on its weightiest questions yet]

Scientists at CERN, which runs the LHC, plan to measure how the Higgs boson decays into other matter, such as muons. “This would be an entirely new result in the Higgs boson saga, confirming for the first time that second-generation particles also get mass through the Higgs mechanism,” said CERN theorist Michelangelo Mangano, in a press release.

LHC’s upgrades will also more precisely measure other fundamental features in the universe, such as the origin of matter-antimatter asymmetry (the unsolved mystery of why more matter than antimatter exists). Other areas of interest include searching for dark matter and studying matter under extreme temperatures and density.

To hunt for these rare atomic bits, the LHC contains multiple accelerating structures to augment the energy of its particle beams. The machine uses thousands of magnets that help push the particles closer together, increasing the chance of a collision. Those beams travel almost at the speed of light before they smash together, allowing scientists to study the insides of atoms. 

Through particle collisions, physicists have learned a great deal about the smallest known building blocks of matter. Also on Tuesday, CERN presented evidence of three new exotic particles, a pentaquark and two tetraquarks. The discovery could help inform physicists how quarks—subatomic particles that carry a fractional electrical charge—combine. When combined, quarks create the protons and neutrons, together known as hadrons, in an atomic nucleus.

It may also help explain the creation of exotic hadrons, which are particles composed of more than three quarks. “Finding new kinds of tetraquarks and pentaquarks and measuring their properties will help theorists develop a unified model of exotic hadrons, the exact nature of which is largely unknown,” said Chris Parkes, a spokesperson for the experiment responsible for the discovery, in a separate CERN press release. With the LHC running, scientists may be one step closer to unraveling the secrets of the universe. 

The post What we learned from the Large Hadron Collider on its first day back in business appeared first on Popular Science.

These 4 problem-solvers just won one of math’s biggest prizes https://www.popsci.com/science/fields-medal-winners-mathematicians/ Tue, 05 Jul 2022 16:43:18 +0000 https://www.popsci.com/?p=454503
Ukraine's Maryna Viazovska holding her Fields Medal.
Ukraine's Maryna Viazovska presents her medal after receiving the 2022 Fields Prize for Mathematics in Helsinki. Vesa Moilanen/Lehtikuva/AFP via Getty Images

The Fields Medal is kind of like an Olympic gold in mathematics.

Four mathematicians won the prestigious Fields Medal on Monday for their significant contributions to mathematics. The 14-karat gold award, equivalent to an Olympic gold medal for math, is given out by the International Mathematical Union in Helsinki every four years to talented mathematicians under the age of 40. 

This year’s award recognizes groundbreaking research in subjects such as prime numbers and the packing, or efficiently arranging, of spheres in eight-dimensional space. The winning mathematicians–Hugo Duminil-Copin of France, June Huh of the US, Maryna Viazovska of Switzerland, and James Maynard of the UK–have answered questions that have stumped other experts for years. 

Duminil-Copin, of France’s Institut des Hautes Études Scientifiques, was awarded for his work on the probabilistic theory of phase transitions—how matter changes forms, such as water freezing to ice. Duminil-Copin’s work focuses on how ferromagnetic objects transition from a nonmagnetic to magnetic phase in what’s called the Ising model. Previous physicists have used it to create simplified one- and two-dimensional models of reality, but solving the Ising model in 3D is more difficult. “The ability to produce exact formulas just collapses completely,” Duminil-Copin told The New York Times. “Nobody has any idea how to compute things exactly.” While the 3D Ising model is not completely solved, Duminil-Copin proved that phase transitions in the 3D Ising model resemble those in two dimensions. 

[Related: Your brain uses different neurons to add and subtract]

Princeton University’s Huh won for a range of work: He was the first to apply geometric concepts to combinatorics, the mathematics of counting. Working with his colleagues, he answered previously unsolved problems in combinatorics and provided a scheme to explain the mathematical properties of complex geometric objects. But despite these accomplishments, Huh tried to avoid math as much as he could growing up. “I was pretty good at most subjects except math,” Huh told The New York Times, adding he nearly failed his tests. Huh, who dropped out of school and wanted to become a poet, fell in love with math at 23 after studying under Heisuke Hironaka, a Japanese mathematician, who won the Fields Medal in 1970. 

For 13 years, Viazovska, a Ukrainian-born mathematician and professor at the Swiss Federal Institute of Technology in Lausanne, has worked on an arrangement called the E8 lattice, which shows how to pack spheres in eight dimensions in the least amount of space possible. “Sphere packing is a very natural geometric problem. You have a big box, and you have infinite collection of equal balls, and you’re trying to put as many balls into the box as you can,” Viazovska, who is now trying to densely pack spheres in 24 dimensions, told New Scientist. Her win is the second time the committee has awarded the medal to a woman.
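
For readers who want the number behind the prize: Viazovska's proof pins down the exact fraction of eight-dimensional space that the E8 arrangement fills, a standard figure in the sphere-packing literature:

\[
\Delta_{E_8} = \frac{\pi^4}{384} \approx 0.2537
\]

In other words, spheres centered on the points of the E8 lattice occupy about 25.4 percent of eight-dimensional space, and her proof shows that no other arrangement can do better.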

Mathematicians have known since Euclid that there are infinitely many prime numbers, and that, on average, the gaps between consecutive primes grow as the numbers get larger. The University of Oxford’s James Maynard, in a prize-winning breakthrough, showed that the average hides extremes: there are infinitely many instances where prime numbers come unusually close together. Yet, in other stretches, primes can be very distant.   
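
Both behaviors are easy to see for small numbers. Here is a minimal Python sketch (ours, for illustration; Maynard's proofs concern far subtler statements about primes without bound) that lists the gaps between consecutive primes:

def primes_up_to(n):
    """Sieve of Eratosthenes: return all primes up to n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

primes = primes_up_to(100_000)
gaps = [q - p for p, q in zip(primes, primes[1:])]

# The average gap grows, yet the smallest gaps keep recurring.
print("largest gap below 100,000:", max(gaps))             # 72
print("pairs just 2 apart (twin primes):", gaps.count(2))  # 1224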

John Charles Fields, a Canadian mathematician, proposed the award at the 1924 International Congress of Mathematicians as a way to recognize early-career mathematicians for their accomplishments—and also to highlight researchers whose future work might dazzle, too. “Suddenly to be tossed up on this list,” Maynard told Quanta Magazine, “with these legends of mathematics who inspired me as [a child], is incredible but completely surreal.”

The post These 4 problem-solvers just won one of math’s biggest prizes appeared first on Popular Science.

The green revolution is coming for power-hungry particle accelerators https://www.popsci.com/science/sustainable-particle-accelerators/ Tue, 05 Jul 2022 10:00:00 +0000 https://www.popsci.com/?p=454081
LHC consumes as much energy as a city. Can particle accelerators be greener?
The gentle underground curve of the LHC at CERN, in Geneva. Deposit Photos

Future accelerators may be able to capture spent energy, or use greener cooling gases.

On July 5, marking the end of a three-year hiatus, CERN’s Large Hadron Collider will start collecting data. It will launch beams of high-energy particles in opposite directions around a 16-mile-long loop to create an explosive crash. Scientists will watch the carnage with high-precision detectors and sift through the debris for particles that reveal the inner workings of our universe.

But to do all of that, LHC needs electricity: enough to power a small city. It’s easy for someone outside CERN to wonder just why one physics facility needs all that power. Particle physicists know these demands are extreme, and many of them are trying to make the colliders of the future more efficient. 

“I think there is increased awareness in the community that accelerator facilities need to reduce energy consumption if at all possible,” says Thomas Roser, a physicist formerly at Brookhaven National Laboratory in New York.

Scientists are already drawing up plans for LHC’s proposed successor—the so-called Future Circular Collider (FCC), with a circumference nearly four times as large as LHC’s, quite literally encircling most of the city of Geneva. As they do that, they’re looking at a few, sometimes unexpected, sources of energy use and greenhouse gas emissions—and how to reduce them.

Networked costs

LHC, despite its size and energy demands, isn’t that carbon-intensive to operate. For one, CERN sources its electricity from the French grid, whose portfolio of nuclear power plants makes it one of the least carbon-intensive in the world. Put LHC in a place with a fossil-fuel-heavy grid, and its climate footprint would be very different.

“We’re very lucky…if it was in the US, it would be terrible,” says Véronique Boisvert, a particle physicist at Royal Holloway, University of London.

But the collider’s climate impacts spread far beyond a little sector of Geneva’s suburbs. Facilities like CERN generate heaps of raw data. To process and analyze that data, particle physics relies on a global network of supercomputers, computer clusters, and servers, which are notoriously power-hungry. At least 22 of those computing sites are in the US.

Scientists can plan to build these networks or use computers in places with low-carbon electricity: California, say, over Florida. 

“Maybe we should also think about what’s the carbon emission per CPU cycle and use that as a factor in planning your technology, as much as you do cost or power efficiency,” says Ken Bloom, a particle physicist at the University of Nebraska-Lincoln.

[Related: The biggest particle collider in the world gets back to work]

Even though the accelerator itself is only a small portion of particle physics’ carbon footprint, Boisvert believes that researchers should plan to reduce the facility’s energy consumption. By the time FCC comes online in the 2040s and 2050s, decarbonization will mean competing for power on a grid that also supplies many more electric cars and appliances than exist today. She thinks it’s wise to plan ahead for that time.

The goal of reducing power use is the same, says Boisvert. “You still need to minimize power, but for a different reason.”

Recovering energy

In the name of efficiency and energy conservation, scientists are studying a few technologies that can help make “green accelerators.” 

In 2019, researchers at Cornell University and Brookhaven National Lab unveiled a prototype accelerator called the Cornell-Brookhaven ERL Test Accelerator (CBETA). Remarkably, in demonstrations, CBETA recovered all the energy that scientists put into it.

“We took technology that existed, to some extent, and improved it and broadened its application,” says Georg Hoffstaetter, a physicist at Cornell University.

CBETA launched high-energy electrons through a racetrack-shaped loop that could fit inside a warehouse. With every “lap,” the electrons gained an energy boost. After four laps, the machine could slow down the electrons and store their energy to be used again. CBETA was the first time physicists had recovered energy after that many full laps.

Energy recovery isn’t a new technology, but as particle physicists grow more interested in saving energy, it has found its way into FCC’s plans. “There are options for [FCC] that use energy recovery,” says Hoffstaetter. Particles that aren’t smashed can be recovered.

CBETA also saves energy through its choice of magnets. Most particle accelerators use electromagnets to guide their particles along the arc. Electromagnets get their magnetic strength from electric current running through their coils; turn off the switch, and the magnetic field disappears. By replacing electromagnets with permanent magnets that don’t need electricity, CBETA could cut down on energy use.

“These technologies are kind of catching on,” says Hoffstaetter. “They’re being recognized and they’re being incorporated into new projects to save energy.”

Some of those projects are closer to completion than the FCC. Designers of a new collider at Brookhaven, which will smash electrons and ions together, have built energy recovery into their plans. And at the Jefferson Lab, an accelerator facility in Newport News, Virginia, scientists are building a much larger accelerator that uses permanent magnets.

Recycling the beam isn’t the only way that energy from a particle collider finds new life. Much of the collider’s energy is turned into heat. That warmth can be put to work: CERN has experimented with piping heat to homes in the towns that surround LHC.

Gassy culprits

But focusing on carbon emissions from these facilities misses part of the picture—in fact, the largest part. “That is not the dominant source of emission,” says Boisvert. “The dominant source is the gases we use in our particle detectors.”

To keep an apparatus at the ideal temperatures for detecting particles, the highly sensitive equipment needs to be chilled by gases, similar to those used in some refrigerators. Those gases need to be non-flammable and able to endure high levels of radiation while maintaining refrigerated temperatures.

The gases of choice fall into two categories: hydrofluorocarbons (HFCs) and perfluorocarbons (PFCs). Some of them are greenhouse gases far more potent than carbon dioxide. C2H2F4, CERN’s most common HFC, traps heat about 1,300 times more effectively.
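
To put that multiplier in perspective, the CO2-equivalent arithmetic is simple. A sketch in Python, using the article's figure; the leak size below is an invented example, not a CERN statistic:

# Illustrative CO2-equivalent arithmetic. The ~1,300x multiplier comes
# from the article; the leak size is a made-up example.
GWP = 1_300        # heat trapped relative to the same mass of CO2
leaked_kg = 100    # hypothetical annual leak from one detector

co2e_tonnes = leaked_kg * GWP / 1_000
print(f"{leaked_kg} kg of refrigerant ~ {co2e_tonnes:.0f} tonnes of CO2e")
# 100 kg of refrigerant ~ 130 tonnes of CO2e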

LHC already tries to capture these gases, reuse them, and stop them from spewing out into the atmosphere. Still, its process isn’t perfect. “A lot of these are in parts of these experiments that are just really difficult to access,” says Bloom. “Leaks can develop in there. They’re going to be very hard to fix.”

[Related: Scientists found a fleeting particle from the universe’s first moments]

From the logistician’s point of view, the use of HFCs and PFCs poses a procurement problem. Some jurisdictions, including the European Union, are moving to ban them. Boisvert says this has led to wild fluctuations in price.

“When you’re designing future detectors, you can’t use those gases anymore,” says Boisvert. “All this R&D—‘Okay, what gases are we going to use?’—needs to happen now, essentially.”

There are alternatives. One, actually, is carbon dioxide itself. CERN has retrofitted some of LHC’s detectors to chill themselves with that compound. It isn’t perfect, but it’s an improvement.

These are the sorts of choices that many scientists want to see enter any planning discussion for a future accelerator. 

“Just as monetary costs are a consideration in the planning of any future facility, future experiment, future physics program,” Bloom says, “we can think about climate costs in the same way.”

Correction August 9, 2022: This article has been updated to include Ken Bloom’s university affiliation.

The post The green revolution is coming for power-hungry particle accelerators appeared first on Popular Science.

Earth has more than 10,000 kinds of minerals. This massive new catalog describes them all. https://www.popsci.com/science/earth-minerals-catalog/ Fri, 01 Jul 2022 14:20:49 +0000 https://www.popsci.com/?p=454078
Stacks of white calcite on a black background in a mineral catalog
Calcite is known to form in at least 17 different ways, making it one of the most diverse mineral species (along with pyrite). This other-worldly example appears to be a cave deposit capturing different episodes of crystallization that correlate to changing water levels in southern China, during the ice ages. ARKENSTONE/Rob Lavinsky

If you consider how and where a diamond was formed, you end up with nine different kinds instead of one.

Robert Hazen was attending a Christmas party, one December night in 2006, when a biologist friend and colleague asked a simple question: “Were there any clay minerals in the Hadean?”

The question came from an important place. The Hadean eon is what scientists call the first chapter of Earth’s history—the fiery and mythopoetic time from our planet’s formation until about 4 billion years ago. And clay minerals, often found today in soils around the world, play a key role in some of the many theories of how life began.

But according to Hazen, a mineralogist at the Carnegie Institution for Science in Washington, D.C., it wasn’t a question his field was equipped to study a decade or two ago.

[Related: How minerals and rocks reflect rainbows, glow in the dark, and otherwise blow your mind]

He now hopes that will change, thanks to a new mineral cataloging system that takes into account how—and when—a mineral formed. It’s described in two papers, published today in the journal American Mineralogist. (This research could well be the vanguard for more than 70 other studies.)

“I think this gives us the opportunity to answer almost unlimited questions,” says Shaunna Morrison, a geoscientist at the Carnegie Institution and one of the papers’ authors.

Traditionally, mineralogists classify crystalline compounds by way of their chemical makeup (what atoms are in a mineral?) and their structure (if you zoomed in, how would you see those atoms arranged?).

“The way mineralogists think about their field is: Each mineral is an idealized chemical composition and crystal structure,” says Hazen, also one of the paper authors. “That’s how we define ‘mineral species.’”

Iridescent opalized ammonite on black in a mineral catalog
A beautiful example of opalized ammonite from Alberta, Canada, shows the intersection of biological evolution and mineral evolution—the interplay between minerals and life. A hundred million years ago, the ammonite deposited its own hard carbonate shell — a “biomineral.” In this rare case, that original carbonate shell was later replaced by the fiery mineral opal. ARKENSTONE/Rob Lavinsky

The International Mineralogical Association (IMA), the world congress for the field of study, defines around 5,800 listed species: from pyrite and diamond to hydroxyapophyllite-(K) and ferro-ferri-fluoro-leakeite. It’s a collection that scientists have assembled over centuries.

That schema is great for identifying minerals on their face, but it doesn’t say much about how a geological artifact might have formed. Pyrite, for instance, can be traced back to anything from hot water and volcanoes to meteorites and human-created mine fires. Without that extra bit of knowledge, if you find pyrite, you won’t know the story it’s trying to tell you. Other minerals are borne in the extreme conditions of a lightning strike, or from Earth’s life directly, like in bones or bird poop. There are minerals that arise due to the oxygen that early bacteria pumped into Earth’s ancient atmosphere.

Hazen and Morrison wanted to create a next-level catalog that tied materials to their histories. “What we were really looking to do was bring in context,” says Morrison.

Currently, there are quite a few ways researchers can tell where, when, and how a mineral formed. They might look at trace elements, extra bits of chemical and biological matter that are incorporated into a mineral from its surroundings. They might look at the ratio of different radioactive isotopes in a mineral, which, similar to carbon dating, can tell scientists how far back a mineral goes. They might even consider a mineral’s texture or color; samples that have oxidized, or rusted, for instance, can change appearance.
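
The isotope approach rests on a simple clock. In the textbook version, a parent isotope with half-life t_1/2 decays into a daughter isotope that stays trapped in the crystal, and the age follows from the measured daughter-to-parent ratio D/P:

\[
t = \frac{t_{1/2}}{\ln 2}\,\ln\!\left(1 + \frac{D}{P}\right)
\]

The more daughter atoms that have piled up relative to surviving parent atoms, the older the mineral.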

Orange tourmaline in a white rock base on a black background in a mineral catalog
Tourmaline is the most common mineral with the element boron. It forms gorgeous crystals in mineral-rich granite pegmatites, which host hundreds of exotic mineral species. The International Mineralogical Association recognizes more than 30 “species” of tourmaline, but the new papers acknowledge only a handful of “mineral kinds.” The reason is that the composition of tourmaline is highly variable — ratios of Mg/Fe, F/OH, Al/Fe and many other “chemical substitutions” can lead to individual colorfully zoned crystals that hold as many as seven different species but only one “mineral kind.” ARKENSTONE/Rob Lavinsky

Equipped with data science methods—often used today by biologists to analyze genomes and by sociologists to find groups of people in a social network—Morrison was able to correlate many of those factors and find the formation histories for various minerals. It took her team 15 years to scour thousands of minerals from around the planet and tag them with one of 57 different formation environments, ranging from spaceborne minerals that predate Earth to minerals formed in human mines.
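
The team's actual pipeline is far more elaborate, but the flavor of the approach can be sketched in a few lines of Python. This toy example, with invented attribute values, groups mineral samples by their measured fingerprints using an off-the-shelf clustering algorithm:

# Toy sketch of attribute-based clustering; the numbers are invented
# and only illustrate the kind of grouping described above.
import numpy as np
from sklearn.cluster import KMeans

# Rows: mineral samples. Columns: hypothetical measurements, e.g. a
# trace-element level, an isotope ratio, and an oxidation/color index.
samples = np.array([
    [0.90, 0.10, 0.20],
    [0.85, 0.15, 0.25],
    [0.10, 0.90, 0.80],
    [0.20, 0.85, 0.75],
])

# Samples with similar fingerprints land in the same cluster, hinting
# that they formed in the same kind of environment.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(samples)
print(labels)  # e.g. [0 0 1 1]: two inferred formation environments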

Now, they’ve transformed the IMA’s 5,800 species into more than 10,500 of what Morrison and Hazen call “mineral kinds.” One mineral can have numerous kinds if it formed in several different ways.

Take diamond, for instance. Chemically, it’s one of the simplest minerals, made entirely of carbon atoms arranged in a cube-based structure. But the new catalog lists nine different kinds of it: diamond that was baked and pressed in Earth’s mantle, diamond that precipitated from a meteor strike, diamond from carbon-rich stars before life even existed, and more.

In Morrison and Hazen’s revamped guide, around five-sixths of the IMA’s minerals came in only one or two kinds. But nine minerals actually branched off into 15 kinds. And no mineral in the catalogue has quite as many kinds as pyrite: 21.

Green malachite pillars on a black background in a mineral catalog
Malachite is an example of a mineral that formed after life created atmospheric oxygen about 2.5 billion years ago. They are among hundreds of beautiful blue and green copper minerals that form near Earth’s surface as ore deposits weather. ARKENSTONE/Rob Lavinsky

In creating this schema, Hazen and Morrison (both of whom are also on the Curiosity Mars rover team) are looking far beyond Earth. If you find a mineral on another world and you know how it formed, you can quickly figure out what sort of environment that planet held in ancient times. For instance, if your mineral is amongst the 80 percent of kinds that originated in contact with water, then you might have evidence of a long-dead ocean. 

And if your mineral is amongst the one-third of mineral kinds that emerged from biological processes, it could be a hint of long-disappeared extraterrestrial life.

[Related: Why sustainable diamonds are almost mythical]

“A new way of seeing minerals appears,” said Patrick Cordier, a mineralogist at the University of Lille in France, in a statement. “Minerals become witnesses, markers of the long history of matter.”

“You can hold a mineral that’s hundreds of millions or billions of years [old]. You can hold a meteorite that’s 4.567 billion years old,” says Hazen. “There’s no other tangible evidence of the earliest history of our solar system.”

The post Earth has more than 10,000 kinds of minerals. This massive new catalog describes them all. appeared first on Popular Science.

This ghostly particle may be why dark matter keeps eluding us https://www.popsci.com/science/dark-matter-particle-experiment/ Mon, 27 Jun 2022 22:43:48 +0000 https://www.popsci.com/?p=452753
Photodetectors in a circular array in a large neutrino detector experiment at Fermilab
Different detectors can be built to measure and study different kinds of neutrinos. The MiniBooNE experiment at Fermilab, for example, is specifically used for muon neutrinos. Reidar Hahn/Fermilab

Physicists in Russia think they’re on the trail of a new particle that's everywhere, but nowhere.

Every second, 100 trillion phantasmic little particles called neutrinos pass through your body. Nearly all of them shoot through your skin without interacting at all. Their shyness makes detecting these particles a painstaking task for physicists.

But over the past few decades, the world of neutrino physics has been taking on a new challenge. 

From an experiment conducted deep under the Caucasus Mountains in Russia, physicists have found further evidence—published on June 9 in two papers—that a piece of the current theory of neutrinos is out of place. If they’re right, it could unveil a never-before-seen type of neutrino that could fly even more under the radar—and could explain why we can’t see the dark matter that makes up much of our universe.

“It’s probably, in my mind, one of the most important results in neutrino physics, at least in the last five years,” says Ben Jones, a neutrino physicist at the University of Texas at Arlington, who was not involved with the experiment.

Department of Energy physicists handling chromium disks for a neutrino detector
In the Los Alamos experiment, a set of 26 irradiated disks of chromium 51 provides the source of electron neutrinos that react with gallium and produce germanium 71 at rates which can be measured against predicted rates. A.A. Shikhin

The case of the misbehaving neutrinos

Like creatures from an ethereal plane, neutrinos react with their material surroundings sparingly. With zero electric charge, they aren’t susceptible to electromagnetism. Nor do they get involved in the strong nuclear interaction, which helps bind particles together in the hearts of atoms. 

But neutrinos do play a part in the weak nuclear force, which—according to the Standard Model, the theoretical framework that forms the foundation of modern particle physics—is responsible for certain types of radioactivity.

The vast majority of neutrinos we observe on Earth are born from radioactive processes in the sun. To watch those, scientists rely on neutrino observatories under the sea or buried deep beneath the planet’s crust. It’s not often easy to tell if neutrino detectors are working properly, so physicists can calibrate their equipment by placing certain isotopes—like chromium-51, whose neutrino emissions they know well—nearby.

As neutrino physics gained momentum in the 1990s, however, researchers noticed something odd. In some experiments, when they calibrated their detectors, they began finding fewer neutrinos than accounted for in theoretical particle physics.

For instance, in 1997 at Los Alamos National Lab in New Mexico, scientists from the US and Russia set up a tank filled with gallium, a metal that’s liquid on a warm summer day. As neutrinos struck the gallium, some of the element’s atoms absorbed the particles. That process transformed gallium into a solid metal, germanium—a sort of reversed radioactive decay. Physicists measured that germanium to trace how many neutrinos had passed through the tank.
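
The reaction at work is a neutrino capture on gallium-71 (standard nuclear physics, not specific to these experiments):

\[
\nu_e + {}^{71}\mathrm{Ga} \rightarrow {}^{71}\mathrm{Ge} + e^-
\]

Each electron neutrino that interacts converts one gallium-71 atom into germanium-71, so counting the germanium atoms amounts to counting the captured neutrinos.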

But when the Los Alamos team tested their system with chromium-51, they found too little germanium—and too few neutrinos, in other words. This deficit became known as the “gallium anomaly.”

[Related: Why Los Alamos lab is working on the tricky task of creating new plutonium cores]

Since then, experts poring over the gallium anomaly have explored a tentative explanation. Particle physicists know that neutrinos come in three “flavors”: electron neutrinos, muon neutrinos, and tau neutrinos, each playing different roles in the dance of the quantum world. Under certain circumstances, it’s possible to observe neutrinos switching between flavors. Those shifts are called “neutrino oscillations.” 
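
In the simplest two-flavor textbook picture, the odds of catching a neutrino mid-switch depend on the distance it has traveled, L, and its energy, E:

\[
P_{\alpha \to \beta} = \sin^2(2\theta)\,\sin^2\!\left(\frac{1.27\,\Delta m^2\,[\mathrm{eV}^2]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right)
\]

where θ is a mixing angle and Δm² is the difference between the squared masses of the two states involved. Counting too few neutrinos of one flavor at a given L and E is how physicists infer that a switch has happened.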

That led to an interesting possibility—that neutrinos were missing in the gallium anomaly because they were jumping into another hidden flavor, one that’s even less reactive to the physical world. The physicists came up with a name for the category: sterile neutrinos.

The sterile neutrino story was just an idea, but it found support. Around the same time, physicists at places like Los Alamos and Fermilab in suburban Chicago had started to observe neutrino oscillations directly. When they did, they found discrepancies between the number of neutrinos of each flavor they expected to appear and how many actually appeared.

“Either some of the experiments are wrong,” says Jones, “or something more interesting and strange is going on that has a different signature.”

Sterile neutrino detector machinery in a large underground room in Russia
The main setup of the Baksan Experiment on Sterile Transitions. V.N. Gavrin/BEST

Searching for sterile signatures

So what would that sterile neutrino look like? The name “sterile,” and the fact that physicists haven’t detected them through the normal channels, indicate that this class of particles abstains from the weak nuclear force, too. That leaves just one way they can interact with their environment: gravity. 

At the subatomic scales that neutrinos call home, compounded by their puny masses, gravity is extremely weak. Sterile neutrinos would be extraordinarily hard to detect.

That held true well into the 21st century, as the anomalies were too inconsistent for physicists to tell if they amounted to sterile neutrinos. Some experiments found anomalies; others simply didn’t. The sum of experiments seemed to paint a mural of circumstantial evidence. “I think that’s how a lot of people viewed it,” says Jones. “That’s how I viewed it.”

So, physicists created a whole new observatory to test the Los Alamos gallium anomaly. They named it the Baksan Experiment on Sterile Transitions, or, in the proud physics tradition of strained acronyms, BEST.

The observatory sits in a tunnel buried more than a mile beneath a mountain in the Baksan River valley, in the Russian republic of Kabardino-Balkaria, across the mountains from the country of Georgia. There, before Russia’s invasion of Ukraine threw the local scientific community into chaos, an international team of particle physicists recreated the Los Alamos gallium experiment, specifically looking for missing neutrinos.

BEST found the anomaly again by detecting 20 to 25 percent less germanium than expected. “This definitely reaffirms the anomaly we’ve seen in previous experiments,” Steve Elliott, a particle physicist at Los Alamos National Laboratory and a collaborator on the BEST experiment, said in a statement in early June. “But what this means is not obvious.”

Despite the satisfying result, physicists aren’t getting ahead of themselves. BEST is only one experiment, and it doesn’t explain every discrepancy that’s ever been ascribed to sterile neutrinos. (Other analyses have argued that the Fermilab result couldn’t have been the signs of sterile neutrinos, though they didn’t offer an alternative explanation.)

[Related: Meet the mysterious particle that’s the dark horse in dark matter]

But if scientists were to find similar evidence in other scenarios—for instance, in the neutrino experiment IceCube, buried under the Antarctic sheets, or in other detectors purposely planned for the sterile neutrino hunt—that would serve up real, compelling evidence that something is out there.

If the BEST result holds—and is confirmed by other experiments—it still doesn’t mean sterile neutrinos are responsible for the anomaly. Other undiscovered particles may be in play, or the whole discrepancy could be the fingerprint of some strange and unknown process. If the sterile neutrino idea is true, however, it would breach the biggest theory behind some of the world’s smallest objects.

“It would be real evidence, not only of physics beyond the Standard Model, but of truly new and not-understood physics,” says Jones.

Simply put, if sterile neutrinos exist, the implications would reach far past particle physics. Sterile neutrinos might make up much of our universe’s dark matter, which outweighs the matter we can see six times over—and whose composition we still don’t understand.

The post This ghostly particle may be why dark matter keeps eluding us appeared first on Popular Science.

‘Rogue black holes’ might be neither ‘rogue’ nor ‘black holes’ https://www.popsci.com/science/what-are-rogue-black-holes/ Sun, 26 Jun 2022 17:00:00 +0000 https://www.popsci.com/?p=452465
Rogue black hole in Milky Way galaxy in an artist's rendition
Hubble data from the Milky Way and other galaxies is helping astronomers get to the bottom of an otherwise invisible mystery. NASA/ESA and G. Bacon (STScI)

Millions of invisible black holes float freely around our galaxy. Now astronomers think they can spot them.

When a star 20 times as massive as our sun dies, it can explode in a supernova and squeeze back down into a dense black hole (with gravity’s help). But that explosion is never perfectly symmetrical, so sometimes the resulting black hole goes hurtling off into space. These wandering objects are often called “rogue black holes” because they float around freely, untethered by other celestial bodies. 

But that name might be a “misnomer,” according to Jessica Lu, associate professor of astronomy at the University of California Berkeley. She prefers the term “free-floating” to describe these black holes. “Rogue,” she says, implies that the nomads are rare or unusual—or up to no good.

That’s certainly not the case. Astronomers estimate that there are as many as 100 million such black holes that roam around our galaxy. But because they’re solitary, they’re extremely difficult to find. Until recently, these so-called rogue black holes were only known through theory and calculations. 

“They are ghosts, so to speak,” says Lu, who has made it her mission to find the Milky Way’s free-floating black holes. 

[Related: We’re still in the dark about a key black hole paradox]

Earlier this year, two teams of space researchers separately revealed detections of what just might be one of these roaming black holes. One of those teams was led by Casey Lam, a graduate student in Lu’s lab. The other was led by Kailash C. Sahu, an astronomer at the Space Telescope Science Institute. Both teams posted their papers on an open-access preprint site ahead of formal peer review.

The scientists will get more data from the Hubble Space Telescope in October that Lu says should help “resolve the mystery of whether this is a black hole or a neutron star.” “There’s still a lot of uncertainty about how stars die and the ghost remnants that they leave behind,” she notes. When stars much more massive than our sun run out of nuclear fuel, they’re thought to collapse into either a black hole or a neutron star. “But we don’t know exactly which ones die and turn into neutron stars or die and turn into black holes,” adds Lu. “We don’t know when a black hole is born and a star dies, is there a violent supernova explosion? Or does it directly collapse into a black hole and maybe just give a little burp?” 

With star stuff making up everything we know in the world, understanding the afterlife of stars is key to understanding how we, ourselves, came to be.

How to spot a black hole on the loose

Black holes are inherently invisible. They trap all light that they encounter, leaving nothing for the human eye to perceive. So astronomers have to get creative to detect these dense, dark objects. 

Typically, they look for anomalies in gas, dust, stars, and other material that might be caused by the intensely strong gravity of a black hole. If a black hole is tearing material away from another celestial body, the resulting disk of debris that surrounds the black hole can be brightly visible. (That’s how astronomers took the first direct image of one in 2019 and an image of the black hole at the center of the Milky Way earlier this year.)

But if a black hole is not inflicting chaos with its gravitational force, there’s hardly anything to detect. That’s often the case with these moving black holes. So astronomers like Lu use another technique called astrometric or gravitational microlensing.

“What we do is we wait for the chance alignment of one of these free-floating black holes and a background star,” Lu explains. “When the two align, the light from the background star is warped by the gravity of the black hole [in front of it]. It shows up as a brightening of the star [in the astronomical data]. It also makes it take a little jaunt in the sky, a little wobble, so to speak.”

The background star doesn’t actually move—rather, it appears to shift off its course when the black hole or another compact object passes in front of it. That’s because, per Albert Einstein’s general theory of relativity, the gravity of the black hole warps the fabric of spacetime, bending the path of the starlight.
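
The size of the effect is set by what astronomers call the Einstein radius. In the standard formula, for a lens of mass M at distance d_L bending light from a star at distance d_S:

\[
\theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{d_S - d_L}{d_L\,d_S}}
\]

The heavier the lens, the larger the angular deflection, which is why tracking a star's apparent position lets astronomers weigh an object they cannot see.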

Astronomers use microlensing to study all kinds of temporary phenomena in the universe, from supernovae to exoplanets transiting around their stars. But it’s tricky to do with ground-based telescopes, as the Earth’s atmosphere can blur the images. 

“In astrometry, you’re trying to measure the position of something very precisely, and you need very sharp images,” Lu explains. So astronomers rely on telescopes in space, like Hubble, and a couple of ground-based instruments with sophisticated adaptive-optics systems that correct for the atmospheric interference. “There are really only three facilities in the world that can make this astrometric measurement,” Lu says. “We’re working right at the cutting edge of what our technology can do today.”

The first rogue black hole? 

It was that brightening, or a “gravitational lensing event” as Lu calls it, that both her team and Sahu’s spotted in data from the Hubble Space Telescope in 2011. Something, they surmised, must be passing in front of that star.

Figuring out what caused the wobble and change of intensity in a star’s light requires two measurements: brightness and position. Astronomers observe that same spot in the sky over time to see how the light changes as the object passes in front of the star. This gives them the data they need to calculate the mass of that object, which in turn determines whether it’s a black hole or a neutron star. 

“We know the thing that’s doing the lensing is heavy. We know it’s heavier than your typical star. And we know that it’s dark,” Lu notes. “But we’re still a little uncertain about exactly how heavy and exactly how dark.” If it’s only a little bit heavy, say, one and a half times the mass of our sun, it might actually be a neutron star. But if it’s three to 10 times as massive as our sun, then it would be a black hole, Lu explains.

As the two teams gathered data from 2011 to 2017, their analyses revealed distinctly different masses for that compact object. Sahu’s team determined that the roaming object has a mass seven times that of our sun, which would put it squarely in black hole territory. But Lam and Lu’s team calculated it to be less massive, somewhere between 1.6 and 4.4 solar masses, which spans both possibilities. 

[Related: Black holes can gobble up neutron stars whole]

The astronomers can’t be sure which calculation is correct until they get a chance to know just how bright the background star is normally and its position in the sky when something isn’t passing in front of it. They weren’t focused on that star before noticing its uncharacteristic brightness and wobble, so they’re just now getting the chance to make those baseline observations as the lensing effect has faded, Lu explains. Those observations will come from new Hubble data in the fall.

What they do know is that the object in question is in the Carina-Sagittarius spiral arm of the Milky Way galaxy, and is currently about 5,000 light-years away from Earth. This detection also suggests that the nearest roaming black hole to Earth could be less than 100 light-years away, Lu says. But that’s no reason for concern.

“Black holes are a drain. If you get close enough, they will consume you,” Lu points out. “But you have to get very close, much closer than I think we typically picture.” The boundary around a black hole marking the line where light can still escape its gravity, called the event horizon, typically has a radius of under 20 miles.
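
That city-scale figure comes straight from the Schwarzschild radius, r_s = 2GM/c^2. A quick sanity check in Python (our arithmetic, plugging in the mass range the two teams reported):

# Schwarzschild radius r_s = 2GM/c^2 for the reported mass range.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
MILE = 1609.34     # meters per mile

def schwarzschild_radius_miles(solar_masses):
    return 2 * G * solar_masses * M_SUN / c**2 / MILE

for m in (1.6, 4.4, 7.0):  # bounds from the Lam/Lu and Sahu estimates
    print(f"{m} solar masses -> {schwarzschild_radius_miles(m):.1f} miles")
# 1.6 -> ~2.9 miles, 4.4 -> ~8.1 miles, 7.0 -> ~12.8 miles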

The odds that a roaming black hole could pass through our celestial neighborhood and disrupt life on Earth are “astronomically small,” Lu says. “That’s the size of a city. So a black hole could pass by the solar system and we’d hardly notice.”

But she’s not ruling it out. “I’m a scientist,” she says. “I can’t say no chance.”

Regardless of whether the first teams detected a roaming black hole or a neutron star, Lu says, “the real revolution that these two papers are showing is that we can now find these black holes using a combination of brightness and position measurements.” This opens the door to discoveries of more light-capturing nomads, especially as new telescopes come online, including the Vera C. Rubin Observatory currently under construction in Chile and the Nancy Grace Roman Space Telescope scheduled to launch later this decade.

The way Lu sees it, “the next chapter of black hole studies in our galaxy has already begun.”

The post ‘Rogue black holes’ might be neither ‘rogue’ nor ‘black holes’ appeared first on Popular Science.

Popping a champagne cork creates supersonic shockwaves https://www.popsci.com/science/pop-champagne-physics/ Fri, 10 Jun 2022 14:30:00 +0000 https://www.popsci.com/?p=449473
Gold and green champagne bottle with cork popped and bubbles rushing out on a black background
Every time you pop a champagne bottle, you're launching a small, but powerful weapon. Deposit Photos

How fluid dynamics explain bubbly ballistics.

Since Europeans started drinking it during the Renaissance, champagne (or sparkling wine, if it’s not made in the Champagne region of northern France) has always come with a pop. The very first bottles were probably accidents: wine that had bubbled up after fermenting for too long. Some built up so much pressure, they’d explode.

But once champagne was tamed by Dom Pérignon and other wine connoisseurs, it became all about the bubbles. The drink really took off during the Roaring Twenties, when the wealthy stealthily filled their coupe glasses from gilded bottles of Ayala and Perrier. Today, people are back to drinking from flutes, which better show off the dance of the carbon dioxide bubbles.

Of course, part of the joy of imbibing champagne lies in the uncorking itself. The tension of teasing apart the wire cage. The pop and the fountain of fizzy booze that follows. The relieved laughter (even though you’ve done this hundreds of times now). All that drama comes down to a millisecond-long burst of supersonic flow. 

In a study published last month in the journal Physics of Fluids, engineers from France and India modeled the shockwaves of gas that follow the pop. Researchers had previously used high-speed cameras to understand how fast the jet of carbon dioxide and liquid moves once a bottle is uncorked. But this group dug a little deeper to break down the champagne’s “interaction with the cork stopper, the eminently unsteady character of the flow escaping from the bottle, and the continuous change of the geometry” of the matter, as they wrote in the paper.

What they learned is that the ballistics of bubbly are powerful—and maybe even dangerous. When the cork of a champagne bottle is wiggled out slowly, the gas seeps out without forming a strong flow pattern. But in that flash of a second when the cork is yanked free, the flow bursts through, hitting supersonic speeds at the top of the bottleneck. As the gas rushes out and the pressure releases, it dissipates in crown-shaped shockwaves (or Mach diamonds), similar to the ones that come off rockets during launch. The final shockwaves look more muted and detached, and are likely the ripple effects of the CO2 and water vapor’s interactions with the cork.
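
The supersonic burst squares with textbook compressible-flow theory (a simplified criterion on our part, not the authors' full simulation). Gas escaping through a narrow opening reaches the speed of sound once the pressure ratio across the opening exceeds a critical value set by γ, the gas's heat-capacity ratio:

\[
\frac{p_{\mathrm{bottle}}}{p_{\mathrm{atm}}} > \left(\frac{\gamma + 1}{2}\right)^{\gamma/(\gamma - 1)} \approx 1.8 \qquad (\gamma \approx 1.3\ \text{for CO}_2)
\]

A champagne bottle holds several times atmospheric pressure, far beyond that threshold, so the escaping gas keeps accelerating past Mach 1 just outside the neck, which is what sets up the shock structures.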

The findings could prove useful to the development of electronics, submersibles, and even military-grade weapons. “We hope our simulations will offer some interesting leads to researchers, and they might consider the typical bottle of champagne as a mini-laboratory,” study co-author Robert Georges from the Institut de Physique de Rennes told phys.org. Next, he said, his team might explore how different temperatures and bottle shapes affect the shockwaves.

No word yet on how sabering changes this supersonic sequence. But please remember to always aim the champagne bottle away from yourself and others: That’s basically a miniature missile you’re setting off.

The post Popping a champagne cork creates supersonic shockwaves appeared first on Popular Science.

Volcanic eruptions are unpredictable, but these geologists cracked the code https://www.popsci.com/science/volcanic-eruption-forecast/ Fri, 03 Jun 2022 18:33:48 +0000 https://www.popsci.com/?p=447942
Sierra Negra volcano erupting in the Galapagos Islands in 2018
The Sierra Negra volcano on Ecuador's Isabela Island last erupted on June 26, 2018. The 1,124-meter-high crater is one of the largest on the planet. Xavier Garcia/picture alliance via Getty Images

If you thought weather forecasting was tough, try taking on magma.

On June 26, 2018, the earth rumbled under the giant sleeping tortoises on Isabela Island in the Galápagos. Not long afterwards, Sierra Negra, a volcano that towers over the island, began to erupt. Over the next two months, the volcano’s fissures spewed out enough lava to cover an area of roughly 19 square miles.

It was hardly Sierra Negra’s first eruption: It’s blown at least seven other times in the past century alone. But what made the 2018 phenomenon special is that geologists had forecasted the eruption’s date as early as January. In fact, they almost got it down to the exact day.

It was a fortunate forecast, to be sure. Now, in a paper published today in the journal Science Advances, the researchers have figured out why their estimates hit the mark—and how they can make their simulations get it right again. Sierra Negra is just one volcano in a sparsely inhabited archipelago, but when hundreds of millions of people around the world reside in volcanic danger zones, translating these forecasts to other craters could save untold numbers of lives.

[Related: A 1930s adventure inside an active volcano]

“There is still a lot of work to be done, but … volcano forecasting may become a reality in the coming decades,” says Patricia Gregg, a geologist at the University of Illinois Urbana-Champaign and one of the paper’s authors.

Forecasting eruptions is like forecasting the weather. With so many variables and moving parts, the picture gets blurrier the further into the future you try to project. You might trust a forecast for tomorrow, but you might not be so eager to trust a forecast for a week away.

That made Gregg and her colleagues’ Sierra Negra forecast—five months prior to the eruption—all the more fortunate. Although the volcano had begun grumbling by then, with spikes of seismic activity, the forecasters themselves agree it was a gamble.

“It was always just meant to be a test,” says Gregg. “We did not put much faith in our forecast being accurate.”

But Sierra Negra is an ideal laboratory for fine-tuning volcanic forecasts. Because it erupts once every 15 or 20 years, it gets a lot of scrutiny, with scientists from both Ecuador and around the world continually monitoring it. By 2017, their instruments were picking up renewed rumblings indicating a future eruption.

[Related: How to study a volcano when it destroys your lab]

Experts know that volcanoes like Sierra Negra blow their top when magma builds up in the reservoir below. As more magma strains against the surrounding rock, it puts the earth under ever-mounting pressure. Eventually, something has to give. The rocks break, and magma begins to burst through. If geologists could understand exactly how the rocks crumble, they could forecast when that breaking point was likely to occur.

Gregg and colleagues relied on methods familiar to weather or climate forecasters: They combined observational data of the volcano’s ground activity with predictions from simulations. They then used satellite radar images of the ground surface at Sierra Negra to watch what the bloating magma reservoir beneath was doing, and ran models on supercomputers to project what would happen next.

Based on how the magma was inflating by January 2018, their forecasts highlighted a likely eruption between June 25 and July 5. The levels kept rising at the same rate over the next few months—and the eruption began on June 26, right on schedule. 
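
One way to picture the logic of such a forecast is as an ensemble: run many simple simulations of pressure building toward a breaking point, each with slightly different assumptions, and report the window in which most of them fail. The Python toy below is our sketch of that idea with invented numbers, not the team's actual model:

# Toy ensemble "eruption window": many simulated pressure histories,
# each with a slightly different inflation rate and rock strength.
# All numbers are invented; this only illustrates the logic.
import random

random.seed(1)
N = 1000
failure_days = []
for _ in range(N):
    rate = random.gauss(1.00, 0.05)       # pressure added per day
    strength = random.gauss(170.0, 10.0)  # pressure the rock can take
    failure_days.append(strength / rate)  # day the rock gives way

failure_days.sort()
lo, hi = failure_days[N // 4], failure_days[3 * N // 4]
print(f"Half the ensemble erupts between day {lo:.0f} and day {hi:.0f}")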

“If anything had changed during those months, our forecast would not have worked,” says Gregg.

“The very tight coincidence of the author’s forecast with the eruption onset must involve some good fortune, but that in itself tells us something,” says Andrew Bell, a geologist at the University of Edinburgh, who has studied Sierra Negra but wasn’t an author on the paper.

Colorful rocks on the Sierra Negra volcano in the Galapagos
The Sierra Negra volcano seemingly at rest. Deposit Photos

So, in the years afterwards, Gregg and her colleagues combed back over their calculations to determine what they’d gotten right—and what that “something” might be. They ran more simulations using data from the actual eruption to see how close they could get to reality.

What they found was that the magma buildup remained relatively constant over the first part of 2018. By late June, the reservoir had placed enough pressure against the volcano’s underside to trigger a moderately strong earthquake. That seemed to have been the final straw, cracking the rock and letting the magma flow through.

This practice of simulating historic phenomena to check the accuracy of forecast models is sometimes called “hindcasting” in meteorology. In addition to Sierra Negra, Gregg and her colleagues have examined old eruptions from Sumatra, Alaska, and underwater off the coast of Cascadia. 

[Related: Tonga’s historic volcanic eruption could help predict when tsunamis strike land]

But is it possible to use the same forecasting techniques in different areas of the world? Every volcano is unique, which means geologists need to adjust their models. By doing so, however, the authors behind the Sierra Negra study found some commonalities in how ground motions translate into the chance of an eruption.

Better forecasting models also mean that scientists learn more about the physical processes that cause volcanoes to rumble to life as they try to match simulations to real-world conditions. “Making genuine quantitative forecasts ahead of the event happening is a challenging thing to do,” Bell says, “but it’s important to try.”

The post Volcanic eruptions are unpredictable, but these geologists cracked the code appeared first on Popular Science.

Physicist Ann Nelson broke barriers—for herself and for those who’d come after her https://www.popsci.com/science/ann-nelson-profile/ Mon, 30 May 2022 15:00:00 +0000 https://www.popsci.com/?p=446667
ann nelson
Even before her Ph.D., Nelson addressed some of physics's biggest puzzles. Esther Goh

As a theoretical thinker, Nelson was almost unparalleled in her lifetime—and her legacy continues to push the field.

The annals of science journalism weren’t always as inclusive as they could have been. So PopSci is working to correct the record with In Hindsight, a series profiling some of the figures whose contributions we missed. Read their stories and explore the rest of our 150th anniversary coverage here

When Ann Nelson died in a tragic hiking accident in 2019, the loss went far beyond the pain felt by her many mentees in physics, myself among them. Her passing struck a blow to the entire theoretical physics community, which lost, in her, a phenomenal thinker. Nelson spent her career on the edge of particle physics, imagining what the universe might be like—and never worrying whether her questions were too weird. When I eulogized her for Quanta, I wrote about how she described her approach to new ideas: put the purple scarf on the moose and worry later about why it’s there. Nelson also recognized that her achievements provided her with social capital, and she used that clout to make physics a space where we all had a home. 

Born in Louisiana in 1958, Nelson spent the bulk of her childhood in California. There she determined to become a physicist. Though there were people who discouraged her, including high school teachers and classmates who could not fathom the idea of a woman physicist, it did not impede her because she knew the field was her intellectual home. She followed this interest to Stanford University and then on to Harvard, where she earned her Ph.D. and was made a Junior Fellow of Harvard’s Society of Fellows—a prestigious lifetime membership that also grants newly minted scholars three years of support for independent research. Nelson returned to Stanford in 1987, where she became only the second woman in the physics department’s history on a path towards tenure. She was following in the footsteps of Nina Byers, who left for UCLA after a short time on the Stanford faculty in the 1960s.

She once told me that people at Stanford commented too much about this particular superlative. But despite the university community’s apparent willingness to boast about including Nelson, it didn’t do what it needed to keep her, and she eventually went to the University of California, San Diego so she could live in the same city as her husband and frequent collaborator, fellow theoretical physicist David Kaplan. From there, they moved to the University of Washington, where they built a life and an astonishing legacy.

Over the decades, Nelson made major contributions to particle physics on a range of questions. Put broadly, her body of work addressed gaps and inconsistencies in the standard model—the math that elegantly describes everything we have seen in a laboratory or collider (except for gravity). The model, however, contains a prediction about neutrons that does not match what particle physicists have seen in experiments; as a Ph.D. candidate at Harvard in 1984, Nelson published a mathematical solution to this tension—a singular achievement for a scholar who had not yet completed her degree.
 
Part of what makes that first publication notable is that she had no ego about it. She wasn’t attached to ideas simply because they were her own, but instead focused on whether they were interesting and plausible. Because of that, she’d go on to publish another alternative answer to that same issue, which is known as the strong CP problem. And she’d publish a solution to yet another of the field’s biggest puzzles: how supersymmetry—the notion that there should be a set of new particles that are partners to the known particles, with consequences for astrophysics and cosmology, including, potentially, the dark matter problem—might work in our universe.

In 2018, she and theoretical physicist Michael Dine shared the J. J. Sakurai Prize for Theoretical Physics, one of the most prestigious awards in the field. The prize citation notes the pair’s “groundbreaking explorations of physics beyond the standard model of particle physics.” Nelson’s Ph.D. advisor, Howard Georgi, himself a prominent theoretical physicist and Harvard Society Fellow, understood her singular intellect, writing in his Physics Today remembrance: “Ann was the only student I ever had who was better than I am at what I do best, and I learned more from her than she learned from me.”

Nelson loved physics, but she did not simply bask in her own academic success. She was deeply aware of social injustice, and in her final years was the sole physics professor at UW to wear a Black Lives Matter pin to work every day. 

This was not a performative act, but one part of a multi-pronged strategy to create better conditions for Black people and other marginalized folks who shared her love for physics. Crucially, Nelson understood that her specific intellectual prowess had gotten her through a door that stayed shut for most. Many of her Ph.D. candidates and postdocs were women of a range of racial and national identities, as well as men of color. In my experience, though she sometimes made mistakes, she worked to create opportunities for us to have the same kind of freedom that she had. Not only did she hire marginalized scholars for her own group, but as chair of UW’s graduate committee, she fought to make institutional changes as well.

In the wake of her death, the UW physics department went on a hiring spree in particle theory that included multiple women. None of them could ever replace her, but I feel confident she would agree that that isn’t the point: Every woman in physics is her own person who, if given a room of her own, might just radically change what we know about the universe.

Correction 6/1/22: An earlier version of this story stated that Nelson was the first professor on the path to tenure in the Stanford physics department. Nina Byers holds that distinction.

The post Physicist Ann Nelson broke barriers—for herself and for those who’d come after her appeared first on Popular Science.

How to make an X-ray laser that’s colder than space https://www.popsci.com/science/slacs-ultra-cold-x-ray-laser/ Sun, 29 May 2022 14:00:00 +0000 https://www.popsci.com/?p=446431
A cryomodule delivered to SLAC for its enhanced X-ray beam. Jacqueline Ramseyer Orrell/SLAC National Accelerator Laboratory

What's cooler than being cool? This ultra-cold X-ray beam.

The physics world is rallying around CERN’s Large Hadron Collider, now coming online after a lengthy upgrade and a yearslong pause. But that isn’t the only science machine to literally receive new energy. Nearly 6,000 miles away, on the other side of the globe, another one is undergoing its final touches.

The SLAC National Accelerator Laboratory, south of San Francisco, is home to a large laser called LCLS, which lets scientists use X-rays to peer into molecules. “The way to think about a facility like LCLS is really as a super-resolution microscope,” says Mike Dunne, the facility’s director.

Now, LCLS is putting the finishing touches on a major upgrade—called LCLS-II—that plunges the laser down to just a few degrees above absolute zero.

Giving a particle accelerator new life

A half-century ago, SLAC’s tunnel housed a particle accelerator. While most particle accelerators today send their quarry whirling about in circles, this accelerator was perfectly straight. To bring electrons up to speed for smashing, it had to be over 2 miles long. For decades after it opened, it was the “longest building in the world.” (The tunnel is so distinctive, a miles-long straight line carved into foothills, that pilots use it for wayfinding.)

When it came online in 1966, this so-called Stanford Linear Accelerator was an engineering marvel. In the following decades, the particle physics research conducted there led to no fewer than three Nobel prizes in physics. But by the 21st century, it had become something of a relic, surpassed by other accelerators at CERN and elsewhere that could smash particles at far higher energies and see things Stanford couldn’t.

But that 2-mile-long building remained, and in 2009, SLAC outfitted it with a new machine: the Linac Coherent Light Source (LCLS).

LCLS is an example of an apparatus called an X-ray free-electron laser (XFEL). Although it is a laser, it doesn’t have much in common with the little handheld laser pointers that excite kittens. Those create a laser beam using electronic components such as diodes.

An XFEL, on the other hand, has far more in common with a particle accelerator. In fact, that’s the laser’s first stage, accelerating a beam of electrons to very near the speed of light. Then, those electrons pass through a gauntlet of magnets that force them to zig-zag in rapid switchbacks. In the process, the electrons shoot their vast energy forward as X-rays.
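
The wavelength that emerges from that zig-zag follows a standard undulator relation. Below is a minimal sketch of the arithmetic; the function name and every number (beam energy, magnet period, undulator strength) are illustrative hard-X-ray values, not official LCLS specifications.

```python
# Sketch of the on-axis undulator wavelength relation:
#   lambda = (lambda_u / (2 * gamma**2)) * (1 + K**2 / 2)
# All numbers are illustrative, not official LCLS specifications.

ELECTRON_REST_ENERGY_GEV = 0.000511  # electron rest-mass energy

def undulator_wavelength(beam_energy_gev, undulator_period_m, k_param):
    """Wavelength (m) of light emitted on-axis by electrons zig-zagging
    through an undulator of the given period and strength."""
    gamma = beam_energy_gev / ELECTRON_REST_ENERGY_GEV  # Lorentz factor
    return (undulator_period_m / (2 * gamma**2)) * (1 + k_param**2 / 2)

wl = undulator_wavelength(beam_energy_gev=13.6, undulator_period_m=0.03, k_param=3.5)
print(f"emitted wavelength: {wl * 1e10:.2f} angstroms")  # ~1.5, i.e. hard X-rays
```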

The electron gun that’s the source of the beam. Marilyn Chung/Berkeley Lab via SLAC

Doing this can create all sorts of electromagnetic waves, from microwaves to visible light to ultraviolet. But scientists prefer to use X-rays. That’s because X-rays have wavelengths that are about the size of atoms, which, when focused in a powerful beam, allow scientists to peer inside molecules.
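
As a quick check on that claim, a photon’s wavelength follows from its energy; the photon energies in this sketch are simply representative hard-X-ray values, not LCLS settings.

```python
# Convert photon energy to wavelength (lambda = h*c / E) and compare with
# atomic sizes. The chosen energies are representative, not LCLS settings.

HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

for energy_ev in (8_000, 25_000):  # typical hard X-ray photon energies
    wavelength_angstrom = HC_EV_NM / energy_ev * 10
    print(f"{energy_ev / 1000:.0f} keV photon -> {wavelength_angstrom:.2f} angstroms")

# Atoms are roughly 1 to 3 angstroms across, so these waves are matched to
# resolving individual atomic positions.
```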

[Related: Scientists are putting the X factor back in X-rays]

LCLS is different from most of the other X-ray sources in the world. The California beam works like a strobe light. “Each flash captures the motion of that molecule in a particular state,” says Dunne. 

LCLS could originally shoot 100 flashes per second. That allowed scientists to make, say, a movie of a chemical reaction as it happened. They could watch bonds between atoms form and break, and watch new molecules take shape. It may soon be able to make movies with frame rates thousands of times faster.

Chilling a laser

In its first iteration, LCLS used copper structures to accelerate its electrons. But increasing the whole machine’s power was pushing the limits of that copper. “The copper just is pulling too much current, so it melts, just like when you fuse a wire in your fuse box,” says Dunne.

There’s a way around that: the bizarre quantum effect called superconductivity.

When you cool a material below a certain critical temperature, its electrical resistance drops off to virtually nothing. Then, you can functionally get current to flow indefinitely without losing energy to its surroundings as heat.

LCLS is far from the first laser to use technology like this. The problem is that getting to that temperature—typically just a few degrees above absolute zero—is no small feat. 

[Related: Scientists found a fleeting particle from the universe’s first moments]

“It gets really hard to support these cryogenic systems that cool to very low temperatures,” says Georg Hoffstaetter, a physicist at Cornell University who had previously worked on the technology. There are superconducting materials that operate at slightly less unforgiving temperatures, but none of them work in spaces that are hundreds of feet long.

A smaller facility might have been fazed by this challenge, but SLAC built a warehouse-sized refrigerator at one end of the structure. It uses liquid helium to cool the accelerator down to -456°F.
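
Converting that figure out of Fahrenheit shows just how cold that is; a quick sketch:

```python
# The article's -456 degrees Fahrenheit, expressed in kelvins.

def fahrenheit_to_kelvin(temp_f):
    return (temp_f + 459.67) * 5.0 / 9.0

print(f"{fahrenheit_to_kelvin(-456):.1f} K")  # ~2 K; deep space sits near 2.7 K
```

That two-kelvin operating point is what puts the machine, as the headline says, colder than space.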

Superconductivity also has the bonus of making the setup more energy-efficient; large physics facilities are notorious for using as much electricity as small countries do. “The superconducting technology in itself is, in a way, a green technology, because so little of the accelerator power gets turned into heat,” says Hoffstaetter.

When the upgrades are finished, the new and improved LCLS-II will be able to deliver not just 100 pulses a second, but as many as a million.

What to do with a million frames per second

Dunne says that there are, roughly, three main areas where the beam can advance science. For one, the X-ray beam can help chemists sort out how to make reactions go faster using less material, which could lead to more environmentally friendly industrial processes or more efficient solar panels.

For another, the tool can aid biologists doing things like drug discovery—looking at how pharmaceuticals impact enzymes in the human body that are hard to study via other methods.

For a third, the beam can help materials scientists better understand how a material might behave under extreme conditions, such as an X-ray barrage. Scientists can also use it to design new substances—such as even better superconductors to build future physics machines just like this one.

The miles-long facility that houses SLAC’s Linac Coherent Light Source X-ray free-electron laser. SLAC National Accelerator Laboratory

Of course, there’s a catch. As with any major upgrade to a machine like this one, physicists need to learn how to use their new tools. “You’ve sort of got to learn how to do that science from scratch,” says Dunne. “It’s not just what you did before…It’s an entirely new field.”

One problem scientists will need to solve is how to handle the data the laser produces: one terabyte, every second. It’s already a hurdle that large facilities face, and it’s likely to get even more acute if networks and supercomputers can’t quite keep up.
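
A back-of-envelope tally conveys the scale (the one-terabyte-per-second figure is the article’s; the rest is arithmetic):

```python
# Daily data volume at one terabyte per second, around the clock.

tb_per_second = 1.0
seconds_per_day = 86_400

daily_tb = tb_per_second * seconds_per_day
print(f"{daily_tb:,.0f} TB/day, or about {daily_tb / 1000:,.1f} PB/day")
```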

Even so, this hasn’t diminished physicists’ enthusiasm for enhancement. Scientists are already plotting yet another update for the laser, set for later in the 2020s, which will boost its energy and allow it to probe even deeper into the world of atoms.

Jocelyn Bell Burnell discovered pulsars, but someone else won the Nobel https://www.popsci.com/space/jocelyn-bell-burnell-profile/ Wed, 25 May 2022 13:00:00 +0000 https://www.popsci.com/?p=445361
Jocelyn Bell Burnell changed the face of astronomy at a young age. Esther Goh

Bell Burnell has spent her career holding her field to a higher standard of excellence and equity.

The annals of science journalism weren’t always as inclusive as they could have been. So PopSci is working to correct the record with In Hindsight, a series profiling some of the figures whose contributions we missed. Read their stories and explore the rest of our 150th anniversary coverage here.

Jocelyn Bell Burnell, a doctoral student in astronomy at Cambridge University, was plowing through a massive set of cosmic data from a radio telescope when she spotted something peculiar: a series of spikes in relative brightness. At the time, in 1967, a full scan of the sky took four days and generated nearly 400 feet of paper printouts, so a data error or printer glitch could have easily been the culprit. She saw it again later, in the same spot, and figured out by digging through reams of data that the flashes occurred with amazing regularity—one about every 1.33 seconds. It was as if there were a pulsating clock up in the sky.
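
Bell Burnell did this work by eye on chart recordings, but the logic of her search can be sketched in modern terms as “epoch folding”: fold the brightness record at a trial period and see whether the pulses stack up. The toy example below illustrates that idea only; it is not her procedure.

```python
# Toy "epoch folding" demo: a hidden 1.33-second pulsar signal stands out
# when noisy data are folded at the right trial period.
import numpy as np

rng = np.random.default_rng(0)
true_period = 1.33                                      # seconds
t = np.arange(0, 400, 0.05)                             # simulated brightness samples
pulses = (np.mod(t, true_period) < 0.05).astype(float)  # brief flash once per period
data = pulses + rng.normal(0.0, 0.5, t.size)            # flashes buried in noise

def folded_contrast(times, values, trial_period, nbins=30):
    """Peak-to-trough height of the pulse profile folded at trial_period."""
    phase = np.mod(times, trial_period) / trial_period
    bins = np.minimum((phase * nbins).astype(int), nbins - 1)
    sums = np.bincount(bins, weights=values, minlength=nbins)
    counts = np.maximum(np.bincount(bins, minlength=nbins), 1)
    profile = sums / counts
    return profile.max() - profile.min()

for trial in (1.00, 1.33, 2.00):
    print(f"trial period {trial:.2f} s -> contrast {folded_contrast(t, data, trial):.2f}")
# The contrast peaks at the true 1.33-second period.
```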

At first, Bell Burnell’s supervisor Antony Hewish thought the bursts were caused by human activity—or that they might be the beacons of an extraterrestrial civilization. They jokingly dubbed the mysterious flickering lights “Little Green Men” (LGM), entertaining the remote possibility that they were, in fact, signs of alien life. But Bell Burnell would soon find evidence disproving that far-out notion. 

She’d discovered the first known pulsar—a rotating neutron star that emits beams of electromagnetic radiation out of its magnetic poles, akin to a lighthouse spinning its beam. “We spent a month trying to find out what was wrong, so unexpected was the signal,” she later recalled. “At the end of that month, I found a second pulsar, killing the LGM hypothesis and indicating a new kind of astronomical source.” Seven years after Bell Burnell’s discovery, the 1974 Nobel Prize in Physics was awarded to Hewish and his colleague Martin Ryle. Bell Burnell was left out, an omission that represents a systemic shortcoming in academic pursuits she’s spent a career working to change.

Born in 1943 in Belfast, Northern Ireland, Bell Burnell found her calling early. “When we started doing science at school, which in Britain is age 12, it became clear quickly that I was good at physics, okay at chemistry, and bored with biology,” she said in a 2014 interview with Current Science. She specifically recalls her father bringing astronomy books home from the library when she was about 14: “I read these from cover to cover.” 

Bell Burnell earned a bachelor’s in physics from the University of Glasgow in Scotland and went on to study astronomy at the University of Cambridge. As part of her doctoral thesis, she and other students built a radio telescope—a massive antenna and receiver that detected electromagnetic waves streaming down from faraway stars. After six months of data gathering, she had literal miles of paper to examine. Without her attention to detail, she might easily have missed those mysterious blips, the discovery of which was published in Nature in 1968. 

Her discovery did more than reveal a faraway blinking star: It took physics and astronomy to new heights. In the decades that followed, scientists used these celestial clocks to study space phenomena. Pulsars helped researchers find the first evidence of gravitational radiation—the ripples in space and time emanating from massive celestial bodies—which Einstein predicted in 1916 but which was not directly detected until nearly a century later. Scientists also use pulsars to study gravitational waves emanating from faraway black holes.

Bell Burnell has attributed her omission from the Nobel to prevailing scientific mores of the era. “At the time, the picture we had of the way science was done was there was a senior man and a whole fleet of minions under that senior man,” she said in an interview with CNBC years later.

Although she felt that her exclusion had more to do with being a student than with her gender, she has since become a passionate advocate for women in science, pressing the institutions whose policies and practices often ignore or bypass women while favoring men to do better. To help mend this problem, in 2005 Bell Burnell co-founded Athena SWAN, an organization that aims to advance gender equity in academia.

Bell Burnell herself went on to have a stellar career. She held several professorships, including at University College London and the University of Oxford, and worked at the Royal Observatory, Edinburgh, until she retired in 2004 and became a visiting professor at the University of Oxford. In 2018, she was awarded the prestigious Breakthrough Prize, given for achievements in fundamental physics, life sciences, and mathematics. She donated the $3 million prize money to the Institute of Physics in the UK to fund scholarships for grad students from underrepresented groups, in the hope that they’ll one day make groundbreaking, world-changing discoveries the way she did.  

Racial and economic barriers kept Carolyn Beatrice Parker from realizing her full potential https://www.popsci.com/science/carolyn-beatrice-parker-profile/ Wed, 18 May 2022 15:00:00 +0000 https://www.popsci.com/?p=443786
Carolyn Beatrice Parker was a gifted physicist whose potential was cut short. Esther Goh

The Dayton Project physicist could have had a long and storied career in her field.

The annals of science journalism weren’t always as inclusive as they could have been. So PopSci is working to correct the record with In Hindsight, a series profiling some of the figures whose contributions we missed. Read their stories and explore the rest of our 150th anniversary coverage here.

All too often, even in 2022, Black students are interpreted through a deficit model: Academic institutions assume that we’re underprepared and that our families are unfamiliar with such work. Read Carolyn Beatrice Parker’s CV in the context of her family’s achievements, however, and it’s immediately apparent that she represents an example of what’s possible when intellectual success is expected and the resources to achieve it are provided. 

Parker, the first known Black woman to earn a master’s degree in physics, came from a family of strivers who could arguably be held up as the archetype of W.E.B. Du Bois’s “Talented Tenth.” Her father was a Fisk University-educated physician, and her mother was a schoolteacher. A maternal uncle was a dentist, and a cousin earned a degree in mathematics from Fisk. Parker—born in November 1917 in Gainesville, Florida—and her siblings all went to college, and every one of them earned an advanced degree.

In some ways, Parker’s trajectory through her field reflects the stories of many white men who are better known: She was a technical mind drawn into the US’s nuclear efforts during World War II, then went on to earn a postgraduate degree at the Massachusetts Institute of Technology—a landing pad for many ex–Manhattan Project scientists. 

Yet, in that company, Parker is distinct. She got as far as she did because of the educational opportunities created by Black institutions; entering a better-resourced university did not transform her career the way it did for her white, male contemporaries. She earned a second master’s, but never a Ph.D. Her name doesn’t appear in major recountings of mid-twentieth-century physics, and no comprehensive biography of her existed until recently, when the oral history her family has lovingly maintained was taken up in the newsletter of the American Physical Society’s Forum on the History and Philosophy of Physics. The entry’s authors, physicist Ronald E. Mickens and history professor Charmayne E. Patterson, note that they penned the essay hoping that someone “will find an urge to write a proper, full-length biography of this fascinating woman.”

Scientific institutions were not set up to protect and nurture her as a Black woman. Dedicated teachers and her highly educated family provided Parker with a solid primary education in spite of the underfunded, segregated schools she was forced to attend in Florida. Parker’s higher education and career, however, were slowed at times because she spent so much time working as a public school teacher or physics and math instructor to pay for her schooling, and she made choices that did not necessarily align with her intellectual interests purely for financial reasons. Parker’s studies were delayed when she spent an academic year teaching public school to fund her bachelor’s degree at Fisk University. There she was advised by Elmer Imes—the second African American to earn a Ph.D. in physics and the founding chair of the school’s physics department—to pursue graduate studies at his own alma mater, the University of Michigan. However, her desired physics program required a thesis, and, as Parker was funding her studies by working additional years as a teacher in Florida and Virginia, she pursued an MA in mathematics instead.

After she graduated, Parker worked as an instructor at Bluefield State College in West Virginia, before she was recruited in 1942 to work on the Dayton Project, which produced polonium as part of the larger effort to develop atomic weapons. Intensive physics laboratory courses at Fisk had trained Parker in advanced techniques including electronic testing and infrared spectroscopy, and her first masters provided her knowledge in advanced applied mathematics. Safety protocols (as far as they existed) in the Dayton Project, though, seem to have been particularly lacking in regard to protecting Black women: In their essay, Mickens and Patterson reference comments made by a Dayton project scientist about a female employee with “unruly hair, some of which became contaminated.” This employee, who the pair believes was Parker, had weekly urine checks with the highest radiation counts of anyone in her research group. Head coverings at Dayton had likely been designed with the short, finer hairs of white men in mind—even though many women also worked at the facility.

After Dayton, Parker was recruited as faculty at Fisk, where Imes’s replacement, James R. Lawson, was setting up an infrared spectroscopy research program. After a number of years there, Parker realized she needed a doctoral degree to advance both her teaching and research, and used her Dayton Project contacts to apply to a physics Ph.D. program at MIT. She got in, but, even though MIT was flush with funds from the Department of Defense, she received no financial aid—especially disturbing for a Manhattan Project alum. Ultimately, she switched from a doctoral to an MS program. 

After she enrolled in 1951, Mickens and Patterson believe her studies were delayed once more by the need to work part time at the nearby Geophysics Research Division at the Air Force Cambridge Research Center, where she continued for a decade after leaving MIT. By the time she officially finished her MS in 1955, she’d begun to experience what were likely symptoms of leukemia.

Despite the misogynoir she faced, Parker made an independent contribution to research in nuclear and particle physics with her MIT thesis, titled “Range distribution of 122 MeV (pi⁺) and (pi⁻) mesons in brass.” As a sometimes-teacher of stellar astrophysics, I can say that these kinds of measurements of nuclear interactions are key to understanding physical phenomena not just here on Earth, but also far away in the cosmos. It is somewhat chilling to imagine what insights her incredible mind might have provided, had Parker’s education not been slowed at nearly every turn. Arguably, she could have earned a Ph.D. in physics as early as 1942, around the time she began at the Dayton Project.

This year, 2022, marks the 50th anniversary of the first Ph.D. in physics earned by an African American woman: When she defended her dissertation at the University of Michigan in 1972, Willie Hobbs Moore broke a barrier in professional science. In the intervening years, around 100 more Black American women, myself included, have reached that level in physics and related fields. Often, we are touted as firsts: the first at our institution, the first in our area of specialty. And we absolutely are first in those spaces. But it is important to recognize that all of us, Parker and Hobbs Moore included, are part of a long tradition of Black scientists, which stretches back and through Africa before European colonialism and the transatlantic slave trade. Each of our achievements is rooted in those who came before us—names we often do not know. 

From the archives: Inside the tantalizing quest to sense gravity waves https://www.popsci.com/science/gravity-waves-search/ Wed, 18 May 2022 11:00:00 +0000 https://www.popsci.com/?p=443702
“The tantalizing quest for gravity waves” by Arthur Fisher appeared in the April 1981 issue of Popular Science. Popular Science

In the April 1981 issue of Popular Science, we explored the many initiatives and techniques used in the exciting hunt for sensing gravity waves, then out of reach.

To mark our 150th year, we’re revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation—with an added hint of modern context. Explore the entire From the Archives series and check out all our anniversary coverage here.

When two black holes collided 1.3 billion years ago, the impact released three suns’ worth of energy into the fabric of spacetime. On a Monday in 2015, at a remote facility in Hanford, Wash., researchers detected that ancient cosmic impact as its effects swung past Earth. They mapped its gravity wave, a stretching of space almost incomprehensibly small (about 1/10,000th the diameter of a proton), to audio and heard a whoop. That tiny soundtrack was more than 100 years in the making.

Physicists had been seeking ways to detect gravity waves—ripples in spacetime caused by massive events—ever since Einstein predicted their existence in 1916. In an April 1981 article, Popular Science’s editor, Arthur Fisher, described the hunt for gravity waves, calling it “one of the most exciting in the whole history of science.” The Laser Interferometer Gravitational-Wave Observatory, or LIGO, was responsible for sensing the 2015 wave, but, as Fisher explains, in 1981 it was just one among many competing initiatives, each pursuing a different measurement technique. 

Rainer Weiss, an MIT physics professor (now emeritus), and Kip Thorne (Caltech) were among the many scientists Fisher met and interviewed. Weiss devised the laser interferometer’s design in the 1970s and later teamed up with Thorne and Barry Barish to build LIGO (all three earned the 2017 Nobel Prize in Physics for their efforts). Ever since that first cosmic whoop in 2015, LIGO has detected 90 different gravitational wave events.

In his story, Fisher describes the far-out wilds of space responsible for shaking space-time, including starquakes, gamma-ray bursts, and ticking neutron stars (pulsars). But it was Weiss, shortly after his device detected its first gravity wave in 2015, who captured space’s turbulence best: “monstrous things like stars, moving at the velocity of light, smashing into each other and making the geometry of space-time turn into some sort of washing machine.” 

“The tantalizing quest for gravity waves” (Arthur Fisher, April 1981) 

When scientists finally detect a form of energy they have never seen, they will open a new era in astronomy.

In the vast reaches of the cosmos, cataclysms are a commonplace: Something momentous is always happening. Perhaps the blazing death of an exhausted sun, or the collision of two black holes, or a warble deep inside a neutron star. Such an event spews out a torrent of radiation bearing huge amounts of energy. The energy rushes through space, blankets our solar system, sweeps through the Earth… and no one notices.

But there is a small band of experimenters, perhaps 20 groups worldwide, scattered from California to Canton, determined that some day they will notice. Pushed to the edge of contemporary technology and beyond, battling the apparent limits of natural law itself, they are developing what will be the most sensitive antennas ever built. And eventually, they are sure, they will detect these maddeningly intangible phenomena—gravity waves.

Even though gravity waves (more formally called gravitational radiation) have never been directly detected, virtually the entire scientific community is convinced they exist. This assurance stems, in part, from the bedrock on which gravity-wave notions are founded: Albert Einstein’s theory of general relativity, which, though still being tested, remains untoppled [PS, Dec. ‘79]. Says Caltech astrophysicist Kip Thorne, “I don’t know of any respectable expert in gravitational theory who has any doubt that gravity waves exist. The only way we could be mistaken would be if Einstein’s general relativity theory were wrong and if all the competing theories were also wrong, because they also predict gravity waves.”

In 1916, Einstein predicted that when matter accelerated in a suitable way, the moving mass would launch ripples in the invisible mesh of space-time, tugging momentarily at each point in the universal sea as they passed by. The ripples—gravity waves—would carry energy and travel at the speed of light. 

In many ways, this prediction was analogous to one made by James Clerk Maxwell, the brilliant British physicist who died in the year of Einstein’s birth—1879. Maxwell stated that the acceleration of an electric charge would produce electromagnetic radiation—a whole gamut of waves, including light, that would all travel at the same constant velocity. His ideas were ridiculed by many of his contemporaries. But a mere decade after his death, he was vindicated when Heinrich Hertz both generated and detected radio waves in the laboratory.

Why, then, more than 60 years after Einstein’s bold forecast, has no one seen a gravity wave? Why, despite incredible obstacles, are physicists still seeking them in a kind of modern quest for the Holy Grail, one of the most exciting in the whole history of science?

To find out, I visited experimenters who are building gravity-wave detectors and theoreticians whose esoteric calculations guide them. In the process, I learned about the problems, and how the attempts to solve them are already producing useful spinoffs. And I learned about the ultimate payoff if the quest is successful: a new and potent tool for penetrating, for the first time, what one physicist has called “the most overwhelming events in the universe.”

A kiss blown across the Pacific

The fundamental problem in gravity-wave detection is that gravity as a force is feeble in the extreme, some 40 orders of magnitude weaker than the electromagnetic force. (That’s 10⁴⁰, or a 1 followed by 40 zeros.)

Partly for this reason, and partly because of other properties of gravity waves, they interact with matter very weakly, making their passage almost imperceptible. And unlike the dipole radiation of electromagnetism, gravitational radiation is quadrupole.

If a gravity wave generated, for example, by a supernova in our galaxy passed through the page you are now reading, the quadrupole effect would first make the length expand and the width contract (or vice versa), and then the reverse. But the amount of energy deposited in the page would be so infinitesimal that the change in dimension would be less than the diameter of a proton. Trying to detect a gravity wave, then, is like standing in the surf at Big Sur and listening for a kiss blown across the Pacific. As for generating detectable waves on Earth, a la Hertz, theoreticians long ago dismissed the possibility. “Sure, you make gravity waves every time you wave your fist,” says Rainer Weiss, a professor of physics at MIT. “But anything you will ever be able to detect must be made by massive bodies moving very fast. That means events in space.”

Astrophysicists have worked up whole catalogs of such events, each associated with gravity waves of different energy, different characteristic frequencies, and different probabilities of occurrence. They include the supposed continuous background gravitational radiation of the “big bang” that began the universe [PS, Dec. ‘80], and periodic events like the regular pulses of radiation emitted by pulsars and binary systems consisting of superdense objects. And then there are the singular events: the births of black holes in globular clusters, galactic nuclei, and quasars; neutron-star quakes; and supernovas.

Probably the prime candidate for detection is what William Fairbank, professor of physics at Stanford University, calls “the most dramatic event in the history of the universe”—a supernova. As a star such as our sun ages, it converts parts of its mass into nuclear energy, perhaps one percent in five billion years. “The only reason a large star like the sun doesn’t collapse,” explains Fairbank, “is because the very high temperature in its core generates enough pressure to withstand gravitational forces. But as it cools from burning its fuel, the gravitational forces begin to overcome the electrical forces that keep its particles apart. It collapses faster and faster, and if it’s a supernova, the star’s outer shell blasts off. In the last thousandth of a second, it collapses to a neutron star, and if the original star exceeded three solar masses, maybe to a black hole.”

One way of characterizing the energy of a gravity wave is the strain it induces in any matter it impinges on. If the mass has a dimension of a given length, then the strain equals the change in that length (produced by the gravity wave) divided by the length. Gravity waves have very, very tiny strains. A supernova occurring in our galaxy might produce a strain on Earth that would shrink or elongate a 100-cm-long detector only one one-hundredth the diameter of an atomic nucleus. (That is 10⁻¹⁵ cm, and physicists would label the strain as 10⁻¹⁷.) To the credit of tireless experimenters, there are detectors capable of sensing that iota of a minim of a scruple.
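
Written out with the article’s own numbers, that strain calculation is:

```latex
% Strain is the fractional change in length:
h = \frac{\Delta L}{L} = \frac{10^{-15}\,\mathrm{cm}}{100\,\mathrm{cm}} = 10^{-17}
```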

But there is a catch: Based on observations of other galaxies, a supernova can be expected to occur in the dense center of any given galaxy roughly about once in 30 years. That is a depressingly long interval. Over and over again, the scientists I spoke to despaired of doing meaningful work if it had to depend on such a rara avis. Professor David Douglass of the University of Rochester told me: “To build an experiment to detect an event once every 30 years—maybe—is not a very satisfying occupation. It’s hardly a very good Ph.D. project for a graduate assistant; it’s not even a good career project—you might be unlucky.”

Gravity waves: powerful astronomical tools?

What if we don’t confine ourselves to events in our own galaxy, but look farther afield? Instead of the “hopelessly rare” (in the words of one researcher) supernova in our galaxy, what if we looked for them in a really large arena—the Virgo cluster, which has some 2,500 galaxies, where supernovas ought to be popping from once every few days to once a month or so? That’s Catch-222. The Virgo cluster is about 1,000 times farther away than the center of our own galaxy. So a supernova event from the cluster would dispatch gravity waves whose effect on Earth would be some million times weaker (1,000 times 1,000, according to the inverse-square law governing all radiative energy). And that means building a detector a million times more sensitive. “There is no field of science,” says Ronald Drever of Caltech and the University of Glasgow, Scotland, “where such enormous increases in sensitivity are needed as they are here, in gravity-wave detection.” Trying to detect a supernova in a distant galaxy means having to measure a displacement one-millionth the size of an atomic nucleus.
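
As a worked equation, that inverse-square reasoning reads:

```latex
% A source 1,000 times more distant deposits a million times less energy:
\frac{F_{\mathrm{Virgo}}}{F_{\mathrm{galactic}}}
  = \left( \frac{r_{\mathrm{galactic}}}{r_{\mathrm{Virgo}}} \right)^{2}
  = \left( \frac{1}{1000} \right)^{2} = 10^{-6}
```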

Paradoxically, it is this very quality that gives gravity waves the ability to be, as Kip Thorne says, “a very powerful tool for astronomy. True, they go through a gravity-wave detector with impunity. But that means the gravity waves generated during the birth of a black hole can also get away through all the surrounding matter with impunity.” And neither light, nor gamma rays, nor radio waves can. During a supernova we can see the exploding shell via showers of electromagnetic radiation, but only hours or days after the initial massive implosion—the gravitational collapse. During the collapse, while a neutron star or black hole is being formed, nothing but gravity waves (and, theoretically, neutrinos) can escape.

“We’ve opened, at least partially, all the electromagnetic windows onto the universe,” says Thorne. “With gravity wave astronomy, we will open a unique new window onto fascinating, explosive events that cannot be well studied any other way—births and collisions of black holes, star quakes, collapses to neutron stars. This is the real bread and butter of modern high-energy astrophysics.”

But first, as the cookbooks say, you must catch your gravity wave. Until the 1950’s, no one presumed that the task was even feasible. Then Joseph Weber, a physicist at the University of Maryland, began to ponder the problem of building a gravity-wave detector, and proceeded to do so. It is no exaggeration to say that he fathered the entire field. By 1967, he and his assistants had built the first operating gravity-wave detector—a massive aluminum bar, isolated as well as possible from external vibrations and girdled by piezoelectric crystal sensors, which translated changes in the bar’s dimensions into electrical signals. Weber reported a number of events recorded on this and a twin detector at Argonne that he concluded were gravity waves [PS, May ‘72]. His report stimulated a host of other experimenters to build their own detectors. Designed by such investigators as J. A. Tyson at Bell Labs and David Douglass at Rochester, the detectors followed the same principles as Weber’s pioneering bar detector, but with greater sensitivity. These and subsequent experimenters were unable to confirm Weber’s findings; in fact, theoreticians believe that a detector of the sensitivity of Weber’s bar could not have picked up gravity waves at all. “Either Joe Weber was wrong,” one told me, “or the whole universe is cockeyed.”

Today, three basic kinds of gravity-wave detectors are being developed. One is basically a Weber resonant-bar antenna, much refined; the second is the laser interferometer; and the third is a space-based system called Doppler tracking. Each has its advantages, and each its own devilish engineering problems.

Farthest along is the resonant bar, mostly because it has been in the works longest. The more massive such a bar is, the better (because it will respond to a gravity wave better). And its worth depends on the quality of resonating, or “ringing,” for a time after it has been struck by the wave. The longer it rings, the better an experimenter is able to pick out the effect of the wave. That quality is measured by the value called “Q”: the higher the Q, the better. For a while, David Douglass and others, including Soviet scientists, have been seeking to make detectors out of such very-high-Q materials as sapphire-crystal balls. But Douglass, for one, has returned to aluminum. The reasons: New alloys of aluminum have been found with very high Q’s; sapphire can’t be fabricated in massive chunks (one of his detectors has a six-ton aluminum bar); and expense: “A 60-pound pure sapphire crystal,” he told me, “would cost about $50,000.”

Like virtually everyone else developing bar antennas, Douglass has abandoned room-temperature detectors and turned to cryogenic detectors, cooled down as close to absolute zero as possible. That includes groups at Perth, Australia, Tokyo, Moscow, Louisiana State University, Rome, Weber himself at the University of Maryland, and William Fairbank and colleagues at Stanford University.

Fairbank told me why the low-temperature route was essential: “At room temperature, the random thermal motion of the atoms in a bar is 300 times as big as the displacement we’re trying to detect. The only way to approach the sensitivities we’re after is to get rid of that thermal noise by cooling the bar.”

When I visited the Stanford campus, the detector’s five-ton aluminum bar was sealed inside its cryostat, a kind of oversized Thermos bottle. The whole assembly looked like something you could use if you wanted to freeze Frankenstein’s monster for a few centuries. And the environment was suitable, too: a vast, drafty, concrete building that could have been an abandoned zeppelin hangar.

This antenna, and others like it, is designed to respond to gravity waves with a frequency of about 1,000 Hz, characteristic of supernova radiation. Obviously the antenna must be isolated as far as possible from any external vibration at or around that frequency. This the Stanford group does by suspending the cylinder with special springs, consisting of alternating iron and rubber bars in what is called an isolation stack. “Otherwise, with our sensitivity,” Fairbank says, “this detector would make a dandy seismograph—just what we don’t want in California.” The Stanford suspension system attenuates outside noise by many orders of magnitude, enough so that you could drop a safe in its vicinity without disturbing the detector.

At LSU, William Hamilton, who is building an antenna very similar to Stanford’s (eventually it will become part of a Rome-Perth-Baton Rouge-Stanford axis looking for gravity-wave coincidences), takes another route toward seismic isolation. The very low temperature of the device allows him to levitate the bar magnetically; it is coated with a thin film of niobium-tin alloy, a material that becomes superconducting near absolute zero. If electromagnets are placed under the bar, the persistent currents running through its coating will interact with the magnetic field so that the bar literally floats in air.

Superconductivity is also the key to one of the most perplexing of all engineering problems: designing a transducer capable of sensing the tiny displacements of these antennas and converting them to a useful voltage that can be amplified and measured. “You can’t buy such things,” says David Douglass, “you have to make them, and go beyond the state of the art.” Both Douglass and Fairbank use superconducting devices whose elegant design makes them exquisitely sensitive—orders of magnitude more than the piezoelectric crystals originally used—although their approaches differ in details.

Superconducting devices may also one day—a day far in the future—allow gravity-wave astronomers to perform a feat of legerdemain called “quantum non-demolition.” To oversimplify, this means evading a fundamental limit for all resonant detectors, one that is imposed by the laws of quantum mechanics as the displacements become ever smaller. That problem will have to be faced if bar antennas are ever to be sensitive enough to detect gravity waves from supernovas in the Virgo cluster.

An alternative: laser interferometers

“One of the reasons we’re turning to laser detectors,” says Ronald Drever, “is to avoid the quantum-limit problem. Because we can make measurements over a much larger region of space, we effectively see a much larger signal. We don’t have to look for such minute changes as in a bar antenna.”

Laser interferometers bounce an argon-ion laser beam back and forth many times between two mirrors. As a gravity wave ripples between the mirrors, the length of the light path changes, resulting in a change in the interference patterns that appear in photodetectors. Numbers of such detectors are in the planning and building stages, including ones at MIT, designed by Rainer Weiss, a pioneer in the field; at the Max Planck Institute of Astrophysics in Germany; at the University of Glasgow; and at Caltech.
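
The size of the effect can be sketched with a little arithmetic: a wave of strain h changes an arm of length L by roughly h times L, and each of N bounces adds that change to the accumulated optical phase. The numbers below are purely illustrative (514 nanometers is a common argon-ion laser line).

```python
# Rough order-of-magnitude sketch of an interferometer's response to strain.
import math

def phase_shift_rad(strain, arm_length_m, n_passes, laser_wavelength_m):
    """Approximate optical phase shift: 4*pi*N*(h*L)/lambda."""
    delta_l = strain * arm_length_m  # arm length change per pass
    return 4 * math.pi * n_passes * delta_l / laser_wavelength_m

dphi = phase_shift_rad(strain=1e-17, arm_length_m=40.0, n_passes=100,
                       laser_wavelength_m=514e-9)
print(f"phase shift ~ {dphi:.1e} radians")  # a tiny fraction of a fringe
```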

“The one in Glasgow has 10-meter arms,” Drever told me, “and is working now. The one we’re working on at Caltech also has 10-meter arms, but will be stretched to 40 meters as soon as a building for it is ready. This will serve as a prototype for a much larger version—a kilometer to several kilometers long.”

Of course, laser interferometers have engineering problems, too, problems that become exacerbated as they grow larger. The laser beams must travel through vacuum pipes, and isolating pipes a kilometer long will not be simple. But Drever is convinced it can be done. “Maybe we’ll put it in a mine, or in the desert,” he says. This device may be ready by 1986, and has, Drever thinks, a chance of eventually detecting supernovas in the Virgo cluster.

One additional advantage of such laser detectors is that they are not restricted to a narrow frequency range, as are the resonant antennas, but would be sensitive to a broad band of frequencies from a few hertz to a few thousand hertz. They could therefore detect some massive black-hole events, which have lower frequencies than gravity waves from supernovas. To detect gravity waves with much lower frequencies, such as those from binary systems, you need very long baselines. “In about 15 years,” says Rainer Weiss, “we will want big, space-based laser systems, using, say, a 10-kilometer frame in space. That way we could avoid all seismic noise.” 

The third kind of gravity-wave detector already exists in space, after a fashion. It has been used for spacecraft navigation for 20 years. It is called Doppler tracking, and is very simple—in theory. Here’s how it’s described by Richard Davies, program leader for space physics and astrophysics at Jet Propulsion Laboratory in Pasadena, Calif.: “You send a radio signal from Earth to a spacecraft, and a transponder aboard the craft sends the signal back to you. If a gravity wave passes through the solar system, it alters the distance between the two, and when you compare the frequency of the signal you sent out to the one you get back, you see that they are different—the Doppler shift. However, the contribution of the gravity wave to this shift is minute compared to that of the spacecraft’s own velocity.

“We want to detect gravity waves with very low frequencies, maybe a thousandth of a hertz, using interplanetary spacecraft and the Deep Space Net that is used to track them. Such waves could be emitted from a collapsing system with a mass of a million to ten million suns, or from double stars that orbit each other in hours.”
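
A rough comparison shows why Davies calls the gravity-wave contribution minute. The numbers below are illustrative only: an S-band uplink frequency, a typical interplanetary speed, and a hoped-for low-frequency strain.

```python
# Two-way Doppler shift from spacecraft motion vs. the shift a passing
# gravity wave would impose (of order the strain itself). Illustrative only.

f_uplink_hz = 2.3e9        # S-band tracking frequency
v_spacecraft = 20_000.0    # m/s
strain = 1e-15             # hypothetical low-frequency gravity-wave strain
c = 299_792_458.0          # speed of light, m/s

shift_from_motion = f_uplink_hz * (2 * v_spacecraft / c)
shift_from_wave = f_uplink_hz * strain

print(f"shift from spacecraft motion: {shift_from_motion:.3e} Hz")
print(f"shift from gravity wave:      {shift_from_wave:.3e} Hz")
```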

A gravity-wave experiment had been planned for the International Solar Polar Mission. But, according to MIT’s Irwin Shapiro, who chaired the Committee on Gravitational Physics of the National Academy of Science’s Space Science Board, the experiment was dropped by NASA because of budget cuts.

Which of these methods will yield the first direct evidence of gravity waves? And when will that first contact come? No one really knows, and the gravity-wave seekers themselves are extremely diffident about making claims and predictions. But some time within the decade seems at least plausible.

In the meantime, gravity-wave research is paying unexpected dividends. “It has opened up,” says Kip Thorne, “a modest new chapter in quantum electronics. Because it is pushing so hard against the bounds of modern technology, it is inventing new techniques that will have fallout elsewhere; for example, a new way to make laser frequencies more stable than ever. This will be useful in both physics and chemistry research.”

In the long run, however, the search for gravity waves is propelled by the basic drive of all scientists, and all mankind: to see a little farther, to understand a little more than we have ever done before.

Two indirect proofs for the existence of gravity waves 

The first evidence of any kind for the existence of gravity waves comes not from sensing them directly but from observing their effect on the behavior of a bizarre astronomical object called a binary pulsar. A pulsar, believed to be a rapidly spinning neutron star, emits strong radio signals in periodic beeps. But pulsar PSR 1913+16, discovered by a team of University of Massachusetts astronomers in 1974 with the world’s largest radio telescope (at Arecibo, P.R.), is unique. Its beeps decelerate and accelerate in a regular sequence lasting about eight hours. From this, the astronomers, led by Joseph Taylor, deduced that the pulsar was rapidly orbiting around another very massive object—perhaps another neutron star.

Einstein’s theory of general relativity predicts that this binary system should produce a considerable quantity of gravity waves, and that the energy radiated should be slowly extracted from the orbit of the system, gradually decreasing its period as the superdense stars spiral closer to one another. Einstein’s equations predict a decrease of one ten-thousandth of a second per year for a pulsar like PSR 1913+16. And after four years of observations Taylor’s team announced, in late 1978, that ultraprecise measurements of the radio signals gave a value almost exactly that amount. The closeness of the match not only provides good—even though indirect—evidence of the existence of gravity waves, but also further bolsters Einstein’s theory of gravity against some competing theories.

As Taylor said of what he called “an accidental discovery originally,” the astronomers had an ideal situation for testing the relativity theory—a moving clock (the pulsar) with a very precise rate of ticking and a high velocity—some 300 kilometers per second. “It’s almost as if we had designed the system ourselves and put it out there just to do this measurement.” 

Another indirect indication that gravity waves do indeed exist came more recently, and more dramatically. It stemmed from an event that still has astronomers reeling. At exactly 15 hours, 52 minutes, five seconds, Greenwich time on March 5, 1979, a gamma-ray burst of unparalleled intensity flashed through our solar system from somewhere in space. It triggered monstrous blips on detectors aboard a motley collection of nine different spacecraft throughout the solar system, which form, in effect, an international network maintained by the U.S., France, West Germany, and the Soviet Union.

Once-in-a-lifetime event

“This March 5 gamma-ray event was extraordinary,” says Thomas Cline of NASA Goddard Space Flight Center, who, with his colleague Reuven Ramaty and other U.S., French, and Russian astrophysicists, has been analyzing it ever since. “It was not like the gamma-ray bursts that have been seen a hundred times in the last decade. It’s a first and only, like something that’s seen once in a scientific lifetime.”

Because the surge of gamma rays was detected by so many satellites separated in space, astronomers were able to triangulate the position of its source and identify it with a visible object—the first time for such a feat. The object was a supernova remnant dubbed N49 in the Large Magellanic Cloud (LMC), a neighboring galaxy roughly 150,000 light-years away.

Ramaty, Cline, and colleagues posit that the genesis of the gamma-ray burst was a quivering neutron star—the ultradense, ultracompact object that many theorists believe is left over from a supernova explosion. “We believe,” Cline told me, “that a neutron star can undergo a transformation analogous to an avalanche. Snow falls on a mountain until there’s a slide. 

“Similarly, dust and other material collect on a neutron star until it can’t stand being as heavy as it is. Then there’s a star quake, either in the crust or in the core, and the star shakes itself at a frequency of about 3,000 Hz, a note you could hear if you were listening to it in an atmosphere. The surface of the star—only five to 10 miles in diameter—is heaving up and down several feet, thousands of times a second. Its magnetosphere is shaken, and that’s what produces, indirectly, the gamma rays. But that’s secondary, in our model, to the gravitational waves caused by the oscillation of the neutron star.

“Could we detect these? The answer is no. After all, this is only a kind of after-gurgle, thousands of years after the star’s original collapse—the supernova. It’s like a tremor after a major earthquake, maybe only one percent as big.”

Nevertheless, Cline called all the U.S. gravity-wave experimenters who could have been “on-line” during the gamma ray burst to learn whether they had seen anything. Of them all, only Joseph Weber had an antenna working that March day, and he had observed nothing.

The gamma-ray detectors aboard the satellites were not capable of sensing the 3,000-Hz frequency predicted by the starquake model. If they had, says Cline, it would have been “a very direct link” to the existence of gravitational radiation.

But the star-quake model makes another prediction: The gravity waves generated should carry off an enormous amount of energy, far more than that in the gamma rays, and thus snuff out the star’s vibration very quickly. “The nice thing,” says Goddard’s Reuven Ramaty, “is that the damping time predicted for gravity waves in this event exactly corresponds to what we observed: The main part of the burst lasted just 15 hundredths of a second, and that’s what we calculate from our model. So we now have for the second time indirect evidence of the existence of gravity waves. But both have problems, as do all indirect checks. They won’t replace direct evidence.”

April 1981 Popular Science cover featuring developments in solar power and automotive technology.

Some text has been edited to match contemporary standards and style.

Chien-Shiung Wu’s work defied the laws of physics https://www.popsci.com/science/chien-shiung-wu-profile/ Mon, 16 May 2022 15:00:00 +0000 https://www.popsci.com/?p=442346
Now regarded as the "first lady of physics," Chien-Shiung Wu made contributions that went unrecognized for too long. Esther Goh

Now regarded as the 'first lady of physics,' the Manhattan Project scientist was often not treated as a peer among her collaborators.

The annals of science journalism weren’t always as inclusive as they could have been. So PopSci is working to correct the record with In Hindsight, a series profiling some of the figures whose contributions we missed. Read their stories and explore the rest of our 150th anniversary coverage here.

In quantum physics, there’s a law known as the conservation of parity, which is based on the notion that nature adheres to the ideal of symmetry. In a mirror-image of our world, it posits, the laws of physics would function the same way—despite everything being flipped. Since the early 1900s, experimental evidence suggested that this was true: To the pull of gravity or the draw of the electromagnetic force, the difference between left and right hardly mattered. So, physicists quite reasonably assumed that parity was a fundamental principle in the universe.

But in the 1950s, an experimental physicist at Columbia University named Chien-Shiung Wu devised an experiment that challenged—and defied—that law. Physics, she proved, to the astonishment of the field, did not always adhere to parity. Throughout her life, in fact, this woman demonstrated that parity was not the default; she flouted gender and racial barriers and eventually came to be known as the “first lady of physics.” 

Wu was born in 1912 in a small fishing town north of Shanghai to parents who supported education for women. She displayed an extraordinary talent for physics as a college student in China. At the urging of Jing-Wei Gu, a female professor, she set her sights on earning a Ph.D. in the United States. In 1936, she arrived by ship in San Francisco and enrolled at the University of California, Berkeley, where she studied the nuclear fission of uranium. 

She was 24 years old, in a new country where she wasn’t fluent in the language and where the Chinese Exclusion Act, which prohibited Chinese workers from immigrating, was in full effect. It was preceded by the Page Act, which effectively banned the immigration of Chinese women based on the assumption that they intended to be sex workers. Wu was only able to enter the US because she was a student, but she was still ineligible for citizenship. “There must have been so much tension and conflict there,” says Leslie Hayes, vice president for education at the New York Historical Society. “‘I’m going to this place where I won’t be welcome, but if I don’t go, I won’t be able to fulfill my goals and dreams.’” 

After earning her Ph.D. in 1940, she married another Chinese-American physicist, and the couple moved to the East Coast in a long-shot search for tenure-track work. Major research institutes at the time were generally unwilling to hire women, people of color, or Jewish people, and the uptick in anti-Asian sentiment during World War II certainly didn’t help. “She was discriminated against as an Asian, but more so as a woman,” Tsai-Chien Chiang wrote in his biography of Wu. 

Nevertheless, shortly after a teaching stint at a women’s college, she became the first female faculty member in Princeton University’s physics department. That job was short-lived; in 1944, Columbia University recruited her to work on the Manhattan Project, where she would advise a stumped Enrico Fermi on how to sustain a nuclear chain reaction. 

Wu returned to research at Columbia after the war. Her reputation for brilliance and meticulousness grew in 1949, when she became the first to design an experiment that proved Fermi’s theory of beta decay, a type of radioactive decay in which a neutron spontaneously breaks down into a proton and a high-speed electron (a.k.a. a beta particle). In 1956, two theoretical physicists, Tsung-Dao Lee of Columbia and Chen Ning Yang of Princeton, sought Wu’s expertise in answering a provocative question: Is parity really conserved across the universe?
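
For reference, the decay Wu probed can be written as a reaction equation; Fermi’s full theory also includes an antineutrino, which the simplified description above omits:

```latex
% Beta decay: a neutron becomes a proton, an electron, and an antineutrino.
n \longrightarrow p + e^{-} + \bar{\nu}_{e}
```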

The law had been called into question by a problem known as the “theta-tau puzzle,” a recently discovered paradox in particle physics. Theta and tau were two subatomic particles that were exactly the same in every respect—except that one decayed into two smaller particles, and the other into three. This asymmetry confounded the physics community. Yang and Lee dove deep into the literature to see if anyone had ever actually proven that the nucleus of a particle always behaved symmetrically. As they found out, nobody had. So Wu, whom they consulted during the process of writing their theoretical paper, got to work designing an experiment that would prove that it didn’t. 

Over the next few months, the men were in near constant communication with Wu. The monumental experiment that she designed and carried out “rang the death knell for the concept of parity conservation in weak interactions,” wrote nuclear physicist Noemie Benczer-Koller in her biography of Wu. Wu’s findings sparked such a sensation that they led to a Nobel Prize in physics—but only for Yang and Lee. Wu’s groundbreaking work in proving the theory they advanced was ignored. 

Though her genius allowed her to work in the same spaces as theoretical scientists, says Hayes, “once there, she was not treated as a peer.” But despite how frequently she experienced discrimination throughout her career—during which she won every award in the field except the Nobel—Wu didn’t stop researching until her retirement in 1981. 

Throughout her life, she was an outspoken advocate for the advancement of female physicists, campaigning for the establishment of parity where it actually counted. “Why didn’t we encourage more women to go into science?” she asked the crowd at an MIT symposium in 1964. “I wonder whether the tiny atoms and nuclei, or the mathematical symbols, or the DNA molecules, have any preference for either masculine or feminine treatment.”

The post Chien-Shiung Wu’s work defied the laws of physics appeared first on Popular Science.

From the archives: The discovery of electrons breaks open the subatomic era https://www.popsci.com/science/discovery-electron/ Mon, 16 May 2022 13:00:00 +0000 https://www.popsci.com/?p=441912
An image of the 1901 issue of Popular Science Monthly
“On bodies smaller than atoms” (J. J. Thomson, 1901). Popular Science

In the August 1901 issue of Popular Science, physicist J. J. Thomson excitedly detailed his methods for discovering the electron.


To mark our 150th year, we’re revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation—with an added hint of modern context. Explore the entire From the Archives series and check out all our anniversary coverage here.

As far as human knowledge is concerned, the electron turned 125 on April 30, 2022. Of course, the subatomic particle has been around since shortly after the Big Bang, but here on Earth nobody knew about it until British physicist J. J. Thomson announced his discovery on April 30, 1897, at the Royal Institution in London. 

In August 1901, Thomson wrote “On Bodies Smaller Than Atoms” for Popular Science, detailing his discovery and methods. By today’s standards, the piece reads like a hybrid journal article and memoir, capturing his pride and the thrill of discovery. Thomson was awarded the 1906 Nobel Prize in Physics for isolating electrons as something fundamental to all atoms.

At the time of Thomson’s finding, no one had ever detected anything smaller than a hydrogen atom (one proton and one electron, no neutron). However, electricity’s ability to flow through materials—coupled, as Thomson cites, with Marie Curie’s radiation experiments and associated electric fields—suggested the possibility. 

Thomson did more than discover electrons; his method, which involved accelerating particles between electrodes, kicked off a new way to study the subatomic world, using accelerators and colliders to smash apart the smallest of the small. By 1911, Ernest Rutherford presented his atomic model, which confirmed Thomson’s electron discovery but disproved his broader hypothesis that the atom’s positive charge was uniformly distributed, with electrons embedded throughout it. Today, a potpourri of other elementary particles, such as quarks and neutrinos, fills out the Standard Model of particle physics, developed in the 1970s. The most elusive, perhaps, is the Higgs boson—believed to be the origin of mass for all subatomic particles—first spied in 2012 by physicists at CERN’s Large Hadron Collider. But even the Standard Model has its gaps, like dark matter and the imbalance between matter and antimatter, which, a century on, continue to fuel the quest for bodies smaller than atoms.

“On bodies smaller than atoms” (J. J. Thomson, August 1901)

The masses of the atoms of the various gasses were first investigated about thirty years ago by methods due to Loschmidt, Johnstone Stoney and Lord Kelvin. These physicists, using the principles of the kinetic theory of gasses and making certain assumptions, which it must be admitted are not entirely satisfactory, as to the shape of the atom, determined the mass of an atom of a gas: and when once the mass of an atom of one substance is known the masses of the atoms of all other substances are easily deduced by well-known chemical considerations. 

The results of these investigations might be thought not to leave much room for the existence of anything smaller than ordinary atoms, for they showed that in a cubic centimeter of gas at atmospheric pressure and at 0° C. there are about 20 million, million, million (2 × 10¹⁹) molecules of gas.

Though some of the arguments used to get this result are open to question, the result itself has been confirmed by considerations of quite a different kind. Thus Lord Rayleigh has shown that this number of molecules per cubic centimeter gives about the right value for the optical opacity of the air, while a method, which I will now describe, by which we can directly measure the number of molecules in a gas leads to a result almost identical with that of Loschmidt. This method is founded on Faraday’s laws of electrolysis; we deduce from these laws that the current through an electrolyte is carried by the atoms of the electrolyte, and that all these atoms carry the same charge, so that the weight of the atoms required to carry a given quantity of electricity is proportional to the quantity carried. We know too, by the results of experiments on electrolysis, that to carry the unit charge of electricity requires a collection of atoms of hydrogen which together weigh about 1/10 of a milligram; hence if we can measure the charge of electricity on an atom of hydrogen we see that 1/10 of this charge will be the weight in milligrams of the atom of hydrogen. This result is for the case when electricity passes through a liquid electrolyte. I will now explain how we can measure the mass of the carriers of electricity required to convey a given charge of electricity through a rarefied gas. In this case the direct methods which are applicable to liquid electrolytes cannot be used, but there are other, if more indirect, methods, by which we can solve the problem. The first case of conduction of electricity through gasses we shall consider is that of the so-called cathode rays, those streamers from the negative electrode in a vacuum tube which produce the well-known green phosphorescence on the glass of the tube. These rays are now known to consist of negatively electrified particles moving with great rapidity. Let us see how we can determine the electric charge carried by a given mass of these particles. We can do this by measuring the effect of electric and magnetic forces on the particles. If these are charged with electricity they ought to be deflected when they are acted on by an electric force. It was some time, however, before such a deflection was observed, and many attempts to obtain this deflection were unsuccessful. The want of success was due to the fact that the rapidly moving electrified particles which constitute the cathode rays make the gas through which they pass a conductor of electricity; the particles are thus as it were moving inside conducting tubes which screen them off from an external electric field; by reducing the pressure of the gas inside the tube to such an extent that there was very little gas left to conduct, I was able to get rid of this screening effect and obtain the deflection of the rays by an electrostatic field. The cathode rays are also deflected by a magnet, the force exerted on them by the magnetic field is at right angles to the magnetic force, at right angles also to the velocity of the particle and equal to Hev sin 𝜽 where H is the magnetic force, e the charge on the particle and 𝜽 the angle between H and v. Sir George Stokes showed long ago that, if the magnetic force was at right angles to the velocity of the particle, the latter would describe a circle whose radius is mv/eH (if m is the mass of the particle); we can measure the radius of this circle and thus find mv/e. 
To find v let an electric force F and a magnetic force H act simultaneously on the particle, the electric and magnetic forces being both at right angles to the path of the particle and also at right angles to each other. Let us adjust these forces so that the effect of the electric force which is equal to Fe just balances that of the magnetic force which is equal to Hev; when this is the case Fe = Hev, or v = F/H. We can thus find v, and knowing from the previous experiment the value of mv/e, we deduce the value of m/e. The value of m/e found in this way was about 10⁻⁷, and other methods used by Wiechert, Kaufmann and Lenard have given results not greatly different. Since m/e = 10⁻⁷, we see that to carry unit charge of electricity by the particles forming the cathode rays only requires a mass of these particles amounting to one ten-thousandth of a milligram while to carry the same charge by hydrogen atoms would require a mass of one-tenth of a milligram.* 
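
The two relations combine neatly, and the arithmetic is easy to retrace. Below is a minimal numerical sketch, in Python, of the crossed-field method; the field strengths and radius are illustrative values we have assumed, not Thomson’s recorded data.

    # Sketch of Thomson's crossed-field method (illustrative numbers,
    # CGS electromagnetic units; these are assumed, not Thomson's data).
    H = 150.0      # magnetic force, gauss (assumed)
    F = 4.5e11     # electric force, abvolt/cm (about 4,500 volts/cm, assumed)
    r = 2.0        # measured radius of the magnetically bent path, cm (assumed)

    # Balanced deflections: F*e = H*e*v, so v = F/H
    v = F / H
    # Magnetic deflection alone: r = m*v/(e*H), so m/e = r*H/v
    m_over_e = r * H / v

    print(f"v ~ {v:.1e} cm/s")             # ~3e9 cm/s, a tenth the speed of light
    print(f"m/e ~ {m_over_e:.0e} g/emu")   # ~1e-7, Thomson's figure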

Thus to carry a given charge of electricity by hydrogen atoms requires a mass a thousand times greater than to carry it by the negatively electrified particles which constitute the cathode rays, and it is very significant that, while the mass of atoms required to carry a given charge through a liquid electrolyte depends upon the kind of atom, being, for example, eight times greater for oxygen than for hydrogen atoms, the mass of cathode ray particles required to carry a given charge is quite independent of the gas through which the rays travel and of the nature of the electrode from which they start.

The exceedingly small mass of these particles for a given charge compared with that of the hydrogen atoms might be due either to the mass of each of these particles being very small compared with that of a hydrogen atom or else to the charge carried by each particle being large compared with that carried by the atom of hydrogen. It is therefore essential that we should determine the electric charge carried by one of these particles. The problem is as follows: suppose in an enclosed space we have a number of electrified particles each carrying the same charge, it is required to find the charge on each particle. It is easy by electrical methods to determine the total quantity of electricity on the collection of particles and knowing this we can find the charge on each particle if we can count the number of particles. To count these particles the first step is to make them visible. We can do this by availing ourselves of a discovery made by C. T. R. Wilson working in the Cavendish Laboratory. Wilson has shown that when positively and negatively electrified particles are present in moist dust-free air a cloud is produced when the air is cooled by a sudden expansion, though this amount of expansion would be quite insufficient to produce condensation when no electrified particles are present: the water condenses round the electrified particles, and, if these are not too numerous, each particle becomes the nucleus of a little drop of water. Now Sir George Stokes has shown how we can calculate the rate at which a drop of water falls through air if we know the size of the drop, and conversely we can determine the size of the drop by measuring the rate at which it falls through the air, hence by measuring the speed with which the cloud falls we can determine the volume of each little drop; the whole volume of water deposited by cooling the air can easily be calculated, and dividing the whole volume of water by the volume of one of the drops we get the number of drops, and hence the number of the electrified particles. We saw, however, that if we knew the number of particles we could get the electric charge on each particle; proceeding in this way I found that the charge carried by each particle was about 6.5 × 10⁻¹⁰ electrostatic units of electricity or 2.17 × 10⁻²⁰ electro-magnetic units. According to the kinetic theory of gasses, there are 2 × 10¹⁹ molecules in a cubic centimeter of gas at atmospheric pressure and at the temperature 0° C.; as a cubic centimeter of hydrogen weighs about 1/11 of a milligram each molecule of hydrogen weighs about 1/(22 × 10¹⁹) milligrams and each atom therefore about 1/(44 × 10¹⁹) milligrams and as we have seen that in the electrolysis of solutions one-tenth of a milligram carries unit charge, the atom of hydrogen will carry a charge equal to 10/(44 × 10¹⁹) = 2.27 × 10⁻²⁰ electro-magnetic units. The charge on the particles in a gas we have seen is equal to 2.17 × 10⁻²⁰ units, these numbers are so nearly equal that, considering the difficulties of the experiments, we may feel sure that the charge on one of these gaseous particles is the same as that on an atom of hydrogen in electrolysis. This result has been verified in a different way by Professor Townsend, who used a method by which he found, not the absolute value of the electric charge on a particle, but the ratio of this charge to the charge on an atom of hydrogen and he found that the two charges were equal.
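
Thomson’s closing arithmetic can be retraced line by line; here is a short sketch in Python, using only the numbers quoted in the text above.

    # Retracing Thomson's comparison of the gaseous-particle charge with
    # the hydrogen atom's electrolytic charge (numbers from the text).
    n = 2e19                  # molecules per cubic cm of gas at 0 deg C, 1 atm
    cc_hydrogen_mg = 1 / 11   # mass of 1 cc of hydrogen, milligrams

    m_molecule = cc_hydrogen_mg / n   # one hydrogen molecule, mg
    m_atom = m_molecule / 2           # one hydrogen atom, mg

    # In electrolysis, one-tenth of a milligram of hydrogen carries unit
    # charge, so the charge per atom is m_atom / (1/10), in emu.
    charge_per_atom = m_atom / 0.1
    print(f"hydrogen-atom charge ~ {charge_per_atom:.2e} emu")  # ~2.27e-20

    # Cloud-counting measurement of the charge on a gaseous particle:
    print("measured gaseous-particle charge = 2.17e-20 emu")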

As the charges on the particle and the hydrogen atom are the same, the fact that the mass of these particles required to carry a given charge of electricity is only one-thousandth part of the mass of the hydrogen atoms shows that the mass of each of these particles is only about 1/1000 of that of a hydrogen atom. These particles occurred in the cathode rays inside a discharge tube, so that we have obtained from the matter inside such a tube particles having a much smaller mass than that of the atom of hydrogen, the smallest mass hitherto recognized. These negatively electrified particles, which I have called corpuscles, have the same electric charge and the same mass whatever be the nature of the gas inside the tube or whatever the nature of the electrodes; the charge and mass are invariable. They therefore form an invariable constituent of the atoms or molecules of all gasses and presumably of all liquids and solids.

Nor are the corpuscles confined to the somewhat inaccessible regions in which cathodic rays are found. I have found that they are given off by incandescent metals, by metals when illuminated by ultraviolet light, while the researches of Becquerel and Professor and Madame Curie have shown that they are given off by that wonderful substance the radio-active radium. In fact in every case in which the transport of negative electricity through gas at a low pressure (i.e., when the corpuscles have nothing to stick to) has been examined, it has been found that the carriers of the negative electricity are these corpuscles of invariable mass.

A very different state of things holds for the positive electricity. The masses of the carriers of positive electricity have been determined for the positive electrification in vacuum tubes by Wien and by Ewers, while I have measured the same thing for the positive electrification produced in a gas by an incandescent wire. The results of these experiments show a remarkable difference between the property of positive and negative electrification, for the positive electricity, instead of being associated with a constant mass 1/1000 of that of the hydrogen atom, is found to be always connected with a mass which is of the same order as that of an ordinary molecule, and which, moreover, varies with the nature of the gas in which the electrification is found.

These two results, the invariability and smallness of the mass of the carriers of negative electricity, and the variability and comparatively large mass of the carriers of positive electricity, seem to me to point unmistakably to a very definite conception as to the nature of electricity. Do they not obviously suggest that negative electricity consists of these corpuscles or, to put it the other way, that these corpuscles are negative electricity: and that positive electrification consists in the absence of these corpuscles from ordinary atoms? Thus this point of view approximates very closely to the old one-fluid theory of Franklin; on that theory electricity was regarded as a fluid, and changes in the state of electrification were regarded as due to the transport of this fluid from one place to another. If we regard Franklin’s electric fluid as a collection of negatively electrified corpuscles, the old one-fluid theory will, in many respects, express the results of the new. We have seen that we know a good deal about the ‘electric fluid’; we know that it is molecular or rather corpuscular in character; we know the mass of each of these corpuscles and the charge of electricity carried by it; we have seen too that the velocity with which the corpuscles move can be determined without difficulty. In fact the electric fluid is much more amenable to experiment than an ordinary gas, and the details of its structure are more easily determined.

Negative electricity (i.e., the electric fluid) has mass; a body negatively electrified has a greater mass than the same body in the neutral state; positive electrification, on the other hand, since it involves the absence of corpuscles, is accompanied by a diminution in mass.

An interesting question arises as to the nature of the mass of these corpuscles which we may illustrate in the following way. When a charged corpuscle is moving, it produces in the region around it a magnetic field whose strength is proportional to the velocity of the corpuscle; now in a magnetic field there is an amount of energy proportional to the square of the strength, and thus, in this case, proportional to the square of the velocity of the corpuscle.

Thus if e is the electric charge on the corpuscle and v its velocity, there will be in the region round the corpuscle an amount of energy equal to ½βe²v², where β is a constant which depends upon the shape and size of the corpuscle. Again if m is the mass of the corpuscle its kinetic energy is ½mv², and thus the total energy due to the moving electrified corpuscle is ½(m + βe²)v², so that for the same velocity it has the same kinetic energy as a non-electrified body whose mass is greater than that of the electrified body by βe². Thus a charged body possesses in virtue of its charge, as I showed twenty years ago, an apparent mass apart from that arising from the ordinary matter in the body. Thus in the case of these corpuscles, part of their mass is undoubtedly due to their electrification, and the question arises whether or not the whole of their mass can be accounted for in this way. I have recently made some experiments which were intended to test this point; the principle underlying these experiments was as follows: if the mass of the corpuscle is the ordinary “mechanical” mass, then, if a rapidly moving corpuscle is brought to rest by colliding with a solid obstacle, its kinetic energy being resident in the corpuscle will be spent in heating up the molecules of the obstacle in the neighborhood of the place of collision, and we should expect the mechanical equivalent of the heat produced in the obstacle to be equal to the kinetic energy of the corpuscle. If, on the other hand, the mass of the corpuscle is “electrical,” then the kinetic energy is not in the corpuscle itself, but in the medium around it, and, when the corpuscle is stopped, the energy travels outwards into space as a pulse confined to a thin shell traveling with the velocity of light. I suggested some time ago that this pulse forms the Röntgen rays which are produced when the corpuscles strike against an obstacle. On this view, the first effect of the collision is to produce Röntgen rays and thus, unless the obstacle against which the corpuscle strikes absorbs all these rays, the energy of the heat developed in the obstacle will be less than the energy of the corpuscle. Thus, on the view that the mass of the corpuscle is wholly or mainly electrical in its origin, we should expect the heating effect to be smaller when the corpuscles strike against a target permeable by the Röntgen rays given out by the tube in which the corpuscles are produced than when they strike against a target opaque to these rays. I have tested the heating effects produced in permeable and opaque targets, but have never been able to get evidence of any considerable difference between the two cases. The differences actually observed were small compared with the total effect and were sometimes in one direction and sometimes in the opposite. The experiments, therefore, tell against the view that the whole of the mass of a corpuscle is due to its electrical charge. The idea that mass in general is electrical in its origin is a fascinating one, although it has not at present been reconciled with the results of experience.
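
Later classical theory made β explicit: for a charged sphere of radius a, the “electrical mass” βe² works out to 2e²/(3ac²). The sketch below is our illustration of the order of magnitude involved, not Thomson’s own calculation; the corpuscle radius is an assumed value.

    # How much mass can a corpuscle's charge alone supply? For a charged
    # sphere of radius a, later classical theory gives beta*e^2 = 2e^2/(3*a*c^2).
    # This formula and these numbers are our illustration, not Thomson's own.
    e = 4.8e-10    # corpuscle charge, esu
    c = 3.0e10     # speed of light, cm/s
    a = 2.8e-13    # assumed radius of the corpuscle, cm

    m_electrical = 2 * e**2 / (3 * a * c**2)
    print(f"electromagnetic mass ~ {m_electrical:.1e} g")   # ~6e-28 g

    m_corpuscle = 9.1e-28   # modern value of the electron mass, g
    print(f"fraction of the total mass ~ {m_electrical / m_corpuscle:.0%}")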

The smallness of these particles marks them out as likely to afford a very valuable means for investigating the details of molecular structure, a structure so fine that even waves of light are on far too large a scale to be suitable for its investigation, as a single wavelength extends over a large number of molecules. This anticipation has been fully realized by Lenard’s experiments on the obstruction offered to the passage of these corpuscles through different substances. Lenard found that this obstruction depended only upon the density of the substance and not upon its chemical composition or physical state. He found that, if he took plates of different substances of equal areas and of such thicknesses that the masses of all the plates were the same, then, no matter what the plates were made of, whether of insulators or conductors, whether of gasses, liquids or solids, the resistance they offered to the passage of the corpuscles through them was the same. Now this is exactly what would happen if the atoms of the chemical elements were aggregations of a large number of equal particles of equal mass; the mass of an atom being proportional to the number of these particles contained in it and the atom being a collection of such particles through the interstices between which the corpuscle might find its way. Thus a collision between a corpuscle and an atom would not be so much a collision between the corpuscle and the atom as a whole, as between a corpuscle and the individual particles of which the atom consists; and the number of collisions the corpuscle would make, and therefore the resistance it would experience, would be the same if the number of particles in unit volume were the same, whatever the nature of the atoms might be into which these particles are aggregated. The number of particles in unit volume is however fixed by the density of the substance and thus on this view the density and the density alone should fix the resistance offered by the substance to the motion of a corpuscle through it; this, however, is precisely Lenard’s result, which is thus a strong confirmation of the view that the atoms of the elementary substances are made up of simpler parts all of which are alike. This and similar views of the constitution of matter have often been advocated; thus in one form of it, known as Prout’s hypothesis, all the elements were supposed to be compounds of hydrogen. We know, however, that the mass of the primordial atom must be much less than that of hydrogen. Sir Norman Lockyer has advocated the composite view of the nature of the elements on spectroscopic grounds, but the view has never been more boldly stated than it was long ago by Newton, who says:

“The smallest particles of matter may cohere by the strongest attraction and compose bigger particles of weaker virtue and many of these may cohere and compose bigger particles whose virtue is still weaker and so on for divers succession, until the progression ends in the biggest particles on which the operations in Chemistry and the colours of natural bodies depend and which by adhering compose bodies of a sensible magnitude.”

The reasoning we used to prove that the resistance to the motion of the corpuscle depends only upon the density is only valid when the sphere of action of one of the particles on a corpuscle does not extend as far as the nearest particle. We shall show later on that the sphere of action of a particle on a corpuscle depends upon the velocity of the corpuscle, the smaller the velocity the greater being the sphere of action, and that if the velocity of the corpuscle falls as low as 10⁷ centimeters per second, then, from what we know of the charge on the corpuscle and the size of molecules, the sphere of action of the particle might be expected to extend further than the distance between two particles and thus for corpuscles moving with this and smaller velocities we should not expect the density law to hold. 

Existence of free corpuscles or negative electricity in metals

In the cases hitherto described the negatively electrified corpuscles had been obtained by processes which require the bodies from which the corpuscles are liberated to be subjected to somewhat exceptional treatment. Thus in the case of the cathode rays the corpuscles were obtained by means of intense electric fields, in the case of the incandescent wire by great heat, in the case of the cold metal surface by exposing this surface to light. The question arises whether there is not to some extent, even in matter in the ordinary state and free from the action of such agencies, a spontaneous liberation of these corpuscles, a kind of dissociation of the neutral molecules of the substance into positively and negatively electrified parts, of which the latter are the negatively electrified corpuscles.

Let us consider the consequences of some such effect occurring in a metal, the atoms of the metal splitting up into negatively electrified corpuscles and positively electrified atoms and these again after a time re-combining to form neutral systems. When things have got into a steady state the number of corpuscles re-combining in a given time will be equal to the number liberated in the same time. There will thus be diffused through the metal swarms of these corpuscles; these will be moving about in all directions like the molecules of a gas and, as they can gain or lose energy by colliding with the molecules of the metal, we should expect by the kinetic theory of gasses that they will acquire such an average velocity that the mean kinetic energy of a corpuscle moving about in the metal is equal to that possessed by a molecule of a gas at the temperature of the metal; this would make the average velocity of the corpuscles at 0° C. about 10⁷ centimeters per second. This swarm of negatively electrified corpuscles when exposed to an electric force will be sent drifting along in the direction opposite to the force; this drifting of the corpuscles will be an electric current, so that we could in this way explain the electrical conductivity of metals.

The amount of electricity carried across unit area under a given electric force will depend upon and increase with (1) the number of free corpuscles per unit volume of the metal, (2) the freedom with which these can move under the force between the atoms of the metal; the latter will depend upon the average velocity of these corpuscles, for if they are moving with very great rapidity the electric force will have very little time to act before the corpuscle collides with an atom, and the effect produced by the electric force is annulled. Thus the average velocity of drift imparted to the corpuscles by the electric field will diminish as the average velocity of translation, which is fixed by the temperature, increases. As the average velocity of translation increases with the temperature, the corpuscles will move more freely under the action of an electric force at low temperatures than at high, and thus from this cause the electrical conductivity of metals would increase as the temperature diminishes. In a paper presented to the International Congress of Physics at Paris in the autumn of last year, I described a method by which the number of corpuscles per unit volume and the velocity with which they move under an electric force can be determined. Applying this method to the case of bismuth, it appears that at the temperature of 20° C. there are about as many corpuscles in a cubic centimeter as there are molecules in the same volume of a gas at the same temperature and at a pressure of about ¼ of an atmosphere, and that the corpuscles under an electric field of 1 volt per centimeter would travel at the rate of about 70 meters per second. Bismuth is at present the only metal for which the data necessary for the application of this method exist, but experiments are in progress at the Cavendish Laboratory which it is hoped will furnish the means for applying the method to other metals. We know enough, however, to be sure that the corpuscles in good conductors, such as gold, silver or copper, must be much more numerous than in bismuth, and that the corpuscular pressure in these metals must amount to many atmospheres. These corpuscles increase the specific heat of a metal, and the specific heat gives a superior limit to the number of them in the metal.
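
Thomson’s figure of about 10⁷ centimeters per second follows from the kinetic-theory relation ½mv² = (3/2)kT; here is a quick check using modern constants, which Thomson did not have in this form.

    # Checking Thomson's ~1e7 cm/s thermal speed for corpuscles in a metal.
    # Kinetic theory: (1/2)*m*v^2 = (3/2)*k*T, so v = sqrt(3*k*T/m).
    # Modern CGS constants; Thomson quotes only the end result.
    import math

    k = 1.38e-16   # Boltzmann constant, erg/K
    T = 273.0      # 0 deg C, in kelvin
    m = 9.1e-28    # corpuscle (electron) mass, g

    v = math.sqrt(3 * k * T / m)
    print(f"v ~ {v:.1e} cm/s")   # ~1.1e7 cm/s, matching Thomson's estimate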

An interesting application of this theory is to the conduction of electricity through thin films of metal. Longden has recently shown that when the thickness of the film falls below a certain value, the specific resistance of the film increases rapidly as the thickness of the film diminishes. This result is readily explained by this theory of metallic conduction, for when the film gets so thin that its thickness is comparable with the mean free path of a corpuscle, the number of collisions made by a corpuscle in a film will be greater than in the metal in bulk; thus the mobility of the particles in the film will be less and the electrical resistance consequently greater.

The corpuscles disseminated through the metal will do more than carry the electric current; they will also carry heat from one part to another of an unequally heated piece of metal. For if the corpuscles in one part of the metal have more kinetic energy than those in another, then, in consequence of the collisions of the corpuscles with each other and with the atoms, the kinetic energy will tend to pass from those places where it is greater to those where it is less, and in this way heat will flow from the hot to the cold parts of the metal. As the rate at which the heat is carried will increase with the number of corpuscles and with their mobility, it will be influenced by the same circumstances as the conduction of electricity, so that good conductors of electricity should also be good conductors of heat. If we calculate the ratio of the thermal to the electric conductivity on the assumption that the whole of the heat is carried by the corpuscles we obtain a value which is of the same order as that found by experiment.
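
Drude soon made this estimate explicit: on these assumptions the ratio of thermal to electrical conductivity comes out proportional to the absolute temperature, κ/σ = (3/2)(k/e)²T, which lands within a factor of about two of measured values. The sketch below uses Drude’s classical formula with modern SI constants; it is our reconstruction of the calculation the text alludes to, not Thomson’s own figures.

    # Drude's classical estimate of the thermal-to-electrical conductivity
    # ratio (our reconstruction, SI units; Thomson gives no numbers here).
    k_B = 1.381e-23   # Boltzmann constant, J/K
    e   = 1.602e-19   # corpuscle charge, C

    lorenz_classical = 1.5 * (k_B / e) ** 2   # kappa / (sigma * T)
    print(f"classical Lorenz number ~ {lorenz_classical:.2e} W*ohm/K^2")  # ~1.1e-8
    # Measured values for good metals are ~2.4e-8: the same order, as the text says.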

Weber many years ago suggested that the electrical conductivity of metals was due to the motion through them of positively and negatively electrified particles, and this view has recently been greatly extended and developed by Riecke and by Drude. The objection to any electrolytic view of the conduction through metals is that, as in electrolysis, the transport of electricity involves the transport of matter, and no evidence of this has been detected. This objection does not apply to the theory sketched above, for on this view it is the corpuscles which carry the current, and these are not atoms of the metal, but very much smaller bodies which are the same for all metals.

It may be asked if the corpuscles are disseminated through the metal and moving about in it with an average velocity of about 10⁷ centimeters per second, how is it that some of them do not escape from the metal into the surrounding air? We must remember, however, that these negatively electrified corpuscles are attracted by the positively electrified atoms and in all probability by the neutral atoms as well, so that to escape from these attractions and get free a corpuscle would have to possess a definite amount of energy; if a corpuscle had less energy than this then, even though projected away from the metal, it would fall back into it after traveling a short distance. When the metal is at a high temperature, as in the case of the incandescent wire, or when it is illuminated by ultraviolet light, some of the corpuscles acquire sufficient energy to escape from the metal and produce electrification in the surrounding gas. We might expect too that, if we could charge a metal so highly with negative electricity that the work done by the electric field on the corpuscle in a distance not greater than the sphere of action of the atoms on the corpuscles was greater than the energy required for a corpuscle to escape, then the corpuscles would escape and negative electricity stream from the metal. In this case the discharge could be effected without the participation of the gas surrounding the metal and might even take place in an absolute vacuum, if we could produce such a thing. We have as yet no evidence of this kind of discharge, unless indeed some of the interesting results recently obtained by Earhart with very short sparks should be indications of an effect of this kind.

A very interesting case of the spontaneous emission of corpuscles is that of the radio-active substance radium discovered by M. and Madame Curie. Radium gives out negatively electrified corpuscles which are deflected by a magnet. Becquerel has determined the ratio of the mass to the charge of the radium corpuscles and finds it is the same as for the corpuscles in the cathode rays. The velocity of the radium corpuscles is, however, greater than any that has hitherto been observed for either cathode or Lenard rays: being, as Becquerel found, as much as 2 × 10¹⁰ centimeters per second or two-thirds the velocity of light. This enormous velocity explains why the corpuscles from radium are so very much more penetrating than the corpuscles from cathode or Lenard rays; the difference in this respect is very striking, for while the latter can only penetrate solids when they are beaten out into the thinnest films, the corpuscles from radium have been found by Curie to be able to penetrate a piece of glass 3 millimeters thick. To see how an increase in the velocity can increase the penetrating power, let us take as an illustration of a collision between the corpuscle and the particles of the metal the case of a charged corpuscle moving past an electrified body; a collision may be said to occur between these when the corpuscle comes so close to the charged body that its direction of motion after passing the body differs appreciably from that with which it started. A simple calculation shows that the deflection of the corpuscle will only be considerable when the kinetic energy with which the corpuscle starts on its journey towards the charged body is not large compared with the work done by the electric forces on the corpuscle in its journey to the shortest distance from the charged body. If d is the shortest distance, e and e’ the charges of the body and corpuscle, the work done is ee’/d; while if m is the mass and v the velocity with which the corpuscle starts the kinetic energy to begin with is ½mv²; thus a considerable deflection of the corpuscle, i.e., a collision, will occur only when ee’/d is comparable with ½mv²; and d, the distance at which a collision occurs, will vary inversely as v². As d is the radius of the sphere of action for collision and as the number of collisions is proportional to the area of a section of this sphere, the number of collisions is proportional to d², and therefore varies inversely as v⁴. This illustration explains how rapidly the number of collisions and therefore the resistance offered to the motion of the corpuscles through matter diminishes as the velocity of the corpuscles increases, so that we can understand why the rapidly moving corpuscles from radium are able to penetrate substances which are nearly impermeable to the more slowly moving corpuscles from cathode and Lenard rays.
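
The inverse-fourth-power law makes the radium numbers vivid. A sketch comparing radium corpuscles at 2 × 10¹⁰ centimeters per second with ordinary cathode-ray corpuscles (the cathode-ray speed below is an assumed, typical figure):

    # Why radium's corpuscles penetrate so much farther: collisions scale as 1/v^4.
    # d ~ 1/v^2 (closest approach for a sensible deflection), and the
    # collision cross-section ~ d^2, hence ~ 1/v^4.
    v_radium  = 2e10   # cm/s, Becquerel's figure quoted in the text
    v_cathode = 3e9    # cm/s, a typical cathode-ray speed (assumed)

    ratio = (v_cathode / v_radium) ** 4
    print(f"radium corpuscles suffer ~{ratio:.1e} times the collisions")
    # ~5e-4: roughly a two-thousandfold reduction, hence glass-piercing rays.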

Cosmical effects produced by corpuscles

As a very hot metal emits these corpuscles it does not seem an improbable hypothesis that they are emitted by that very hot body, the sun. Some of the consequences of this hypothesis have been worked out by Paulsen, Birkeland and Arrhenius, who have developed a theory of the Aurora Borealis from this point of view. Let us suppose that the sun gives out corpuscles which travel out through interplanetary space; some of these will strike the upper regions of the Earth’s atmosphere and will then, or even before then, come under the influence of the Earth’s magnetic field. The corpuscles, when in such a field, will describe spirals round the lines of magnetic force; as the radii of these spirals will be small compared with the height of the atmosphere, we may for our present purpose suppose that they travel along the lines of the Earth’s magnetic force. Thus the corpuscles which strike the Earth’s atmosphere near the equatorial regions where the lines of magnetic force are horizontal will travel horizontally, and will thus remain at the top of the atmosphere where the density is so small that but little luminosity is caused by the passage of the corpuscles through the gas; as the corpuscles travel into higher latitudes where the lines of magnetic force dip, they follow these lines and descend into the lower and denser parts of the atmosphere, where they produce luminosity, which on this view is the Aurora.

As Arrhenius has pointed out the intensity of the Aurora ought to be a maximum at some latitude intermediate between the pole and the equator, for, though in the equatorial regions the rain of corpuscles from the sun is greatest, the Earth’s magnetic force keeps these in such highly rarefied gas that they produce but little luminosity, while at the pole, where the magnetic force would pull them straight down into the denser air, there are not nearly so many corpuscles; the maximum luminosity will therefore be somewhere between these places. Arrhenius has worked out this theory of the Aurora very completely and has shown that it affords a very satisfactory explanation of the various periodic variations to which it is subject.

As a gas becomes a conductor of electricity when corpuscles pass through it, the upper regions of the air will conduct, and when air currents occur in these regions, conducting matter will be driven across the lines of force due to the Earth’s magnetic field, electric currents will be induced in the air, and the magnetic force due to these currents will produce variations in the Earth’s magnetic field. Balfour Stewart suggested long ago that the variation of the Earth’s magnetic field was caused by currents in the upper regions of the atmosphere, and Schuster has shown, by the application of Gauss’ method, that the seat of these variations is above the surface of the Earth.

The negative charge in the Earth’s atmosphere will not increase indefinitely in consequence of the stream of negatively electrified corpuscles coming into it from the sun, for as soon as it gets negatively electrified it begins to repel negatively electrified corpuscles from the ionized gas in the upper regions of the air, and a state of equilibrium will be reached when the Earth has such a negative charge that the corpuscles driven by it from the upper regions of the atmosphere are equal in number to those reaching the Earth from the sun. Thus, on this view, interplanetary space is thronged with corpuscular traffic, rapidly moving corpuscles coming out from the sun while more slowly moving ones stream into it.

In the case of a planet which, like the moon, has no atmosphere there will be no gas for the corpuscles to ionize, and the negative electrification will increase until it is so intense that the repulsion exerted by it on the corpuscles is great enough to prevent them from reaching the surface of the planet.

Arrhenius has suggested that the luminosity of nebulae may not be due to high temperature, but may be produced by the passage through their outer regions of the corpuscles wandering about in space, the gas in the nebulae being quite cold. This view seems in some respects to have advantages over that which supposes the nebulae to be at very high temperatures. These and other illustrations, which might be given did space permit, seem to render it probable that these corpuscles may play an important part in cosmical as well as in terrestrial physics.

*Professor Schuster in 1889 was the first to apply the method of the magnetic deflection of the discharge to get a determination of the value of m/e; he found rather widely separated limiting values for this quantity and came to the conclusion that it was of the same order as in electrolytic solutions; the results of the method mentioned above, as well as those of Wiechert, Kaufmann and Lenard, make it very much smaller.

The cover of August 1901’s Popular Science Monthly.

Some text has been edited to match contemporary standards and style.

The post From the archives: The discovery of electrons breaks open the subatomic era appeared first on Popular Science.

The souped-up Large Hadron Collider is back to take on its weightiest questions yet https://www.popsci.com/science/large-hadron-collider-restarts-run/ Sun, 15 May 2022 17:00:00 +0000 https://www.popsci.com/?p=442423
The Large Hadron Collider's magnet chain.
A chain of magnets inside the tunnel at the Large Hadron Collider. Samuel Joseph Hertzog via CERN

What happens where the LHC's beams meet tells us how the universe works.


The bleeding edge of physics lies in a beam of subatomic particles, rushing in a circle very near the speed of light in an underground tunnel in Central Europe. That beam crashes into another racing just as fast in the other direction. The resulting collision produces a flurry of other particles, captured by detectors before they blink out of existence.

This is standard procedure at the Large Hadron Collider (LHC), which recently switched on for the first time since 2018, its beams now more powerful than ever. The LHC, located at the European Organization for Nuclear Research (CERN) near Geneva, is the world’s largest particle collider: a mammoth machine that literally smashes subatomic particles together and lets scientists watch the fountain of quantum debris that spews out.

That may seem unnecessarily violent for a physics experiment, but physicists have a good reason for the destruction. Inside those collisions, physicists can peel away the layers of our universe to see what makes it tick at the smallest scales.

The physicists behind the machine

The “large” in the LHC’s name is no exaggeration: The collider cuts a 17-mile-long magnetic loop, entirely underground, below the Geneva suburbs on both sides of the ragged French-Swiss border (home of CERN’s headquarters), through the shadow of the eastern slopes of France’s Jura Mountains, and back again.

Assembling such a colossus took time. First proposed in the 1980s and approved in the mid-1990s, the LHC took over a decade to build before its beam first switched on in 2008. Construction cost $4.75 billion, mostly drawn from the coffers of various European governments.

The LHC consumes enough electricity to power a small city. Even before its current upgrades, the LHC’s experiments produced a petabyte of data per day, enough to hold over 10,000 4K movies—and that’s after CERN’s computer network filtered out the excess. That data passes through the computers of thousands of scientists from every corner of the globe, although some parts of the world are better represented than others.
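
The movie comparison is rough arithmetic; a one-line sketch (the per-movie size is our assumption):

    # Rough check of the data claim: one petabyte per day vs. 4K movies.
    petabyte_gb = 1_000_000        # 1 PB in gigabytes (decimal units)
    movie_gb = 90                  # assumed size of one 4K movie, GB
    print(petabyte_gb / movie_gb)  # ~11,000 movies, so "over 10,000" holds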

[Related: The biggest particle collider in the world gets back to work]

Time, money, and people power continue to pour into the collider as physicists seek to answer the universe’s most fundamental questions. 

For instance, what causes mass to exist? Helping to answer that question has been one of the LHC’s most public triumphs to date. In 2012, LHC scientists announced the discovery of a long-sought particle known as the Higgs boson. The boson is the product of a field that gives particles mass when they interact with it.

The discovery of the Higgs boson was the final brick in the wall known as the Standard Model. It’s the heart of modern particle physics, a schematic that lays out about a dozen subatomic particles and how they neatly fit together to give rise to the universe we see.

But with every passing year, the Standard Model seems increasingly inadequate to answer basic questions. Why is there so much more matter in the universe than antimatter, its opposite? What makes up the massive chunk of our universe that seems to be unseen and unseeable? And why does gravity exist? The answers are anything but simple.

The answers may come in the form of yet-undiscovered particles. But, so far, they’ve eluded even the most powerful particle colliders. “We have not found any non-Standard Model particles at the LHC so far,” says Finn Rebassoo, a particle physicist at Lawrence Livermore National Laboratory in California and an LHC collaborator.

Upgrading the behemoth

Although the COVID-19 pandemic disrupted the LHC’s reopening (it was originally scheduled for 2020), the collider’s stewards have not sat by idly since 2018. As part of a raft of technical upgrades, they’ve topped up the collider’s beam, boosting its energy by about 5 percent. 

That may seem like a pittance (and it certainly pales in comparison to the planned High-Luminosity LHC upgrade later this decade that will boost the number of collisions). But scientists say that it still makes a difference.

“This means an increase in the likelihood for producing interesting physics,” says Elizabeth Brost, a particle physicist at Brookhaven National Laboratory on Long Island, and an LHC collaborator. “As a personal favorite example, we will now get 10 percent more events with pairs of Higgs bosons.”

The Standard Model says that paired Higgs bosons should be an extremely rare occurrence—and perhaps it is. But, if the LHC does produce pairs in abundance, it’s a sign that something yet undiscovered is at play.

“It’s a win-win situation: Either we observe Higgs pair production soon, which implies new physics,” says Brost, “or we will eventually be able to confirm the Standard Model prediction using the full LHC dataset.”

The enhancements also provide the chance to observe things never before seen. “Every extra bit provides more potential for finding new phenomena,” says Bo Jayatilaka, a particle physicist at Fermilab in suburban Chicago and an LHC collaborator.

Not long ago, fresh fodder for observation emerged—not from CERN, but from an old, now-shuttered accelerator at Fermilab. Researchers poring over old data found that the W boson, a particle responsible for causing radioactive decay inside atoms, seemed to have a heavier mass than anticipated. If that’s true, it could blow the Standard Model wide open.

Naturally, particle physicists want to make sure it is true. They’re already planning to repeat that W boson measurement at CERN, both with data collected from past experiments and with new data from experiments yet to come.

It will likely take time to get the LHC up to its newfound full capacity. “Typically, when the LHC is restarted it is a slow restart, meaning the amount of data in the first year is not quite as much as the subsequent years,” says Rebassoo. And analyzing the data it produces takes time, even for the great masses of scientists who work on the collider.

But as soon as 2023, we could see results—taking advantage of the collider’s newfound energy boost, Jayatilaka speculates.

The post The souped-up Large Hadron Collider is back to take on its weightiest questions yet appeared first on Popular Science.

From the archives: The Theory of Relativity gains speed https://www.popsci.com/science/theory-of-relativity-popularity/ Thu, 12 May 2022 11:00:00 +0000 https://www.popsci.com/?p=441261
A collage of images from the Popular Science article “The theory of relativity and the new mechanics” (William Marshall, June 1914)
“The theory of relativity and the new mechanics” (William Marshall, June 1914). Popular Science

A June 1914 article in Popular Science Monthly explored the precedents and implications of Einstein's 1905 Theory of Relativity.


To mark our 150th year, we’re revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation—with an added hint of modern context. Explore the entire From the Archives series and check out all our anniversary coverage here.

Although it may seem like Albert Einstein’s Theory of Relativity caught the world by surprise at the turn of the 20th century, in fact, it was a long time coming. Relativity’s roots can be traced to Galileo’s writings in 1632. To prove Copernicus’s heliocentric system, physics had to show that although Earth swung through space and rotated on its axis, observers on Earth would have no direct way of knowing that they were the ones in motion relative to the cosmos. Since early 17th century mathematics lacked the tools to aid Galileo’s proof, he conducted a thought experiment that employed the cabin of a ship to demonstrate the principle of relativity—how space and time are relative to frames of reference.

Even when Einstein published his theory in 1905, it did not arrive with a thunderclap. Rather, it slipped into the world almost incognito, in an Annalen der Physik article, “On the Electrodynamics of Moving Bodies.” By the time Popular Science published a detailed account of Einstein’s Theory of Relativity in 1914, its profound implications—such as light dictating the speed limit for everything, and the notion that time is not the same for everyone—had finally made their way through scientific circles. But as mathematician William Marshall, who penned Popular Science’s eminently readable explanation of the new theory, pointed out, Einstein’s work—somewhat poetically—was not accomplished in isolation. 

“The theory of relativity and the new mechanics” (William Marshall, June 1914)

He who elects to write on a mathematical topic is confronted with a choice between two evils. He may decide to handle his subject mathematically, using the conventional mathematical symbols, and whatever facts, formulas and equations the subject may demand—save himself who can! Or he may choose to abandon all mathematical symbols, formulas and equations, and attempt to translate into the vernacular this language which the mathematician speaks so fluently. In the one case there results a finished article which only the elect understand, in the other, only a rather crude and clumsy approximation to the truth. A similar condition exists in all highly specialized branches of learning, but it can safely be said that in no other science must one fare so far, and accumulate so much knowledge on the way, in order to investigate or even understand new problems. And so it is with some trepidation that the attempt is made to discuss in the following pages one of the newest and most important branches of mathematical activity. For the writer has chosen the second evil, and, deprived of his formulas, to borrow a figure of Poincaré’s, finds himself a cripple without his crutches.

After this mutually encouraging prologue let us introduce the subject with a definition. What is relativity? By relativity, the theory of relativity, the principle of relativity, the doctrine of relativity, is meant a new conception of the fundamental ideas of mechanics. By the relativity mechanics, or as we may sometimes say, the new mechanics, is meant that body of doctrine which is based on these new conceptions. Now this is a very simple definition and one which would be perfectly comprehensible to everybody, provided the four following points were made clear: first, what are the fundamental concepts of mechanics, second, what are the classical notions about them, third, how are these modified by the new relativity principles, and fourth, how did it come about that we have been forced to change our notions of these fundamental concepts which have not been questioned since the time of Newton? These four questions will now be discussed, though perhaps not in this order. The results reached are, to say the least, amazing, but perhaps our astonishment will not be greater than it was when first we learned, or heard rather, that the Earth is round, and that there are persons directly opposite us who do not fall off, and stranger yet, do not realize that they are in any immediate danger of doing so. 

In the first place then, how has it come about that our conceptions of the fundamental notions of mechanics have been proved wanting? This crime like many another may safely be laid at the door of the physicists, those restless beings who, with their eternal experimenting, are continually raising disturbing ghosts, and then frantically imploring the aid of the mathematicians in order to exorcize them. Let us briefly consider the experiment which led us into those difficulties from which the principle of relativity alone apparently can extricate us.

Consider a source of sound A at rest (Fig. 1), and surrounded by air, in which sound is propagated, also at rest. 

An illustration from the Popular Science article “The theory of relativity and the new mechanics” (William Marshall, June 1914).

Now, as every schoolboy knows, the time taken for sound to go to B is the same as that taken to go to C, if B and C are at the same distance from A. The same is true also if A, B and C are all moving with uniform velocity in any direction, carrying the air with them. This may be realized in a closed railway car or a boat. But if the points A, B, and C are moving with uniform velocity while the air is at rest, or, what is the same thing, if they are at rest and the air is moving past them with uniform velocity, the state of affairs is very different. If the three points are moving in the direction indicated by the arrow (Fig. 2), and if the air is at rest, and if a sound wave is sent out from A, then the time required for this sound wave to go from A to C is not the same as that required from A to B. Now as sound is propagated in air, so is light in an imaginary medium, the ether. Moreover, this ether is stationary, as many experiments show, and the earth is moving through it, in its path around the sun, with a considerable velocity. Therefore we have exactly the same case as before, and it should be very easy to show that the velocity of light in a direction perpendicular to the Earth’s direction of motion is different from that in a direction which coincides with it. But a famous experiment of Michelson and Morley, carried out with the utmost precision, showed not the slightest difference in these velocities. So fundamental are these two simple experimental facts, that it will be worthwhile to repeat them in slightly different form. If the three points A, B, C (Fig. 2), are moving to the right with a uniform unknown velocity through still air, and if a sound wave were sent out from A, it would be exceedingly simple to determine the velocity of the point A by a comparison of the time necessary for sound to travel from A to B and from A to C. But now if the same three points move through stationary ether, and if the wave emanating from A is a light wave, there is absolutely no way in which an observer connected with these three points can determine whether he is moving or not. Thus we are, in consequence of the Michelson and Morley experiment, driven to the first fundamental postulate of relativity: The uniform velocity of a body can not be determined by experiments made by observers on the body.
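
The asymmetry Marshall is describing is easy to quantify. With the medium streaming past at speed u, a signal of speed V needs 2LV/(V² − u²) for the round trip along the motion but only 2L/√(V² − u²) across it; the Python sketch below uses made-up sound-in-air numbers of our choosing.

    # Round-trip signal times when the medium streams past at speed u
    # (the asymmetry behind the Michelson-Morley test; numbers are ours).
    import math

    V = 340.0   # signal speed in the medium (sound in air), m/s
    u = 30.0    # speed of the medium past the observers, m/s
    L = 100.0   # distance from A to B or from A to C, meters

    t_along  = L / (V - u) + L / (V + u)        # downstream leg, then upstream
    t_across = 2 * L / math.sqrt(V**2 - u**2)   # across the stream and back

    print(f"along the motion : {t_along:.6f} s")    # ~0.5929 s
    print(f"across the motion: {t_across:.6f} s")   # ~0.5905 s
    # The along-trip is slightly longer; for light, Michelson and Morley
    # found no such difference at all.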

Consider now one of the fundamental concepts of mechanics, time. Physicists have not attempted to define it, admitting the impossibility of a definition, but still insisting that this impossibility was not owing to our lack of knowledge, but was due to the fact that there are no simpler concepts in terms of which time can be defined. As Newton says: “Absolute and real time flows on equably, having no relation in itself or in its nature to any external object.”

An illustration from the Popular Science article “The theory of relativity and the new mechanics” (William Marshall, June 1914).

Let us examine this statement, which embodies fairly our notion of time, in the light of the first fundamental principle of relativity just laid down. Suppose A and B (Fig. 3) are two observers, some distance apart, and they wish to set their clocks together. At a given instant agreed upon beforehand, A sends out a signal, by wireless if you wish, and B sets his clock at this instant. But obviously the signal has taken some time to pass from A to B, so B’s clock is slow. But this seems easy to correct; B sends a signal and A receives it, and they take the mean of the correction. But, says the first principle of relativity, both A and B are moving through the ether with a velocity which neither knows, and which neither can know, and therefore the time taken for the signal to pass from A to B is not the same as that taken to pass from B to A. Therefore the clocks are not together, and never can be, and when A’s clock indicates half-past two, B’s does not indicate this instant, and worse yet, there is absolutely no way of determining what time it does indicate. Time then is purely a local affair. The well-known phrase “at the same instant” has no meaning for A and B, unless a definition be laid down giving it a meaning. The “now” of A may be the “past” or “future” of B. To state the case in still other words, two events can no more happen simultaneously at two different places, than can two bodies occupy the same position.

An illustration from the Popular Science article “The theory of relativity and the new mechanics” (William Marshall, June 1914).

But doubtless the reader is anxious to say, this matter of adjusting the clocks together can still be settled. Let there be two clocks having the same rate at a point A, and let them be set together. Then let one of them be carried to the point B; can they not then be said to be together? Let us examine this relative motion of one clock with respect to another, in the light of the first principle of relativity. Let there be two observers as before with identical clocks, and for simplicity, suppose A is at rest and B moving on the line BX (Fig. 4). Suppose further BX parallel to AY. Let now A send out a light signal which is reflected on the line BX and returns to A. The signal has then traveled twice the distance between the lines in a certain time. B then repeats the same experiment, for, as far as he knows, he is at rest, and A moving in the opposite direction. The signal traverses twice the distance between the lines, and B’s clock must record the same interval of time as A’s did. But now suppose B’s experiment is visible to A. He sees the signal leave B, traverse the distance between the lines, and return, but not to the point B, but to the point to which B has moved in consequence of his velocity.

An illustration from the Popular Science article “The theory of relativity and the new mechanics” (William Marshall, June 1914).

That is, A sees the experiment as in Fig. 5, where the position of B’ depends on B’s velocity with respect to A. The state of affairs is to A then simply this: A signal with a certain known velocity has traversed the distance ABA while his (A’s) clock has registered a certain time interval. The same signal, moving with the same velocity, has traversed the greater distance BCB’ while B’s clock registers exactly the same time interval. The only conclusion is that to A, B’s clock appears to be running slow, as we say, and its rate will depend on the relative velocity of A and B. Thus we are led to a second conclusion regarding time in the relativity mechanics. To an observer on one body the time unit of another body moving relative to the first body varies with this relative velocity. This last conclusion regarding time is certainly staggering, for it takes away from us what we have long regarded as its most distinguishing characteristic, namely, its steady, inexorable, onward flow, which recognizes neither place nor position nor movement nor anything else. But now in the new mechanics it appears only as a relative notion, just as velocity is. There is no more reason why two beings should be living at the same rate, to coin an expression, than that two railroad trains should be running at the same speed. It is no longer a figure of speech to say that a thousand years are but as yesterday when it is past, but a thousand years and yesterday are actually the same time interval provided the bodies on which these two times are measured have a sufficiently high relative velocity.
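
[EDITOR’S NOTE: In modern notation, which postdates this article, the slowing that A infers for B’s clock follows from the Pythagorean geometry of Fig. 5. The sketch below is an editorial addition, not the author’s.]

```python
# A minimal sketch (editorial addition): the factor by which B's clock
# appears slowed to A. If light travels the slanted half-path from B to C
# while B moves at speed v, then (c*t')^2 = (v*t')^2 + (c*t)^2, which
# gives t' = t / sqrt(1 - v^2/c^2).
import math

def gamma(v, c=299_792_458.0):
    # One second on B's clock registers as gamma seconds to A
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

c = 299_792_458.0  # speed of light, m/s
print(gamma(0.1 * c))  # ~1.005: a half-percent effect at one-tenth light speed
print(gamma(0.9 * c))  # ~2.294: B's clock appears to run at less than half rate
```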

It is to be noted that in the above discussion, use was made of the fact that the light signal sent out by B appeared to A to have the same velocity as one sent out by A himself. This, stated in general terms, is the second fundamental postulate of relativity: the velocity of light in free space appears the same to all observers, regardless of the motion of the source of light or of the observer. It is an assumption pure and simple, reasonable on account of the analogy between sound and light, and it does not contradict any known facts.

Now there is a second fundamental concept of mechanics, very much resembling time in that we are unable to define it, namely, space. Instead of being one-dimensional, as is time, it is three-dimensional, which is not an essential difference. From the days of Newton and Galileo, physicists have agreed that space like time is everywhere the same, and that it too is independent of any motion or external object. To fix the ideas, consider any one of the units in measuring length, the yard, for example. To be sure, the bar of wood or iron, which in length more or less nearly represents this yard, may vary, as everyone knows, in its dimensions, on account of varying temperature or pressure or humidity, or whatnot, but the yard itself, this unit of linear space which we have arbitrarily chosen, according to all our preconceived notions, depends neither on place, nor position, nor motion, nor any other thinkable thing. But let us follow through another imaginary experiment in the light of the two fundamental postulates of relativity.

An illustration from the Popular Science article “The theory of relativity and the new mechanics” (William Marshall, June 1914).

Consider again our two observers A and B (Fig. 6), each furnished with a clock and a yardstick, A at rest, B moving in the direction indicated by the arrow. Suppose A sends out a light signal and adjusts a mirror at C, say, so that a ray of light goes from A to C and returns in, say, one second. A then measures the distance AC with his yardstick and finds a certain number. Then B, supposing that he himself is at rest and A in motion, sends out a light signal and adjusts a mirror at D so that a ray travels the distance BD and back again in one second of his time.

An illustration from the Popular Science article “The theory of relativity and the new mechanics” (William Marshall, June 1914).

B then measures the distance BD with his yardstick, and since the velocity of light is the same in any system, B comes out with the same number of units of length in BD as A found in AC. But A watching B’s experiment sees two remarkable facts: first, that the light has not traversed the distance BDB at all, but the greater distance BD’B’ (Fig. 7), where D’ and B’ are the points, respectively, to which D and B have moved in consequence of the motion; second, since B’s clock is running slow, the time taken for light to traverse this too great distance is itself too great. Now if too great a distance is traversed in too great a time, then the velocity will remain the same provided the factor which multiplies the distance is the same as that which multiplies the time. But unfortunately, or fortunately, a very little mathematics shows that this multiplier is not the same. A sees too short a distance being traversed by light in a second of time, and therefore B’s yardstick is too short, and by an amount depending on the relative velocity of A and B. Thus we are led to the astonishing general conclusion of the relativity theory with reference to length: If two bodies are moving relative to each other, then to an observer on the one, the unit of length of the other, measured in the direction of this relative velocity, appears to be shortened by an amount depending on this relative velocity. This shortening must not be looked upon as due to the resistance of any medium, but, as Minkowski puts it, must be regarded as purely a gift of the gods, a necessary accompaniment of the condition of motion. The same objection might be raised here as in the case of the time unit. Perhaps the length of the yardstick appears to change, but does the real length change? But the answer is, there is no way of determining the real length, or more exactly, the words real length have no meaning. Neither A nor B can determine whether he is in motion or at rest absolutely, and if B compares his measure with another one traveling with him, he learns nothing, and if he compares it with one in motion relative to him, he finds the two of different length, just as A did.
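
[EDITOR’S NOTE: Stated compactly in notation that postdates this article, the contraction described above takes the following form.]

```latex
% Length contraction, modern form (an editorial addition, not in the
% 1914 text): a rod of rest length $L_0$, moving at speed $v$ relative
% to the observer, measures
L = L_0 \sqrt{1 - v^2/c^2}
% where $c$ is the velocity of light; the shortening occurs only along
% the direction of the relative motion, as the author states.
```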

This startling fact, that a railway train as it whizzes past us is shorter than the same train at rest, is at first a trifle disturbing, but how much of our amazement is due to our experience, or lack of it. [EDITOR’S NOTE: The author, below, demonstrated his point by means of an unfortunately racist analogy.] A certain African king, on beholding white men for the first time, reasoned that as all men were black, these beings, being white, could not be men. Are we any more logical when we say that, since in our experience no yardsticks have varied appreciably on account of their velocity, it is absurd to admit the possibility of such a thing?

Perhaps it might be well at this point to give some idea of the size of these apparent changes in the length of the time unit and the space unit, although the magnitude is a matter of secondary importance. The whole history of physics is a record of continual striving after more exact measurements, and a fitting of theory to meet new corrections, however small. So it need not occasion surprise to learn that these differences are exceedingly minute; the amazing thing, and the thing of scientific interest, is that they exist at all. If we consider the velocity of the Earth in its orbit, which is about 19 miles per second, the shortening of the Earth’s diameter due to this velocity, as seen by an observer relative to whom the Earth is moving, would be approximately a couple of inches only. Similarly for the relative motion of the Earth and the sun, the shortening of the time unit would be approximately one second in five years. Even if this were the highest relative velocity known, the results would still be of importance, but the Earth is by no means the most rapidly moving of the heavenly bodies, while the velocity of the radium discharge is some thousand times the velocity of the most rapidly moving planet.
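
[EDITOR’S NOTE: The author’s figures check out. A quick editorial calculation, using the Earth’s orbital speed of about 19 miles per second:]

```python
# Quick arithmetic check (editorial addition) of the article's estimates.
import math

c = 186_000.0   # speed of light, miles per second (approx.)
v = 19.0        # Earth's orbital speed, miles per second
fraction = 1 - math.sqrt(1 - (v / c) ** 2)   # fractional shortening, ~5.2e-9

earth_diameter_inches = 7_918 * 63_360       # ~7,918 miles, in inches
print(fraction * earth_diameter_inches)      # ~2.6 -- "a couple of inches"

seconds_in_five_years = 5 * 365.25 * 24 * 3600
print(fraction * seconds_in_five_years)      # ~0.8 -- about "one second in five years"
```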

In addition to space and time there is a third fundamental concept of mechanics, though the physicists have not yet settled to the satisfaction of everybody whether it is force or mass. But in any case, the one taken as the fundamental notion, mass say, is, in the classical mechanics, independent of the velocity. Mass is usually defined in physics as the quantity of matter in a body, which means simply that there is associated with every body a certain indestructible something, apart from its size and shape, independent of its position or motion with respect to the observer, or with respect to other masses. But in the relativity mechanics this primary concept fares no better than the other ones, space and time. Without going into the details of the argument by means of which the new results are obtained, and this argument, and the experiment underlying it, are by no means simple, it may suffice to say that the mass of a body must also be looked upon as depending on the velocity of the body. This result would seem at first glance to introduce an unnecessary and almost impossible complication in all the considerations of mechanics, but as a matter of fact exactly the opposite is true. It has been known for some time that electrons moving with the great velocity of the electric discharge suffer an apparent increase of mass or inertia due to this velocity, so that physicists for some time have been accustomed to speak of material mass and electromagnetic mass. But now in the light of the principles of relativity, this distinction between material mass and electromagnetic mass is lost, and a great gain in generality is made. All masses depend on velocity, and it is only because the velocity of the electric discharge approaches that of light that the change in mass becomes striking. This perhaps may be looked upon as one of the most important of the consequences of the theory of relativity in that it subjects electromagnetic phenomena to those laws which underlie the motions of ordinary bodies.
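
[EDITOR’S NOTE: In the formulation current at the time, and implicit in the electron measurements the author goes on to cite, the mass grows with velocity by the same factor that governs the time and length units. An editorial sketch:]

```python
# A minimal sketch (editorial addition): apparent mass of a body of rest
# mass m0 moving at speed v, in the relativity mechanics of the period.
import math

def moving_mass(m0, v, c=299_792_458.0):
    return m0 / math.sqrt(1.0 - (v / c) ** 2)

print(moving_mass(1.0, 3.0e4))   # ~1.000000000005: negligible at planetary speeds
print(moving_mass(1.0, 2.7e8))   # ~2.3: striking for a cathode-ray electron near 0.9c
```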

In consequence of this revision of our notions of space, time and mass, there result changes in the derived concepts of mechanics, and in the relations between them. In fact the whole subject of mechanics has had to be rewritten on this new basis, and a large part of the work of those interested in the relativity theory has been the building up of the mathematics of the new subject. Some of the conclusions, however, can be understood without much mathematics. For example, we can no longer speak of a particle moving in space, nor can we speak of an event as occurring at a certain time. Space and time are not independent things, so that when the position of a point is mentioned, there must also be given the instant at which it occupied this position. The details of this idea, as first worked out by Minkowski, may be briefly stated. With every point in space there is associated a certain instant of time, or to drop into the language of mathematics for a moment, a point is determined by four coordinates, three in space and one in time. We still use the words space and time out of respect for the memory of these departed ideas, but a new term including them both is actually in use. Such a combination, i. e., a certain something with its four coordinates, is called by Minkowski a world point. If this world point takes a new position, it has four new coordinates, and as it moves it traces out in what Minkowski calls the world, a world-line. Such a world-line gives us then a sort of picture of the eternal life history of any point, and the so-called laws of nature can be nothing else than statements of the relations between these world-lines. Some of the logical consequences of this world-postulate of Minkowski appear to the untrained mind as bordering on the fantastic. For example, the apparatus for measuring in the Minkowskian world is an extraordinarily long rod carrying a length scale and a time scale, with their zeros in coincidence, together with a clock mechanism which moves a hand, not around a circle as in the ordinary clock, but along the scale graduated in hours, minutes and seconds.

Some of the conclusions of the relativity mechanics with reference to velocity are worth noting. In the classical mechanics we were accustomed to reason in the following way: Consider a body with a certain mass at rest. If it be given a certain impulse, as we say, it takes on a certain velocity. The same impulse again applied doubles this velocity, and so on, so that the velocity can be increased indefinitely, and can be made greater than any assigned quantity. But in the relativity mechanics, a certain impulse produces a certain velocity, to be sure; this impulse applied again does not double the velocity; a third equal impulse increases the velocity by a still smaller amount, and so on, the upper limit of the velocity which can be given to a body being the velocity of light itself. This statement is not without its parallel in another branch of physics. There is in heat what we call the absolute zero, a value of the temperature which according to the present theory is the lower limit of the temperature as a body is indefinitely cooled. No velocity then greater than the velocity of light is admitted in the relativity mechanics, which fact carries with it the necessity for a revision of our notion of gravitational action, which has been looked upon as instantaneous.

In consequence of the change in our ideas of velocity, there results a change in one of the most widely employed laws of velocity, namely the parallelogram law. Briefly stated, in the relativity mechanics, the composition of velocities by means of the parallelogram law is no longer allowable. This follows evidently from the fact that there is an upper limit for the velocity of a material body, and if the parallelogram law were to hold, it would be easy to imagine two velocities which would combine into a velocity greater than that of light. This failure of the parallelogram law to hold is to the mathematician a very disturbing conclusion, more heretical perhaps than the new doctrines regarding space and time.
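
[EDITOR’S NOTE: The rule that replaces simple addition, for velocities along a common line, is not quoted in the article; in modern form it shows at once why the limit is never exceeded. An editorial sketch:]

```python
# A minimal sketch (editorial addition): relativistic composition of two
# collinear velocities, in units where the velocity of light is 1.
def combine(u, v, c=1.0):
    # Replaces the simple sum u + v of the classical parallelogram law
    return (u + v) / (1 + u * v / c**2)

print(combine(0.9, 0.9))  # ~0.9945, not 1.8
print(combine(0.5, 0.5))  # 0.8
print(combine(1.0, 0.9))  # 1.0: light composed with anything still travels at c
```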

Another striking consequence of the relativity theory is that the hypothesis of an ether can now be abandoned. As is well known, there have been two theories advanced in order to explain the phenomena connected with light, the emission theory which asserts that light effect is due to the impinging of particles actually sent out by the source of light, and the wave theory which assumes that the sensation we call light is due to a wave in a hypothetical universal medium, the ether. Needless to say this latter theory is the only one which recently has received any support. And now the relativists assert that the logical thing to do is to abandon the hypothesis of an ether. For they reason that not only has it been impossible to demonstrate the existence of an ether, but we have now arrived at the point where we can safely say that at no time in the future will any one be able to prove its existence. And yet the abandoning of the ether hypothesis places one in a very embarrassing position logically, as the three following statements would indicate: 

1. The Michelson and Morley experiment was only possible on the basis of an ether hypothesis.

2. From this experiment, follow the essential principles of the relativity theory.

3. The relativity theory now denies the existence of the ether. Whether there is anything more in this state of affairs than mere filial ingratitude is no question for a mathematician.

It should perhaps be pointed out somewhat more explicitly that these changes in the units of time, space and mass, and in those units depending on them, are changes which are ordinarily looked upon as psychological and not physical. If we imagine that A has a clock and that about him move any number of observers B, C, D, . . . , in different directions and with different velocities, each one of these observers sees A’s clock running at a different rate. Now the actual physical state of A’s clock, if there is such a state, is not affected by what each observer thinks of it; but the difficulty is that there is no way for any one except A to get at the actual state of A’s clock. We are then driven to one of two alternatives: Either we must give up all notion of time for bodies in relative motion, or we must define it in such a way as will free it of this ambiguity, and this is exactly what the relativity mechanics attempts to do.

Any discussion of the theory of relativity would be hardly satisfactory without a brief survey of the history of the development of the subject. As has been stated, for many years the ether theory of light has found general acceptance, and up to about twenty-five years ago practically all of the known phenomena of light, electricity and magnetism were explained on the basis of this theory. This hypothetical ether was stationary, surrounded and permeated all objects, and yet offered no resistance to the motion of ponderable matter. There came then, in 1887, into this fairly satisfactory state of affairs, the famous Michelson and Morley experiment. This experiment was directly undertaken to discover, if possible, the so-called ether drift.

In this experiment, the apparatus was the most perfect that the skill of man could devise, and the operator was perhaps one of the most skillful observers in the world, but in spite of all this no effect was detected. Physicists were then driven to seek some theory which would explain this experiment, but with varying success. It was proposed that the ether was carried along with the Earth, but a host of experiments showed this to be untenable. It was suggested that the velocity of light depends on the velocity of the source of light, but here again there were too many experiments to the contrary. Michelson himself offered no theory, though he suggested that the negative result could be accounted for by supposing that the apparatus underwent a shortening in the direction of the velocity and due to the velocity, just enough to compensate for the difference in path. This idea was later, in 1892, developed by Lorentz, a Dutch physicist, and under the name of the Lorentz-shortening hypothesis has had a dignified following. The Michelson and Morley experiment, together with certain others undertaken for the same purpose, remained for a number of years an unexplained fact, a contradiction to well-established and orderly physical theory. Then there appeared in 1905, in the Annalen der Physik, a modest article by A. Einstein, of Bern, Switzerland, entitled, “Concerning the Electrodynamics of Moving Bodies.” In this article Einstein, in a very unassuming way, and yet in all confidence, boldly attacked the problem and showed that the astonishing results concerning space and time which we have just considered all follow very naturally from very simple assumptions. Naturally a large part of his paper was devoted to the mathematical side: to the deduction of the equations of transformation which express mathematically the relation between two systems moving relative to each other. It may safely be said that this article laid the foundation of the relativity theory.

Einstein’s article created no great stir at the time, but within a couple of years his theory was claiming the attention of a number of prominent mathematicians and physicists. Minkowski, a German mathematician of the first rank, just at this time turning his attention to mathematical physics, came out in 1909 with his famous world postulate, which has been briefly described. It is interesting to note that within a year translations of Minkowski’s article appeared in English, French and Italian, and that extensions of his theories have occupied the attention of a number of Germany’s most famous mathematicians. Next Poincaré, perhaps the most brilliant mathematician of the last quarter century, stamped the relativity theory with the unofficial approval of French science, and Lorentz, of Holland, one of the most famous in a land of famous physicists, aided materially in the development of the subject. Thus we find within five years of the appearance of Einstein’s article, a fairly consistent body of doctrine developed, and accepted to a surprising degree by many of the prominent mathematical physicists of the foremost scientific nations. No sooner was the theory in a fairly satisfactory condition, than the attempt was made to verify some of the hypotheses by direct experiment. Naturally the difficulties in the way of such experimental verification were very great, insurmountable in fact for many experiments, since no two observers could move relative to each other with a velocity approaching that of light. But the change in mass of a moving electron could be measured, and a qualitative experiment by Kaufmann, and a quantitative one by Bucherer, gave results which were in good agreement with the theoretical equations. It was the hope of the astronomers that the new theory would account for the long-outstanding disagreement between the calculated and the observed motion of Mercury’s perihelion, but while the relativity mechanics gave a correction in the right direction, it was not sufficient. To bring this very brief historical sketch down to the present time, it will perhaps be sufficient to state that this theory is at present claiming the attention of a large number of prominent mathematicians and physicists. The details are being worked out, the postulates are being subjected to careful mathematical investigation, and every opportunity is being taken to substantiate experimentally those portions of the theory which admit of experimental verification. Practically all of the work which has been done is scattered through research journals in some six languages, so that it is not very accessible. Some idea of the number of articles published may be obtained from the fact that a certain incomplete bibliography contains the names of some fifty-odd articles, all devoted to some phase of this subject, varying all the way from the soundest mathematical treatment, at the one end of the scale, to the most absurd philosophical discussion at the other. And these fifty or more articles include only those in three languages, only those which an ordinary mathematician and physicist could read without too great an expenditure of time and energy, and with few exceptions, only those which could be found in a rather meager scientific library.

In spite of the fact that the relativity theory rests on a firm basis of experiment, and upon logical deductions from such experiments, and notwithstanding also that this theory is remarkably self-consistent, and is in fact the only theory which at present seems to agree with all the facts, nevertheless it perhaps goes without saying that it has not been universally accepted. Some objections to the theory have been advanced by men of good standing in the world of physics, and a fair and impartial presentation of the subject would of necessity include a brief statement of these objections. I shall not attempt to answer these objections. Those who have adopted the relativity theory seem in no wise concerned with the arguments put forward against it. In fact, if there is one thing which impresses the reader of the articles on relativity, it is the calm assurance of the advocates of this theory that they are right. Naturally the theory and its consequences have been criticized by a host of persons of small scientific training, but it will not be necessary to mention these arguments. They are the sort of objections which no doubt Galileo had to meet and answer in his famous controversy with the Inquisition. Fortunately for the cause of science, however, the authority back of these arguments is not what it was in Galileo’s time, for it is not at all certain just how many of those who have enthusiastically embraced relativity would go to prison in defense of the dogma that one man’s now is another man’s past, or would allow themselves to be led to the stake rather than deny the doctrine that the length of a yardstick depends upon whether one happens to be measuring north and south with it, or east and west.

In general it may be said that the chief objection to the relativity theory is that it is too artificial. The end and aim of the science of physics is to describe the phenomena which occur in nature, in the simplest manner which is consistent with completeness, and the objectors to the relativity theory urge that this theory, and especially its consequences, are not simple and intelligible to the average intellect. Consider, for example, the theory which explains the behavior of a gas by means of solid elastic spheres. This theory may be clumsy, but it is readily understood, rests upon an analogy with things which can be seen and felt, in other words is built up of elements essentially simple. But the objectors to the relativity theory say that it is based on ideas of time and space which are not now and which never can be intelligible to the human mind. They claim that the universe has a real existence quite apart from what anyone thinks about it, and that this real universe, through the human senses, impresses upon the normal mind certain simple notions which can not be changed at will. Minkowski’s famous world-postulate practically assumes a four-dimensional space in which all phenomena occur, and this, say the objectors, on account of the construction of the human mind, can never be intelligible to any one in spite of its mathematical simplicity. They insist that the words space and time, as names for two distinct concepts, are not only convenient, but necessary. Nor can any description of phenomena in terms of a time which is a function of the velocity of the body on which the time is measured ever be satisfactory, simply because the human mind can not now, nor can it ever, appreciate the existence of such a time. To sum up, then, this model of the universe which the relativists have constructed in order to explain the universe can never satisfactorily do this, for the reason that it can never be intelligible to everybody. It is a mathematical theory and can not be satisfactory to those lacking the mathematician’s sixth sense.

A second serious objection urged against the relativity theory is that it has practically abandoned the hypothesis of an ether, without furnishing a satisfactory substitute for this hypothesis. As has been previously stated, the very experiment which the relativity theory seeks to explain depends on interference phenomena which are only satisfactorily accounted for on the hypothesis of an ether. Then too, there are in electromagnetism certain equations of fundamental importance, known as the Maxwell equations, and it is perhaps just as important that the relativity theory retain these equations as it is that it explain the Michelson and Morley experiment. But the electromagnetic equations were deduced on the hypothesis of an ether, and can be explained, or at least have been explained, only on the hypothesis that there is some such medium in which the electric and magnetic forces exist. So, say the objectors to the relativity theory, the relativists are in the same illogical (or worse) position that they occupy with reference to the Michelson and Morley experiment, in that they deny the existence of the medium which made possible the Maxwell equations, which equations the relativity theory must retain at any cost. Professor Magie, of Princeton, who states with great clearness the principal objections to the theory, waxes fairly indignant on this point, and compares the relativists to Baron Munchausen, who lengthened a rope which he needed to escape from prison, by cutting off a piece from the upper end and splicing it on the lower. The objectors to the relativity theory point out that there have been advocated only two theories which have explained with any success the propagation of light and other phenomena connected with light, and that of these two, only the ether theory has survived. To abandon it at this time would mean the giving up of a theory which lies at the foundation of all the great advances which have been made in the field of speculative physics.

It remains finally to ask and perhaps also to answer the question, whither will all this discussion of relativity lead us, and what is the chief end and aim and hope of those interested in the relativity theory. The answer will depend upon the point of view. To the mathematician the whole theory presents a consistent mathematical structure, based on certain assumed or demonstrated fundamental postulates. As a finished piece of mathematical investigation, it is, and of necessity must remain, of theoretical interest, even though it be finally abandoned by the physicists. The theory has been particularly pleasing to the mathematician in that it is a generalization of the Newtonian mechanics, and includes this latter as a special case. Many of the important formulas of the relativity mechanics, which contain the constant denoting the velocity of light, become, on putting this velocity equal to infinity, the ordinary formulas of the Newtonian mechanics. Generality is to the mathematician what the philosopher’s stone was to the alchemist, and just as the search for the one laid the foundation of modern chemistry, so is the striving after the other responsible for many of the advances in mathematics.
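
[EDITOR’S NOTE: The limiting behavior the author describes can be verified symbolically. An editorial example, using the relativistic expression for momentum:]

```python
# A minimal sketch (editorial addition): relativistic momentum reduces to
# the Newtonian m*v as the velocity of light is taken to infinity.
import sympy as sp

m, v, c = sp.symbols('m v c', positive=True)
p_relativistic = m * v / sp.sqrt(1 - v**2 / c**2)
print(sp.limit(p_relativistic, c, sp.oo))   # prints m*v, the Newtonian formula
```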

On the other hand, those physicists who have advocated the theory of relativity see in it a further advance in the long attempt to rightly explain the universe. The whole history of physics is, to use a somewhat doubtful figure of speech, strewn with the wrecks of discarded theories. One does not have to go back to the middle ages to find amusing reading in the description of these theories, which were seriously entertained and discarded only with the greatest reluctance. But all the arguments of the wise, and all the sophistries of the foolish, could not prevent the abandoning of a theory, if a few stubborn facts were not in agreement with it. Of all the theories worked out by man’s ingenuity, no one has seemed more sure of immortality than the one we know as the Newtonian mechanics. But the moment a single fact appears which this system fails to explain, then to the physicist with a conscience this theory is only a makeshift until a better one is devised. Now this better one may not be the relativity mechanics; its opponents are insisting rather loudly that it is not. But in any case, the entire discussion has had one result pleasing alike to the friends and foes of relativity. It has forced upon us a fresh study of the fundamental ideas of physical theory, and will without doubt give us a more satisfactory foundation for the superstructure which grows more and more elaborate.

It can well happen that scientists, some generations hence, will read of the relativity mechanics with the same amused tolerance which marks our attitude towards, for example, Newton’s theory of fits of easy transmission and reflection in his theory of the propagation of light. But whatever theory may be current at that future time, it will owe much to the fact that in the early years of the twentieth century, this same relativity theory was so insistent and plausible, that mathematicians and physicists in sheer desperation were forced either to accept it, or to construct a new theory which shunned its objectionable features. Whether the relativity theory then is to serve as a pattern for the ultimate hypothesis of the universe or whether its end is to illustrate what is to be avoided in the construction of such a hypothesis, is perhaps after all not the important question.

The very minimalist cover of the June 1914 issue of Popular Science Monthly.

Some text has been edited to match contemporary standards and style.

The post From the archives: The Theory of Relativity gains speed appeared first on Popular Science.


]]>
The standard model of particle physics may be broken https://www.popsci.com/science/standard-model-broke/ Tue, 10 May 2022 01:00:00 +0000 https://www.popsci.com/?p=441837
Photo of LHC tunnel.
Inside the tunnel of the Large Hadron Collider, 2019. Brice, Maximilien: CERN

New, precise measurements of already discovered particles are shaking up physics, according to a scientist working at the Large Hadron Collider.

The post The standard model of particle physics may be broken appeared first on Popular Science.

]]>

This article was originally featured on The Conversation.

As a physicist working at the Large Hadron Collider (LHC) at Cern, one of the most frequent questions I am asked is “When are you going to find something?” Resisting the temptation to sarcastically reply “Aside from the Higgs boson, which won the Nobel Prize, and a whole slew of new composite particles?”, I realise that the reason the question is posed so often is down to how we have portrayed progress in particle physics to the wider world.

We often talk about progress in terms of discovering new particles, and it often is. Studying a new, very heavy particle helps us view underlying physical processes—often without annoying background noise. That makes it easy to explain the value of the discovery to the public and politicians.

Recently, however, a series of precise measurements of already known, bog-standard particles and processes have threatened to shake up physics. And with the LHC getting ready to run at higher energy and intensity than ever before, it is time to start discussing the implications widely.

In truth, particle physics has always proceeded in two ways, of which discovering new particles is one. The other is by making very precise measurements that test the predictions of theories and look for deviations from what is expected.

The early evidence for Einstein’s theory of general relativity, for example, came from discovering small deviations in the apparent positions of stars and from the motion of Mercury in its orbit.

Three key findings

Particles obey a counter-intuitive but hugely successful theory called quantum mechanics. This theory shows that particles far too massive to be made directly in a lab collision can still influence what other particles do (through something called “quantum fluctuations”). Measurements of such effects are very complex, however, and much harder to explain to the public.

But recent results hinting at unexplained new physics beyond the standard model are of this second type. Detailed studies from the LHCb experiment found that a particle known as a beauty quark (quarks make up the protons and neutrons in the atomic nucleus) “decays” (falls apart) into an electron much more often than into a muon—the electron’s heavier, but otherwise identical, sibling. According to the standard model, this shouldn’t happen—hinting that new particles or even forces of nature may influence the process.

Intriguingly, though, measurements of similar processes involving “top quarks” from the ATLAS experiment at the LHC show this decay does happen at equal rates for electrons and muons.

Meanwhile, the Muon g-2 experiment at Fermilab in the US has recently made very precise studies of how muons “wobble” as their “spin” (a quantum property) interacts with surrounding magnetic fields. It found a small but significant deviation from some theoretical predictions—again suggesting that unknown forces or particles may be at work.

The latest surprising result is a measurement of the mass of a fundamental particle called the W boson, which carries the weak nuclear force that governs radioactive decay. After many years of data taking and analysis, the experiment, also at Fermilab, suggests it is significantly heavier than theory predicts—deviating by an amount that would not happen by chance in more than a million million experiments. Again, it may be that yet undiscovered particles are adding to its mass.
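
For scale (an editorial illustration, not a calculation from the measurement papers): a deviation that would arise by chance in fewer than one in a million million trials corresponds to roughly a seven-standard-deviation result under a normal distribution.

```python
# Rough illustration (editorial addition, not from the study): converting
# "one in a million million" into standard deviations of a normal distribution.
from scipy.stats import norm

p = 1e-12                 # one chance in a million million
print(norm.isf(p))        # ~7.0: about a seven-sigma deviation (one-sided)
print(norm.sf(7.0))       # ~1.3e-12: the chance probability of a 7-sigma result
```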

Interestingly, however, this also disagrees with some lower-precision measurements from the LHC (presented in this study and this one).

The verdict

While we are not absolutely certain these effects require a novel explanation, the evidence seems to be growing that some new physics is needed.

Of course, there will be almost as many new mechanisms proposed to explain these observations as there are theorists. Many will look to various forms of “supersymmetry”. This is the idea that there are twice as many fundamental particles in the standard model as we thought, with each particle having a “super partner”. These may involve additional Higgs bosons (associated with the field that gives fundamental particles their mass).

Others will go beyond this, invoking less recently fashionable ideas such as “technicolor”, which would imply that there are additional forces of nature (in addition to gravity, electromagnetism and the weak and strong nuclear forces), and might mean that the Higgs boson is in fact a composite object made of other particles. Only experiments will reveal the truth of the matter—which is good news for experimentalists.

The experimental teams behind the new findings are all well respected and have worked on the problems for a long time. That said, it is no disrespect to them to note that these measurements are extremely difficult to make. What’s more, predictions of the standard model usually require calculations where approximations have to be made. This means different theorists can predict slightly different masses and rates of decay depending on the assumptions and level of approximation made. So, it may be that when we do more accurate calculations, some of the new findings will fit with the standard model.

Equally, it may be the researchers are using subtly different interpretations and so finding inconsistent results. Comparing two experimental results requires careful checking that the same level of approximation has been used in both cases.

These are both examples of sources of “systematic uncertainty”, and while all concerned do their best to quantify them, there can be unforeseen complications that under- or over-estimate them.

None of this makes the current results any less interesting or important. What the results illustrate is that there are multiple pathways to a deeper understanding of the new physics, and they all need to be explored.

With the restart of the LHC, there are still prospects of new particles being made through rarer processes or found hidden under backgrounds that we have yet to unearth.

Roger Jones is a Professor of Physics and Head of Department at Lancaster University. He receives funding from STFC and is a member of the ATLAS Collaboration.

The post The standard model of particle physics may be broken appeared first on Popular Science.


]]>
Edward A. Bouchet paved a path for generations of Black students https://www.popsci.com/science/edward-bouchet-profile/ Mon, 09 May 2022 15:00:00 +0000 https://www.popsci.com/?p=441440
Edward Bouchet portrait
Edward A. Bouchet, the first African American to earn a Ph.D. in physics, worked to create opportunities for generations of Black students. Esther Goh

As the first African American to earn a Ph.D., Bouchet's legacy continues to resonate.

The post Edward A. Bouchet paved a path for generations of Black students appeared first on Popular Science.

]]>

The annals of science journalism weren’t always as inclusive as they could have been. So PopSci is working to correct the record with In Hindsight, a series profiling some of the figures whose contributions we missed. Read their stories and explore the rest of our 150th anniversary coverage here.

Many of us Black physicists know Edward A. Bouchet as the first African American Ph.D. in the field. Historians of 19th century Black education know him as the first African American to earn a Ph.D. in any subject. And in fact, when Yale College awarded him this academic distinction in 1876, Bouchet was part of a moment that was not only transformational when viewed through the prism of race: He became one of the first 20 people to earn that degree in the United States. These first Ph.D.s went on to be professors and leaders, the forefront of a wave of change in American intellectual life. 

Bouchet may have been part of that generation, but the contours of his trajectory are distinct. As a Black man in a segregated hiring market, he had few job options. As was and is the case for so many Black Americans facing discrimination, Bouchet found his home in a Black institution, where he in turn created opportunities for a new generation of Black students. 

Edward Alexander Bouchet was born in September 1852 in New Haven, Connecticut, and into a country where slavery was not only legal, but foundational to the national economy. The North is often popularly remembered as a haven from racism. But, as Kabria Baumgartner describes in her award-winning book In Pursuit of Knowledge: Black Women and Educational Activism in Antebellum America, just 20 years before Bouchet’s birth, a school in Canterbury, Connecticut, had to close after white community members reacted violently to 20 young Black women attending there. Bouchet himself attended segregated elementary schools.

Though his story starts in New England, Bouchet had southern roots. His father William was born into slavery in South Carolina; in 1824, the family who owned him brought him to New Haven to work for free as a servant for their son while he attended Yale. William eventually gained his freedom and became a prominent member of New Haven’s first Black parish, Temple Street Congregational Church (now Dixwell Avenue Congregational United Church of Christ), and a maintenance worker at Yale College. He and his free-born wife Susan Cooley Bouchet, who was a washer woman at the college, watched as rich, white sons of power were educated in how to wield that power. There, they saw possibilities for their son. Bouchet attended an academy that was a feeder for Yale College, graduating valedictorian.

He graduated summa cum laude from Yale College in 1874 and stayed on for his Ph.D. He completed his doctorate in two years, writing a thesis on measuring how light bends when it passes through various glasses. One could easily imagine the story continuing, “He then went on to become Yale College’s first Black professor of physics.” And surely, in some version of the universe where majority-white institutions were not so deeply entrenched in a legacy of profiting from slavery and upholding white supremacy, that is exactly what happened. 

In this universe, however, rather than compromise institutional excellence in the face of exclusion, Bouchet took his brilliance to a home Black people created for one another—one where Black excellence was a welcomed, expected, and celebrated norm. The Institute for Colored Youth (ICY), now known as Cheyney University of Pennsylvania—a storied school whose alumni include an ambassador, civil rights leaders, the first Black woman to head a science college, and the second Black woman physician—hired Bouchet to teach across the sciences: physics, astronomy, chemistry, geography, and physiology.

For more than 25 years, he cultivated a curiosity about the universe in a generation of Black intellectuals and professionals, and helped sustain an important site of Black education. Students of ICY during Bouchet’s tenure included Julian F. Abele, architect of Duke University Chapel as well as a contributing designer to Harvard’s Widener Library and Philadelphia’s Central Library and Museum of Art; and James B. Dudley, the second president of North Carolina A&T State University. Bouchet himself also exemplified an early model of a community-engaged Black scientist, giving public lectures for Black audiences while broadly advocating for science education.

In the last several decades, Bouchet has been transformed into a symbol of Black excellence, ambition, and dreams deferred. This began in 1988, when Nobel Prize in Physics winner Abdus Salam and Black American physicist Joseph Andrew Johnson III founded the Edward Bouchet-ICTP Institute in recognition of the first known person of African descent to earn a Ph.D. in the field. (The Institute was renamed the Edward Bouchet Abdus Salam Institute [EBASI] in 1998 to honor Salam, who died in 1996.) EBASI supports professional development for African students in physics while strengthening collaboration between African and American scientists through international conferences and workshops. There is the Edward Alexander Bouchet Graduate Honor Society, created in 2005 at Yale University—an institution that continues to struggle with not only its historical involvement in slavery, but also modern-day racial discrimination and disparity. 

There is also the American Physical Society’s Edward A. Bouchet Award—which I received in 2021—that honors a minority physicist who simultaneously excels in scientific research and advancing the place of marginalized people in the field. The dual success that the Award recognizes is emblematic of the life Bouchet himself led: a whole, curious-about-the-universe human being whose lifetime was marked by swathes of society refusing to acknowledge his humanity.

Bouchet was unfairly denied the opportunity to make lasting, direct contributions as a researcher, but his imprint is present in other ways. Many of us walk in his footsteps. Because of the professional path he carved—and the Black institutions that supported intellectuals like him—a growing number of us have more of a chance now. Even as some level of integration has occurred and we are at times able to obtain the resources to excel, we continue to struggle against white supremacy. Bouchet’s love for science and for our community reminds us to keep going.

The post Edward A. Bouchet paved a path for generations of Black students appeared first on Popular Science.


]]>
These levitating beads can teach physicists about spinning celestial objects https://www.popsci.com/science/sound-levitation-spinning-particles/ Fri, 29 Apr 2022 18:30:00 +0000 https://www.popsci.com/?p=440241
a bunch of beads glom together and float in air
There's no magic trick here. Melody X. Lim, Bryan VanSaders, Anton Souslov, and Heinrich M. Jaeger, Physical Review X (2022)

Manipulated by sound waves, these floating particles can offer clues about the physics of asteroids and black holes.

The post These levitating beads can teach physicists about spinning celestial objects appeared first on Popular Science.

]]>

Levitation might seem like a superpower out of science fiction. But unlike Doctor Strange, scientists don’t need spells to suspend and manipulate objects in midair. In a study published last week in Physical Review X, physicists at the University of Chicago and University of Bath described a way of harnessing the power of sound to make clumps of plastic particles float, spin, and break apart. Their findings could help us understand the physics of other rapidly-rotating entities—including black holes, atomic nuclei, and asteroids. 

“Acoustic levitation is a really cool way to manipulate objects, because it’s literally using something very much like a loudspeaker,” says lead study author Melody Lim, an experimental soft matter physicist at UChicago. 

Lim and her colleagues hoped to refine this sort of levitation so they could use it to move and manipulate objects without touching them. This has a myriad of potential applications: some physicists are even investigating acoustic levitation as a means of rearranging cells for the purpose of tissue engineering.

Lim and her team placed tiny round plastic particles—less than one millimeter in diameter each—inside a transparent box. To generate enough force to float and move them, they put a speaker inside the box that could generate standing waves—a type of sound wave that is stationary. Lim likens it to the wave of a vibrating violin string.

Under the influence of sound waves, plastic particles gradually rise and assemble into a rotating clump. (Particle diameter is about 190 micrometers; video slowed down by 100 times.) Credit: Melody X. Lim, Bryan VanSaders, Anton Souslov, and Heinrich M. Jaeger, Physical Review X (2022)

The waves caused the beads to bounce and amass together in suspension, forming a single layer of little particles in a circle. In this levitated configuration, the sound waves generate a weak attraction, or cohesive force, between particles—“you can think of it just like a kind of sticky glue that holds everything together,” Lim says. As Lim and the team tweaked the frequency of the sound coming out of the speaker, the circular clump would wobble like a raft on a rocky ocean before beginning to spin.  

Imagine riding on a swinging carousel as it picks up speed. The faster it turns, the more centrifugal force you experience—which makes the swing you’re dangling in start to fan outward instead of hanging straight down. This is essentially what happens to the floating disc of particles, Lim explains. But in this case, there are no strings tethering the floaters together. 

[Related: This lamp can levitate thanks to electromagnets]

As the sound waves are manipulated further, “it reaches a point where it’s rotating as fast as it can possibly go,” Lim says. “Then that pushing force is stronger than the sticky glue that’s binding the particles together, and so the whole thing has to change shape.”  
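
As a toy model (an editorial sketch; the study’s actual analysis is more involved, and the numbers below are assumptions rather than measured values), the breakup condition can be framed as the centrifugal pull on a rim particle overcoming the acoustic “glue”:

```python
# Toy model (editorial addition; illustrative assumptions, not values from
# the paper): a rim particle of mass m at radius r stays bound while the
# cohesive force exceeds the centrifugal pull m * omega^2 * r.
import math

def max_spin_rate(f_cohesion, m, r):
    # Angular speed (rad/s) above which the clump must shed or reshape
    return math.sqrt(f_cohesion / (m * r))

m = 5e-9    # kg: assumed mass of a ~190-micrometer plastic bead
r = 1e-3    # m: assumed clump radius
f = 1e-9    # N: assumed acoustic cohesion on a rim particle
print(max_spin_rate(f, m, r))   # ~14 rad/s for these assumed values
```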

At these high speeds, centrifugal force rips apart the clump of particles and stretches it out into a longer, skinnier blob, or even breaks it into smaller chunks. Interestingly, the fragments eventually glom back together into a single circular disk.

After reaching sufficient rotation speed, the clump abruptly distorts into an ellipse. Eventually, rotation tears the clump apart, but the pieces later reconnect. (Video slowed down by 60 times.) Credit: Melody X. Lim, Bryan VanSaders, Anton Souslov, and Heinrich M. Jaeger, Physical Review X (2022)

“What this experiment really is probing is, how much does [the spinning disc] hate having things on the outside versus having things on the inside?” says Lim. Similar to water droplets, the particles would rather pull themselves into compact circles that reduce the surface area, she says. “It doesn’t really like having a long stretched out shape, because then you have many particles kind of sitting around on the outside, and it’s not very happy.”

Asteroids might undergo similar processes, Lim says. The ones located in the asteroid belt are essentially clumps of rock bound together by gravity and heated up by the sun. “One hypothesis is that the sun shines on one side of the asteroid, which means that that side is hotter,” Lim says. The heat causes the rocks facing the sun to spit out gas. “It’s like you just turned on a little gas thruster that’s attached only to one side of your asteroid,” she says, which gradually spins the asteroid.

[Related: What if the speed of Earth’s rotation suddenly got faster?]

Similar to the tiny particles in Lim’s experiment, the rotation gets faster and faster over time, potentially causing the structure to change shape. But asteroids’ massive size and slow simmer from the faraway sun make the phenomenon difficult to study.

“Measurements of this effect for asteroids say that the time needed to double how fast this thing is spinning is on the order of 100,000 years,” Lim says. It would take a long time for rotational force to noticeably change an asteroid’s shape, she says.

The research group’s tiny acoustic levitation system could mimic the physics of those celestial bodies, as well as other spinning objects like black holes and atomic nuclei, to better unpack processes that would otherwise be challenging to study, says Lim. Beyond practical applications, Lim says, she was also mesmerized by the beauty of this phenomenon. 

“I started realizing that when you have many levitating particles put together, they form these very visually interesting structures,” she says. “In the acoustic levitation device, they do all of this fascinating stuff, like spin and split and merge to make bigger droplets. It was really aesthetic.”

The post These levitating beads can teach physicists about spinning celestial objects appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
This one-way superconductor could be a step toward eternal electricity https://www.popsci.com/science/superconductor-one-way-electricity/ Wed, 27 Apr 2022 19:30:00 +0000 https://www.popsci.com/?p=439667
A superconducting chip, in an artist's impression.
A superconducting chip, in an artist's impression. TU Delft

The material used in this first-of-a-kind superconductor could make data servers more energy-efficient.

The post This one-way superconductor could be a step toward eternal electricity appeared first on Popular Science.

]]>
A superconducting chip, in an artist's impression.
A superconducting chip, in an artist's impression. TU Delft

Imagine if your computer could run on electricity that flows forever without overheating. This isn’t magic: It’s the potential future of a real phenomenon called superconductivity, which today underpins everything from cutting-edge magnetic research to MRIs.

Now, scientists have found that they can make a superconductor that’s different from others that have come before. It lets electricity flow in only one direction: Like a train pointing downhill, it slides freely one way but faces a daunting uphill in the other. It sounds arcane, but this ability is critical to making electronic circuits like the ones that power your computer. If these scientists’ results hold, it could bring that future one step closer.

“There are so many fun possibilities available now,” says Mazhar Ali, a physicist at Delft University of Technology in the Netherlands, and one of the authors who published their work in the journal Nature on April 27.

Superconductivity flies in the face of how physics ought to work. Normally, as electric current flows along a wire, the electrons inside face stiff resistance, brushing up against the atoms that form the wire. The electrical energy gets lost, often as heat. It’s a large part of why your electronics can feel hot to the touch. It’s also a massive drain on efficiency.

But if you deep-chill a material that conducts electricity, you’ll reach a point that scientists call the critical temperature. The precise critical temperature depends on the substance, but it’s usually in the cryogenic realm, barely above absolute zero, the coldest temperature allowed by physics. At the critical point, the material’s resistance plunges off a cliff to functionally nil. Now, you’ve created a superconductor.

What does resistance-free electricity look like? It means that current can flow through a wire, theoretically for an eternity, without dissipating. That’s a startling achievement in physics, where perpetual motion shouldn’t be possible.

“It violates our current understanding of how one-way superconductivity can occur.”

Mazhar Ali

We’ve known about this magical-sounding quirk of quantum physics since a student in the Netherlands happened across it in 1911. Today, scientists use superconductivity to watch extremely tiny magnetic fields, such as those inside mouse brains. By coiling superconducting wires around a magnet, engineers can craft low-energy, high-power electromagnets that fuel everything from MRI machines in hospitals to the next generation of Japanese bullet trains.

Bullet trains were probably not on the minds of Ali and his colleagues when they set about their work. “My group was not approaching this research with the goal of realizing one-way superconductivity actually,” says Ali.

Ali’s group, several years ago, had begun investigating the properties of an evocatively named metal, Nb3Br8, made from atoms of niobium (a metal often used in certain types of steel and specialized magnets) and bromine (a halogen, similar to chlorine or iodine, that’s often found in fire retardants). 

As the study team made thinner and thinner sheets of Nb3Br8, they found that it actually became more and more conductive. That’s unusual. To further investigate, they turned to a tried-and-true technique: making a sandwich. Two pieces of a known superconductor were the bread, and Nb3Br8 was the filling. The researchers could learn more about Nb3Br8 from how it affected the sandwich. And when they looked, they found that they’d made a one-way superconductor.

What Ali’s group has created is very much like a diode: a component that only conducts electricity in one direction. Diodes are ubiquitous in modern electronics, critical for underpinning the logic that lets computers operate.
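One way to picture that behavior, as a simplified cartoon rather than the team's actual model, is a wire whose critical current depends on direction: supercurrent flows losslessly up to a large limit one way and only a small limit the other. All numbers below are hypothetical:

def voltage(current, ic_forward=1.0, ic_reverse=0.4, r_normal=5.0):
    """Cartoon I-V curve for a one-way superconductor (hypothetical units)."""
    limit = ic_forward if current >= 0 else ic_reverse
    if abs(current) <= limit:
        return 0.0               # superconducting: zero resistance
    return current * r_normal    # pushed past the limit: ordinary Ohm's law

print(voltage(0.7))   # 0.0 volts: forward current flows freely
print(voltage(-0.7))  # -3.5 volts: the same current backward dissipates energy

The asymmetry between the two limits is what makes the wire act like a diode.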

Yet Ali and his colleagues don’t fully know how this effect works in the object they created. It also, as it turns out, “violates our current understanding of how one-way superconductivity can occur,” says Ali. “There is a lot of fundamental research as well that needs to be done” to uncover the hidden new physics.

It isn’t the first time physicists have built a one-way superconducting road, but previous constructions generally needed magnetic fields. That’s common when it comes to manipulating superconductors, but it makes engineers’ lives more complicated.

“Applying magnetic fields is cumbersome,” says Anand Bhattacharya, a physicist at Argonne National Laboratory in suburban Chicago, who was not one of the paper authors. If engineers want to manipulate different parts within a superconductor, for instance, magnetic fields make a formidable challenge. “You can’t really apply a magnetic field, very locally, to one little guy.” 

For people who dream of constructing electronics with superconductors, the ability to send electricity in one direction is a powerful inspiration. “You could imagine very cool device applications at low temperatures,” says Bhattacharya.

Such devices, some scientists believe, have some obvious hosts: quantum computers, which harness particles like atoms to make devices that do things conventional computers can’t. The problem is that tiny amounts of heat can throw quantum computers off, so engineers have to build them in cryogenic freezers that keep them barely above absolute zero. The problem compounds again: Normal electronics don’t work very well at those temperatures. An ultra-cold superconducting diode, on the other hand, may thrive.

[Related: What the heck is a quantum network?]

Conventional computers could benefit, too: Not your personal computer or laptop, most likely, but larger behemoths like industrial supercomputers. Other beneficiaries could be the colossal server racks that line the world’s data centers. They account for a whopping 1 percent of the world’s energy consumption, comparable to entire mid-sized countries. Bringing superconductors to data servers could make them thousands of times more energy-efficient.

There is some way to go before that can happen. One next step is figuring out how to produce many superconducting diodes at once. Another is figuring out how to make them operate above -321°F, the boiling point of liquid nitrogen: That temperature sounds extremely low, but it’s easier to achieve than the even colder temperatures, supplied by liquid helium, that current devices might need.

Despite those challenges, Ali is excited about the future of his group’s research. “We have very specific ideas for attacking both of these avenues and hope to see some more ground-breaking results in the next couple of years,” he says.

The post This one-way superconductor could be a step toward eternal electricity appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
The biggest particle collider in the world gets back to work https://www.popsci.com/science/large-hadron-collider-restarts/ Mon, 25 Apr 2022 17:55:01 +0000 https://www.popsci.com/?p=439183
CERN worker examining superconducting magnets of the Large Hadron Collider accelerator
The CERN team is running 24/7 experiments again on the newly upgraded Large Hadron Collider in Switzerland. Maximilien Brice/CERN

It will be several more months until the Large Hadron Collider is at its full potential. But once it is, it should be more powerful than ever.

The post The biggest particle collider in the world gets back to work appeared first on Popular Science.

]]>
CERN worker examining superconducting magnets of the Large Hadron Collider accelerator
The CERN team is running 24/7 experiments again on the newly upgraded Large Hadron Collider in Switzerland. Maximilien Brice/CERN

The Large Hadron Collider (LHC), the world’s most powerful particle accelerator and a pivotal tool in high-energy physics discoveries, roared back to life on April 22 after going on hiatus in December 2018.

A statement from CERN, the organization that runs and houses the 16-mile-long superconducting accelerator in Switzerland, explained that its team successfully completed a break-in run of the accelerator on Friday afternoon. The LHC will undergo several more months of tests and preparation before it can collect applicable data on ions, quarks, bosons, and other weird and wild varieties of particles again. The latest experiment consisted of “two beams of protons circulated in opposite directions … at their injection energy of 450 billion electronvolts,” according to CERN’s post.  

“These beams circulated at injection energy and contained a relatively small number of protons. High-intensity, high-energy collisions are a couple of months away,” Rhodri Jones, head of CERN’s Beams department, explained in the statement. “But first beams represent the successful restart of the accelerator after all the hard work of the long shutdown.”

[Related: Inside the discovery that could change particle physics]

First launched in September of 2008, the LHC was temporarily decommissioned in December of 2018 for much-needed repairs and upgrades. This marked the second long-term shutdown in the accelerator’s history. In 2013, the LHC was turned off for two years to have its cryogenic and vacuum systems serviced and a number of its magnets replaced. The system also got a more than 60 percent energy boost, raising its total proton-proton collision energy from 8 to 13 teraelectronvolts (TeV). The recent shutdown included similar adjustments. 

Blue tube-like magnets being lifted by a crane from the Large Hadron Collider during 2013 maintenance
The LHC got a power boost in 2013 after a few of its dipole superconducting magnets were replaced. Anna Pantelia/CERN

“The LHC itself has undergone an extensive consolidation programme and will now operate at an even higher energy and, thanks to major improvements in the injector complex, it will deliver significantly more data to the upgraded LHC experiments,” Mike Lamont, CERN’s director for Accelerators and Technology, said in the statement from Friday. While the full potential of the juiced-up LHC remains to be seen, the physicists behind it are aiming to hit 13.6 TeV. That’s close to a third of the energy transmitted by some of the strongest gamma rays recorded in the Milky Way.

[Related: In 5 seconds, this fusion reactor made enough energy to power a home for a day]

Once the accelerator is recharged, CERN will dive into “Run 3” of its particle physics experiments to observe new states of matter like quark-gluon plasma and continue old projects that recreate the conditions from just after the Big Bang. When the LHC last left off, it was yielding more data than ever before, including on the famed Higgs boson, the presence of antimatter (or lack thereof), and the heft of W and Z particles. In the gap since 2018, collaborators have been digging through petabytes of calculations to shed light on long-standing riddles in high-energy physics. For instance, just this January, researchers from MIT pinpointed an ephemeral particle they labeled as X(3872).

The LHC can also build on findings gleaned from other circular particle colliders in its next phase. In early April, a team of collaborators from across the US used results from the now-defunct Tevatron accelerator in Illinois to come up with the most precise weight of the W boson to date. CERN can now confirm or refute that measurement, which could make waves for the underlying Standard Model in particle physics.

All of which is to say: with the LHC back in swing, the world is about to get more curious and, maybe, a little less enigmatic. 

The post The biggest particle collider in the world gets back to work appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
What MIT’s new ‘Oreometer’ revealed about twisting Oreos https://www.popsci.com/science/twisting-oreo-creme-filling/ Tue, 19 Apr 2022 21:00:00 +0000 https://www.popsci.com/?p=438219
Try as you might, you can't keep the creme from sticking to just one side of an Oreo.
Pixabay

It’s basically impossible to twist an Oreo evenly in half.

The post What MIT’s new ‘Oreometer’ revealed about twisting Oreos appeared first on Popular Science.

]]>
Try as you might, you can't keep the creme from sticking to just one side of an Oreo.
Pixabay

No matter how carefully you peel back the two chocolate wafers of an Oreo, the creme filling will always stick to one side. It’s not possible to split the creme down the middle. That’s according to new fluid dynamics research published in the journal Physics of Fluids.

While the finding might not be a total shock to snack aficionados, it sheds light on the next generation of 3D-printed manufacturing. Crystal Owens, the lead author on the study and a PhD student in fluid dynamics at the Massachusetts Institute of Technology, usually focuses on using inks containing carbon nanotubes to 3D print electronics, not on the texture of food. Oreos are a way of putting those concepts in the hands of the general public.

But the central measurements are similar. Substances that can be 3D printed must be malleable enough to squeeze from tubes yet stiff enough to stay in place once printed. For carbon nanotubes, which could be used in electronics manufacturing, the goal is to “get [the printed structure] as strong as it can be most of the time,” Owens says.

The study of how fluids and plastics move under stress is called rheology, a niche that the average person has no idea exists, says Owens. But Oreos are a perfect in-hand demonstration: “If I say, it’s like an Oreo—I put fluid between plates and I rotate them, and that helps me characterize the viscosity [of materials] we use for 3D printing—suddenly people understand what I’m saying.”

[Related: The future might be filled with squishy robots printed to order]

Testing the properties of printable nanotubes is much like measuring the twist of an Oreo. The researchers begin with a tool called a rheometer, a pair of plates that can sandwich a fluid together and spin it to measure the liquid’s strength. In the case of the Oreo, the researchers wanted to see what would happen to the inner “creme” if they twisted the outer chocolate “wafers” (according to their technical description of an Oreo cookie). The key measurement was “yield stress,” or how much twisting stress it took to split wafer from creme.
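Turning a measured torque into a stress takes only a little geometry. If you assume, as a simplification rather than the paper's exact treatment, that the creme fails uniformly across the wafer face, integrating that stress over the disc gives a failure torque of T = (2/3) pi R^3 tau, which you can invert for the yield stress tau:

import math

def yield_stress(torque, radius):
    # Uniform stress over a disc: torque = (2/3) * pi * radius**3 * stress
    return 3 * torque / (2 * math.pi * radius ** 3)

# Hypothetical numbers, just to show the scale: a wafer roughly 45 mm
# across (radius 0.0225 m) failing at a torque of 0.1 newton-meters.
print(f"{yield_stress(0.1, 0.0225):,.0f} pascals")  # ~4,200 Pa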

The study authors also tested the cookies with a DIY “Oreometer”—a clamping device that uses rubber bands and coins to twist the wafers apart, developed by coauthor Max Fan, an undergraduate at MIT. “We were talking a lot about how we give people an intuitive sense of what a yield stress is,” says Owens. “The best thing we came up with is, let other people do the same sort of tests.”

(If you’d like to build your own Oreometer, the study authors published instructions online.)

Going in, Owens says she and her colleagues expected to be able to split the filling in Oreos down the middle under some experimental conditions. With the molten plastic used to make things like deck chairs, “if you rotate too fast, you will get a seam right around the equator, and the fluid will shear right along that band,” says Owens. “It’s a very classic phenomenon, so we actually spent some time trying to make this happen with the Oreos.”

But no matter how fast they twisted the wafer, the creme always stuck to one side. “It turns out there’s not really a trick to it,” Owens says. “Everything you try to do will get mostly a clean break. It’s a bit disappointing that there’s not some secret twist.” She suspects that heating up the creme could make a middle split possible. Cookies with extra creme (“Double Stuf”) or special flavorings didn’t change results.

Still, the faster the scientists spun the wafer, the harder it was to break the creme. “That’s actually the proof that creme is a liquid,” says Owens. Pure solids always require the same force to break no matter how quickly they’re twisted.

[Related: Scientists create a small, allegedly delicious piece of yeast-free pizza dough]

In the world of food textures, Oreo creme sits in the mushy category, along with mashed potatoes, cranberry sauce, and stuffing. “Actually a lot of Thanksgiving foods are [fluids],” Owens says. “Not chicken—I think that’s just a solid.”

When asked if canned cranberry sauce was a fluid or solid, Owens pauses. “Depending on who you ask, it could be a soft solid or a complex fluid,” she says. But there’s another definition of a fluid, she says. “If we can measure it in our rheometer then it’s considered a fluid.” In fact, she says, the properties of an Oreo’s filling suggest that it could be 3D printed.

The post What MIT’s new ‘Oreometer’ revealed about twisting Oreos appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
The most distant galaxy we’ve ever discovered might have closely followed the Big Bang https://www.popsci.com/space/most-distant-galaxy-image/ Thu, 07 Apr 2022 22:11:02 +0000 https://www.popsci.com/?p=436337
Starburst galaxy in pastel colors in Hubble composite image
12 million light-years away, M82 is one of the most famous starburst galaxies ever imaged, seen here in a composite from three space telescopes, including Hubble. The newly discovered HD1 (see below) is more than 1,000 times farther away, and likely packs way more star power. X-ray: NASA/CXC/JHU/D.Strickland; Optical: NASA/ESA/STScI/AURA/The Hubble Heritage Team; IR: NASA/JPL-Caltech/Univ. of AZ/C. Engelbracht

Astronomers have two explanations for why it still shines so bright.

The post The most distant galaxy we’ve ever discovered might have closely followed the Big Bang appeared first on Popular Science.

]]>
Starburst galaxy in pastel colors in Hubble composite image
12 million light-years away, M82 is one of the most famous starburst galaxies ever imaged, seen here in a composite from three space telescopes, including Hubble. The newly discovered HD1 (see below) is more than 1,000 times farther away, and likely packs way more star power. X-ray: NASA/CXC/JHU/D.Strickland; Optical: NASA/ESA/STScI/AURA/The Hubble Heritage Team; IR: NASA/JPL-Caltech/Univ. of AZ/C. Engelbracht

In the sixth episode of Star Trek: The Next Generation, Captain Jean-Luc Picard and his android companion Data find themselves on the edge of the known universe. As Picard examines the barrier in front of the Enterprise, Data declares that they’ve landed “where none have gone before.”

While researchers have not quite made it to the edge of the universe, they did just creep one step further with the discovery of a galaxy up to 13.5 billion light-years away, named HD1. In a study published today in The Astrophysical Journal and an accompanying paper in the Monthly Notices of the Royal Astronomical Society Letters, University of Tokyo astronomer Yuichi Harikane and his colleagues outline the methods of discovery and possible implications of HD1’s existence. It’s the most distant cosmic body on record so far.

[Related: Hubble spies the most distant star ever found]

“It was very hard work to find HD1 out of more than 700,000 objects,” Harikane said in a press release. “HD1’s red color matched the expected characteristics of a galaxy 13.5 billion light-years away surprisingly well, giving me a little bit of goosebumps when I found it.”

Harikane and his team spent over 1,200 hours capturing images through the VISTA Telescope in Chile, the former Spitzer Space Telescope, and both the UK Infrared Telescope and the Subaru Telescope on the Big Island of Hawaii. They then used an array of radio wavelength receivers, also in Chile, to measure redshift, the stretching of light as the universe expands, which astronomers use to estimate distances. The data showed an unexpectedly bright UV signature from HD1, which the study argues is due to one of two causes.
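For readers curious how a redshift becomes a distance in time, the open-source astropy library can run the cosmology. A sketch using its built-in Planck 2018 parameters and a redshift of roughly z = 13.3 for HD1 (treat the exact numbers as illustrative):

from astropy.cosmology import Planck18 as cosmo

z = 13.3  # HD1's approximate candidate redshift
print(cosmo.lookback_time(z))  # ~13.5 Gyr: how long the light has traveled
print(cosmo.age(z))            # ~0.3 Gyr: the universe's age when it left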

Zoomed-in telescope image of distant galaxies and stars with a red one labeled as HD1
HD1, object in red, appears at the center of a zoomed-in telescope image. Harikane et al. (2022)

“The very first population of stars that formed in the universe were more massive, more luminous and hotter than modern stars,” Fabio Pacucci, an astronomer at the Center for Astrophysics in Massachusetts and coauthor of the study, said in the press release. “If we assume the stars produced in HD1 are these first, or Population III, stars, then its properties could be explained more easily. In fact, Population III stars are capable of producing more UV light than normal stars, which could clarify the extreme ultraviolet luminosity of HD1.” 

HD1 might contain stars created relatively soon after the Big Bang, which would explain the high light intensity researchers logged. If the luminosity is not due to Population III stars, it could be coming from a black hole 100 million times as massive as the sun. Such a gigantic object would consume matter violently enough to generate bright light. These hypotheses, however, do not explain the rate at which the superpowered galaxy forms stars. HD1 appears to churn out roughly 100 stars per year, which is nearly 10 times the rate of similarly fashioned galaxies. 

Timeline of UV wavelengths changing during the Big Bang and expansion of the universe on a black background
A timeline displays the earliest galaxy candidates and the history of the universe. Harikane et al. (2022), NASA, EST and P. Oesch/Yale

“Answering questions about the nature of a source so far away can be challenging,” Pacucci said in a press release. “It’s like guessing the nationality of a ship from the flag it flies, while being faraway ashore, with the vessel in the middle of a gale and dense fog. One can maybe see some colors and shapes of the flag, but not in their entirety. It’s ultimately a long game of analysis and exclusion of implausible scenarios.”

While questions continue to surround HD1, like its exact distance, size, and composition, the team’s findings further humanity’s map of the known universe. And with further confirmation, they could deepen our understanding of the origins of the universe, too.

Correction (April 7, 2022): Due to an editing error, the headline on the story incorrectly stated that HD1 predated the Big Bang. It’s estimated that the galaxy formed about 0.3 billion years after.

The post The most distant galaxy we’ve ever discovered might have closely followed the Big Bang appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
This subatomic particle’s surprising heft has weighty consequences https://www.popsci.com/science/w-boson-heavy-mass/ Thu, 07 Apr 2022 20:00:00 +0000 https://www.popsci.com/?p=436252
Fermilab's Collider Detector.
The Collider Detector at Fermilab, one of the two detectors on the Tevatron particle accelerator. Fermilab

If the W boson is heavier than expected, then a foundational idea in physics is unfinished.

The post This subatomic particle’s surprising heft has weighty consequences appeared first on Popular Science.

]]>
Fermilab's Collider Detector.
The Collider Detector at Fermilab, one of the two detectors on the Tevatron particle accelerator. Fermilab

The sun, a nuclear power plant, and carbon dating all draw their abilities from interactions between particles in the hearts of atoms. Those are, in part, the work of a subatomic particle called the W boson. The W boson is an invisible bearer of the weak nuclear force, the fundamental force of the universe responsible for causing radioactive decay.

It is also the subject of the newest mystery in particle physics. The latest, most precise, and most informed measurement yet of the W boson’s mass, published in Science on April 7, reveals that the particle is heavier than anticipated.

It’s a deviation that can’t easily be explained. If the measurement is confirmed—and that’s a very big if—it could be the strongest evidence yet that particle physics’ long-standing understanding of the universe at the tiniest scales, known as the Standard Model, is unfinished.

“Is nature also hiding yet another particle which would influence this particular quantity?” says Ashutosh Kotwal, a particle physicist at Duke University and a member of the collaboration that published the paper.

The W boson isn’t a newly discovered particle: CERN scientists found it in the early 1980s, and theoreticians had predicted its existence over a decade earlier. Sorting out its mass had been a goal right from the start.

“There’s a long, long history of making this measurement and making the precision better and better, because it’s always been recognised as a very important measurement to make,” says Claudio Campagnari, a particle physicist at the University of California, Santa Barbara, who wasn’t one of the paper’s authors.

In fact, the latest Science paper is the fruit of experiments that are over a decade old. The myriad collaborators who co-authored the paper all worked with data from the Tevatron: a particle accelerator, located at Fermilab in suburban Chicago, whose final collision came in 2011.

As particles whirled around Fermilab’s rings and smashed into each other, they’d erupt into a glittering high-energy confetti of particles—W bosons included. With more collisions came more data for scientists to poke and prod to piece together the W boson’s mass.

“Our task that we defined for ourselves was: Go measure the facts. And here’s our best effort yet to get at that fact,” says Kotwal.

Those particles spiraled around the accelerator at very near the speed of light, pummeling into each other almost instantaneously. Analyzing their collisions, on the other hand, takes years. The Fermilab group had done it before, in 2006 and 2012, taking four and five years, respectively, to sort through previous sets of data. 

That’s because measuring the W boson’s mass is a delicate and highly sensitive process that must account for all sorts of minute distractions, from shifts in the magnetic field inside the accelerator to the angles of the detectors that glimpsed the collisions.

“Small mistakes could have a big effect, so it has to be done very carefully, and as far as I can tell, the authors have done an extremely careful job, and that’s why they have been working on it for so many years,” says Martijn Mulders, a particle physicist at CERN in Switzerland, who was not one of the paper authors.

This time, the study authors took over a decade. At the end of it, they found that the W boson was more massive than in any of the previous measurements, and too massive to align with theoretical predictions. It’s almost certainly too big a difference to be written off as a mere statistical accident.
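To put a number on that difference: the collaboration reported a W mass of 80,433.5 ± 9.4 MeV, while the Standard Model expectation sits near 80,357 ± 6 MeV. A back-of-the-envelope check (combining uncertainties in quadrature, not the paper's full statistical machinery) shows why no one calls this a fluke:

measured, sigma_meas = 80_433.5, 9.4   # MeV, the new Tevatron result
expected, sigma_exp = 80_357.0, 6.0    # MeV, Standard Model expectation

gap = measured - expected
combined_sigma = (sigma_meas**2 + sigma_exp**2) ** 0.5
print(f"gap of {gap:.1f} MeV, about {gap / combined_sigma:.1f} sigma")
# ~7 sigma: far beyond the 5-sigma bar physicists use to claim a discovery.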

“I don’t think people really expected that a new result would be so far off the prediction,” Campagnari says.

[Related: Physicists close in on the exceedingly short life of the Higgs boson]

The W boson is a brick in the wall of the Standard Model, the heart of modern particle physics. The Standard Model consists of a dozen subatomic particles, basic building blocks of the universe, tightly woven by tethers of theory. The Standard Model has been physicists’ guide to discovering new particles: Most notably, it led researchers to the Higgs boson, the long-sought particle that helps give its peers mass. Time and again, the Standard Model’s predictions have held up.

But the Standard Model is not a compendium, and its picture leaves many questions about the universe unanswered. It doesn’t explain how or why gravity works, it doesn’t explain dark matter, and it doesn’t explain why there is so much more matter in the universe than antimatter.

“By no means do we believe the Standard Model is intrinsically complete,” says Kotwal.

And if the result holds, “I think we can honestly say it’s probably the biggest problem that the Standard Model has encountered over many years,” says Mulders.

In the coming days and months, particle physicists will pick apart every aspect of the paper in search of an explanation. It’s possible that the Fermilab team made an undiscovered error; it’s also possible that a minor tweak in the theoretical background could explain the discrepancy.

Even if the Fermilab finding is in order, the task still isn’t finished. Physicists would have to independently cross-check the result, verifying it in a completely different experiment. They’d want to know, for instance, why no previous measurement saw W bosons as massive as this one. “For that, the hope would be on CERN experiments,” says Mulders.

In fact, CERN’s Large Hadron Collider (LHC) has already observed more W bosons than Tevatron ever did. Now, scientists working with its data have new motivation to calculate the mass from those observations. They may find aid from new collisions when the LHC becomes fully operational later this year—or, further in the future, when it’s upgraded in 2027. 

But suppose that the LHC does give proof. Then, the misbehaving W boson could be the fingerprint of something lurking unseen in the quantum shadows. Perhaps it’s a sign of another particle, such as one predicted by a long-elusive theory called supersymmetry, or of a hitherto unknown force.

“This is really at the heart of what we think of as the Standard Model, and that would be broken…you have to start questioning everything,” says Mulders.

The post This subatomic particle’s surprising heft has weighty consequences appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Manipulating atomic motion could make metals stronger and bendier https://www.popsci.com/science/study-reveals-metal-atoms-in-motion/ Fri, 18 Mar 2022 20:30:16 +0000 https://www.popsci.com/?p=432173
Rare platinum ore nugget on a black background
Platinum might look stolid on the surface, but on the atomic level, it's got as much going on as a marching band. Deposit Photos

By tracking individual atoms in crystal grains, materials scientists might find ways to make metals like platinum more useful.

The post Manipulating atomic motion could make metals stronger and bendier appeared first on Popular Science.

]]>
Rare platinum ore nugget on a black background
Platinum might look stolid on the surface, but on the atomic level, it's got as much going on as a marching band. Deposit Photos

The Earth’s crust is cracked into seven major tectonic plates, constantly sliding and grinding into each other. You can’t see it happen, but you can see the results: the mountains and the volcanoes that erupt when plates collide, for instance, or the valleys and seas left behind when plates break apart.

But the crust isn’t alone in behaving like that. Many metals—including steel, copper, and aluminum, which are critical to making the modern world tick—are made of little crystal bits. If you take a sheet of one of those metals and pull on it or squish it down, those little bits move against each other, just like tectonic plates. Their boundaries can shift.

After years of trying to see those shifting boundaries for themselves, materials scientists have now shown that they can zoom into the atomic scale to watch it happen. In a study published in the journal Science on March 17, they explain how this could unlock the ability for other researchers to tinker with crystal grains and sculpt metals into better building-blocks for manufacturing.

It might seem odd to describe metals as crystals, but many are, just like gemstones and ice. What defines a crystal is that its atoms are arranged in regular geometric patterns—hexagons, for instance, or cubes repeating through space. Solid glass, on the other hand, isn’t a crystal, because its atoms don’t have a defined structure and sit wherever they please.

You might think of these patterns as street grids in cities. But if an urban center is large enough, chances are that it won’t share a single grid. A megacity like New York, Tokyo, or Jakarta might be fashioned from many smaller cities, suburbs, or quarters, each one with a grid laid out at its own angle. 

Metals patterned this way are called “polycrystals,” and their mini-crystal components are deemed “crystal grains.” Crystal grains might share the same pattern, but not connect cleanly to their neighbors. Sometimes one grain’s atoms don’t line up with another grain’s, or are arranged at a different angle.

What’s more, the grains are not static or fixed; they slide past one another, or twist and dance. In the parlance of materials scientists, all this is called grain boundary motion. It can change how the whole material behaves when it’s under pressure. Depending on how the grains are arranged, the material could also become hardier—or more fragile.

[Related: Time crystals just got a little easier to make]

Researchers have been trying to study grain boundary motion for decades. The problem was that, to do that, they had to zoom in enough to examine the individual atoms in a piece of material.

In recent years, they’d come closer than ever before, thanks to transmission electron microscopes, which scan a slice of material by blasting it with electrons and watching the shapes that pass through on the far side.

That works when grain boundaries are simple, like two flat-surfaced cubes twisting away from each other. But most boundaries are far more complicated: They might be jagged, or they might slice through a piece of metal at strange angles. “It is very challenging to observe, track, and understand atomic movements at these,” says Ting Zhu, a materials engineer at the Georgia Institute of Technology and one of the Science paper’s authors.

Yellow and pink atoms on a grain boundary from a platinum scan
The electron microscopy image shows a grain boundary between two adjoining crystals where platinum atoms are colored in yellow and pink, respectively. Wang et al. 2022

Zhu and his colleagues studied platinum, which despite being rare, is frequently used in wind turbine blades, computer hard disks, and car catalytic converters. They took cross-sections of platinum just a few billionths of a meter thick, and ran them through an electron microscope. They also used an automatic atom-tracker—a kind of software—to examine the images coming out of the microscope and label the atoms. With that, the researchers could track how those individual atoms moved over time.
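The team's atom-tracker is custom software, but the general recipe (locate bright spots in each frame, then link them into trajectories) is the same one behind open-source particle trackers. A generic sketch using the trackpy library, with illustrative parameters and a hypothetical filename, not the study's actual pipeline:

import pims          # reads image sequences
import trackpy as tp

frames = pims.open("microscope_stack.tif")  # hypothetical stack of frames
spots = tp.batch(frames, diameter=11)       # find bright blobs in every frame
tracks = tp.link(spots, search_range=5,     # stitch blobs into trajectories,
                 memory=3)                  # tolerating brief disappearances
# Each row of `tracks` holds (frame, x, y, particle id): enough to follow
# individual atoms as they hop from one grain to another over time.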

When they analyzed the platinum, they found something they hadn’t expected. Sometimes, as crystal grains moved and their boundaries shifted, the atoms at the edge would jump from one grain to another. The boundaries would bend and change to accommodate more atoms.

Zhu compares the atoms’ motion to that of marching band members. “When one line of band members moves to pass a neighboring line in parallel, the two lines of band members are merged into one line,” he explains.

[Related: Inside the high-powered process that could recycle rare earth metals]

Platinum might seem like a shining anomaly in this field, but Zhu says their work could translate to other metals, too. Tinkering with the grains in steel, copper, and aluminum can make those metals more durable and flexible at the same time.

It’s something that materials scientists can consider going forward. “Engineering such fine-grained polycrystals is an important strategy for making stronger engineering materials,” says Zhu.

Zhu says he’d expect to find grain boundary motion like this in most metals, including alloys that include atoms of multiple elements. To confirm, materials scientists would have to zoom in on each one’s atoms, studying what makes the acrobatics of aluminum different from the dance inside copper.

The post Manipulating atomic motion could make metals stronger and bendier appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Time crystals just got a little easier to make https://www.popsci.com/science/time-crystal-uses/ Mon, 28 Feb 2022 21:16:48 +0000 https://www.popsci.com/?p=427595
Silver pocket watch being swung on a chain on a black background to symbolize a time crystal
Time crystals form a repeating pattern by "flipping" between two atomic states precisely on the clock. Those properties could be useful for silicon chips, fiber optics, and much more. Deposit Photos

A new kind of time crystal can exist at room temperature, making it all the more relevant for the real world.

The post Time crystals just got a little easier to make appeared first on Popular Science.

]]>
Silver pocket watch being swung on a chain on a black background to symbolize a time crystal
Time crystals form a repeating pattern by "flipping" between two atomic states precisely on the clock. Those properties could be useful for silicon chips, fiber optics, and much more. Deposit Photos

To make a space crystal, you need the immense pressures beneath the Earth’s surface bearing down on minerals and magma. But to make a time crystal, you need esoteric equations and ridiculously precise lasers.

At least, that’s how physicists shaped the first self-standing time crystal in a lab last year. Now, they’ve turned it into an even more tangible object by creating a time crystal from common elements that can withstand room temperature. They shared their design in the journal Nature Communications on February 14.

If you’re wondering what a time crystal is (outside of pulp science fiction), most physicists also had the same question until pretty recently. It’s a form of matter that wasn’t proposed until 2012, and wasn’t even seen in rudimentary stages until 2016.

To wrap your head around this wonky chapter of quantum mechanics, think of a crystalized structure like a piece of salt or a diamond. The atoms deep within those objects are arranged in repeating, predictable patterns in space. For instance, if you take an ice cube from your freezer and zoom into the tiniest scales, you’ll see the hydrogen and oxygen atoms of the water molecules forming a mosaic of tiny hexagons. (This is why snowflakes tend to be hexagonal.)

As a result, physicists also call these formations “space crystals.” But just as the three axes of space form different dimensions, time also makes a dimension. Physicists began to wonder if they could find a crystal—or something like it—whose atoms formed repeating patterns in time.

[Related: What the heck is a time crystal, and why are physicists obsessed with them?]

Over the past few years, labs across the world have been working out what a time crystal might look like. Some started with a space crystal whose atoms were arranged one way. They then buzzed the crystal with a finely tuned laser to “flip” the atoms into another state, hit it again to switch it back to the first arrangement, then over to the second, and so on, with precise regularity. This laser-driven setup is specifically called a “discrete time crystal.” (In theory, there are other types of time crystals.)
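The telltale signature is "period doubling": the laser drive repeats every pulse, but the crystal's state repeats only every two pulses. A deliberately bare-bones toy, with none of the quantum mechanics, shows the pattern:

state = +1  # one atomic arrangement; -1 is the flipped arrangement

for pulse in range(1, 7):
    state = -state  # each laser pulse flips the arrangement
    print(f"after pulse {pulse}: state {state:+d}")

# The drive repeats every pulse, but the state returns to itself only
# every two pulses: the subharmonic response that defines a time crystal.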

In 2016, physicists at the University of Maryland created a basic discrete time crystal with atoms of the rare earth metal ytterbium. Other groups have tinkered with exotic environments like the insides of diamonds or a wavy state of matter called a Bose-Einstein condensate. More recently, in November of 2021, physicists from Stanford University and Google announced that they’d created a time crystal in a quantum computer.

But early time crystals have been limited. For one, they can usually only exist at cryogenic temperatures barely above absolute zero, making them impractical for most systems that everyday people use. Partly for that reason, those time crystals have existed in isolated systems like quantum computers, away from the “real world.” Moreover, they weren’t long-lasting: The change between states would come to a halt after mere milliseconds, almost like a windup toy winding down.

And just as a space crystal can be big or small in space, depending on how much the pattern repeats itself, a time crystal can be long or short, depending on the duration of each state. Time crystals so far have tended to be short or “small.” That left room for growth.

So, this global group of physicists set about engineering a time crystal that circumvented some of these problems to, hopefully, work in the real world. Their device consists of a crystal about 2 millimeters across, fashioned from fluorine and magnesium atoms. It uses a pair of lasers to move between patterns, and can do so at 70 degrees Fahrenheit (room temperature).

Once the team finished fine-tuning their systems, they found that they could create a variety of time crystals “bigger” than any seen before. “The lifetime of the generated discrete time crystals in our system is, in principle, infinite,” Hossein Taheri, an electrical engineer at the University of California, Riverside, and contributor on the study, told the “Physics World Weekly” podcast.

“Generally in physics, wherever there is a path for energy exchange between the system and its environment, noise also creeps in through the same path,” Taheri said on the podcast. That can undo the delicate physics needed for time crystals to form, which is why they need to be contained by such impractical means. But Taheri and his collaborators were able to bypass the limitations by keeping the state change going with two lasers.

[Related: The trick to a more powerful computer chip? Going vertical.]

With the researchers’ achievement, time crystals might be one step closer to existing outside of the lab. If that’s the case, what applications would they have?

No one’s going to put time crystals in time machines or warp drives soon, but their precise properties could pair well with atomic clocks or silicon chips for specialized devices. Or, because they’re driven by laser lights, they could back stronger fiber optic connections. Alternatively, they could help people better understand quantum physics and unique states of matter.

“We can use our device to predict what we can observe in much more complex experiments,” Andrey Matsko, an engineer at Jet Propulsion Laboratory in Pasadena, California and another one of the authors, told “Physics World Weekly.”

In fact, he and his team think time crystals could spawn a whole field of study with a beautifully science-fiction-esque name: “timetronics.” 

“I believe that timetronics is around the corner,” Krzysztof Sacha, a physicist at Jagiellonian University in Krakow, Poland and research co-author, said on the podcast. So while you’re still a long way from being able to hold time crystals, they might enter your world sooner than you’d expect.

The post Time crystals just got a little easier to make appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Meet the mysterious particle that’s the dark horse in dark matter https://www.popsci.com/science/axion-dark-matter-explained/ Fri, 25 Feb 2022 11:00:00 +0000 https://www.popsci.com/?p=427150
The ADMX experiment, used to look for axions.
The ADMX experiment, used to look for axions, being extracted for maintenance. Nick Du

The search for elusive axions reaches a critical mass.

The post Meet the mysterious particle that’s the dark horse in dark matter appeared first on Popular Science.

]]>
The ADMX experiment, used to look for axions.
The ADMX experiment, used to look for axions, being extracted for maintenance. Nick Du

A tiny particle that acts like a wave is gaining ground as the explanation for dark matter, according to a pair of scientific reviews published this week.

The theoretical particles called axions could be scientists’ best bet for explaining dark matter—the elusive stuff that makes up 85 percent of all matter in the universe yet barely interacts with our “regular” matter. Two new reviews show how this particle went from runner-up to spotlight in recent years. The theories around dark matter have evolved to make these particles more convincing, the reports say. They also explore how scientists might be able to detect axions. Both reports were published this week in Science Advances.

If axions are dark matter, then “there are axions going through you right now,” but you don’t notice because they barely interact, says Francesca Chadha-Day, a physicist at Durham University in the UK and an author of the theory review. She also works on TOORAD, an axion detector collaboration.

When she started working on axions a decade ago, they were more of a fringe dark matter candidate, she says. They’ve become more prominent since, in large part due to what she says was “the failure of the Large Hadron Collider, or any of the dark matter detectors” to find the explanation of dark matter favored for decades: so-called weakly interacting massive particles (WIMPs). 

Both are hard-to-spot, theoretical subatomic particles. So what’s the difference between axions and WIMPs?

WIMPs are a little more substantial—they’re big, heavy neutral particles that are kind of like the center of a heavy atom, says Gray Rybka, a physicist and spokesperson for the Axion Dark Matter eXperiment (ADMX) at the University of Washington who wasn’t involved in either review. 

Axions, by contrast, are less massive, even lighter than neutrinos, the lightest known particles of matter. In being so light, they behave strangely.

“The interesting thing about axions is that axion dark matter behaves more like a kind of field covering the whole universe” rather than individual particles moving around, Chadha-Day says.

Every object has what’s called wave-particle duality—meaning things act to some extent like a wave and to some extent like a particle. For example, something tiny like an electron can act like a particle—getting shot through space and colliding with other particles like billiard balls. But it can also act like a wave—combining with other waves and never settling definitively in any one place at a time. The heavier the object, though, the less pronounced this effect is—the more it acts like a particle. As part of this wave nature, axions should interact weakly with light when surrounded by a magnetic field. If they exist, axions can convert to photons, which are particles of light, and photons back to axions.
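The scale of that wave character comes from de Broglie's relation, wavelength = h / (mass × velocity): Planck's constant divided by momentum. A quick comparison, with round illustrative speeds, shows why heavy objects act like particles:

H_PLANCK = 6.626e-34  # Planck's constant in joule-seconds

def de_broglie_wavelength(mass_kg, speed_m_s):
    return H_PLANCK / (mass_kg * speed_m_s)

electron = de_broglie_wavelength(9.11e-31, 1e6)  # electron at 1,000 km/s
baseball = de_broglie_wavelength(0.145, 40.0)    # baseball at 40 m/s
print(f"electron: {electron:.1e} m")  # ~7e-10 m, about the width of an atom
print(f"baseball: {baseball:.1e} m")  # ~1e-34 m, far too small to ever see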

The theory review is “a good summary of where the thinking is going,” says Rybka. WIMPs and axions were put forth as explanations for dark matter around the same time, several decades ago, Rybka says. They aren’t the only candidates for dark matter, but they are the most prominent.

“At the time, the community mostly favored WIMPs” because they were easier to look for with existing technology, Rybka says. After looking for WIMPs for three decades and finding nothing, “there are people who are getting tired of this idea.”

Axions were originally proposed to solve a puzzle about why the strong nuclear force treats matter and antimatter so symmetrically, known as the strong CP problem, and string theory also predicts that axion-like particles should exist.

[Related: Astronomers may have found a galaxy that formed without dark matter]

Part of the reason for the growing interest in axions is that researchers now have the technology to hunt for them. There’s been “a blooming of ideas of how you would build an experiment to look for [axions],” Rybka says. In the last few years scientists have gone from devising experiments to detect the particles, to “doing searches where we think we have a good chance of discovery.”

So far experiments like ADMX have tried to detect axions by building a chamber with a strong magnetic field, and of a specific size, so that it resonates with a certain mass of axion. It’s sort of like how two differently sized instruments—a violin versus a cello—would hum as they pick up different ambient sounds of higher or lower pitch. This resonance would cause more axions to turn into photons in the cavity, Chadha-Day says, and extremely sensitive detectors would pick them up.

ADMX is only sensitive to axions in a narrow range of possible masses. However, experimentalists have thought of clever ways to make detectors that could pick up a wider range of axion masses, Chadha-Day says. This is important because “there’s no reason the universe would be nice to us” and make axions a mass that’s easy to search for, she says.
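The tuning works because an axion of mass m that converts in the cavity would yield a photon of frequency f = mc²/h. Converting candidate masses to frequencies is one line of arithmetic; the masses below are picked purely for illustration:

EV_TO_HZ = 2.418e14  # a photon carrying 1 eV of energy oscillates at this rate

def cavity_frequency_ghz(axion_mass_microev):
    return axion_mass_microev * 1e-6 * EV_TO_HZ / 1e9

for mass in (1, 3, 10):  # illustrative masses in millionths of an eV
    print(f"{mass:>2} micro-eV axion -> {cavity_frequency_ghz(mass):.2f} GHz")
# A few micro-eV corresponds to the GHz resonances a cavity like ADMX can hit.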

Astronomers could also potentially observe axions from space by looking at two key places that host magnetic fields: galaxy clusters and neutron stars. Axions would change the spectrum of light coming off matter falling into black holes in the hearts of galaxies. They would also cause neutron stars to emit more light than expected in radio frequencies.

If these ongoing and upcoming experiments don’t find axions in the places scientists expect, theorists will have to go back to the drawing board. But if axions are detected Rybka says, finally, “we’ll know what dark matter is.”

Correction, February 28, 2022: A previous version of this article indicated Chadha-Day led the axion theory review. She is the first author, but not the lead.

The post Meet the mysterious particle that’s the dark horse in dark matter appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Rare ‘upside-down stars’ are shrouded in the remains of cannibalized suns https://www.popsci.com/science/astronomers-discover-new-subdwarf-star/ Sat, 19 Feb 2022 23:00:00 +0000 https://www.popsci.com/?p=426101
Merging white dwarf stars can create a new kind of star.
Two white dwarf stars merge in an artist's impression. Nicole Reindl

A rare type of star probably forms when two stars spiral around each other and smash together.

The post Rare ‘upside-down stars’ are shrouded in the remains of cannibalized suns appeared first on Popular Science.

]]>
Merging white dwarf stars can create a new kind of star.
Two white dwarf stars merge in an artist's impression. Nicole Reindl

White dwarf stars can collide to create a new type of strange, “upside-down” star, according to two new studies published this week.

A team of astronomers identified two small, bright stars called hot subdwarfs that had an as-of-yet unseen makeup. Another research group found a mechanism to explain how these kinds of stars could have formed.

The two strange stars were “the first of their kind” to be identified and must be extremely rare, says Marcelo Miller Bertolami, an astrophysicist at the Institute of Astrophysics in La Plata, Argentina, who led the theoretical study on how such stars might form, published in the Monthly Notices of the Royal Astronomy Society.

White dwarfs are the slowly-cooling cores of dead stars. Hot subdwarfs are fairly uncommon, old stars that burn four times hotter than the surface of the sun and, unlike our sun, fuse helium in their cores instead of hydrogen, Miller Bertolami says.

Most stars are about three-quarters hydrogen, one-quarter helium, and a smattering of other elements, Miller Bertolami says. A star fuses its lightest elements first, so it will only fuse helium if it’s already burned up its hydrogen, or if another object’s gravity pulled the hydrogen away from the star.

But the two newfound stars didn’t resemble other helium-burning subdwarfs, as a team of astronomers noticed, according to a report published in the same journal this week. The team, led by Klaus Werner, an astronomer at the Center for Astro and Particle Physics in Germany, realized they saw not only a star with a helium-rich surface, but one rich in carbon and oxygen as well. 

“This is extremely, extremely uncommon,” Miller Bertolami says. Typically, a star creates carbon and oxygen by burning helium in its core, and those elements wouldn’t be visible on the surface. But the team found that about 40 percent of the star’s surface was made of carbon and oxygen.

“You need to find a way to carry all that carbon and oxygen to the surface and this is not easy,” Miller Bertolami says.

[Related: Black holes have a reputation as devourers. But they can help spawn stars, too.]

Werner’s team, who already knew Miller Bertolami, reached out to him to try to figure out how these bizarre objects could have formed. Miller Bertolami’s team was already working on a similar project and was able to show how such a strange star could form.

Miller Bertolami and his colleagues found that a heavier helium-rich white dwarf, under the right conditions, could interact with a lighter carbon- and oxygen-rich white dwarf. Together, that pair could create a hot subdwarf that combines the materials of both.

These two stars would have existed in a binary—meaning they orbited around each other. Over time, they would emit gravitational waves and spiral toward each other until they met, Miller Bertolami says. In this process, they would get close enough that tidal forces would rip apart one or both of them.

Typically, the more massive star will destroy the less massive one and cannibalize the lighter star’s material. In Miller Bertolami’s model, the helium white dwarf destroyed a lighter carbon-oxygen one, which is how the surviving star ended up with carbon and oxygen star guts splattered all over its surface.

The white dwarfs were previously dead, in that they no longer fused atoms into heavier elements to produce energy. But the merger between them reignited fusion within a new, “reborn” star—the subdwarf, Miller Bertolami says.

The two studies form “a nice connection between the theory and the observations,” says Warren Brown, an astrophysicist at the Center for Astrophysics | Harvard & Smithsonian who wasn’t involved with either study. “The measurements are fairly straightforward,” he says, and the theoretical explanation seems like “a great solution” to an observational puzzle.

What’s surprising is that these stars are “kind of upside down,” Brown says. The heavy elements that scientists would expect to be created in their cores—carbon and oxygen—appear on the surface, while the cores are full of light helium.

“It’s physically possible, so it must happen,” Brown says. The question is: How often? Once astronomers get a larger sample of hundreds, or at least tens, of stars, they can begin to figure out how prevalent these kinds of stars are in the galaxy.

Although this observation is unprecedented, astronomers may soon have new tools to spot similar stellar events. The Laser Interferometer Space Antenna (LISA) and other next-generation gravitational wave detectors will be able to pick up tens of thousands of stellar binaries in our galaxy, Brown says. Mergers of white dwarfs like these will emit gravitational waves and offer another lens through which to study them.

Correction February 24, 2022: Miller Bertolami clarified the two spiraling stars would emit gravitational waves, not gravity waves, as the article previously stated. 

The post Rare ‘upside-down stars’ are shrouded in the remains of cannibalized suns appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
The most precise atomic clocks ever are proving Einstein right—again https://www.popsci.com/science/atomic-clock-measures-time-dilation/ Thu, 17 Feb 2022 23:00:00 +0000 https://www.popsci.com/?p=426079
An atomic clock at the National Institute of Standards and Technology's JILA.
An atomic clock at the National Institute of Standards and Technology's JILA. Jacobson/NIST

One of the atomic clocks can track time to within one second over 300 billion years.

The post The most precise atomic clocks ever are proving Einstein right—again appeared first on Popular Science.

]]>
An atomic clock at the National Institute of Standards and Technology's JILA.
An atomic clock at the National Institute of Standards and Technology's JILA. Jacobson/NIST

For most of human history, we kept time by Earth’s place in space. The second was a subdivision of an Earth day, and, later, an Earth year: The timespan was defined by where Earth was. Then came the atomic clock. 

Scientists delved into atoms of the element cesium, where a process called the hyperfine transition emits and absorbs microwaves, which scientists could time very precisely with the help of a vibrating quartz crystal. That process underpins how scientists measure time today, and it allowed them to craft a more accurate definition of the second in 1967: the duration of 9,192,631,770 cycles of that cesium microwave radiation. 

That definition hasn’t changed significantly in over half a century, nor has the performance of the atomic clocks that underpin it. Those clocks wouldn’t have lost a second since the extinction of the dinosaurs. But better atomic clocks are here, and they’re good for more than just keeping time—they’re great physics tools, too.

Now, two different groups have created clocks that can measure subtle physics within the clocks themselves. The research teams published their respective results in two different papers in the journal Nature on February 16. These new clocks can measure one of Albert Einstein’s predictions—time dilation due to gravity—on the smallest scale yet.

Cutting-edge atomic clocks such as these use neither cesium nor quartz. Instead, their foundation is pancake-like structures of super-chilled strontium atoms. Their operators can control the atoms using a laser that emits visible light. Hence, they’re called “optical clocks.”

One such optical clock exists at the University of Wisconsin-Madison. This clock holds six strontium pancakes—in effect, six smaller clocks—in the same structure. (There’s nothing unique about that number; they could add more or fewer. “Six is somewhat arbitrary,” says Shimon Kolkowitz, a physicist at the University of Wisconsin-Madison.)

The Madison clock can keep time to within one second over 300 billion years—more than 20 times the age of the universe. That would once have been a world record, but this clock isn’t even the most precise one out there. It’s outmatched by another multi-clock at JILA, a joint project of the National Institute of Standards and Technology (NIST) and the University of Colorado, Boulder. 
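
To put “one second over 300 billion years” in the dimensionless units clock physicists actually use, you can convert it into a fractional timing uncertainty (a back-of-envelope conversion, not a figure from either paper):

```python
# Back-of-envelope: convert "one second over 300 billion years"
# into a fractional (dimensionless) timing uncertainty.
SECONDS_PER_YEAR = 3.156e7

drift = 1.0                              # seconds gained or lost
span = 300e9 * SECONDS_PER_YEAR          # 300 billion years, in seconds
print(f"fractional uncertainty ~ {drift / span:.1e}")  # ~1.1e-19
```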

[Related: Researchers just linked three atomic clocks, and it could change the future of timekeeping]

Having multiple “clocks” in the same device isn’t necessarily useful for timekeeping. (Which clock do you watch, for instance?) But they do allow you to compare the clocks to each other. Since these clocks are very, very precise, they can resolve some very subtle physics. For instance, the Boulder group could test time dilation within one device.

“It’s kind of been, up till now, something you find by comparing separate clocks over distances,” says Tobias Bothwell, a graduate student at JILA and NIST.

According to relativity, time slows the faster you go as you approach the speed of light. Gravitational fields can cause the same slowdown, too: The stronger the field, the greater the time dilation. Take Earth. The closer you are to Earth’s center, the stronger the pull of Earth’s gravity, and the more time dilation you experience.

In fact, you’re experiencing time slower than birds above your head, and the ground under your feet is experiencing time slower than you are. Earth’s core is about 2.5 years younger than Earth’s crust. That might sound like a lot, but against our planet’s 4.6-billion-year history, it’s not even a drop in the bucket of time. Yet scientists have been measuring these kinds of subtle differences for decades, using everything from gamma rays to radio signals to Mars to, indeed, atomic clocks. 

In 1971, two scientists carried atomic clocks on board commercial flights and flew them around the world, one in each direction. They measured a subtle difference of several hundred nanoseconds, matching predictions. In 2020, scientists used two clocks, one 1,480 feet above the other at the Tokyo Skytree, and found a difference that again proved Einstein correct.

These experiments show relativity is universal. “It’s the same everywhere on Earth, basically,” says Alexander Aeppli, a graduate student at JILA and NIST. “If you can measure one centimeter here, you can measure one centimeter somewhere else.”

NIST had already gotten down to the centimeter level. In 2010, scientists at NIST performed a similar measurement using different clocks about a foot apart.
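
In the weak-field limit, the expected fractional frequency shift between two clocks separated by a height h is roughly gh/c². Plugging in the separations mentioned in this story shows why a millimeter is such a demanding target (an approximation that treats g as constant):

```python
# Weak-field gravitational redshift: two clocks separated by height h
# differ in frequency by roughly g*h / c^2 (treating g as constant).
g = 9.81      # m/s^2
c = 2.998e8   # m/s

def fractional_shift(h_meters: float) -> float:
    return g * h_meters / c**2

for label, h in [("Tokyo Skytree (~1,480 ft)", 451.0),
                 ("NIST 2010 (~1 ft)", 0.305),
                 ("JILA 2022 (~1 mm)", 0.001)]:
    print(f"{label}: {fractional_shift(h):.1e}")
# ~4.9e-14, ~3.3e-17, ~1.1e-19 -- the millimeter-scale shift sits out
# near the 19th decimal place, in line with the precision quoted below.
```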

In one of the new studies, two strontium pancakes in a single device were separated by even less: about a millimeter. After 90 hours of collecting data, the Boulder group was able to discern the subtle difference in the light, making a measurement 50 times more precise than any before it. 

Their previous record was measuring dilation—a difference in the light’s frequency—to 19 decimal places, says Bothwell. “Now, we’ve gone to 21 digits…Normally, when you move a single decimal, you get excited. But we were fortunate to be in a position where we could go for two.”

These, according to Kolkowitz, are “very beautiful and exciting results.”

But Kolkowitz, who wasn’t part of the NIST study, says that NIST’s clock has one disadvantage: It is not so easy to take out of the lab. “The NIST group has the best laser in the world, and it’s not very portable,” he says. 

He sees the two groups’ work as complementary to each other. The Boulder clock could measure time and other physical properties with ever-greater precision. Meanwhile, he thinks a more mobile clock, similar to the one at Madison, could be carried to a number of settings, including into space to search for dark matter or gravitational waves.

While it’s pretty cool to prove that basic physics works as Einstein and friends figured it does, there are quite a few real-world applications for this sort of science, too. Navigation, for instance, could benefit from more accurate clocks; GPS has to correct for time dilation. And measuring the strength of time dilation could allow you to measure gravitational fields more precisely, which could, for instance, reveal what lies under Earth’s surface.

“You can look at magma plumes under the Earth, and figure out, maybe, when a volcano might erupt,” says Aeppli. “Things like that.”

Correction March 2, 2022: A previous version of this story stated that Earth’s core is about 2.5 years older than its crust. In fact, thanks to gravitational time dilation, the core is 2.5 years younger.

The post The most precise atomic clocks ever are proving Einstein right—again appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Why the quadruple axel jump is nearly impossible to land https://www.popsci.com/science/quadruple-axel-figure-skating-impossible/ Sun, 13 Feb 2022 02:00:00 +0000 https://www.popsci.com/?p=425023
Figure skater's boots and blades on an ice rink
A quintuple jump may be the final realm of what figure skaters can master. Pavel Danilyuk from Pexels

There's a razor-thin margin of error for figure skaters attempting these feats.

The post Why the quadruple axel jump is nearly impossible to land appeared first on Popular Science.

]]>
Figure skater's boots and blades on an ice rink
A quintuple jump may be the final realm of what figure skaters can master. Pavel Danilyuk from Pexels

On February 10 Japanese figure skater Yuzuru Hanyu nearly hit the first quadruple axel in Olympic history during the men’s free skate program. It was an audacious goal, almost hubristic, and experts, athletes, and fans alike say that Hanyu’s attempt at the feat is a triumph in itself. He may not have achieved the leap, underrotating and falling during his routine, but he was heartrendingly close. 

If anyone could have completed a quadruple axel at the Beijing 2022 Winter Olympics, it would have been Hanyu, says Sarah Ridge, a biomechanist at Brigham Young University. “The thing about him is that he’s built for this perfectly”—and has the skills and talent to match. Landing the quadruple axel in the future, she adds, is not out of the question for Hanyu. 

A quadruple jump requires a grueling four full revolutions in the air. The axel, the only figure skating jump that involves taking off facing forward and then landing backward, elevates the challenge, demanding skaters tack on an additional half revolution to their mighty vault. According to The Washington Post, 2022 Olympics men’s figure skating gold medallist Nathan Chen mastered all the quadruple jumps except for the axel.

A sequence of complicated events needs to occur for a skater to complete any spinning jump on the ice. Before taking the leap, the skater builds rotational momentum while still on the ice, by extending the limbs and twisting the body like a preloaded spring. Then, they must propel themselves into the air. (The higher they jump, the more time they spend aloft.) Once airborne, the skater snaps into a ramrod-straight position, tucking the arms and legs into the center of the body and the axis of rotation. This reduces the body’s moment of inertia, or resistance to spinning, and speeds up the rotation. Upon landing, they once again spread out their limbs to increase the moment of inertia and essentially halt the rotation, all while keeping balance on a razor-thin blade. It’s an astonishing number of physical adjustments and decisions packed into a split second. 
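
The spin-up from tucking in follows from conservation of angular momentum: with no external torque in the air, the product of moment of inertia and spin rate stays constant, so shrinking the former speeds up the latter. A toy calculation with assumed round numbers, not measurements from any actual skater:

```python
# Toy illustration of conservation of angular momentum: L = I * omega
# stays constant in the air, so I_open * w_takeoff = I_tucked * w_tucked.
# The moments of inertia are assumed round numbers, not skater data.
I_open = 3.0      # kg*m^2, arms and free leg extended at takeoff
I_tucked = 1.0    # kg*m^2, limbs pulled tight to the rotation axis
w_takeoff = 2.0   # revolutions per second at takeoff

w_tucked = I_open * w_takeoff / I_tucked
print(f"spin rate in the tuck: {w_tucked:.1f} rev/s")  # 6.0 rev/s
```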

The margin of error gets even smaller when an athlete tries to stitch more revolutions into a seamless stunt, says Deborah King, a biomechanist at Ithaca College. Given the limited air time, skaters need to reach their maximum rotational speed as soon as they leave the ice to finish their target revolutions. Once they’re at top speed, they must hold their body positions and whirl away for as long as possible, right up until the landing. 

For now, no figure skater, female or male, has pushed beyond four full revolutions, the quadruple. That puts four-and-a-half revolutions, or the quadruple axel, tantalizingly within reach, says Ridge. 

But is it possible to push beyond that number? The Olympics motto—faster, higher, stronger—adds another ideal when it comes to skating: lighter. 

King says that the maximum number of rotations possible is limited by the figure skater’s moment of inertia, which in turn depends on body mass and shape. A slight build naturally gives a skater the advantage of low rotational resistance. But no matter how much an athlete contracts and elongates their body during a jump, they can only squeeze as small as the width of their shoulders and hips, King explains. “If you watch a skater when they jump and you look at their position in the air while they’re doing quads, they’re already pretty much as small as they can get,” she says. 

Figure skaters also need to be strong, without the penalty of added muscle mass. (Hanyu himself weighs only about 125 pounds and stands 5 feet 8 inches tall.) More strength translates to higher jumps and the ability to hold the body tightly while spinning in the air—an exhausting process, despite how easy the pros make it look. The faster a skater spins, the more centrifugal force acts on their limbs, causing the limbs to fling away from the body and out of the ideal rotational form. In some sequences, the twirler might feel a force of up to one-and-a-half times their body weight on their arms.

[Related: Figure skaters have to train themselves to ignore their natural reflexes]

Still, there’s only so fast an athlete can spin, and there’s only so high they can jump. “You can’t put all your effort into rotating; you can’t put all your effort into jumping. There’s a fine balance,” says King. Figure skaters typically hover in the air for about half a second, but hardly more than that, she says. With that math, she thinks five revolutions is probably the highest number of spins possible. Six spins or higher is nearly unfathomable.
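
King's ceiling is easy to sanity-check: total revolutions are just spin rate multiplied by air time, and air time follows from jump height via projectile motion. A rough check with illustrative numbers (the jump height here is an assumption, not a measured value):

```python
# Rough check: revolutions = spin rate * air time, where air time
# comes from projectile motion: t = 2 * sqrt(2h / g).
from math import sqrt

g = 9.81                        # m/s^2
h = 0.6                         # assumed vertical rise of the jump, meters

air_time = 2 * sqrt(2 * h / g)  # ~0.70 s aloft
spin = 2000 / 360               # the ~2,000 deg/s Ridge measured, as rev/s
print(f"air time {air_time:.2f} s -> about {spin * air_time:.1f} revolutions")
# ~3.9 revolutions -- and since no one spins at top speed for the whole
# flight, five revolutions already looks like the ceiling.
```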

“I really cannot see that happening,” she says, “I would love to be proven wrong.”

Other experts agree that a quintuple jump might be the most spins that the sport can dream for. Ridge herself has measured the spinning rate of top American figure skating athletes. Even for those topping 2,000 degrees per second, the highest measurable range of Ridge’s instruments, she doesn’t expect anything higher than five revolutions in a single jump. “Maybe it is possible, and I just don’t have enough imagination,” she notes. 

The only way the number would be beaten, Ridge says, is with a drastic revamp of ice skating equipment that lets athletes better harness their motion for stunts. But given that skaters have only their boots and blades to work with, there’s not a lot of wiggle room to engineer a more impressive feat.  


Mirai Nagasu, the first American female figure skater to land a triple axel at the Olympics back in 2018, also thinks a quintuple is within the realm of possibility. She says younger athletes like “quad god” Ilia Malinin will be inspired by the record-breaking achievements of current skaters to push the boundaries of the sport in the years to come. But landing new feats only counts if it can be done safely, she adds. 

“We need more protection for these young athletes who are pushing their bodies because they want to win so badly,” Nagasu says. While learning the triple axel, she tore her lower labrum and had to undergo multiple hip surgeries. Breaking records in the sport may not be worth it, she says, if figure skaters exchange a heartbeat of glory for a lifetime of physical pain. As much as the public delights in thrilling new athletic displays, in the end, it takes Olympians many sacrifices to make the impossible possible.

Correction (February 14, 2022): The story previously stated that centripetal force causes a skater’s limbs to push away from their body as they spin. It should be centrifugal force.

The post Why the quadruple axel jump is nearly impossible to land appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Inside the high-powered process that could recycle rare earth metals https://www.popsci.com/environment/rare-earth-metal-recycling/ Fri, 11 Feb 2022 13:31:41 +0000 https://www.popsci.com/?p=424957
Two chemists pulling apart an old computer in a rare earth metal recycling experiment
Researchers at Rice University successfully separated rare earth metals out of old computers and other waste. Jeff Fitlow/Rice University

It takes some tricky chemistry to mine industrial waste and old electronics for critical rare earths.

The post Inside the high-powered process that could recycle rare earth metals appeared first on Popular Science.

]]>
Two chemists pulling apart an old computer in a rare earth metal recycling experiment
Researchers at Rice University successfully separated rare earth metals out of old computers and other waste. Jeff Fitlow/Rice University

Look in the second row from the bottom of most periodic tables, and you’ll find the lanthanides, split off from an archive of elements that doesn’t know what to do with them. The lanthanides are a close-knit bunch, hard to distinguish from each other because of their similar colors and properties. Even for most scientists, they live in a cold and distant land, thoroughly inorganic and far from the comforts of hydrogen and carbon and oxygen. 

But these metals are critical to making the modern world tick. They’re members of a group known as rare earth elements, or rare earths, that support everything from the magnets powering clean energy technology to telescope lenses to the screen of the device you’re reading this on. And mining them is difficult and ecologically costly.

So, chemists and engineers are trying to make the best use of rare earths that have already been processed by recycling them out of industrial waste and old electronics. In new research published on February 9 in Science Advances, they show how they’re trying to do that with bright flashes of electricity. 

Molecules of coal fly ash separating in a black and white microscope image
A molecular look at rare earths separating in coal fly ash. Tour Group/Rice University

Most rare earths aren’t actually that rare (certainly not compared to truly rare elements like iridium), but they’re not easy to get. After their ore is mined from the ground, they have to be separated to make specialized products—a tedious process, given their similar properties. Most rare earth mining homes in on lanthanum and cerium, but heavier metals like neodymium and dysprosium are especially desirable for the magnets used in clean energy tech.

The supermajority (some estimates say more than 90 percent) of the world’s supply today comes from China, which makes the resource more vulnerable to geopolitical tensions. In 2010, after a Chinese fishing boat collided with a Japanese Coast Guard boat in disputed waters, China stopped rare earth exports to Japan. The blockade didn’t last, but Japan has spent the years since then aggressively seeking alternative sources of rare earths. So have other countries.

More importantly, rare earths’ extraction comes at an environmental cost. “It’s energy- and chemically intensive,” says Simon Jowitt, a geochemist at the University of Nevada, Las Vegas who was not involved with the latest research. “Depending on how you process them, it involves high-strength acids.” Those acids can leach into the environment.

[Related: How shape-shifting magnets could help build a lower-emission computer]

One way of reducing the burden is by recycling goods that already contain these elements—but that’s still not common. Callie Babbitt, a professor of sustainability at Rochester Institute of Technology in upstate New York who was also not involved with the new study, says that only about 1 to 5 percent of the world’s rare earths get recycled.

Which is why researchers are innovating to find new ways of breaking rare earths down. Some have tried bacteria, but feeding those microbes has proven energy-intensive. 

Now, one group from Rice University has devised a recycling method that relies on intense bursts of electricity, called “flash joule heating.” The researchers behind it had previously tested it on old, chopped-up circuit boards, stripping them of precious metals like palladium and gold and heavy metals like chromium and mercury before safely disposing of the boards in agricultural soil.

This time, they applied flash joule heating to other industrial byproducts: coal fly ash, a pollutant from fossil fuel power plants; red mud, a toxic substance left over from turning bauxite into aluminum; and, indeed, more electronic waste.

Their process looked something like this: They put the substance they were breaking down into a finger-sized quartz tube, where a jolt of electricity “flashed” it to around 5,400 degrees Fahrenheit. The separated components were then dissolved in a solution for chemists to retrieve later.

The process does release some toxic compounds, but the system aims to capture them and prevent them from getting into the air. “When you do this industrially, you wouldn’t just release these compounds to the air,” says James Tour, a chemist at Rice University and one of the authors of the study. “You would trap them.”

“Our waste stream is very different,” Tour explains. Unlike the strong nitric acid that’s often used to extract rare earths from the ground, their solution is a much weaker, more diluted hydrochloric acid. “If that got on your hand, I don’t think you’d even feel it,” Tour says.

However, even with a step forward in this research, it will be some time before piles of industrial waste can be recycled for rare earths. “There’s a lot of activity going on in this area, but I haven’t seen anything in the way of breakthroughs,” says Jowitt.

[Related: You throw out 44 pounds of electronic waste a year. Here’s how to keep it out of the dump.]

One issue with flash joule heating, according to Jowitt, is that the rare earths still need to be separated before they can be molded into gadgets. What’s more, using pollutants like coal fly ash means there will be other harmful leftovers from the process. “Extracting and recovering the [rare earths] they contain are only part of a larger challenge of managing these wastes,” says Babbitt.

When it comes to e-waste, it won’t be easy to mine mountains of disused computers and phones for valuable components. The amount of rare earths in an average smartphone, for instance, adds up to fractions of a gram. And many consumers wouldn’t know where or how to recycle them.

Given that, Jowitt thinks the solution could lie with the products driving the demand for rare earths in the first place. “One obvious thing is changing the way to design things to make them more recyclable.”

The post Inside the high-powered process that could recycle rare earth metals appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Our universe probably isn’t special enough to be in a multiverse https://www.popsci.com/science/is-universe-fine-tuned-for-life/ Thu, 10 Feb 2022 23:00:00 +0000 https://www.popsci.com/?p=424779
A person looks at the Milky Way
Our universe's constants may not be so constant after all. Pixabay

That life in the universe arose thanks to extremely lucky circumstances may be a misconception.

The post Our universe probably isn’t special enough to be in a multiverse appeared first on Popular Science.

]]>
A person looks at the Milky Way
Our universe's constants may not be so constant after all. Pixabay

A remarkable number of things had to go right for life on Earth to work out the way it did. A small change in gravity or tweak to another fundamental force could have made the difference between our universe and one that’s completely uninhabitable.

This idea has led some scientists to examine the universe’s metaphorical dials and knobs and wonder why complex systems like galaxies, planets, and life were able to form when that didn’t have to be the case: the so-called “fine-tuning” of our universe. A new report published by the Foundational Questions Institute lays out the major recent milestones in an ongoing debate over whether the universe is fine-tuned, and what that would even mean.

The universe has lots of fixed properties, like the strength of gravity or the cosmological constant, which measures how fast the universe is expanding, that dictate how the universe took shape. Physicists have pondered why these values are what they are for at least a century, and have tried to find scientific answers to this question for decades.

Based on theoretical models, some scientists argue that if one of the many fundamental constants of the universe was significantly different, life wouldn’t have been able to form. This leads to a dilemma: If it’s vanishingly unlikely that the universe happened to have all the right values necessary for life, why did it work out that way?

Despite the profound implications of the question, the field is still fairly niche. “Most physicists have not given fine-tuning much thought, or even heard about it,” says Geraint F. Lewis, a cosmologist and astrophysicist at the Sydney Institute for Astronomy of the University of Sydney in Australia who was not involved with the report, because “most of physics is focused upon this universe.” Lewis is an exception: He co-authored a 2016 book on fine-tuning. Discussions are polarizing and can quickly become more rooted in philosophy than science, he says.

Some scientists aren’t bothered by this seeming coincidence. For others, the idea of a “multiverse,” made possible by string theory, became a popular way to explain this dilemma—if there are infinitely many universes, it shouldn’t be too surprising that one ended up with the right conditions for life, the reasoning goes. String theory is highly speculative, however, and some scientists deem the multiverse unscientific because they don’t see a way to test whether it exists.

Theologians and even some scientists have used the fine-tuning argument to suggest that the universe must have been created for life to form. But recent research in the field provides other alternatives and suggests fine-tuning itself might be an illusion, possibly due to a lack of big-picture view in analyses of potentially different universes. And some scientists have argued that this universe, though it produced life, is not optimally friendly to it.

“Maybe it’s not as simple as we thought,” says Miriam Frankel, a science editor at The Conversation and the freelance journalist who wrote the report for the Foundational Questions Institute. When diving into these big questions, it hits you “how much is actually missing from physics,” Frankel says. Physicists still don’t have a very good “theory of everything.” Results from last year’s Muon g-2 experiment, for example, have some scientists talking about a potential, game-changing fifth force of nature.

Appealing as the multiverse theory is to some, it really only shifts the fine-tuning debate up a level, Frankel writes in the report. If we live in a multiverse, how did the multiverse form, and how was it fine-tuned to work?

The report shows “that fine-tuning is not a solved problem,” Lewis says.

The universe’s fundamental constants directly influenced the formation of life through the creation of stars and the elements stars produce. If you shifted a fundamental value like the strength of electromagnetism, it would change the way stars form and potentially stop them from producing carbon—a necessary ingredient for life on Earth, the report says.

However, some stellar models indicate stars are more versatile than scientists thought, Frankel says, so they might be able to produce the conditions for life under different universal values.

It’s also conceivable that life could form without carbon, though the case for life based on another element, silicon, is not solid. Scientists tend to look for ways that Earth-like life could form, but ultimately they don’t know what kind of life, possibly unrecognizable, might be able to form in a differently structured universe.

[Related: Good news! We’re probably not living in a computer simulation.]

Most research into what happens when you change a universal constant has focused on only one constant at a time. For example, if the force of gravity were much weaker or stronger, it would prevent life as we know it from forming. But recent evidence shows that changing multiple constants together might be more likely to produce a working universe, by giving each change a chance to even out against the others, the report says.

Other experiments have hinted that the universal constant related to the expansion of the universe might have changed over time—which means it wouldn’t really be constant at all. If this and similar findings pan out, they would undermine the whole concept of fine-tuning, the report says. Features of the universe, far from being perfectly tuned at the start, would be constantly, if slowly, changing.

Fundamentally it’s hard to say how strange our universe is. “How can you say this universe is weird when we don’t know what a typical universe would be?” Lewis says.

In fact, we may never get direct answers to these questions. “I think that the way the debate is framed right now is actually somewhat confused. Because I think the fundamental issues are not scientific,” says Jason Waller, a philosopher at Kenyon College who wrote a 2020 book on fine-tuning arguments.

Waller thinks the attempts to explain fine-tuning are missing the central fact that there will always be another level to explain. If the multiverse, or a simulated reality, or even some godlike entity is responsible, we still end up with the same question: Why this explanation and not something else?

Waller is less concerned with the boundary between science and philosophy and more concerned with what’s true and how people justify their ideas. “Just because we don’t have a scientific proof, or it’s speculative, does not mean all the answers are equally plausible,” he says; some positions will be more or less probable.

Barring completely unexpected discoveries that rewrite our understanding of the universe, he says, going forward, “I think the discoveries are going to be conceptual, or philosophical, rather than scientific.”

The post Our universe probably isn’t special enough to be in a multiverse appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Olympic sledders fly down the ice with a few slick tricks https://www.popsci.com/science/how-olympic-sledders-go-so-fast/ Mon, 07 Feb 2022 02:00:00 +0000 https://www.popsci.com/?p=423899
Blue multi-person bobsled speeding around a curve on an ice track
Bobsled riders are able to control the front runners to steer and speed around an ice track. Deposit Photos

Gravity is only part of the equation.

The post Olympic sledders fly down the ice with a few slick tricks appeared first on Popular Science.

]]>
Blue multi-person bobsled speeding around a curve on an ice track
Bobsled riders are able to control the front runners to steer and speed around an ice track. Deposit Photos

John Eric Goff is a professor of physics at the University of Lynchburg. This story originally featured on The Conversation.

Speed alone may be the factor that draws many sports fans to the bobsled, luge, and skeleton events at this year’s Beijing Winter Olympics. But beneath the thrilling descents of the winding, ice-covered track, a myriad of concepts from physics are at play. It is how the athletes react to the physics that ultimately separates the fastest runs from the rest of the pack.

I study the physics of sports. Much of the excitement of a luge run is easy to miss—the athletes’ movements are often too small to notice as they fly by, looking like nothing more than a blur on your television. It would be easy to assume that the competitors are simply falling or sliding down a track at the whim of gravity. But that thought merely scratches the surface of all the subtle physics that go into a gold-medal-winning performance.

An aerial view of a large twisting covered track.
Tracks for sledding events—like the Olympic track from the 2018 Pyeongchang Winter Olympics—drop hundreds of feet and feature many tight turns. Korean Culture and Information Service via Wikimedia Commons, CC BY-NC-SA

Gravity and energy

Gravity is what powers the sleds down the ice-covered tracks in bobsled, luge, and skeleton events. The big-picture physics is simple: Start at some height and then fall to a lower height, letting gravity accelerate athletes to speeds approaching 90 mph (145 kph).

This year’s races are taking place at the Yanqing National Sliding Center. The track is roughly a mile long, drops 397 feet of elevation—with the steepest section being an incredible 18 percent grade—and comprises 16 curves.

Riders in the sledding events reach their fast speeds because of the conversion of gravitational potential energy into kinetic energy. Gravitational potential energy represents stored energy and increases as an object is raised farther from Earth’s surface. The potential energy is converted to another form of energy once the object starts falling. Kinetic energy is the energy of motion. The reason a flying baseball will shatter the glass if it hits a window is that the ball transfers its kinetic energy to the glass. Both gravitational potential energy and kinetic energy increase as weight increases, meaning there is more energy in a four-person bobsled team than there is in a one-person luge or skeleton for a given speed.

Racers are dealing with a lot of kinetic energy and strong forces. When athletes enter a turn at 80 mph (129 kph), they experience accelerations that can reach five times normal gravitational acceleration. Though bobsled, luge, and skeleton may look easy, in reality they are anything but.
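
Both figures in this section drop out of introductory mechanics. Ignoring friction and drag, the track's elevation loss caps the sled's speed through energy conservation, and a 5g turn at 80 mph implies a surprisingly tight effective radius. A back-of-envelope sketch (illustrative, not official track data):

```python
# Back-of-envelope sliding-sport numbers (illustrative, not track specs).
from math import sqrt

g = 9.81                          # m/s^2
drop = 397 * 0.3048               # the Yanqing track's elevation drop, meters

# Frictionless limit: m*g*h = (1/2)*m*v^2  ->  v = sqrt(2*g*h)
v_max = sqrt(2 * g * drop)
print(f"ideal top speed: {v_max * 2.237:.0f} mph")     # ~109 mph
# Real sleds top out near 90 mph; drag and ice friction eat the rest.

v_turn = 80 / 2.237               # 80 mph in m/s
radius = v_turn**2 / (5 * g)      # centripetal: a = v^2 / r = 5g
print(f"effective turn radius at 5g: {radius:.0f} m")  # ~26 m
```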

Aerodynamics

Most tracks are around a mile long (1.6 km), and the athletes cover that distance in just under a minute. Final times are calculated by adding four runs together. The difference between the gold medal and silver medal in the men’s singles luge at the 2018 Winter Olympics was just 0.026 seconds. Even tiny mistakes made by the best athletes in the world can cost a medal.

All the athletes start at the same height and go down the same track. So the difference between gold and a disappointing result comes not from gravity and potential energy, but from a fast start, being as aerodynamic as possible and taking the shortest path down the track.

While gravity pulls the athletes and their sleds downhill, they are constantly colliding with air particles that create a force called air drag, which pushes back on the athletes and sleds in a direction opposite to their velocity. The more aerodynamic an athlete or team is, the greater the speed.
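
Drag grows with the square of speed, which is why a slightly tighter tuck pays off most at the bottom of the track. Here is a hedged sketch using the standard drag equation, with assumed ballpark values for a sled's drag coefficient and frontal area:

```python
# Quadratic air drag: F = 0.5 * rho * Cd * A * v^2.
# Cd and A are assumed ballpark values, not measured bobsled specs.
rho = 1.25   # air density, kg/m^3 (near sea level)
Cd = 0.4     # drag coefficient, assumed
A = 0.35     # frontal area, m^2, assumed

def drag_force(v_ms: float) -> float:
    return 0.5 * rho * Cd * A * v_ms**2

for mph in (60, 80, 90):
    v = mph / 2.237
    print(f"{mph} mph -> {drag_force(v):.0f} N")
# Going from 60 to 90 mph multiplies the drag force by (90/60)^2 = 2.25.
```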

To minimize drag from the air, luge riders—who are face up—lie as flat as possible. Downward-facing skeleton riders do the same. Whether in a team of two or four, bobsled riders stay tucked tightly inside the sled to reduce the area available for air to smash into. Any body positioning mistakes can make athletes less aerodynamic and lead to tiny increases in time that can cost them a medal. And these mistakes are tough to correct at the high accelerations and forces of a run.

The shortest way down

Besides being as aerodynamic as possible, the other major difference between a fast and a slow run is the path riders take. If they minimize the total length taken by their sleds and avoid zigzagging across the track, riders will cover less distance. In addition to simply not having to go as far to cross the finish line, shortening the path means facing less drag from air and losing less speed from friction with the track.

A skeleton racer running with his sled at the start of a race.
Skeleton racers don’t have a means of directly controlling the runners, so they must use subtle body movements to flex the sled and initiate turns. 121a0012 via Wikimedia Commons, CC BY-SA

Fans often miss the subtleties involved in turning and steering. The sleds for all the events sit on steel blades called runners. Bobsleds have two sets of runners that make contact with the ice. The front rider pulls on rings attached to pulleys that turn the front runners. Runners on luge sleds have curved bows at the front where riders place their calves. By moving their head and shoulders or flexing their calves, athletes can turn the luge. Skeleton riders lack these controls and must flex the sled itself using their shoulders and knee to initiate a turn. Even a tiny head movement can cause the skeleton to move off the optimal path.

All of these subtle movements are hard to see on television, but the consequences can be large—oversteering may lead to collisions with the track wall or even crashes. Improper steering may lead to bad turns that cost riders time.

Though it may appear that the riders simply slide down the icy track at great speeds after they get going, there is a lot more going on. Viewers will have to pay close attention to the athletes on those fast-moving sleds to detect the interesting facets of physics in action.

The Conversation

The post Olympic sledders fly down the ice with a few slick tricks appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
The future might be filled with squishy robots printed to order https://www.popsci.com/science/3d-printed-gelatin-robot/ Sat, 05 Feb 2022 20:47:00 +0000 https://www.popsci.com/?p=423866
Three excited kids in robotics class printing robot toys on a 3D printer
3D printed plastic robots may be nothing new, but 3D printed gelatin robots? They could make waves for kids toys, medical procedures, and more. Deposit Photos

T-1000, but jigglier.

The post The future might be filled with squishy robots printed to order appeared first on Popular Science.

]]>
Three excited kids in robotics class printing robot toys on a 3D printer
3D printed plastic robots may be nothing new, but 3D printed gelatin robots? They could make waves for kids toys, medical procedures, and more. Deposit Photos

Mixing gelatin and sugar syrup could make for a tasty 1900s dessert. But it’s also the base of a gel-like substance that, in the future, could lead to cheap, bendy, and sustainable robots.

Scientists at Johannes Kepler University Linz in Austria have built a tentacle-like robotic finger from gelatin and other materials you can probably find in a shop near you. They cooked the ingredients and then formed the finger with a 3D printer. They published their work on February 2 in Science Robotics.

You might be used to thinking of robots as rigid constructs of metal, ceramic, and other hard materials. These are the sorts of machines that build cars and make exoskeletons. But there are other types of robots, ones made from more compliant materials that can bend to their surroundings.

This is the growing world of soft robotics. In the near future, it’s soft robots that might find their way into the human body, where their flexibility could, for instance, allow surgical tools to conform to different body shapes. It’s soft robots that might mimic sea creatures and delve under the sea, both on Earth and on other worlds.

But even if these squishy robots can go to extremes and swim like fish, the materials that make them work are often polymers like plastics, which are neither renewable nor ecologically friendly. Gelatin, on the other hand, naturally biodegrades, leaving no trace. As such, robot-makers such as those in Linz have been tinkering with gelatin-based materials for a few years now.

But gelatin poses other challenges that you might not expect to pop up in a robotics lab. Because it’s essentially sugar and protein, it tends to attract mold. And when the gel’s water content dries up—something that, predictably, happens in very dry environments—the gel becomes hard to work with.

“It was too brittle,” says Florian Hartmann, a physicist at EPFL in Switzerland and one of the researchers behind the new paper. “So, if you stretch it just a little bit, it breaks very easily.”

The Linz group’s recipe gets around a few of those challenges. In addition to gelatin and sugar, they added citric acid, which alters the pH of the material and prevents microorganisms from feasting on it prematurely. They also mixed in glycerol, which helps the gel hold in water. With those upgrades, the material can be stretched up to six times its original length and still retain its structure. The Linz group first published this recipe in 2020.

“We carried on and tried to make more complicated robots with more performance and more functionality,” says Hartmann.

That brings them to today. Unlike most gelatin builders before, who typically made their parts with molds like you might do in the kitchen, the Linz researchers modified a 3D printer to use their gelatin substance.

3D printing soft robots has so far been a technology with a lot of promise but few results. Part of the problem is that the few polymers that have been used take a long time to settle and solidify, meaning that printing them takes an impractical amount of time. But gelatin has an advantage: As a protein, it can crystallize and make viable prints much more quickly than polymers.

“Manufacturing something completely biodegradable coming right out of the 3D printer—I believe it’s a very interesting approach,” says Ramses Martinez, an engineer at Purdue University, who was not involved with this paper.

To make the finger move, the Linz group wrapped it with exoskeleton strips made from a material that included ethanol and shellac, the resin used in very old records. These strips are sensitive to how light refracts, or bends, as it passes between the finger and the air around it.

That made it possible to control the 3D printed gelatin finger by directing compressed air at it. The moving air changed the angle of light passing through the strips, making them sway in response. The Linz group controlled the finger with a system containing a Raspberry Pi and a PlayStation 4 controller. In their experiments, they were able to make it push objects away from its surroundings.
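
The article doesn't detail the control software, but a minimal loop on a Raspberry Pi could look something like the sketch below, assuming pygame to read the PlayStation controller and RPi.GPIO to drive a hypothetical valve on pin 18. Every pin number, rate, and mapping here is an assumption for illustration, not the Linz group's actual setup:

```python
# Hypothetical control loop: map a PS4 stick to the duty cycle of a
# pneumatic valve. The pin, PWM frequency, and stick-to-airflow mapping
# are illustrative assumptions, not details from the paper.
import pygame
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.OUT)          # assumed valve-driver pin
valve = GPIO.PWM(18, 50)          # 50 Hz PWM, assumed
valve.start(0)

pygame.init()
pygame.joystick.init()
stick = pygame.joystick.Joystick(0)   # the PS4 controller
stick.init()

try:
    while True:
        pygame.event.pump()
        y = stick.get_axis(1)          # left stick, -1.0 (up) to 1.0 (down)
        duty = max(0.0, -y) * 100      # push up -> more airflow
        valve.ChangeDutyCycle(duty)
finally:
    valve.stop()
    GPIO.cleanup()
```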

[Related: These robotic wings use artificial muscle to flap like an insect]

Hartmann isn’t sure how well this finger might fare outside the lab. The gelatin can go up to around 140 degrees Fahrenheit before it starts to melt, and it will require more tinkering before it can come in contact with water. But the good news is that, because it’s made of widely available ingredients, it’s easy to make more materials for further tests.

“I believe everything related with proteins is something that you can explore putting this technology in,” says Martinez. That might include robotic parts used to manufacture food to avoid the safety risks from non-organic parts. Hartmann also imagines it being used in robotic toys, to minimize harm to children, or in pop-up art installations, to make them easily disposable.

Martinez adds that gelatin robots could be used to enter sensitive environments, such as highly radioactive areas, where operators have to balance the need to reduce harm to themselves with the need to prevent further contamination. “You just simply don’t want to bring them back and recover them,” he says. “So, for these effects, having something that will degrade and actually biodegrade, that would be quite interesting.”

Gelatin robots probably won’t ever lift enough weight to build cars. But as this single finger shows, robots can do far more than heavy lifting.

The post The future might be filled with squishy robots printed to order appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>