Radiation-tolerant microbes might be able to live beneath Mars’ surface for hundreds of millions of years and may yet persist today, thanks in part — counterintuitively — to the Red Planet’s frigid, arid conditions.
In addition to being cold and dry, the Martian surface is constantly bombarded by cosmic rays, charged particles and other radiation from space. Previous studies have shown that desiccation vastly extends a microbe’s ability to survive that radiation by limiting the production of highly reactive oxygen-bearing chemicals that can damage proteins, DNA and other vital molecules within its cells. To see how long microbes might survive such an onslaught on Mars, researchers desiccated five species of bacteria and one type of yeast, stored them at −80° Celsius and then irradiated them.
Some of the microbes might remain viable for only a few tens of thousands of years, experiments showed. But one species — Deinococcus radiodurans, a particularly radiation-hardy microbe that some scientists have nicknamed “Conan the bacterium” — might survive for as long as 280 million years if protected from radiation at soil depths of 10 meters or more, physical chemist Brian Hoffman and colleagues report online October 25 in Astrobiology.
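That survival time comes down to simple arithmetic: divide the total radiation dose a microbe can withstand by the rate at which dose accumulates underground. Below is a back-of-the-envelope sketch; the roughly 140-kilogray tolerance is close to what the team reported for desiccated, frozen D. radiodurans, while the subsurface dose rate is an assumed value chosen to be consistent with the reported survival time, not a number taken from the paper.

```python
# Back-of-the-envelope: how long until accumulated radiation reaches a microbe's limit?
# Illustrative values, not numbers taken directly from the study.

tolerable_dose_gray = 140_000        # ~140 kilograys, roughly the reported tolerance of
                                     # desiccated, frozen D. radiodurans (assumption)
dose_rate_gray_per_year = 0.0005     # assumed dose rate well below the surface (0.5 mGy/yr)

survival_time_years = tolerable_dose_gray / dose_rate_gray_per_year
print(f"~{survival_time_years / 1e6:.0f} million years")   # prints ~280 million years
```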
D. radiodurans resists radiation damage by having multiple copies of chromosomes and other genetic material in each cell, as well as high levels of manganese-bearing antioxidants that help remove DNA-damaging chemicals (SN: 9/3/10). If similar microbes evolved on Mars, they too could persist for lengthy intervals, even possibly until now — which is “improbable but not impossible,” says Hoffman, of Northwestern University in Evanston, Ill.
Even if microbes that evolved on Mars ultimately succumbed to the harsh conditions, remnants of their proteins or other macromolecules may remain — offering hope that future missions, if outfitted with the right instruments, might be able to detect those signs of former life.
In early 2022, malaria cases in the Ethiopian city of Dire Dawa surged, with more than 2,400 people sickened. The spike in infections was the work of an invasive mosquito species that’s spreading across Africa, scientists report.
The finding, presented November 1 in Seattle at the annual meeting of the American Society of Tropical Medicine and Hygiene, provides evidence that the invasive vector can drive malarial outbreaks. Worryingly, the species can thrive in urban environments, bringing the threat of malaria to potentially many millions more people across the continent.
Anopheles stephensi is a mosquito native to India and the Persian Gulf, where it is a major vector for the Plasmodium parasites that cause malaria in people (SN: 10/26/20). In Africa, the primary malaria vector is Anopheles gambiae. A. stephensi was first reported on the African continent in Djibouti in 2012. Since then, the species has turned up in other African countries such as Ethiopia, Somalia and Nigeria.
It wasn’t clear what kind of malarial burden the invasive mosquito could bring to Africa, says Fitsum Girma Tadesse, a molecular biologist at the Armauer Hansen Research Institute in Addis Ababa, Ethiopia. In the eight years after the mosquito’s arrival in Djibouti, the country reported a 40-fold increase in yearly malaria cases, Tadesse says. But no one had directly linked A. stephensi to the increase.
So when malaria cases suddenly rose in Dire Dawa — from 27 cases to 260 in just three weeks in early 2022 — Tadesse and his team jumped in to investigate.
The researchers tracked 80 patients in the city who had sought care for malaria at a local or university clinic, as well as 210 patients who had sought treatment for other reasons, and they screened the patients’ household members for malaria. The team also scanned the patients’ neighborhoods for adult mosquitoes and larvae within a 100-meter radius of households or, in the case of students who had visited a clinic, their dormitories.
The team found that the malaria patients primarily lived near water sources used by the invasive mosquito, A. stephensi. Households and dorms close to aquatic habitats harboring A. stephensi larvae were 3.4 times as likely as those not close to such water sources to have a family or dorm member test positive for malaria. And most of the adult mosquitoes the team caught — 97 percent — were of the invasive species, the only mosquito species that the researchers found carrying Plasmodium parasites. A. stephensi “prefers to breed in water storage containers that are typically common in rapidly expanding urban settings,” Tadesse says. The native mosquito species, A. gambiae, tends to use natural sources of water like small pools, which are more common in rural regions, he adds. The concern, then, is that with the expansion of A. stephensi alongside urbanization in Africa, the mosquito could exploit many new water storage sites.
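The “times as likely” comparison boils down to a ratio of infection rates between households near and far from the larval habitats. The sketch below walks through that arithmetic with made-up counts, chosen only so the ratio lands near the reported 3.4; these are not the study’s data.

```python
# Hypothetical household counts, for illustration only; these are not the study's data.
near_positive, near_households = 34, 100   # households near A. stephensi habitats with a positive member
far_positive, far_households = 10, 100     # households farther from such habitats with a positive member

risk_near = near_positive / near_households
risk_far = far_positive / far_households
print(f"{risk_near / risk_far:.1f} times as likely")   # 3.4 times as likely, with these counts
```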
“This expands the malaria problem from a predominantly rural problem to an urban problem,” says Teun Bousema, an epidemiologist at Radboud University in Nijmegen, the Netherlands.
A 2020 study from another research group estimated that if the invasive mosquito were to spread widely on the continent, an additional 126 million people in cities could be at risk of contracting malaria.
“The spread of Anopheles stephensi is concerning because this species has a number of characteristics that make it difficult to control,” says Tanya Russell, a medical entomologist at James Cook University in Townsville, Australia, who was not involved in the study. Not only can the insects lay their eggs in nearly any available water source, but also the eggs can survive being dry for long periods of time. “This is very uncharacteristic for malaria vectors.”
Insecticide-treated bed nets and spraying a residual insecticide indoors are the primary vector control approaches for malaria-carrying mosquitoes, Russell says. But since A. stephensi also bites outdoors, the mosquito’s spread may blunt the efficacy of these tools.
The key next steps, Tadesse says, are interventions to reduce transmission of the deadly parasites, including targeting the mosquito’s larval phase with chemicals and encouraging communities to cover and secure water containers to prevent mosquitoes from laying eggs in them.
“The window of opportunity to do something about this species is closing,” Bousema says. “So, I really think this calls for very urgent action.”
To coax human nerve cells in a laboratory to thrive, there are three magic words: location, location, location.
Many experiments grow human nerve cells in lab dishes. But a new study enlists some real estate that’s a bit more unconventional: the brain of a rat. Implanted clusters of human neurons grow bigger and more complex than their counterparts grown in dishes, researchers report online October 12 in Nature.
Not only that, but the human cells also appear functional, albeit in very limited ways. The implanted human cells can both receive signals from rat cells and influence the rats’ behavior, connections that “demonstrate more substantial integration of the transplanted neurons,” says Arnold Kriegstein, a developmental neuroscientist at the University of California, San Francisco, who wasn’t involved in the study. “This is a significant advance.”
Over the last decade, scientists have been building increasingly complex brain organoids, 3-D clusters of cells derived from stem cells that grow and mimic the human brain (SN: 2/20/18). These organoids don’t re-create the full complexity of human neurons that develop in an actual brain. But they can be windows into an otherwise inscrutable process — human brain development, and how it can go awry (SN: 9/3/21). “Even if they’re not entirely perfect, [these models] are surrogates for human cells in a way that animal cells are not,” Kriegstein says. “And that’s really exciting.”
To push these cells closer to their full potential, Sergiu Pasca, a neuroscientist at the Stanford School of Medicine, and colleagues surgically implanted human cerebral organoids into the brains of newborn rat pups. Along with their hosts, the human organoids began to grow. Three months later, the organoids were about nine times their starting volume, ultimately making up about a third of one side of the rat’s cortex, the outer layer of the brain. “It pushes the rat cells aside,” Pasca says. “It grows as a unit.”
These human cells flourished because rats’ brains offer perks that lab dishes can’t, such as blood supply, a precise mix of nutrients and stimulation from nearby cells. This environmental support coaxed individual human neurons to grow bigger — six times as large by one measure — than the same sort of cells grown in dishes. Cells grown in the rat brains were also more complex, with more elaborate branching patterns and more cell connections called synapses. The cells looked more mature, but Pasca and his colleagues wanted to know if the neurons would behave that way, too. Tests of electrical properties showed that implanted neurons behaved more similarly to cells that develop in human brains than cells grown in dishes.
Over months of growth, these human neurons made connections with their rat host cells. The human organoids were implanted in the somatosensory cortex, a part of the rat brain that handles whisker input. When researchers puffed air at the whiskers, some of the human cells responded.
What’s more, the human cells could influence the behavior of the rat. In further experiments, the researchers genetically tweaked the organoids to respond to blue light. Prompted by a flash of light, the neurons fired signals, and researchers rewarded the rats with water. Soon, the rats learned to move to the water spout when their human organoid cells sent signals.
In behavioral tests, rats with human implants didn’t show signs of higher intelligence or memory; in fact, researchers were more concerned with deficits. The human organoids were nudging out their hosts’ brains, after all. “Will there be memory deficits? Will there be motor deficits? Will there be seizures?” Pasca asked. But after extensive tests, including behavior tests, EEGs and MRIs, “we could not find differences,” Pasca says.
Other experiments included nerve cells from people with a genetic disorder called Timothy syndrome, a severe developmental disorder that affects brain growth. Growing organoids created with these patients’ cells in rats’ brains might reveal differences that other techniques would not, the researchers reasoned. Sure enough, neurons in these organoids had less complex message-receiving dendrites than those from organoids derived from people without the syndrome.
Organoids made from patient-specific cells could one day even serve as test subjects for treatments, Pasca says. “Challenging disorders will require bold approaches,” he says. “We will need to build human models that recapitulate more aspects of the human brain to study these uniquely human conditions.”
It might be easier for dinosaurs to “mummify” than scientists thought.
Unhealed bite marks on fossilized dinosaur skin suggest that the animal’s carcass was scavenged before being covered in sediment, researchers report October 12 in PLOS ONE. The finding challenges the traditional view that burial very soon after death is required for dinosaur “mummies” to naturally form.
The new research centers on Dakota, an Edmontosaurus fossil unearthed in North Dakota in 1999. About 67 million years ago, Dakota was a roughly 12-meter-long, duck-billed dinosaur that ate plants. Today, Dakota’s fossilized limbs and tail still contain large areas of well-preserved, fossilized scaly skin, a striking example of dinosaur “mummification.”
The creature isn’t a true mummy because its skin has turned into rock, rather than being preserved as actual skin. Researchers have come to refer to such fossils with exquisitely preserved skin and other soft tissues as mummies.
In 2018, paleontologist Clint Boyd of the North Dakota Geological Survey in Bismarck and colleagues began a new phase of cleaning up and examining the dinosaur fossil. The team had found what looked like tears in the tail skin and puncture holes on the animal’s right front foot. To investigate what may have caused the skin marks, the researchers teamed up with Stephanie Drumheller, a paleontologist at the University of Tennessee in Knoxville, to remove extra rocky material around the marks.
The holes in the skin — particularly those on the front limb — are a close match for bite wounds from prehistoric relatives of modern-day crocodiles, the researchers say. “This is the first time that’s been seen in dinosaurian soft tissues,” Drumheller says.
Because the marks on the tail are larger than those on the front limb, the team thinks that at least two carnivores munched on the Edmontosaurus carcass, probably as scavengers because the wounds didn’t heal. But scavenging doesn’t fit into the traditional view of mummification.
“This assumption of rapid burial has been baked into the explanation for mummies for a while,” Drumheller says. That clearly wasn’t the case for Dakota. If scavengers had enough time to snack on its body, then the deceased dino had been out in the open for a while.
Observing Dakota’s deflated skin envelope, shrink-wrapped to the underlying bone with no muscle or other organs, Drumheller had an unexpected “eureka moment,” she says. “I had seen something like this before. It just wasn’t in the paleontological literature. It was in the forensics literature.”
When some smaller modern scavengers like raccoons feed on the internal organs of a larger carcass, they rip the carcass open. The forensics research showed that this hole gives any gases and fluids from further decomposition an escape route, allowing the remaining skin to dry out. Burial could happen afterward.
The researchers “make a very good point,” says Raymond Rogers, a researcher at Macalester College in Saint Paul, Minn., who studies how organisms decay and fossilize and wasn’t involved in the research. “It would be very unlikely for a carcass to achieve advanced stages of desiccation and also experience rapid burial. These two generally presumed prerequisites for mummification seem to be somewhat incompatible.”
Fossilization of soft tissues — like skin or brains or fleshy head combs — is uncommon, but not unheard of (SN: 8/20/21; SN: 12/12/13). “If [soft tissue] requires some spectacular confluence of weird events to get it fossilized at all, it’s far more common than you would expect if that was the case,” Drumheller says. Perhaps, then, mummies arising from relatively common carcass fates could help explain why.
But while dry, “jerkylike” skin could survive long enough to be buried, the conditions involved aren’t necessarily common, says Evan Thomas Saitta, a paleontologist at the University of Chicago who was not involved with the study.
“I still suspect that this actual process is a very precise sequence of events, where if you get the timing wrong, you end up without a mummy dinosaur,” he says.
Understanding that sequence of events, and just how common it is, requires figuring out how fossilization proceeds after a mummy’s burial. This is an area of research that Boyd says he’s interested in looking into next.
“Is it just the same fossilization process as for the bones?” he asks. “Or do we also need a different set of geochemical conditions to then fossilize the skin?”
A ground-penetrating eye in the sky has helped to rehydrate an ancient southern Mesopotamian city, tagging it as what amounted to a Venice of the Fertile Crescent. Identifying the watery nature of this early metropolis has important implications for how urban life flourished nearly 5,000 years ago between the Tigris and Euphrates rivers, where modern-day Iraq lies.
Remote-sensing data, mostly gathered by a specially equipped drone, indicate that a vast urban settlement called Lagash largely consisted of four marsh islands connected by waterways, says anthropological archaeologist Emily Hammer of the University of Pennsylvania. These findings add crucial details to an emerging view that southern Mesopotamian cities did not, as traditionally thought, expand outward from temple and administrative districts into irrigated farmlands that were encircled by a single city wall, Hammer reports in the December Journal of Anthropological Archaeology.
“There could have been multiple evolving ways for Lagash to be a city of marsh islands as human occupation and environmental change reshaped the landscape,” Hammer says.
Because Lagash had no geographical or ritual center, each city sector developed distinctive economic practices on an individual marsh island, much like the later Italian city of Venice, she suspects. For instance, waterways or canals crisscrossed one marsh island, where fishing and collection of reeds for construction may have predominated.
Two other Lagash marsh islands display evidence of having been bordered by gated walls that enclosed carefully laid out city streets and areas with large kilns, suggesting these sectors were built in stages and may have been the first to be settled. Crop growing and activities such as pottery making may have occurred there.
Drone photographs of what were probably harbors on each marsh island suggest that boat travel connected city sectors. Remains of what may have been footbridges appear in and adjacent to waterways between marsh islands, a possibility that further excavations can explore.
Lagash, which formed the core of one of the world’s earliest states, was founded between about 4,900 and 4,600 years ago. Residents abandoned the site, now known as Tell al-Hiba, around 3,600 years ago, past digs show. It was first excavated more than 40 years ago.

Previous analyses of the timing of ancient wetlands expansions in southern Iraq, conducted by anthropological archaeologist Jennifer Pournelle of the University of South Carolina in Columbia, indicated that Lagash and other southern Mesopotamian cities were built on raised mounds in marshes. Based on satellite images, archaeologist Elizabeth Stone of Stony Brook University in New York has proposed that Lagash consisted of around 33 marsh islands, many of them quite small.
Drone photos provided a more detailed look at Lagash’s buried structures than possible with satellite images, Hammer says. Guided by initial remote-sensing data gathered from ground level, a drone spent six weeks in 2019 taking high-resolution photographs of much of the site’s surface. Soil moisture and salt absorption from recent heavy rains helped the drone’s technology detect remnants of buildings, walls, streets, waterways and other city features buried near ground level.
Drone data enabled Hammer to narrow down densely inhabited parts of the ancient city to three islands, she says. A possibility exists that those islands were part of delta channels extending toward the Persian Gulf. A smaller, fourth island was dominated by a large temple.
Hammer’s drone probe of Lagash “confirms the idea of settled islands interconnected by watercourses,” says University of Chicago archaeologist Augusta McMahon, one of three co–field directors of ongoing excavations at the site.
Drone evidence of contrasting neighborhoods on different marsh islands, some looking planned and others more haphazardly arranged, reflects waves of immigration into Lagash between around 4,600 and 4,350 years ago, McMahon suggests. Excavated material indicates that new arrivals included residents of nearby and distant villages, mobile herders looking to settle down and slave laborers captured from neighboring city-states.
Dense clusters of residences and other buildings across much of Lagash suggest that tens of thousands of people lived there during its heyday, Hammer says. At that time, the city covered an estimated 4 to 6 square kilometers.
It’s unclear whether northern Mesopotamian cities from around 6,000 years ago, which were not located in marshes, contained separate city sectors (SN: 2/5/08). But Lagash and other southern Mesopotamian cities likely exploited water transport and trade among closely spaced settlements, enabling unprecedented growth, says archaeologist Guillermo Algaze of the University of California, San Diego.
Lagash stands out as an early southern Mesopotamian city frozen in time, Hammer says. Nearby cities continued to be inhabited for a thousand years or more after Lagash’s abandonment, when the region had become less watery and sectors of longer-lasting cities had expanded and merged. At Lagash, “we have a rare opportunity to see what other ancient cities in the region looked like earlier in time,” Hammer says.
Food contaminated with fungi can be an inconvenience at best and life-threatening at worst. But new research shows that removing just one protein can leave some fungal toxins high and dry, and that’s potentially good news for food safety.
Some fungi produce toxic chemicals called mycotoxins that not only spoil food such as grains but can also make us sick. Aflatoxins, one of the more dangerous types of mycotoxins, can cause liver cancer and other health problems in people.
“It is a silent enemy,” says fungal researcher Özgür Bayram of Maynooth University in Ireland, because most people don’t notice when food like corn or wheat is spoiled.
For years, researchers have known that some fungi produce these toxins, but didn’t know all the details. Now, Bayram and colleagues have identified a group of proteins responsible for turning on the production of mycotoxins. Genetically engineering the fungus Aspergillus nidulans to remove even just one of the proteins prevents the toxins from being made, the researchers report in the Sept. 23 issue of Nucleic Acids Research.
“There is a long string of genes that is involved with the production of proteins that, in a cascading effect, will result in the production of different mycotoxins,” says Felicia Wu, a food safety expert at Michigan State University in East Lansing who was not involved in the research.
The newly identified proteins act like a key starting a car, Bayram says. The researchers wanted to figure out how to remove the key and prevent the starting signal from going through, meaning that no toxins would be made in the first place.
Bayram and his team identified the proteins in A. nidulans, revealing that four proteins come together to make the key. The researchers genetically engineered the fungus to delete each protein in turn. When any of the four proteins are missing, the key does not start mycotoxin ignition, the team found.
In another study that has yet to be published, deactivating the same group of proteins in the closely related fungus A. flavus, which can make aflatoxins, prevents the production of those toxins, Bayram says. “So this is a big success because we see, at least in two fungi, the same [protein] complex does the same job.”
The new work “is building upon a body of research that’s been done over decades” to prevent fungal contamination of food, Wu says. A range of methods are already used to control such contamination. For instance, because not all A. flavus strains produce aflatoxins, one method to prevent contamination is to sprinkle nontoxic strains onto fields of corn and peanuts, Wu explains. Those fungi multiply and can help prevent other toxic strains from gaining a foothold.
This research is one of several ways that researchers are using genetic engineering to try to combat these toxins in food (SN: 3/10/17). One future application of the new research could be to genetically tweak a toxin-making fungus and then possibly use it on crops and elsewhere. “We can basically prevent aflatoxin contamination in food, for example, in the field, even in the warehouses, where a lot of contamination takes place,” Bayram says.
Fungi and fungi-like organisms known as water molds are estimated to ruin a third of the world’s food crops each year. If that contamination could be prevented, Bayram estimates the saved food would be enough to feed 800 million people in 2022.
The new research is a good start, Wu says, but it will still be a “challenge to try to understand how this can be operationalized for agricultural purposes.” It’s unclear how scalable the technique is, she says, and getting U.S. regulatory agencies to approve the use of a genetically modified fungus on key food crops might be difficult.
After decades of population declines, the future is looking brighter for several tuna and billfish species, such as southern bluefin tuna, black marlins and swordfish, thanks to years of successful fisheries management and conservation actions. But some sharks that live in these fishes’ open water habitats are still in trouble, new research suggests.
These sharks, including oceanic whitetips and porbeagles, are often caught by accident within tuna and billfish fisheries. And a lack of dedicated management of these species has meant their chances of extinction continue to rise, researchers report in the Nov. 11 Science.
The analysis evaluates the extinction risk of 18 species of large ocean fish over nearly seven decades. It provides “a view of the open ocean that we have not had before,” says Colin Simpfendorfer, a marine biologist at James Cook University in Australia who was not involved in this research.
“Most of this information was available for individual species, but the synthesis for all of the species provides a much broader picture of what is happening in this important ecosystem,” he says.
In recent years, major global biodiversity assessments have documented declines in species and ecosystems across the globe, says Maria José Juan-Jordá, a fisheries ecologist at the Spanish Institute of Oceanography in Madrid. But these patterns are poorly understood in the oceans.
To fill this gap, Juan-Jordá and her colleagues looked to the International Union for Conservation of Nature’s Red List, which tracks changes in the extinction risk of individual species. The associated Red List Index gauges the extinction risk of an entire group of species. The team specifically targeted tunas, billfishes and sharks — large predatory fishes that have influential roles in their open ocean ecosystems.
Red List Index assessments occur every four to 10 years. In the new study, the researchers built on the Red List criteria to develop a way of tracking extinction risk continuously over time, rather than just within the IUCN intervals.
Juan-Jordá and her colleagues did this by compiling data on species’ average age at reproductive maturity, changes in population biomass and abundance from fish stock assessments for seven tuna species, like the vulnerable bigeye and endangered southern bluefin; six billfish species, like black marlin and sailfish; and five shark species. The team combined the data to calculate extinction risk trends for these 18 species from 1950 to 2019.
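The published Red List Index itself follows a simple recipe: score each species’ threat category, sum the scores and scale by the worst possible case. Here is a minimal sketch of that standard IUCN calculation, using the usual category weights; it shows only the snapshot index the new continuous approach builds on, not the authors’ own method.

```python
# Standard IUCN Red List Index for a group of species: the snapshot index that the
# continuous approach builds on, not the authors' own method. Usual category weights.
WEIGHTS = {"LC": 0, "NT": 1, "VU": 2, "EN": 3, "CR": 4, "EX": 5}

def red_list_index(categories):
    """categories: one IUCN category code per species, e.g. ["LC", "EN", ...]."""
    total = sum(WEIGHTS[c] for c in categories)
    worst_case = WEIGHTS["EX"] * len(categories)
    return 1 - total / worst_case          # 1 = all Least Concern, 0 = all Extinct

# Hypothetical assessment of five species:
print(red_list_index(["LC", "NT", "VU", "EN", "LC"]))   # 0.76
```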
The team found that the extinction risk for tunas and billfishes increased throughout the last half of the 20th century, with the trend reversing for tunas starting in the 1990s and billfishes in the 2010s. These shifts are tied to known reductions in fishing deaths for these species that occurred at the same time.
The results are positive for tunas and billfishes, Simpfendorfer says. But three of the seven tunas and three of the six billfishes that the researchers looked at are still considered near threatened, vulnerable or endangered. “Now is not the time for complacency in managing these species,” Simpfendorfer says.
But shark species are floundering in the very waters where tunas and billfishes are fished, often caught as bycatch. “While we are increasingly sustainably managing the commercially important, valuable target species of tunas and billfishes,” says Juan-Jordá, “shark populations continue to decline, therefore, the risk of extinction has continued to increase.”
Some solutions going forward, says Juan-Jordá, include catch limits for some species and establishing sustainability goals within tuna and billfish fisheries beyond just the targeted species, addressing the issue of sharks that are incidentally caught. And it’s important to see if measures taken to reduce shark bycatch deaths are actually effective, she says.
“There is a clear need for significant improvement in shark-focused management, and organizations responsible for their management need to act quickly before it is too late,” Simpfendorfer says.
Zapping liquid metal droplets with ultrasound offers a new way to make wiring for stretchy, bendy electronics.
The technique, described in the Nov. 11 Science, adds a new approach to the toolbox for researchers developing circuitry for medical sensors that attach to the skin, wearable electronics and other applications where rigid circuit electronics are less than ideal (SN: 6/1/18).
The researchers began by drawing on sheets of stretchy plastic with lines of microscopic droplets made of an alloy of gallium and indium. The metal alloy is liquid at temperatures above about 16° Celsius.
Though the liquid metal is electrically conductive, the droplets quickly oxidize. That process covers each of them with a thin insulating layer. The layers carry static charges that push the drops apart, making them useless for connecting the LEDs, microchips and other components in electronic circuitry.
By hitting the microspheres with high-frequency sound waves, the researchers caused the microscopic balls to shed even smaller, nanoscopic balls of liquid metal. The tiny spheres bridge the gaps between the larger ones, and that close contact allows electrons to tunnel through the oxide layers so that the droplets can carry electricity.
When the plastic that the drops are printed on is stretched or bent, the larger balls of metal can deform, while the smaller ones act like rigid particles that shift around to maintain contact.
The researchers demonstrated their conductors by connecting electronics into a stretchy pattern of LEDs displaying the initials of the Dynamic Materials Design Laboratory, where the work was done. The team also built a sensor with the conductors that can monitor blood through a person’s skin (SN: 2/17/18).
Flexible electronics applications aren’t new, says materials scientist Jiheong Kang of the Korea Advanced Institute of Science and Technology in Daejeon, South Korea. But there are advantages of the new approach over other designs, he says, such as those that rely on channels filled with liquid metal that can leak if the circuitry is damaged. Liquid metal in the conductors that Kang and colleagues developed stays trapped in the tiny spheres that are embedded in the plastic and remains in place even if the material is torn.
Wires made of liquid metal have often been the go-to conductors for stretchy electronics, says Carmel Majidi, a researcher in mechanical engineering at Carnegie Mellon University in Pittsburgh who was not involved with the new study. Using ultrasound introduces a “novel approach to achieving that conductivity.” Other groups have managed that feat by heating circuits, exposing them to lasers, squishing them or vibrating the circuits to get droplets to connect to each other, he says.
Majidi isn’t convinced that the ultrasound approach is a game changer for flexible circuits. But he says it’s high time the subject appeared in a leading journal like Science. “I’m personally really excited to see the field overall, and this particular type of material architecture, is now gaining this visibility.”
DENVER — A hidden landscape riddled with landslides is coming into focus in Yellowstone National Park, thanks to a laser-equipped airplane.
Scientists once crisscrossed Yellowstone on foot and studied aerial photographs to better understand America’s first national park. But today researchers have a massive new digital dataset at their fingertips that’s shedding new light on this nearly 1-million-hectare natural wonderland.
These observations of Yellowstone have allowed a pair of researchers to pinpoint over 1,000 landslides within and near the park, hundreds of which had not been mapped before, the duo reported October 9 at the Geological Society of America Connects 2022 meeting. Most of these landslides likely occurred thousands of years ago, but some are still moving. Mapping Yellowstone’s landslides is important because they can cripple infrastructure like roadways and bridges. The millions of visitors that explore the park each year access Yellowstone through just a handful of entrance roads, one of which recently closed for months following intense flooding.
In 2020, a small aircraft flew a few hundred meters above the otherworldly landscape of Yellowstone. But it wasn’t ferrying tourists eager for up close views of the park’s famous wolves or hydrothermal vents (SN: 7/21/20, SN: 1/11/21). Instead, the plane carried a downward-pointing laser that fired pulses of infrared light at the ground. By measuring the timing of pulses that hit the ground and reflected back toward the aircraft, researchers reconstructed the precise topography of the landscape.
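The ranging step is straightforward time-of-flight arithmetic: a pulse’s round trip at the speed of light, halved, gives the distance to whatever reflected it. A minimal sketch, with a made-up pulse timing rather than values from the Yellowstone survey:

```python
# Lidar ranging: half the round-trip travel time of a light pulse gives the distance
# to the reflecting surface. The timing value is invented, not from the Yellowstone survey.
SPEED_OF_LIGHT = 299_792_458.0   # m/s

def range_from_round_trip(seconds):
    return SPEED_OF_LIGHT * seconds / 2   # halve it: the pulse travels down and back

print(f"{range_from_round_trip(2.0e-6):.0f} m to the ground")   # ~300 m for a 2-microsecond round trip
```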
Such “light detection and ranging,” or lidar, data reveal details that often remain hidden to the eye. “We’re able to see the surface of the ground as if there’s no vegetation,” says Kyra Bornong, a geoscientist at Idaho State University in Pocatello. Similar lidar observations have been used to pinpoint pre-Columbian settlements deep within the Amazon jungle (SN: 5/25/22).
The Yellowstone lidar data were collected as part of the 3D Elevation Program, an ongoing project spearheaded by the United States Geological Survey to map the entirety of the United States using lidar. Bornong and geomorphologist Ben Crosby analyzed the Yellowstone data — which resolve details as small as about one meter — to home in on landslides. The team searched for places where the landscape changed from looking relatively smooth to looking jumbled, evidence that soil and rocks had once been on the move. “It’s a pattern-recognition game,” says Crosby, also of Idaho State University. “You’re looking for this contrast between the lumpy stuff and the smooth stuff.”
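The researchers did that pattern matching by eye, but the lumpy-versus-smooth contrast can also be put into numbers. The sketch below illustrates one common way to do so, flagging cells of an elevation grid whose local roughness (the standard deviation of elevation in a small moving window) exceeds a threshold; it is not the team’s actual workflow, and the toy terrain and threshold are invented.

```python
import numpy as np

# Quantifying the "lumpy vs. smooth" contrast: local roughness of an elevation grid,
# measured as the standard deviation of elevation in a small moving window.
# An illustration of the idea only; the researchers mapped the landslides by eye.

def local_roughness(dem, half_window=2):
    """dem: 2-D array of elevations in meters. Returns a same-shape roughness map."""
    rows, cols = dem.shape
    rough = np.zeros_like(dem, dtype=float)
    for i in range(rows):
        for j in range(cols):
            window = dem[max(0, i - half_window): i + half_window + 1,
                         max(0, j - half_window): j + half_window + 1]
            rough[i, j] = window.std()
    return rough

# Toy terrain: a smooth slope with an invented jumbled (hummocky) patch in the middle.
dem = np.fromfunction(lambda i, j: 0.5 * i, (40, 40))
dem[15:25, 15:25] += np.random.default_rng(0).normal(0, 3, size=(10, 10))

hummocky = local_roughness(dem) > 2.0   # threshold chosen for this toy example
print(f"{hummocky.mean():.0%} of grid cells flagged as landslide-like terrain")
```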
The researchers spotted more than 1,000 landslides across Yellowstone, most of which were clustered near the periphery of the park. That makes sense given the geography of Yellowstone’s interior, says Lyman Persico, a geomorphologist at Whitman College in Walla Walla, Wash., who was not involved in the research. The park sits atop a supervolcano, whose previous eruptions blanketed much of the park in lava (SN: 1/2/18). “You’re sitting in the middle of the Yellowstone caldera, where everything is flat,” says Persico.
But steep terrain also abounds in the national park, and there’s infrastructure in many of those landslide-prone areas. In several places, the team found that roads had been built over landslide debris. One example is Highway 191, which skirts the western edge of Yellowstone.
An aerial image of U.S. Highway 191 near Yellowstone shows barely perceptible signs of a long-ago landslide, but laser mapping reveals the structure and extent of the slide in much greater detail. It’s one of more than 1,000 landslides uncovered by the new maps. It’s worth keeping an eye on this highway since it funnels significant amounts of traffic through regions apt to experience landslides, Bornong says. “It’s one of the busiest roads in Montana.”
There’s plenty more to learn from this novel look at Yellowstone, Crosby says. Lidar data can shed light on geologic processes like volcanic and tectonic activity, both of which Yellowstone has in spades. “It’s a transformative tool,” he says.
In July 2017, after weeks of anticipation, a massive iceberg about the size of Delaware split from the Antarctic Peninsula (SN: 7/12/17). Satellite images show that the orphaned iceberg, known as A68, ultimately disintegrated in the Southern Ocean. Now, researchers say they have pieced together the powerful forces that led to that final breakup.
Polar scientist Alex Huth of Princeton University and colleagues combined observations of the iceberg’s drift with simulations of ocean currents and wind stress. Iceberg A68a, the largest remaining chunk of the original berg, was caught in a tug-of-war of ocean currents, and the strain of those opposing forces probably pulled the iceberg apart, the team reports October 19 in Science Advances.

After A68’s separation from the Larsen C ice shelf, researchers had questions — such as what creatures live on the seafloor in the ice’s dark shadow (SN: 2/8/19). As for the iceberg itself, it took a while to get moving, lingering in the neighborhood for about a year (SN: 7/23/18). By December 2020, satellite images show, the berg had clearly seen some action and was just two-thirds of its original size.

The new simulations suggest how A68a probably met its fate. On December 20, 2020, the long, slender “finger” at one end of the iceberg drifted into a strong, fast-moving current. The rest of the ice remained outside the current. The tension rifted the berg, and the finger sheared off and broke apart within a few days.
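The tug-of-war comes down to a velocity contrast: when one end of a berg moves faster than the rest, the ice in between gets stretched. The toy calculation below turns a hypothetical speed difference into an along-berg strain rate; the numbers are invented, and the study’s actual model couples observed drift, simulated currents and wind stress with far more realistic ice mechanics.

```python
# Toy version of the tug-of-war: one end of the berg dragged by a fast current while
# the rest lags behind. The speed difference stretches the ice between them.
# All values are invented, not taken from the study.

speed_of_finger = 0.8       # m/s, the end caught in the fast current (assumed)
speed_of_main_berg = 0.3    # m/s, the rest of the berg (assumed)
berg_length = 135_000.0     # m, order of magnitude for a very large berg (assumed)

strain_rate = (speed_of_finger - speed_of_main_berg) / berg_length   # per second
print(f"extensional strain rate ~{strain_rate:.1e} per second")
print(f"that is ~{strain_rate * 86_400:.0%} stretch per day, far more than rigid ice can accommodate")
```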
Shear stress is a previously unknown mechanism for large iceberg breakup, and isn’t represented in climate simulations, the team says. In the Southern Ocean, the melting of massive bergs can be a large source of cold freshwater to the ocean surface. That, in turn, can have a big impact on ocean circulation and the global climate.