Drowned wildebeests can feed a river ecosystem for years

More than a million wildebeests migrate each year from Tanzania to Kenya and back again, following the rains and abundant grass that springs up afterward. Their path takes them across the Mara River, and some of the crossings are so dangerous that hundreds or thousands of wildebeests drown as they try to traverse the waterway.

Those animals provide a brief, free buffet for crocodiles and vultures. And, a new study finds, they’re feeding an aquatic ecosystem for years.

Ecologist Amanda Subalusky of the Cary Institute of Ecosystem Studies in Millbrook, N.Y., had been studying water quality in the Mara River when she and her colleagues noticed something odd. Commonly used indicators of water quality, such as dissolved oxygen and turbidity, were sometimes poorest where the river flowed through a protected area. They quickly realized that it was because of the animals that flourished there. Hippos, which eat grass at night and defecate in the water during the day, were one contributor. And dead wildebeests were another.

“Wildebeest are especially good at following the rains, and they’re willing to cross barriers to follow it,” says Subalusky. The animals tend to cross at the same spots year after year, and some are more dangerous than others. “Once they’ve started using a site, they continue, even if it’s bad,” she notes. And on average, more than 6,000 wildebeests drown each year. (That may sound like a lot, but it’s only about 0.5 percent of the herd.) Their carcasses add the equivalent of the mass of 10 blue whales to the river annually.
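A rough back-of-the-envelope check shows how that comparison works out; the per-animal masses below are illustrative assumptions, not figures from the study.

```python
# Rough, illustrative check of the "10 blue whales" comparison.
# Both per-animal masses are assumptions chosen for illustration,
# not figures reported in the study.
drowned_per_year = 6_000          # wildebeests drowned in a typical year
wildebeest_mass_kg = 180          # assumed average adult mass
blue_whale_mass_kg = 110_000      # assumed average adult mass

carcass_mass_kg = drowned_per_year * wildebeest_mass_kg
print(f"~{carcass_mass_kg / 1000:,.0f} tonnes of carcasses per year")
print(f"~{carcass_mass_kg / blue_whale_mass_kg:.0f} blue-whale equivalents")
```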

Subalusky and her colleagues set out to see how all that meat and bone affected the river ecosystem. When they heard about drownings, they would go to the river to count carcasses. They retrieved dead wildebeests from the water to test what happened to the various parts over time. And they measured nutrients up and downstream from river crossings to see what the wildebeest carcasses added to the water.

“There are some interesting challenges working in this system,” Subalusky says. For instance, in one experiment, she and her colleagues put pieces of wildebeest carcass into mesh bags that went into the river. The plan was that they would retrieve the bags over time and see how quickly or slowly the pieces decomposed. “We spent a couple of days putting the whole thing together and we came back the next day to collect our first set of samples,” she recalls. “At least half the bags with wildebeest meat were just gone. Crocodiles and Nile monitors had plucked them off the chain.”
The researchers determined that the wildebeests’ soft tissue decomposes in about two to 10 weeks. This provides a pulse of nutrients — carbon, nitrogen and phosphorus — to the aquatic food web as well as the nearby terrestrial system. Subalusky and her colleagues are still working out the succession of scavengers that feast on the wildebeests, but vultures, marabou storks, egg-laying bugs and things that eat bugs are all on the list.
Once the soft tissue is gone, the bones remain, sometimes piling up in bends in the river or other spots downstream. “They take years to decompose,” Subalusky says, slowly leaching out most of the phosphorus that had been in the animal. The bones can also become covered in a biofilm of algae, fungi and bacteria that provides food for fish.

What initially looks like a short-lived event actually provides resources for seven years or more, Subalusky and her colleagues report June 19 in the Proceedings of the National Academy of Sciences.

The wildebeest migration is the largest terrestrial migration on the planet, and others of its kind have largely disappeared as humans have killed off animals or cut off their migration routes.

Only a few hundred years ago, for instance, millions of bison roamed the western United States. There are accounts in which thousands of bison drowned in rivers, similar to what happens with wildebeests. Those rivers may have fundamentally changed after bison were nearly wiped out, Subalusky and her colleagues contend.

We’ll never know if that was the case, but there are still some places where scientists may be able to study the effects of mass drownings on rivers. A large herd of caribou reportedly drowned in Canada in the 1980s, and there are still some huge migrations of animals, such as reindeer. Like the wildebeests, these animals might be feeding an underwater food web that no one has ever noticed.

How earthquake scientists eavesdrop on North Korea’s nuclear blasts

On September 9 of last year, in the middle of the morning, seismometers began lighting up around East Asia. From South Korea to Russia to Japan, geophysical instruments recorded squiggles as seismic waves passed through and shook the ground. It looked as if an earthquake with a magnitude of 5.2 had just happened. But the ground shaking had originated at North Korea’s nuclear weapons test site.

It was the fifth confirmed nuclear test in North Korea, and it opened the latest chapter in a long-running geologic detective story. Like a police examiner scrutinizing skid marks to figure out who was at fault in a car crash, researchers analyze seismic waves to determine if they come from a natural earthquake or an artificial explosion. If the latter, then scientists can also tease out details such as whether the blast was nuclear and how big it was. Test after test, seismologists are improving their understanding of North Korea’s nuclear weapons program.
The work feeds into international efforts to monitor the Comprehensive Nuclear-Test-Ban Treaty, which since 1996 has banned nuclear weapons testing. More than 180 countries have signed the treaty. But 44 countries that hold nuclear technology must both sign and ratify the treaty for it to have the force of law. Eight, including the United States and North Korea, have not.

To track potential violations, the treaty calls for a four-pronged international monitoring system, which is currently about 90 percent complete. Hydroacoustic stations can detect sound waves from underwater explosions. Infrasound stations listen for low-frequency sound waves rumbling through the atmosphere. Radionuclide stations sniff the air for the radioactive by-products of an atmospheric test. And seismic stations pick up the ground shaking, which is usually the fastest and most reliable method for confirming an underground explosion.

Seismic waves offer extra information about an explosion, new studies show. One research group is exploring how local topography, like the rugged mountain where the North Korean government conducts its tests, puts its imprint on the seismic signals. Knowing that, scientists can better pinpoint where the explosions are happening within the mountain — thus improving understanding of how deep and powerful the blasts are. A deep explosion is more likely to mask the power of the bomb.
Separately, physicists have conducted an unprecedented set of six explosions at the U.S. nuclear test site in Nevada. The aim was to mimic the physics of a nuclear explosion by detonating chemical explosives and watching how the seismic waves radiate outward. It’s like a miniature, nonnuclear version of a nuclear weapons test. Already, the scientists have made some key discoveries, such as understanding how a deeply buried blast shows up in the seismic detectors.
The more researchers can learn about the seismic calling card of each blast, the more they can understand international developments. That’s particularly true for North Korea, where leaders have been ramping up the pace of military testing since the first nuclear detonation in 2006. On July 4, the country launched its first confirmed intercontinental ballistic missile — with no nuclear payload — that could reach as far as Alaska.

“There’s this building of knowledge that helps you understand the capabilities of a country like North Korea,” says Delaine Reiter, a geophysicist with Weston Geophysical Corp. in Lexington, Mass. “They’re not shy about broadcasting their testing, but they claim things Western scientists aren’t sure about. Was it as big as they claimed? We’re really interested in understanding that.”

Natural or not
Seismometers detect ground shaking from all sorts of events. In a typical year, anywhere from 1,200 to 2,200 earthquakes of magnitude 5 and greater set off the machines worldwide. On top of that is the unnatural shaking: from quarry blasts, mine collapses and other causes. The art of using seismic waves to tell one type of event from the others is known as forensic seismology.

Forensic seismologists work to distinguish a natural earthquake from what could be a clandestine nuclear test. In March 2003, for instance, seismometers detected a disturbance coming from near Lop Nor, a dried-up lake in western China that the Chinese government, which signed but hasn’t ratified the test ban treaty, has used for nuclear tests. Seismologists needed to figure out immediately what had happened.

One test for telling the difference between an earthquake and an explosion is how deep it is. Anything deeper than about 10 kilometers is almost certain to be natural. In the case of Lop Nor, the source of the waves seemed to be located about six kilometers down — difficult to tunnel to, but not impossible. Researchers also used a second test, which compares the amplitudes of two different kinds of seismic waves.

Earthquakes and explosions generate several types of seismic waves, starting with P, or primary, waves. These waves are the first to arrive at a distant station. Next come S, or secondary, waves, which travel through the ground in a shearing motion, taking longer to arrive. Finally come waves that ripple across the surface, including those called Rayleigh waves.
In an explosion, the Rayleigh waves are smaller relative to the P waves than they are in an earthquake. By looking at those two types of waves, scientists determined the Lop Nor incident was a natural earthquake, not a secretive explosion. (Seismology cannot reveal the entire picture. Had the Lop Nor event actually been an explosion, researchers would have needed data from the radionuclide monitoring network to confirm the blast came from nuclear and not chemical explosives.)
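That amplitude comparison is often summarized by contrasting a body-wave magnitude (driven by P waves) with a surface-wave magnitude (driven by Rayleigh waves). The sketch below uses an illustrative threshold and made-up magnitude pairs, not operational screening criteria or the actual Lop Nor measurements.

```python
def looks_like_explosion(mb, ms, offset=0.64):
    """Simplified body-wave vs. surface-wave magnitude screen.

    Explosions tend to generate weak Rayleigh (surface) waves relative
    to their P waves, so the surface-wave magnitude ms falls well below
    the body-wave magnitude mb. The offset is illustrative only; real
    monitoring uses calibrated, region-specific criteria.
    """
    return ms < mb - offset

# Hypothetical magnitude pairs, not measurements from any real event:
print(looks_like_explosion(mb=5.2, ms=4.0))  # True  -> explosion-like
print(looks_like_explosion(mb=5.2, ms=5.0))  # False -> earthquake-like
```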

For North Korea, the question is not so much whether the government is setting off nuclear tests, but how powerful and destructive those blasts might be. In 2003, the country withdrew from the Treaty on the Nonproliferation of Nuclear Weapons, an international agreement distinct from the testing ban that aims to prevent the spread of nuclear weapons and related technology. Three years later, North Korea announced it had conducted an underground nuclear test in Mount Mantap at a site called Punggye-ri, in the northeastern part of the country. It was the first nuclear weapons test since India and Pakistan each set one off in 1998.

By analyzing seismic wave data from monitoring stations around the region, seismologists concluded the North Korean blast had come from shallow depths, no more than a few kilometers within the mountain. That supported the North Korean government’s claim of an intentional test. Two weeks later, a radionuclide monitoring station in Yellowknife, Canada, detected increases in radioactive xenon, which presumably had leaked out of the underground test site and drifted eastward. The blast was nuclear.

But the 2006 test raised fresh questions for seismologists. The ratio of amplitudes of the Rayleigh and P waves was not as distinctive as it usually is for an explosion. And other aspects of the seismic signature were also not as clear-cut as scientists had expected.

Researchers got some answers as North Korea’s testing continued. In 2009, 2013 and twice in 2016, the government set off more underground nuclear explosions at Punggye-ri. Each time, researchers outside the country compared the seismic data with the record of past nuclear blasts. Automated computer programs “compare the wiggles you see on the screen ripple for ripple,” says Steven Gibbons, a seismologist with the NORSAR monitoring organization in Kjeller, Norway. When the patterns match, scientists know it is another test. “A seismic signal generated by an explosion is like a fingerprint for that particular region,” he says.

With each test, researchers learned more about North Korea’s capabilities. By analyzing the magnitude of the ground shaking, experts could roughly calculate the power of each test. The 2006 explosion was relatively small, releasing energy equivalent to about 1,000 tons of TNT — a fraction of the 15-kiloton bomb dropped by the United States on Hiroshima, Japan, in 1945. But the yield of North Korea’s nuclear tests crept up each time, and the most recent test, in September 2016, may have exceeded the size of the Hiroshima bomb.
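Translating a magnitude into an explosive yield relies on empirical relations of the form mb = a + b·log10(yield), whose constants depend on the test site's geology and the blast's burial depth. The constants below are placeholders chosen for illustration, not the values used to analyze the North Korean tests.

```python
def yield_kilotons(mb, a=4.45, b=0.75):
    """Invert a generic magnitude-yield relation mb = a + b*log10(Y).

    The constants a and b are illustrative placeholders; published values
    vary by test site, rock type and burial depth, so these numbers are
    order-of-magnitude sketches rather than estimates of real tests.
    """
    return 10 ** ((mb - a) / b)

print(f"{yield_kilotons(4.3):.1f} kilotons for a magnitude 4.3 event")
print(f"{yield_kilotons(5.2):.0f} kilotons for a magnitude 5.2 event")
```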
Digging deep
For an event of a particular seismic magnitude, the deeper the explosion, the more energetic the blast. A shallow, less energetic test can look a lot like a deeply buried, powerful blast. Scientists need to figure out precisely where each explosion occurred.

Mount Mantap is a rugged granite mountain with geology that complicates the physics of how seismic waves spread. Western experts do not know exactly how the nuclear bombs are placed inside the mountain before being detonated. But satellite imagery shows activity that looks like tunnels being dug into the mountainside. The tunnels could be dug two ways: straight into the granite or spiraled around in a fishhook pattern to collapse and seal the site after a test, Frank Pabian, a nonproliferation expert at Los Alamos National Laboratory in New Mexico, said in April in Denver at a meeting of the Seismological Society of America.

Researchers have been trying to figure out the relative locations of each of the five tests. By comparing the amplitudes of the P, S and Rayleigh waves, and calculating how long each would have taken to travel through the ground, researchers can plot the likely sites of the five blasts. That allows them to better tie the explosions to the infrastructure on the surface, like the tunnels spotted in satellite imagery.
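One ingredient in that kind of calculation is the textbook trick of turning the lag between P- and S-wave arrivals into a source-to-station distance. The wave speeds below are generic crustal values assumed for illustration; the actual relative-location studies use calibrated regional velocity models and many stations at once.

```python
def distance_from_sp_lag(sp_lag_seconds, vp_km_s=6.0, vs_km_s=3.5):
    """Estimate source distance from the S-minus-P arrival-time lag.

    Uses the relation d = dt * Vp*Vs / (Vp - Vs). The P- and S-wave
    speeds here are generic crustal values assumed for illustration.
    """
    return sp_lag_seconds * vp_km_s * vs_km_s / (vp_km_s - vs_km_s)

# A 40-second lag implies a source roughly 340 km away with these speeds.
print(f"{distance_from_sp_lag(40):.0f} km")
```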

One big puzzle arose after the 2009 test. Analyzing the times that seismic waves arrived at various measuring stations, one group calculated that the test occurred 2.2 kilometers west of the first blast. Another scientist found it only 1.8 kilometers away. The difference may not sound like a lot, Gibbons says, but it “is huge if you’re trying to place these relative locations within the terrain.” Move a couple of hundred meters to the east or west, and the explosion could have happened beneath a valley as opposed to a ridge — radically changing the depth estimates, along with estimates of the blast’s power.

Gibbons and colleagues think they may be able to reconcile these different location estimates. The answer lies in which station the seismic data come from. Studies that rely on data from stations within about 1,500 kilometers of Punggye-ri — as in eastern China — tend to estimate bigger distances between the locations of the five tests when compared with studies that use data from more distant seismic stations in Europe and elsewhere. Seismic waves must be leaving the test site in a more complicated way than scientists had thought, or else all the measurements would agree.
When Gibbons’ team corrected for the varying distances of the seismic data, the scientists came up with a distance of 1.9 kilometers between the 2006 and 2009 blasts. The team pinpointed the other explosions as well. The September 2016 test turned out to be almost directly beneath the 2,205-meter summit of Mount Mantap, the group reported in January in Geophysical Journal International. That means the blast was indeed deeply buried, and hence probably at least as powerful as the Hiroshima bomb, to have registered as a magnitude 5.2 earthquake.

Other seismologists have been squeezing information out of the seismic data in a different way — not in how far the signals are from the test blast, but what they traveled through before being detected. Reiter and Seung-Hoon Yoo, also of Weston Geophysical, recently analyzed data from two seismic stations, one 370 kilometers to the north in China and the other 306 kilometers to the south in South Korea.

The scientists scrutinized the moments when the seismic waves arrived at the stations, in the first second of the initial P waves, and found slight differences between the wiggles recorded in China and South Korea, Reiter reported at the Denver conference. Those in the north showed a more energetic pulse rising from the wiggles in the first second; the southern seismic records did not. Reiter and Yoo think this pattern represents an imprint of the topography at Mount Mantap.

“One side of the mountain is much steeper,” Reiter explains. “The station in China was sampling the signal coming through the steep side of the mountain, while the southern station was seeing the more shallowly dipping face.” This difference may also help explain why data from seismic stations spanning the breadth of Japan show a slight difference from north to south. Those differences may reflect the changing topography as the seismic waves exited Mount Mantap during the test.

Learning from simulations
But there is only so much scientists can do to understand explosions they can’t get near. That’s where the test blasts in Nevada come in.

The tests were part of phase one of the Source Physics Experiment, a $40-million project run by the U.S. Department of Energy’s National Nuclear Security Administration. The goal was to set off a series of chemical explosions of different sizes and at different depths in the same borehole and then record the seismic signals on a battery of instruments. The detonations took place at the nuclear test site in southern Nevada, where between 1951 and 1992 the U.S. government set off 828 underground nuclear tests and 100 atmospheric ones, whose mushroom clouds were seen from Las Vegas, 100 kilometers away.

For the Source Physics Experiment, six chemical explosions were set off between 2011 and 2016, with yields up to the equivalent of 5,000 kilograms of TNT and burial depths down to 87 meters. The biggest required high-energy–density explosives packed into a cylinder nearly a meter across and 6.7 meters long, says Beth Dzenitis, an engineer at Lawrence Livermore National Laboratory in California who oversaw part of the field campaign. Yet for all that firepower, the detonation barely registered on anything other than the instruments peppering the ground. “I wish I could tell you all these cool fireworks go off, but you don’t even know it’s happening,” she says.

The explosives were set inside granite rock, a material very similar to the granite at Mount Mantap. So the seismic waves racing outward behaved very much as they might at the North Korean nuclear test site, says William Walter, head of geophysical monitoring at Livermore. The underlying physics, describing how seismic energy travels through the ground, is virtually the same for both chemical and nuclear blasts.
The results revealed flaws in the models that researchers have been using for decades to describe how seismic waves travel outward from explosions. These models were developed to describe how the P waves compress rock as they propagate from large nuclear blasts like those set off starting in the 1950s by the United States and the Soviet Union. “That worked very well in the days when the tests were large,” Walter says. But for much smaller blasts, like those North Korea has been detonating, “the models didn’t work that well at all.”
Walter and Livermore colleague Sean Ford have started to develop new models that better capture the physics involved in small explosions. Those models should be able to describe the depth and energy release of North Korea’s tests more accurately, Walter reported at the Denver meeting.

A second phase of the Source Physics Experiment is set to begin next year at the test site, in a much more rubbly type of rock called alluvium. Scientists will use that series of tests to see how seismic waves are affected when they travel through fragmented rock as opposed to more coherent granite. That information could be useful if North Korea begins testing in another location, or if another country detonates an atomic bomb in fragmented rock.

For now, the world’s seismologists continue to watch and wait, to see what the North Korean government might do next. Some experts think the next nuclear test will come at a different location within Mount Mantap, to the south of the most recent tests. If so, that will provide a fresh challenge to the researchers waiting to unravel the story the seismic waves will tell.

“It’s a little creepy what we do,” Reiter admits. “We wait for these explosions to happen, and then we race each other to find the location, see how big it was, that kind of thing. But it has really given us a good look as to how [North Korea’s] nuclear program is progressing.” Useful information as the world’s nations decide what to do about North Korea’s rogue testing.

Neutrino experiment may hint at why matter rules the universe

A new study hints that neutrinos might behave differently than their antimatter counterparts. The result amplifies scientists’ suspicions that the lightweight elementary particles could help explain why the universe has much more matter than antimatter.

In the Big Bang, 13.8 billion years ago, matter and antimatter were created in equal amounts. To tip that balance to the universe’s current, matter-dominated state, matter and antimatter must behave differently, a concept known as CP, or “charge parity,” violation.

In neutrinos, which come in three types — electron, muon and tau — CP violation can be measured by observing how neutrinos oscillate, or change from one type to another. Researchers with the T2K experiment found that muon neutrinos morphed into electron neutrinos more often than expected, while muon antineutrinos became electron antineutrinos less often. That suggests that the neutrinos were violating CP, the researchers concluded August 4 at a colloquium at the High Energy Accelerator Research Organization, KEK, in Tsukuba, Japan.

T2K scientists had previously presented a weaker hint of CP violation. The new result is based on about twice as much data, but the evidence is still not definitive. In physicist parlance, it is a “two sigma” measurement, an indicator of how statistically strong the evidence is. Physicists usually require five sigma to claim a discovery.
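To translate those sigma levels into rough probabilities of a statistical fluke, assuming a standard normal distribution (the exact convention varies from analysis to analysis):

```python
from scipy.stats import norm

# One-tailed probability of a fluctuation at least this many standard
# deviations above expectation, under a normal approximation.
for sigma in (2, 3, 5):
    print(f"{sigma} sigma -> p = {norm.sf(sigma):.2g}")
# 2 sigma is roughly a 2 percent chance of a fluke; 5 sigma is about 3 in 10 million.
```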

Even three sigma is still far away — T2K could reach that milestone by 2026. A future experiment, DUNE, now under construction at the Sanford Underground Research Laboratory in Lead, S.D., may reach five sigma. It is worth being patient, says physicist Chang Kee Jung of Stony Brook University in New York, who is a member of the T2K collaboration. “We are dealing with really profound problems.”

A new tool could one day improve Lyme disease diagnosis

A new testing method can distinguish between early Lyme disease and a similar tick-borne illness, researchers report. The approach may one day lead to a reliable diagnostic test for Lyme, an illness that can be challenging to identify.

Using patient blood serum samples, the test accurately discerned early Lyme disease from the similar southern tick‒associated rash illness, or STARI, up to 98 times out of 100. When the comparison also included samples from healthy people, the method accurately identified early Lyme disease up to 85 times out of 100, beating a commonly used Lyme test’s rate of 44 of 100, researchers report online August 16 in Science Translational Medicine. The test relies on clues found in the rise and fall of the abundance of molecules that play a role in the body’s immune response.
“From a diagnostic perspective, this may be very helpful, eventually,” says Mark Soloski, an immunologist at Johns Hopkins Medicine who was not involved with the study. “That’s a really big deal,” he says, especially in areas such as the mid-Atlantic where Lyme and STARI overlap.

In the United States, Lyme disease is primarily caused by an infection with the bacterium Borrelia burgdorferi, which is spread by the bite of a black-legged tick. An estimated 300,000 cases of Lyme occur nationally each year. Patients usually develop a rash and fever, chills, fatigue and aches. Black-legged ticks live in the northeastern, mid-Atlantic and north-central United States, and the western black-legged tick resides along the Pacific coast.

An accurate diagnosis can be difficult early in the disease, says immunologist Paul Arnaboldi of New York Medical College in Valhalla, who was not involved in the study. Lyme disease is diagnosed based on the rash, symptoms and tick exposure. But other illnesses have similar symptoms, and the rash can be missed. A test for antibodies to the Lyme pathogen can aid diagnosis, but it works only after a patient has developed an immune response to the disease.

STARI, spread by the lone star tick, can begin with a rash and similar, though typically milder, symptoms. The pathogen responsible for STARI is still unknown, though B. burgdorferi has been ruled out. So far STARI has not been tied to arthritis or other chronic symptoms linked to Lyme, though the lone star tick has been connected to a serious allergy to red meat (SN: 8/19/17, p. 16). Parts of both ticks’ ranges overlap, adding to diagnosis difficulties.

John Belisle, a microbiologist at Colorado State University in Fort Collins, and his colleagues had previously shown that a testing method based on small molecules related to metabolism could distinguish between early Lyme disease and healthy serum samples. “Think of it as a fingerprint,” he says. The method takes note of differences in the abundance of metabolites, such as sugars, lipids and amino acids, involved in inflammation.
In the new work, Belisle and colleagues measured differences in the levels of metabolites in serum samples from Lyme and STARI patients. The researchers then developed a “fingerprint” based on 261 small molecules to differentiate between the two illnesses. To determine the accuracy, they tested another set of samples from patients with Lyme and STARI as well as those from healthy people. “We were able to distinguish all three groups,” says Belisle.
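The paper's statistical pipeline isn't detailed here, but the general shape of a metabolite-fingerprint classifier can be sketched roughly as follows. Everything in this snippet (the random stand-in data, the labels and the choice of logistic regression) is hypothetical, not the team's actual method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in data: rows are serum samples, columns are the
# abundances of 261 small-molecule metabolites (the number of features
# mentioned in the study); labels mark Lyme (1) versus STARI (0).
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(120, 261))
y = rng.integers(0, 2, size=120)

# A simple linear classifier evaluated with cross-validation; on random
# data like this, accuracy should hover near chance (about 0.5).
model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```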

As a diagnostic test, “I think the approach has promise,” says Arnaboldi. But more work will be necessary to see if the method can sort out early Lyme disease, STARI and other tick-borne diseases in patients with unknown illnesses.

Having information about the metabolites abundant in STARI may also help researchers learn more about this disease, says Soloski. “This is going to spur lots of future studies.”

Confusion lingers over health-related pros and cons of marijuana

No one knows whether chronic marijuana smoking causes emotional troubles or is a symptom of them…. This dearth of evidence has a number of explanations: serious lingering reactions, if they exist, occur after prolonged use, rarely after a single dose; marijuana has no known medical use, unlike LSD, so scientists have had little reason to study the drug…. Also, marijuana has been under strict legal sanctions … for more than 30 years. – Science News, October 7, 1967

In 29 states and in Washington, D.C., marijuana is now commonly prescribed for post-traumatic stress disorder and chronic pain. But the drug’s pros and cons remain hazy. Regular pot use has been linked to psychotic disorders and to alcohol and drug addiction (SN Online: 1/12/17). And two recent research reviews conclude that very little high-quality data exist on whether marijuana effectively treats PTSD or pain. Several large-scale trials are under way to assess how well cannabis treats these conditions.

Body clock mechanics wins U.S. trio the Nobel Prize in physiology or medicine

Discoveries about the clocklike ups and downs of daily life have won Jeffrey C. Hall, Michael Rosbash and Michael W. Young the Nobel Prize in physiology or medicine.

Circadian rhythms are daily cycles of hormones, gene activity and other biological processes that govern sleep, body temperature and metabolism. When these rhythms are thrown out of whack, there can be serious health consequences, including increased risk of diabetes, heart disease and Alzheimer’s disease.

Hall and Rosbash discovered the first molecular gear of the circadian clockworks: A protein called Period increases and decreases in abundance on a regular cycle during the day. Young discovered that another protein called Timeless works with Period to drive the clock. Young also discovered other components of the circadian clockworks.

The Arecibo Observatory will remain open, NSF says

The iconic Arecibo Observatory has survived a hurricane and dodged deep budget cuts. On November 16, the National Science Foundation, which funds the bulk of the observatory’s operating costs, announced that it would continue funding the radio telescope at a reduced level.

It’s not clear yet who will manage the observatory in the long run, or where the rest of the funding will come from. But scientists are celebrating.
Arecibo, a 305-meter-wide radio telescope located about 95 kilometers west of San Juan, is the second largest radio telescope in the world. It has been instrumental in tasks as diverse as monitoring near-Earth asteroids, watching for bright blasts of energy called fast radio bursts and searching for extraterrestrial intelligence.

But the NSF, which covers $8.3 million of the observatory’s nearly $12 million annual budget, has been trying to back away from that responsibility for several years. After Hurricane Maria hit Puerto Rico on September 20, damaging the telescope’s main antenna, the observatory’s future seemed unclear (SN: 9/29/17).

On November 16, the NSF released a statement announcing it would continue science operations at Arecibo “with reduced agency funding,” and would search for new collaborators to cover the rest.
“This plan will allow important research to continue while accommodating the agency’s budgetary constraints and its core mission to support cutting-edge science and education,” the statement says.

Hormone replacement makes sense for some menopausal women

Internist Gail Povar has many female patients making their way through menopause, some having a tougher time than others. Several women with similar stories stand out in her mind. Each came to Povar’s Silver Spring, Md., office within a year or two of stopping her period, complaining of frequent hot flashes and poor sleep at night. “They just felt exhausted all the time,” Povar says. “The joy had kind of gone out.”

And all of them “were just absolutely certain that they were not going to take hormone replacement,” she says. But the women had no risk factors that would rule out treating their symptoms with hormones. So Povar suggested the women try hormone therapy for a few months. “If you feel really better and it makes a big difference in your life, then you and I can decide how long we continue it,” Povar told them. “And if it doesn’t make any difference to you, stop it.”
At the follow-up appointments, all of these women reacted the same way, Povar recalls. “They walked in beaming, absolutely beaming, saying, ‘I can’t believe I didn’t do this a year ago. My life! I’ve got my life back.’ ”

That doesn’t mean, Povar says, that she’s pushing hormone replacement on patients. “But it should be on the table,” she says. “It should be part of the discussion.”

Hormone replacement therapy toppled off the table for many menopausal women and their doctors in 2002. That’s when a women’s health study, stopped early after a data review, published results linking a common hormone therapy to an increased risk of breast cancer, heart disease, stroke and blood clots. The trial, part of a multifaceted project called the Women’s Health Initiative, or WHI, was meant to examine hormone therapy’s effectiveness in lowering the risk of heart disease and other conditions in women ages 50 to 79. It wasn’t a study of hormone therapy for treating menopausal symptoms.

But that nuance got lost in the coverage of the study’s results, described at the time as a “bombshell,” a call to get off of hormone therapy right away. Women and doctors in the United States heeded the call. A 2012 study in Obstetrics & Gynecology showed that use plummeted: Oral hormone therapy, taken by an estimated 22 percent of U.S. women 40 and older in 1999–2000, was taken by fewer than 12 percent of women in 2003–2004. Six years later, the number of women using oral hormone therapy had sunk below 5 percent.
Specialists in women’s health say it’s time for the public and the medical profession to reconsider their views on hormone therapy. Research in the last five years, including a long-term follow-up of women in the WHI, has clarified the risks, benefits and ideal ages for hormone therapy. Medical organizations, including the Endocrine Society in 2015 and the North American Menopause Society in 2017, have released updated recommendations. The overall message is that hormone therapy offers more benefits than risks for the relief of menopausal symptoms in mostly healthy women of a specific age range: those who are under age 60 or within 10 years of stopping menstruation.

“A generation of women has missed out on effective treatment because of misinformation,” says JoAnn Pinkerton, executive director of the North American Menopause Society and a gynecologist who specializes in menopause at the University of Virginia Health System in Charlottesville. It’s time to move beyond 2002, she says, and have a conversation based on “what we know now.”

End of an era
Menopause, the final menstrual period, signals the end of fertility and is confirmed after a woman has gone 12 months without having a period. From then on she is postmenopausal. Women reach menopause around age 51, on average. In the four to eight years before, called perimenopause, the amount of estrogen in the body declines as ovarian function winds down. Women may have symptoms related to the lack of estrogen beginning in perimenopause and continuing after the final period.

Probably the best-known symptom is the hot flash, a sudden blast of heat, sweating and flushing in the face and upper chest. These temperature tantrums can occur at all hours. At night, hot flashes can produce drenching sweats and disrupt sleep.

Hot flashes arise because the temperature range in which the body normally feels comfortable narrows during the menopause transition, partly in response to the drop in estrogen. Normally, the body takes small changes in core body temperature in stride. But for menopausal women, the slightest rise in core temperature can trigger blood vessels to dilate, which increases blood flow and sweating.

About 75 to 80 percent of menopausal women experience hot flashes and night sweats, on and off, for anywhere from a couple of years to more than a decade. In a study in JAMA Internal Medicine in 2015, more than half of almost 1,500 women enrolled at ages 42 to 52 reported frequent hot flashes — occurring at least six days in the previous two weeks — with symptoms lasting more than seven years.

A sizable number of women have moderate or severe hot flashes, which spread throughout the body and can include profuse sweating, heart palpitations or anxiety. In a study of 255 menopausal women, moderate to severe hot flashes were most common, occurring in 46 percent of women, during the two years after participants’ last menstrual period. A third of all the women still experienced heightened hot flashes 10 years after menopause, researchers reported in 2014 in Menopause.

Besides hot flashes and night sweats, roughly 40 percent of menopausal women experience irritation and dryness of the vulva and vagina, which can make sexual intercourse painful. These symptoms tend to arise after the final period.

Alarm bells
In the 1980s and ’90s, researchers observed that women using hormone therapy for menopausal symptoms had a lower risk of heart disease, bone fractures and overall death. Some doctors began recommending the medication not just for symptom relief, but also for disease prevention.

Observational studies of the apparent health benefits of hormone therapy spurred a more stringent study, a randomized controlled trial, which tested the treatment’s impact by randomly assigning hormones to some volunteers and not others. The WHI hormone therapy trials assessed heart disease, breast cancer, stroke, blood clots, colorectal cancer, hip fractures and deaths from other causes in women who used the hormones versus those who took a placebo. Two commonly prescribed formulations were tested: a combined hormone therapy — estrogen sourced from horses plus synthetic progesterone — and estrogen alone. (Today, additional U.S. Food and Drug Administration–approved formulations are available.)
The 2002 WHI report in JAMA, which described early results of the combined hormone therapy, shocked the medical community. The study was halted prematurely because after about five years, women taking the hormones had a slightly higher risk of breast cancer and an overall poor risk-to-benefit ratio compared with women taking the placebo. While the women taking hormones had fewer hip fractures and colorectal cancers, they had more breast cancers, heart disease, blood clots and strokes. The findings were reported in terms of the relative risk, the ratio of how often a disease happened in one group versus another. News of a 26 percent increase in breast cancers and a 41 percent increase in strokes caused confusion and alarm.

Women dropped the hormones in droves. From 2001 to 2009, the use of all hormone therapy among menopausal women, as reported by physicians based on U.S. office visits, fell 52 percent, according to a 2011 study in Menopause.

But, researchers say, the message that hormone therapy was bad for all was unwarranted. “The goal of the WHI was to evaluate the balance of benefits and risks of menopausal hormone therapy when used for prevention of chronic disease,” says JoAnn Manson, a physician epidemiologist at Harvard-affiliated Brigham and Women’s Hospital in Boston and one of the lead investigators of the WHI. “It was not intended to evaluate its role in managing menopausal symptoms.”

Along with the focus on prevention, the WHI hormone therapy trials were largely studies of older women — in their 60s and 70s. Only around one-third of participants started the trial between ages 50 and 59, the age group more likely to be in need of symptom relief. Hormone therapy “was always primarily a product to use in women entering menopause,” says Howard Hodis, a physician scientist who focuses on preventive medicine at the University of Southern California’s Keck School of Medicine in Los Angeles. “The observational studies were based on these women.”

Also lost in the coverage of the 2002 study results was the absolute risk, the actual difference in the number of cases of disease between two groups. The group on combined hormone therapy had eight more cases of breast cancer per 10,000 women per year than the group taking a placebo. Hodis notes that that absolute risk translates to less than one extra case for every 1,000 women, which is classified as a rare risk by the Council for International Organizations of Medical Sciences, a World Health Organization group. There was also less than one additional case for every 1,000 women per year for heart disease and for stroke in the hormone-treated women compared with those on placebo.
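The gap between the two ways of framing risk can be worked out from the numbers in this article alone; the baseline rate below is simply back-calculated from them for illustration.

```python
# A 26 percent relative increase in breast cancer corresponded to about
# 8 extra cases per 10,000 women per year (figures from the article).
relative_increase = 0.26
extra_cases_per_10k = 8

baseline_per_10k = extra_cases_per_10k / relative_increase
treated_per_10k = baseline_per_10k + extra_cases_per_10k

print(f"implied baseline: ~{baseline_per_10k:.0f} cases per 10,000 women per year")
print(f"with hormones:    ~{treated_per_10k:.0f} cases per 10,000 women per year")
print(f"absolute excess:  {extra_cases_per_10k / 10:.1f} extra cases per 1,000 women per year")
```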

In 2004, researchers published results of the WHI study of estrogen-only therapy, taken for about seven years by women who had had their uteruses surgically removed. (Progesterone is added to hormone therapy to protect the uterus lining from a risk of cancer seen with estrogen alone.) The trial, also stopped early, reported a decreased risk of hip fractures and breast cancer, but an increased risk of stroke. The study didn’t change the narrative that hormone therapy wasn’t safe.

Timing is everything
Since the turn away from hormone therapy, follow-up studies have brought nuance not initially captured by the first two reports. Researchers were finally able to tease out the results that applied to “the young women — and I love saying this — young women 50 to 59 who are most apt to present with symptoms of menopause,” says Cynthia Stuenkel, an internist and endocrinologist at the University of California, San Diego School of Medicine in La Jolla.

In 2013, Manson and colleagues reported data from the WHI grouped by age. It turned out that absolute risks were smaller for 50- to 59-year-olds than they were for older women, especially those 70 to 79 years old, for both combined therapy and estrogen alone. For example, in the combined hormone therapy trial, treated 50- to 59-year-olds had five additional cases of heart disease and five more strokes per 10,000 women annually compared with the same-aged group on placebo. But the treated 70- to 79-year-olds had 19 more heart disease cases and 13 more strokes per 10,000 women annually than women of the same age taking a placebo. “So a lot more of these events that were of concern were in the older women,” Stuenkel says.

A Danish study of about 1,000 recently postmenopausal women, ages 45 to 58, reported in 2012, also supported the idea that the timing of hormone treatment matters. The randomized controlled trial examined different formulations of estrogen (17β-estradiol) and progesterone from those used in the WHI. The researchers reported in BMJ that after 10 years, women taking hormone therapy — combined or estrogen alone — had a reduced risk of mortality, heart failure or heart attacks, and no added risk of cancer, stroke or blood clots compared with those not treated.

These findings provide evidence for the timing hypothesis, also supported by animal studies, as an explanation for the results seen in younger women, especially in terms of heart disease and stroke. In healthy blood vessels, more common in younger women, estrogen can slow the development of artery-clogging plaques. But in vessels that already have plaque buildup, more likely in older women, estrogen may cause the plaques to rupture and block an artery, Manson explains.

Recently, Manson and colleagues published a long-term study of the risk of death in women in the two WHI hormone therapy trials — combined therapy and estrogen alone — from the time of trial enrollment in the mid-1990s until the end of 2014. Use of either hormone therapy was not associated with an added risk of death during the study or follow-up periods due to any cause or, specifically, death from heart disease or cancer, the researchers reported in JAMA in September 2017. The study provides reassurance that taking hormone therapy, at least for five to seven years, “does not show any mortality concern,” Stuenkel says.

Both the Endocrine Society and the North American Menopause Society state that, for symptom relief, the benefits of FDA-approved hormone therapy outweigh the risks in women younger than 60 or within 10 years of their last period, absent health issues such as a high risk of breast cancer or heart disease. The menopause society position statement adds that there are also benefits for women at high risk of bone loss or fracture.

Today, the message about hormone therapy is “not everybody needs it, but if you’re a candidate, let’s talk about the pros and cons, and let’s do it in a science-based way,” Pinkerton says.

Hormone therapy is the most effective treatment for hot flashes, night sweats and genital symptoms, she says. A review of randomized controlled trials, published in 2004, reported that hormone therapy decreased the frequency of hot flashes by 75 percent and reduced their severity as well.

More than 50 million U.S. women will be older than 51 by 2020, Manson says. Yet today, many women have a hard time finding a physician who is comfortable prescribing hormone therapy or even just managing a patient’s menopausal symptoms, she says.

Stuenkel, who says many younger doctors stopped learning about hormone therapy after 2002, is trying to play catch up. When she teaches medical students and doctors about treating menopausal symptoms, she brings up three questions to ask patients. First, how bothersome are the symptoms? Some women say “fix it, get me through the day and the night, put me back in order,” Stuenkel says. Other women’s symptoms are not as disruptive. Second, what does the patient want? Third, what is safe for this particular woman, based on her health? If a woman’s health history doesn’t support the use of hormone therapy, or she just isn’t interested, there are nonhormonal options, such as certain antidepressants, and also nondrug lifestyle approaches.

Menopause looms large for many women, Povar says, and discussing a patient’s expectations as well as whether hormone therapy is the right approach becomes a unique discussion with each patient, she says. “This is one of the most individual decisions a woman makes.”

When it’s playtime, many kids prefer reality over fantasy

Young children travel to fantasy worlds every day, packing just imaginations and a toy or two.

Some preschoolers scurry across ocean floors carrying toy versions of cartoon character SpongeBob SquarePants. Other kids trek to distant universes with miniature replicas of Star Wars robots R2-D2 and C-3PO. Throngs of youngsters fly on broomsticks and cast magic spells with Harry Potter and his Hogwarts buddies. The list of improbable adventures goes on and on.

Parents today take for granted that kids need toys to fuel what comes naturally — outlandish bursts of make-believe. Kids’ flights of fantasy are presumed to soar before school and life’s other demands yank the youngsters down to Earth.
Yet some researchers call childhood fantasy play — which revolves around invented characters and settings with no or little relationship to kids’ daily lives — highly overrated. From at least the age when they start talking, little ones crave opportunities to assist parents at practical tasks and otherwise learn how to be productive members of their cultures, these investigators argue.

New findings support the view that children are geared more toward helping than fantasizing. Preschoolers would rather perform real activities, such as cutting vegetables or feeding a baby, than pretend to do those same things, scientists say. Even in the fantastical realm of children’s fiction books, reality may have an important place. Young U.S. readers show signs of learning better from human characters than from those ever-present talking pigs and bears.
Studies of children in traditional societies illustrate the dominance of reality-based play outside modern Western cultures. Kids raised in hunter-gatherer communities, farming villages and herding groups rarely play fantasy games. Children typically play with real tools, or small replicas of tools, in what amounts to practice for adult work. Playgroups supervised by older children enact make-believe versions of what adults do, such as sharing hunting spoils.
These activities come much closer to the nature of play in ancient human groups than do childhood fantasies fueled by mass-produced toys, videos and movies, researchers think.
Handing over household implements to toddlers and preschoolers and letting them play at working, or allowing them to lend a hand on daily tasks, generates little traction among Western parents, says psychologist Angeline Lillard of the University of Virginia in Charlottesville. Many adults, leaning heavily on adult-supervised playdates, assume preschoolers and younger kids need to be protected from themselves. Lillard suspects that preschoolers, whose early helping impulses get rebuffed by anxious parents, often rebel when told to start doing household chores a few years later.

“Kids like to do real things because they want a role in the real world,” Lillard says. “Our society has gone overboard in stressing the importance of pretense and fantasy for young children.”

Keep it real
Lillard suspects most preschoolers agree with her.

More than 40 years of research fails to support the widespread view that playing pretend games generates special social or mental benefits for young children, Lillard and colleagues wrote in a 2013 review in Psychological Bulletin. Studies that track children into their teens and beyond are sorely needed to establish any beneficial effects of pretending to be other people or acting out imaginary situations, the researchers concluded.

Even the assumption that kids naturally gravitate toward make-believe worlds may be unrealistic. When given a choice, 3- to 6-year-olds growing up in the United States — one of many countries saturated with superhero movies, video games and otherworldly action figures — preferred performing real activities over pretending to do them, Lillard and colleagues reported online June 20 in Developmental Science.
One hundred youngsters, most of them white and middle class, were tested either in a children’s museum, a preschool or a university laboratory. An experimenter showed each child nine pairs of photographs. Each photo in a pair featured a boy or a girl, to match the sex of the youngster being tested. One photo showed a child in action. Depicted behaviors included cutting vegetables with a knife, talking on a telephone and bottle-feeding a baby. In the second photo, a different child pretended to do what the first child did for real.

When asked by the experimenter whether they would rather, say, cut real vegetables with a knife like the first child or pretend to do so like the second child, preschoolers chose the real activity almost two-thirds of the time. Among the preschoolers, hard-core realists outnumbered fans of make-believe, the researchers found. Whereas 16 kids always chose real activities, only three wanted to pretend on every trial. Just as strikingly, 48 children (including seven of the 26 3-year-olds) chose at least seven real activities of the nine depicted. Only 14 kids (mostly the younger ones) selected at least seven pretend activities.

Kids often said they liked real activities for practical reasons, such as wanting to learn how to feed babies to help mom. Hands-on activities also got endorsed for being especially fun or novel. “I’ve never talked on the real phone,” one child explained. Reasons for choosing pretend activities centered on being afraid of the real activity or liking to pretend.

In a preliminary follow-up study directed by Lillard, 16 girls and boys, ages 3 to 6, chose between playing with 10 real objects, such as a microscope, or toy versions of the same objects. During 10-minute play periods, kids spent an average of about twice as much time with real items. That preference for real things increased with age. Three-year-olds spent nearly equal time playing with genuine and pretend items, but the older children strongly preferred the real deal.

Lillard’s findings illustrate that kids want and need real experiences, says psychologist Thalia Goldstein of George Mason University in Fairfax, Va. “Modern definitions of childhood have swung too far toward thinking that young children should live in a world of fantasy and magic,” she maintains.

But pretend play, including fantasy games, still has value in fostering youngsters’ social and emotional growth, Goldstein and Matthew Lerner of Stony Brook University in New York reported online September 15 in Developmental Science. After participating in 24 play sessions, 4- and 5-year-olds from poor families were tested on empathy and other social skills. Those who played dramatic pretend games (being a superhero, animal or chef, for instance) were less likely than kids who played with blocks or read stories to become visibly upset upon seeing an experimenter who the kids believed had hurt a knee or finger, the researchers found. Playing pretend games enabled kids to rein in distress at seeing the experimenter in pain, the researchers proposed.

It’s not known whether fantasy- and reality-based games shape kids’ social skills in different ways over the long haul, Goldstein says.

True fiction
Even on the printed page, where youngsters gawk at Maurice Sendak’s goggle-eyed Wild Things and Dr. Seuss’ mustachioed Lorax, the real world exerts a special pull.

Consider 4- to 6-year-olds who were read either a storybook about a little raccoon that learns to share with other animals or the same storybook with illustrations of human characters learning to share. Both versions told of how characters felt better after giving some of what they had to others. A third set of kids heard an illustrated storybook about seeds that had nothing to do with sharing. Each group consisted of 32 children.

Only kids who heard the realistic story displayed a general willingness to act on its message, reported a team led by psychologist Patricia Ganea of the University of Toronto in a paper published online August 2 in Developmental Science. On a test of children’s willingness to share any of 10 stickers with a child described as unable to participate in the experiment, listeners to the tale with human characters forked over an average of nearly three stickers, about one more than the kids had donated before the experiment.

Children who heard stories with animal characters became less giving, sharing an average of 1.7 stickers after having originally donated an average of 2.3 stickers. Sticker sharing declined similarly among kids who heard the seed story. These results fit with several previous studies showing that preschoolers more easily apply knowledge learned from realistic stories to the real world, as opposed to information encountered in fantasy stories.

Even for fiction stories that are highly unrealistic, youngsters generally favor realistic endings, say Boston University psychologist Melissa Kibbe and colleagues. In a study from the team published online June 15 in Psychology of Aesthetics, Creativity and the Arts, an experimenter read 90 children, ages 4 to 6, one of three illustrated versions of a story. In the tale, a child gets lost on the way to a school bus. A realistic version was set in a present-day city. A futuristic science fiction version was set on the moon. A fantasy version occurred in medieval times and included magical characters. Stories ended with descriptions and illustrations of a child finally locating either a typical school bus, a futuristic school bus with rockets on its sides or a magical coach with dragon wings.
When given the chance, 40 percent of kids inserted a typical school bus into the ending for the science fiction story and nearly 70 percent did so for the fantasy tale. “Children have a bias toward reality when completing stories,” Kibbe says.
Hands on
Outside Western cultures, children’s bias toward reality takes an extreme turn, especially during play.

Nothing keeps it real like a child merrily swinging around a sharp knife as adults go about their business. That’s cause for alarm in Western households. But in many foraging communities, children play with knives and even machetes with their parents’ blessing, says anthropologist David Lancy of Utah State University in Logan.

Lancy describes reported instances of youngsters from hunter-gatherer groups playing with knives in his 2017 book Raising Children. Among Maniq foragers inhabiting southern Thailand’s forests, for instance, one researcher observed a father looking on approvingly as his baby crawled along holding a knife about as long as a dollar bill. The same investigator observed a 4-year-old Maniq girl sitting by herself cutting pieces of vegetation with a machete.

In East Africa, a Hadza infant can grab a knife and suck on it undisturbed, at least until an adult needs to use the tool. On Vanatinai Island in the South Pacific, children freely experiment with knives and pieces of burning wood from campfires.

Yes, accidents happen. That doesn’t mean hunter-gatherer parents are uncaring or indifferent toward their children, Lancy says. In these egalitarian societies, where sharing food and other resources is the norm, parents believe it’s wrong to impose one’s will on anyone, including children. Hunter-gatherer adults assume that a child learns best through hands-on, sometimes risky, exploration on his or her own and in groups with other kids. In that way, the adults’ thinking goes, youngsters develop resourcefulness, creativity and determination. Self-inflicted cuts and burns represent learning opportunities.

In many societies, adults make miniature tools for children to play with or give kids cast-off tools to use as toys. For instance, Inuit boys have been observed mimicking seal hunts with items supplied by parents, such as pieces of sealskin and miniature harpoons. Girls in Ecuador’s Conambo tribe mold clay balls provided by their mothers into various shapes as a first step toward becoming potters.
Childhood games and toys in foraging groups and farming villages, as in Western nations, reflect cultural values. Hunter-gatherer kids rarely engage in rough-and-tumble or competitive games. In fact, competition is discouraged. These kids concoct games with no winners, such as throwing a weighted feather in the air and flicking the feather back up as it descends. Children in many farming villages and herding societies play basic forms of marbles, in which each player shoots a hard object at similar objects to knock the targets out of a defined area. The rules change constantly as players decide among themselves what counts and what doesn’t.

Children in traditional societies don’t invent fantasy characters to play with, Lancy says. Consider imaginative play among children of Aka foragers in the Central African Republic. These kids may pretend to be forest animals, but the animals are creatures from the children’s surroundings, such as antelope. The children aim to take the animals’ perspective to determine what route to follow while exploring, says anthropologist Adam Boyette of Duke University. Aka youngsters sometimes pretend to be spirits that adults have told the kids about. In this way, kids become familiar with community beliefs and rituals.
Aka childhood activities are geared toward adult work, Boyette says. Girls start foraging for food within the first few years of life. Boys take many years to master dangerous tasks, such as climbing trees to raid honey from bees’ nests (SN: 8/20/16, p. 10). By around age 7, boys start to play hunting games and graduate to real hunts as teenagers.

In 33 hunter-gatherer societies around the world, parents typically take 1- to 2-year-olds on foraging expeditions and give the youngsters toy versions of tools to manipulate, reported psychologist Sheina Lew-Levy of the University of Cambridge and her colleagues in the December Human Nature. Groups of children at a range of ages play make-believe versions of what adults do and get in some actual practice at tasks such as toolmaking. Youngsters generally become proficient food collectors and novice toolmakers between ages 8 and 12, the researchers conclude. Adults, but not necessarily parents, begin teaching hunting and complex toolmaking skills to teens. For the report, Lew-Levy’s group reviewed 58 papers on childhood learning among hunter-gatherers, most published since 2000.

“There’s a blurred line between work and play in foraging societies because children are constantly rehearsing for adult roles by playing,” Boyette says.

Children in Western societies can profitably mix fantasy with playful rehearsals for adult tasks, observes George Mason’s Goldstein, who was a professional stage actor before opting for steadier academic work. “My 5-year-old son is never happier than when he’s helping to check us out at the grocery store,” she says. “But he also likes to pretend to be a robot, and sometimes a robot who checks us out at the grocery store.”

Not too far in the future, preschoolers pretending to be robots may encounter real robots running grocery-store checkouts. Playtime will never be the same.

310-million-year-old fossil blobs might not be jellyfish after all

What do you get when you flip a fossilized “jellyfish” upside down? The answer, it turns out, might be an anemone.

Fossil blobs once thought to be ancient jellyfish were actually a type of burrowing sea anemone, scientists propose March 8 in Papers in Palaeontology.

From a certain angle, the fossils’ features include what appears to be a smooth bell shape, perhaps with tentacles hanging beneath — like a jellyfish. And for more than 50 years, that’s what many scientists thought the animals were.
But for paleontologist Roy Plotnick, something about the fossils’ supposed identity seemed fishy. “It’s always kind of bothered me,” says Plotnick, of the University of Illinois Chicago. Previous scientists had interpreted one fossil feature as a curtain that hung around the jellies’ tentacles. But that didn’t make much sense, Plotnick says. “No jellyfish has that,” he says. “How would it swim?”

One day, looking over specimens at the Field Museum in Chicago, something in Plotnick’s mind clicked. What if the bell belonged on the bottom, not the top? He turned to a colleague and said, “I think this is an anemone.”

Rotated 180 degrees, Plotnick realized, the fossils’ shape — which looks kind of like an elongated pineapple with a stumpy crown — resembles some modern anemones. “It was one of those aha moments,” he says. The “jellyfish” bell might be the anemone’s lower body. And the purported tentacles? Perhaps the anemone’s upper section, a tough, textured barrel protruding from the seafloor.

Plotnick and his colleagues examined thousands of the fossilized animals, dubbed Essexella asherae, unearthing more clues. Bands running through the fossils match the shape of some modern anemones’ musculature. And some specimens’ pointy protrusions resemble an anemone’s contracted tentacles.
“It’s totally possible that these are anemones,” says Estefanía Rodríguez, an anemone expert at the American Museum of Natural History in New York City who was not involved with the work. The shape of the fossils, the comparison with modern-day anemones — it all lines up, she says, though it’s not easy to know for sure.

Paleontologist Thomas Clements agrees. Specimens like Essexella “are some of the most notoriously difficult fossils to identify,” he says. “Jellyfish and anemones are like bags of water. There’s hardly any tissue to them,” meaning there’s little left to fossilize.
Still, it’s plausible that the blobs are indeed fossilized anemones, says Clements, of Friedrich-Alexander-Universität Erlangen-Nürnberg in Germany. He was not part of the new study but has spent several field seasons at Mazon Creek, the Illinois site where Essexella lived some 310 million years ago. Back then, the area was near the shoreline, Clements says, with nearby rivers dumping sediment into the environment – just the kind of place ancient burrowing anemones may have once called home.