Centauri Dreams
Imagining and Planning Interstellar Exploration
Inflatable Technologies for Deep Space

One idea for deep space probes that resurfaces every few years is the inflatable sail. We’ve seen keen interest, especially since Breakthrough Starshot’s emergence in 2016, in meter-class sails, perhaps as small as four meters on a side. But if we choose to work with larger designs, sails don’t scale well. Increase sail area and problems of mass arise thanks to the necessary cables between sail and payload. An inflatable beryllium sail filled with a low-pressure gas like hydrogen avoids this problem, with the payload mounted on the space-facing surface. Such sails have already been analyzed in the literature (see below).

Roman Kezerashvili (City University of New York), in fact, recently analyzed an inflatable torus-shaped sail with a twist, one that uses compounds incorporated into the sail material itself as a ‘propulsive shell’ that can take advantage of desorption produced by a microwave beam or a close pass of the Sun. Laser beaming also produces this propulsive effect but microwaves are preferable because they do not damage the sail material. Kezerashvili has analyzed carbon in the sail lattice in solar flyby scenarios. The sail is conceived as “a thin reflective membrane attached to an inflatable torus-shaped rim.”

Image: This is Figure 1 from the paper. Credit: Kezerashvili et al.
Inflatable sails go back to the late 1980s. Joerg Strobl published a paper on the concept in the Journal of the British Interplanetary Society, and it received swift follow-up in a series of studies in the 1990s examining an inflatable radio telescope called Quasat. A series of meetings involving Alenia Spazio, an Italian aerospace company based in Turin, took the idea further. In 2018, Claudio Maccone, Greg Matloff, and NASA’s Les Johnson joined Kezerashvili in analyzing inflatable technologies for missions as challenging as a probe to the Oort Cloud.
Indeed, working with Joseph Meany in 2023, Matloff would also describe an inflatable sail using aerographite and graphene, enabling higher payload mass and envisioning a ‘sundiver’ trajectory to accelerate an Alpha Centauri mission. The conception here is for a true interstellar ark carrying a crew of several hundred, using a sail with a radius of 764 kilometers on a 1,300-year journey. So the examination of inflatable sails in varying materials is clearly not slowing down.
The Inflatable Starshade
But there are uses for inflatable space structures that go beyond outer system missions. I see that they are now the subject of a NIAC Phase I study by John Mather (NASA GSFC) that puts a new wrinkle into the concept. Mather’s interest is not propulsion but an inflatable starshade that, in various configurations, could work with either the planned Habitable Worlds Observatory or an Extremely Large Telescope like the 39-meter European Extremely Large Telescope now being built in Chile. Starshades are an alternative to coronagraphs; both technologies suppress light from a star to reveal its planetary companions.
Inflatables may well be candidates for any kind of large space structure. Current planning on the Habitable Worlds Observatory, scheduled for launch no earlier than the late 2030s, includes a coronagraph, but Mather thinks the two technologies offer useful synergies. Here’s a quote from the study summary:
A starshade mission could still become necessary if: A. The HWO and its coronagraph cannot be built and tested as required; B. The HWO must observe exoplanets at UV wavelengths, or a 6 m HWO is not large enough to observe the desired targets; C. HWO does not achieve adequate performance after launch, and planned servicing and instrument replacement cannot be implemented; D. HWO observations show us that interesting exoplanets are rare, distant, or are hidden by thick dust clouds around the host star, or cannot be fully characterized by an upgraded HWO; or E. HWO observations show that the next step requires UV data, or a much larger telescope, beyond the capability of conceivable HWO coronagraph upgrades.
So Mather’s idea is also meant to be a kind of insurance policy. It’s worth pointing out that coronagraphs are well studied and compact, while starshade technologies are theoretically sound but untested in space. But as the summary mentions, work at ultraviolet wavelengths is out of the coronagraph’s range. To get into that part of the spectrum, pairing Habitable Worlds Observatory with a 35-meter starshade seems the only option. This conceivably would allow a relaxation of some HWO optical specs, and thus lower the overall cost. The NIAC study will explore these options for a 35-meter as well as a 60-meter starshade.
I mentioned the possibility of combining a starshade with observations through an Extremely Large Telescope, an eye-widening notion that Mather proposed in a 2022 NIAC Phase I study. The idea here is to place the starshade in an orbit that would match position and velocity with the telescope, occulting the star to render the planets more visible. This would demand an active propulsion system to maintain alignment during the observation, while also making use of the adaptive optics already built into the telescope to suppress atmospheric distortion. The mission is called Hybrid Observatory for Earth-like Exoplanets.
Image: Artist’s concept highlighting the novel approach proposed by the 2025 NIAC Phase I selection, the Inflatable Starshade for Earthlike Exoplanets concept. Credit: NASA/John Mather.
As discussed earlier, mass considerations play into inflatable designs. In the HOEE study, Mather referred to his plan “to cut the starshade mass by more than a factor of 10. There is no reason to require thousands of kg to support 400 kg of thin membranes.” His design goal is to create a starshade that can be assembled in space, thus avoiding launch and deployment issues. All this to create what he calls “the most powerful exoplanet observatory yet proposed.”
You can see how the inflatable starshade idea grows out of the hybrid observatory study. By experimenting with designs that meet the needed strength, stiffness, stability and thermal requirements, and by working through the issues raised by bonding large sheets of material of the requisite strength, the mass goals may be realized. So the inflatable sail option once again morphs into a related design, one optimized as an adjunct to an exoplanet observatory, provided this early work can lead to a solution that could benefit both astronomy and sails.
The paper on inflatable beryllium sails is Matloff, G. L., Kezerashvili, R. Ya., Maccone, C. and Johnson, L., “The Beryllium Hollow-Body Solar Sail: Exploration of the Sun’s Gravitational Focus and the Inner Oort Cloud,” arXiv:0809.3535v1 [physics.space-ph] 20 Sep 2008. The later Kezerashvili paper is Kezerashvili et al., “A torus-shaped solar sail accelerated via thermal desorption of coating,” Advances in Space Research Vol. 67, Issue 9 (2021), pp. 2577-2588 (abstract). The Matloff and Meany paper on an Alpha Centauri interstellar ark is “Aerographite: A Candidate Material for Interstellar Photon Sailing,” JBIS Vol. 77 (2024), pp. 142-146 (abstract). Thanks to Thomas Mazanec and Antonio Tavani for the pointer to Mather’s most recent NIAC study.
Odd Couple: Gas Giants and Red Dwarfs
The assumption that gas giant planets are unlikely around red dwarf stars is reasonable enough. A star perhaps 20 percent the mass of the Sun should have a smaller protoplanetary disk, meaning the gas and dust needed to build a Jupiter-class world are lacking. The core accretion model (a gradual accumulation of material from the disk) is thus severely challenged. Moreover, these small stars are active in their extended youth, sending out frequent flares and strong stellar winds that should dissipate such a disk quickly. Gravitational instability within the disk is one possible alternative formation route.
Planet formation around such a star must be swift indeed, which accounts for estimates as low as 1 percent of such stars having a gas giant in the system. Exceptions like GJ 3512 b, discovered in 2019, do occur, and each is valuable. Here we have a giant planet, discovered through radial velocity methods, orbiting a star a scant 12 percent of the Sun’s mass. Or consider the star GJ 876, which has two gas giants, or the exoplanet TOI-5205 b, a transiting gas giant discovered in 2023. Such systems leave us hoping for more examples to begin to understand the planet forming process in such a difficult environment.
Let me drop back briefly to a provocative study from 2020 by way of putting all this in context before we look at another such system that has just been discovered. In this earlier work, the data were gathered at the Atacama Large Millimeter/submillimeter Array (ALMA), taken at a wavelength of 0.87 millimeters. The team led by Nicolas Kurtovic (Max Planck Institute for Astronomy, Heidelberg) found evidence of ring-like structures in protoplanetary disks that extend between 50 and 90 AU out.
Image: This is a portion of Figure 2 from the paper, which I’m including because I doubt most of us have seen images of a red dwarf planetary disk. Caption: Visibility modeling versus observations of our sample. From left to right: (1) Real part of the visibilities after centering and deprojecting the data versus the best fit model of the continuum data, (2) continuum emission of our sources where the scale bar represents a 10 au distance, (3) model image, (4) residual map (observations minus model), (5) and normalized, azimuthally averaged radial profile calculated from the beam-convolved images in comparison with the model without convolution (purple solid line) and after convolution (red solid line). In the rightmost plots, the gray scale shows the beam major axis. Credit: Kurtovic et al.
Gaps in these rings, possibly caused by planetary embryos, would accommodate planets of the Saturn class, and the researchers find that gaps formed around three of the M-dwarfs in the study. The team suggests that ‘gas pressure bumps’ develop that slow the inward drift of disk material, allowing such giant worlds to coalesce. It’s an interesting possibility, but we’re still early in the process of untangling how this works. For more, see How Common Are Giant Planets around Red Dwarfs?, a 2021 entry in these pages.
Now we learn of TOI-6894 b, a transiting gas giant found as part of Edward Bryant’s search for such worlds at the University of Warwick and the University of Liège. An international team of astronomers confirmed the find using telescopes at the SPECULOOS and TRAPPIST projects. The work appears in Nature Astronomy (citation below). Here’s Bryant on the scope of the search for giant M-dwarf planets:
“I originally searched through TESS observations of more than 91,000 low-mass red-dwarf stars looking for giant planets. Then, using observations taken with one of the world’s largest telescopes, ESO’s VLT, I discovered TOI-6894 b, a giant planet transiting the lowest mass star known to date to host such a planet. We did not expect planets like TOI-6894b to be able to form around stars this low-mass. This discovery will be a cornerstone for understanding the extremes of giant planet formation.”
TOI-6894 b has a radius only a little larger than Saturn, although it has only about half of Saturn’s mass. What adds spice to this particular find is that the host star is the lowest mass star found to have a transiting giant planet. In fact, TOI-6894 is only 60 percent the size of the next smallest red dwarf with a transiting gas giant. Given that 80 percent of stars in the Milky Way are red dwarfs, determining an accurate percentage of red dwarf gas giants is significant for assessing the total number in the galaxy.
Image: Artwork depicting the exoplanet TOI-6894 b around a red dwarf star. This planet is unusual because, given the size/mass of the planet relative to the very low mass of the star, this planet should not have been able to form. The planet is very large compared to its parent star, shown here to scale. With the known temperature of the star, the planet is expected to be only approximately 420 Kelvin at the top of its atmosphere. This means its atmosphere may contain methane and ammonia, amongst other species. This would make this planet one of the first planets outside the Solar System where we can observe nitrogen, which alongside carbon and oxygen is a key building block for life. Credit: University of Warwick / Mark Garlick.
TOI-6894 b produces deep transits and sports temperatures in the range of 420 K, according to the study. Clearly this world is not in the ‘hot Jupiter’ category. Amaury Triaud (University of Birmingham) is a co-author on this paper:
“Based on the stellar irradiation of TOI-6894 b, we expect the atmosphere is dominated by methane chemistry, which is exceedingly rare to identify. Temperatures are low enough that atmospheric observations could even show us ammonia, which would be the first time it is found in an exoplanet atmosphere. TOI-6894 b likely presents a benchmark exoplanet for the study of methane-dominated atmospheres and the best ‘laboratory’ to study a planetary atmosphere containing carbon, nitrogen, and oxygen outside the Solar System.”
Thus it’s good to know that JWST observations targeting the atmosphere of this world are already on the calendar within the next 12 months. Rare worlds that can serve as benchmarks for hitherto unexplained processes are pure gold for our investigation of where and how giant planets form.
The paper is Bryant et al., “A transiting giant planet in orbit around a 0.2-solar-mass host star,” Nature Astronomy (2025). Full text. The Kurtovic study is Kurtovic, Pinilla, et al., “Size and Structures of Disks around Very Low Mass Stars in the Taurus Star-Forming Region,” Astronomy & Astrophysics, 645, A139 (2021). Full text.
Expansion of the Universe: An End to the ‘Hubble Tension’?
When one set of data fails to agree with another over the same phenomenon, things can get interesting. It’s in such inconsistencies that interesting new discoveries are sometimes made, and when the inconsistency involves the expansion of the universe, there are plenty of reasons to resolve the problem. Lately the speed of the expansion has been at issue given the discrepancy between measurements of the cosmic microwave background and estimates based on Type Ia supernovae. The result: The so-called Hubble Tension.
It’s worth recalling that it was a century ago that Edwin Hubble measured extragalactic distances by using Cepheid variables in the galaxy NGC 6822. The measurements were necessarily rough because they were complicated by everything from interstellar dust effects to lack of the necessary resolution, so that the Hubble constant was not known to better than a factor of 2. Refinements in instruments tightened up the constant considerably as work progressed over the decades, but the question of how well astronomers had overcome the conflict with the microwave background results remained.
Now we have new work that looks at the rate of expansion using data from the James Webb Space Telescope, doubling the sample of galaxies used to calibrate the supernova results. The paper’s lead author, Wendy Freedman of the University of Chicago, argues that the JWST data resolve the tension. With Hubble data included in the analysis as well, Freedman calculates a Hubble constant of 70.4 kilometers per second per megaparsec, plus or minus 3%, a result in statistical agreement with recent cosmic microwave background measurements of 67.4 km/s/Mpc, plus or minus 0.7%.
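To see what ‘statistical agreement’ means in numbers, here is a minimal Python check (my arithmetic, not the paper’s) comparing the two values with their quoted uncertainties added in quadrature:

```python
import math

# H0 values and percentage uncertainties as quoted in the text above.
h0_jwst, err_jwst_pct = 70.4, 3.0   # km/s/Mpc, Freedman's JWST+Hubble result
h0_cmb, err_cmb_pct = 67.4, 0.7     # km/s/Mpc, cosmic microwave background

sigma_jwst = h0_jwst * err_jwst_pct / 100   # ~2.1 km/s/Mpc
sigma_cmb = h0_cmb * err_cmb_pct / 100      # ~0.5 km/s/Mpc

# Discrepancy in units of the combined standard deviation.
tension = abs(h0_jwst - h0_cmb) / math.hypot(sigma_jwst, sigma_cmb)
print(f"tension ~ {tension:.1f} sigma")     # ~1.4 sigma
```

Anything under roughly two sigma is conventionally read as consistency, which is the sense in which the new result dissolves the tension.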
Image: The University of Chicago’s Freedman, a key player in the ongoing debate over the value of the Hubble Constant. Credit: University of Chicago.
While the cosmic microwave background tells us about conditions early in the universe’s expansion, Freedman’s work on supernovae is aimed at pinning down how fast the universe is expanding in the present, which demands accurate measurements of extragalactic distances. Knowing the intrinsic peak brightness of these supernovae allows their apparent brightness to yield a distance. Type Ia supernovae are consistent in brightness at their peak, making them, like the Cepheid variables Hubble used, helpful ‘standard candles.’
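The standard-candle logic reduces to the distance modulus, m − M = 5 log10(d / 10 pc). A small sketch, in which the apparent magnitude is chosen purely for illustration (Type Ia supernovae peak near absolute magnitude M ≈ −19.3, a standard textbook value):

```python
def distance_parsecs(m_apparent: float, m_absolute: float) -> float:
    """Invert the distance modulus m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((m_apparent - m_absolute + 5) / 5)

# A Type Ia supernova peaks near M ~ -19.3. If one is observed to reach
# apparent magnitude 19.0 at maximum light (an illustrative number):
d = distance_parsecs(19.0, -19.3)
print(f"distance ~ {d / 1e6:.0f} Mpc")   # ~460 Mpc
```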
The same factors that plagued Hubble, such as the effect of dimming from interstellar dust and other factors that affect luminosity, have to be accounted for, but JWST has four times the resolution of the Hubble Space Telescope, and is roughly 10 times as sensitive, making its measurements a new gold standard. Co-author Taylor Hoyt (Lawrence Berkeley Laboratory) notes the result:
“We’re really seeing how fantastic the James Webb Space Telescope is for accurately measuring distances to galaxies. Using its infrared detectors, we can see through dust that has historically plagued accurate measurement of distances, and we can measure with much greater accuracy the brightnesses of stars.”
Image: Scientists have made a new calculation of the speed at which the universe is expanding, using the data taken by the powerful new James Webb Space Telescope on multiple galaxies. Above, Webb’s image of one such galaxy, known as NGC 1365. Credit: NASA, ESA, CSA, Janice Lee (NOIRLab), Alyssa Pagan (STScI).
A lack of agreement between the CMB findings and the supernovae data could have been pointing to interesting new physics, but according to this work, the Standard Model of the universe holds up. In a way, that’s too bad for using the discrepancy to probe into mysterious phenomena like dark energy and dark matter, but it seems we’ll have to be looking elsewhere for answers to their origin. Ahead for Freedman and team are new measurements of the Coma cluster that Freedman suggests could fully resolve the matter within years.
As the paper notes:
While our results show consistency with ΛCDM (the Standard Model), continued improvement to the local distance scale is essential for further reducing both systematic and statistical uncertainties.
The paper is Freedman et al., “Status Report on the Chicago-Carnegie Hubble Program (CCHP): Measurement of the Hubble Constant Using the Hubble and James Webb Space Telescopes,” The Astrophysical Journal Vol. 985, No 2 (27 May 2025), 203 (full text).
Megastructures: Adrift in the Temporal Sea
Here about the beach I wander’d, nourishing a youth sublime
With the fairy tales of science, and the long result of Time…
—Tennyson
Temporal coincidence plays havoc with our ideas about other civilizations in the cosmos. If we want to detect them, their society must at least have developed to the point that it can manipulate electromagnetic waves. But its technology has to be of sufficient strength to be noticed. The kind of signals people were listening to 100 years ago on crystal sets wouldn’t remotely fit the bill, and neither would our primitive TV signals of the 1950s. So we’re looking for strong signals and cultures older than our own.
Now consider how short a time we’re talking about. We have been using radio for a bit over a century, which is on the order of one part in 100,000,000 of the lifespan of our star. You may recall the work of Brian Lacki, which I wrote about four years ago (see Alpha Centauri and the Search for Technosignatures). Lacki, now at Oxford, points out how unlikely it would be to find any two stars remotely near each other whose civilization ‘window’ corresponded to our own. In other words, even if we last a million years as a technological civilization, we’re just the blink of an eye in cosmic time.
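A back-of-envelope way to quantify this: if two technological windows of lengths L1 and L2 fall at random within a span T, the chance they overlap at all is roughly (L1 + L2) / T when both are short compared to T. The window lengths below are illustrative assumptions, not anyone’s estimate:

```python
T = 10e9   # years, the order of a stellar lifetime
L1 = 100   # years, our radio era so far
L2 = 1e6   # years, a hypothetical million-year technological civilization

p_overlap = (L1 + L2) / T
print(f"chance of overlap ~ {p_overlap:.0e}")   # ~1e-4, even with a Myr window
```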
Image: Brian Lacki, whose work for Breakthrough Listen continues to explore both the scientific and philosophical implications of SETI. Credit: University of Oxford.
Adam Frank at the University of Rochester has worked this same landscape. He thinks we might well find ourselves in a galaxy that at one time or another had flourishing civilizations that are long gone. We are separated not only in space but also in time. Maybe there are such things as civilizations that are immortal, but it seems more likely that all cultures eventually end, even if by morphing into some other form.
What would a billion year old civilization look like? Obviously we have no idea, but it’s conceivable that such a culture, surely non-biological and perhaps non-corporeal, would be able to manipulate matter and spacetime in ways that might simply mimic nature itself. Impossible to find that one. A more likely SETI catch would be a civilization that has had space technologies just long enough to have the capability of interstellar flight on a large scale. In a new paper, Lacki looks at what its technosignature might look like. If you’re thinking Dyson spheres or swarms, you’re on the right track, but as it turns out, such energy gathering structures have time problems of their own.
Lacki’s description of a megaswarm surrounding a star:
These swarms, practically by definition, need to have a large number of elements, whether their purpose is communication or exploitation. Moreover, the swarm orbital belts need to have a wide range of inclinations. This ensures that the luminosity is being collected or modulated in all directions. But this in turn implies a wide range of velocities, comparable to the circular orbital velocity. Another problem is that the number of belts that can “fit” into a swarm without crossing is limited.
Image: Artist’s impression of a Dyson swarm. Credit: Archibald Tuttle / Wikimedia Commons. CC BY-SA 4.0.
Shards of Time
The temporal problem persists, for even a million year ‘window’ is a sliver on the cosmic scale. The L factor in the Drake equation is a great unknown, but it is conceivable that the death of a million-year old culture would be survived by its artifacts, acting to give us clues to its past just as fossils tell us about the early stages of life on Earth. So might we hope to find an ancient, abandoned Dyson swarm around a star close enough to observe?
Lacki is interested in failure modes, the problem of things that break down. Helpfully, megastructures are by definition gigantic, and it is not inconceivable that Dyson structures of one kind or another could register in our astronomical data. As the paper notes, a wide variety covering different fractions of the host star can be imagined. We can scale a Dyson swarm down or up in size, with perhaps the largest ever proposed being from none other than Nikolai Kardashev, who discusses in a 1985 paper a disk parsecs wide built around a galactic nucleus (!).
I’m talking about Dyson swarms instead of spheres because from what we know of material science, solid structures would suffer from extreme instabilities. But swarms can be actively managed. We have a history of interest in swarms dating back to 1958, when Project Needles at MIT contemplated placing a ring of 480,000,000 copper dipole antennas in orbit to enhance military communications (the idea was also known as Project West Ford). Although two launches were carried out experimentally, the project was eventually shelved because of advances in communications satellites.
So we humans already ponder enclosing the planet in one way or another, and planetary swarms, as Lacki notes, are already with us, considering the constellations of satellites in Earth orbit, the very early stages of a mini Dyson swarm. Just yesterday, the announcement by SpinLaunch that it will launch hundreds of microsatellites into orbit using a centrifugal cannon gave us another instance. Enclosing a star in a gradually thickening swarm seems like one way to harvest energy, but if such structures were built, they would have to be continuously maintained. The civilization behind a Dyson swarm needs to survive if the swarm itself is to remain viable.
For the gist of Lacki’s paper is that on the timescales we’re talking about, an abandoned Dyson swarm would be in trouble within a surprisingly short period of time. Indeed, collisions can begin once the guidance systems in place begin to fail. What Lacki calls the ‘collisional time’ is roughly an orbital period divided by the covering fraction of the swarm. How long it takes to develop into a ‘collisional cascade’ depends upon the configuration of the swarm. Let me quote the paper, which identifies:
…a major threat to megastructure lifespans: if abandoned, the individual elements eventually start crashing into each other at high speeds (as noted in Lacki 2016; Sallmen et al. 2019; Lacki 2020). Not only do the collisions destroy the crashed swarm members, but they spray out many pieces of wreckage. Each of these pieces is itself moving at high speeds, so that even pieces much smaller than the original elements can destroy them. Thus, each collision can produce hundreds of missiles, resulting in a rapid growth of the potentially dangerous population and accelerating the rate of collisions. The result is a collisional cascade, where the swarm elements are smashed into fragments, that are in turn smashed into smaller pieces, and so on, until the entire structure has been reduced to dust. Collisional cascades are thought to have shaped the evolution of minor Solar System body objects like asteroid families and the irregular satellites of the giant planets (Kessler 1981; Nesvorný et al.).
You might think that swarm elements could be organized so that their orbits reduce or eliminate collisions or render them slow enough to be harmless. But gravitational perturbations remain a key problem because the swarm isn’t an isolated system, and in the absence of active maintenance, its degradation is relatively swift.
Image: This is Figure 2 from the paper. Caption: A sketch of a series of coplanar belts heating up with randomized velocities. In panel (a), the belt is a single orbit on which elements are placed in an orderly fashion. Very small random velocities (meters per second or less) cause small deviations in the elements’ orbits, though so small that the belt is still “sharp”, narrower than the elements themselves (b). The random velocities cause the phases to desynchronize, leading to collisions, although they are too slow to damage the elements (cyan bursts). The collision time decreases rapidly in this regime until the belt is as wide as the elements themselves and becomes “fuzzy” (c). The collision time is at its minimum, although impacts are still too small to cause damage. In panel (d), the belts are still not wide enough to overlap, but relative speeds within the belts have become fast enough to catastrophically damage elements (yellow explosions), and are much more frequent than the naive collisional time implies because of the high density within belts. Further heating causes the density to fall and collisions to become rarer until the belts start to overlap (e). Finally, the belts grow so wide that each belt overlaps several others, with collisions occurring between objects in different belts too (f), at which point the swarm is largely randomized. Credit: Brian Lacki.
Keeping the Swarm Alive
Lacki’s mathematical treatment of swarm breakdown is exhaustive and well above my payscale, so I send you to the paper if you want to track the calculations that drive his simulations. But let’s talk about the implications of his work. Far from being static technosignatures, megaswarms surrounding stars are shown to be highly vulnerable. Even the minimal occulter swarm he envisions turns out to have a collision time of less than a million years. A megaswarm needs active maintenance – in our own system, Jupiter’s gravitational effect on a megaswarm would destroy it within several hundred thousand years. These are wafer-thin time windows if scaled against stellar lifetimes.
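The scaling quoted above (collisional time roughly an orbital period divided by the covering fraction) is easy to play with. A minimal sketch, assuming a solar-mass star and illustrative covering fractions:

```python
def collisional_time_years(a_au: float, covering_fraction: float) -> float:
    """Lacki's rule of thumb: orbital period / covering fraction."""
    period_years = a_au ** 1.5   # Kepler's third law for a solar-mass star
    return period_years / covering_fraction

# A sparse occulter swarm at 1 AU covering a millionth of the star's sky:
print(f"{collisional_time_years(1.0, 1e-6):.0e} years")   # ~1e6 years

# A dense energy-harvesting swarm covering 10 percent of the star:
print(f"{collisional_time_years(1.0, 0.1):.0e} years")    # ~10 years
```

The numbers make the point: once guidance fails, a dense swarm has essentially no grace period, and even a sparse occulter swarm survives only on geological rather than astronomical timescales.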
The solution is to actively maintain the megaswarm and remove perturbing objects by ejecting them from the system. An interesting science fiction scenario indeed, in which extraterrestrials might sacrifice systems planet by planet to maintain a swarm. Lacki works the simulations through gravitational perturbations from passing stars and in-system planets and points to the Lidov-Kozai effect, which turns circular orbits at high inclination into eccentric orbits at low inclination. Also considered is radiation pressure from the host star and radiative forces resulting from the Yarkovsky effect.
How else to keep a swarm going? From the paper:
For all we know, the builders are necessarily long-lived and can maintain an active watch over the elements and actively prevent collisions, or at least counter perturbations. Conceivably, they could also launch tender robots to do the job for them, or the swarm elements have automated guidance. Admittedly, their systems would have to be kept up for millions of years, vastly outlasting anything we have built, but this might be more plausible if we imagine that they are self-replicating. In this view, whenever an element is destroyed, the fragments are consumed and forged into a new element; control systems are constantly regenerated as new generations of tenders are born. Even then, self-replication, repair, and waste collection are probably not perfectly efficient.
The outer reaches of a stellar system would be a better place for a Dyson swarm than the inner system, which is hostile to small swarm elements even though it offers more efficient energy collection. The habitable zone around a star is perhaps the least likely place to look for such a swarm given the perturbing effects of other planets. And if we take the really big picture, we can talk about where in the galaxy swarms might be likely: low density environments where interactions with other stars are unlikely, as in the outskirts of large galaxies and in their haloes. “This suggests,” Lacki adds, “that megaswarms are more likely to be found in regions that are sometimes considered disfavorable for habitability.”
Ultimately, an abandoned Dyson swarm is ground into microscopic particles via the collisional cascades Lacki describes, evolving into nothing more than dispersed ionized gas. If we hope to find an abandoned megastructure like this in our practice of galactic archaeology, what are the odds of finding it within the window of time in which it can survive without active maintenance? We’d better hope that the swarm creators have extremely long-lived civilizations if we are to exist in the same temporal window as the swarm we want to observe. A dearth of Dyson structures thus far observed may simply be a lack of temporal coincidence, as we search for systems that are inevitably wearing down without the restoring hand of their creators.
The paper is Lacki, “Ground to Dust: Collisional Cascades and the Fate of Kardashev II Megaswarms,” accepted at The Astrophysical Journal (preprint). The Kardashev paper referenced above is “On the Inevitability and the Possible Structure of Super Civilizations,” in The Search for Extraterrestrial Life: Recent Developments, ed. M. D. Papagiannis, Vol. 112, 497–504.
The Statistically Quantitative Information from Null Detections of Living Worlds: Lack of positive detections is not a fruitless search
It’s no surprise, human nature being what it is, that our early detections of possible life on other worlds through ‘biosignatures’ are immediately controversial. We have to separate signs of biology from processes that may operate completely outside of our conception of life, abiotic ways to produce the same results. My suspicion is that this situation will persist for decades, claim vs. counter-claim, with heated conference sessions and warring papers. But as Alex Tolley explains in today’s essay, even a null result can be valuable. Alex takes us into the realm of Bayesian statistics, where prior beliefs are gradually adjusted as new data come in. We’re still dealing with probabilities, but in a fascinating way, uncertainties are gradually being decreased though never eliminated. We’re going to be hearing a lot more about these analytical tools as the hunt continues with next generation telescopes.
by Alex Tolley
Introduction
The venerable Drake equation’s early parameters are increasingly constrained as our exoplanet observations continue. We now have a good sample of thousands of exoplanets to estimate the fraction of planets in the habitable zone that could support life. This last firms up the term ne, the mean number of planets that could support life per star with planets.
The focus now shifts to the fraction of habitable planets with life (fl). The first team to confirm a planet with life will likely make the history books.
However, as with the failure of SETI to receive a signal from extraterrestrial intelligence (ETI) since the 1960s, there will be disappointments in detecting extraterrestrial life. The early expectation of Martian vegetation proved incorrect, as did the controversial Martian microbes thought to have been detected by the Viking lander life detection experiments in 1976. More recently, the phosphine biosignature in the Venusian atmosphere has not been confirmed, and now the claimed dimethyl sulfide (DMS) biosignature on K2-18b is also questioned.
While we hope that an unambiguous biosignature is detected, are null results just disappointments that have no value in determining whether life is present in the cosmos, or do they add some value in determining a frequency of habitable planets with life?
Before diving into a recent paper that attempts to answer this question, I want to give a quick introduction to statistics. The most common type is Fisherian statistics, where collected sample data are used to estimate the distribution parameters of the population from which the sample is drawn. This approach requires more than a handful of samples, and is most often deployed in calculating the accuracy of a mean value and a 95% range of values as part of a test of significance. It works well when the sample contains sufficient examples to represent the population. For binary events, such as heads in a coin test, the Binomial distribution will provide the expected frequencies for unbiased coins and coins with small biases.
However, a problem arises when the frequency of a binary event is extremely low, so that the sample detects no positive events, such as heads, at all. In the pharmaceutical industry, while the efficacy of a new drug needs a large sample size for validity, the much larger phase 4 marketing period is used to monitor for rare side effects that are not discoverable in the clinical trials. A number of well-known drugs have been withdrawn from the market during this period, perhaps the most famous being thalidomide with its effects on fetal development. In such circumstances, Fisherian statistics are unhelpful in determining probabilities of rare events when sample sizes are inadequate to catch them. As we have seen with SETI, the lack of any detected signal provides no value for the probability that ETI exists, only that it is either rare, or that ETI is not signaling. All SETI scientists can do is keep searching in the hope that eventually a signal will be detected.
Bayesian statistics are a different approach that can help overcome the problem of determining the probability of rare events, one that has gained in popularity over the last few decades. It assumes a prior belief, perhaps no more than a guess, of the probability of an event, and then adjusts it with new observed data as they are acquired. For example, one assumes a coin toss is 50:50 heads or tails. If the succeeding tosses show only tails, then the coin toss is biased, and each new resulting tail decreases the probability of a head resulting on the next toss. For our astrobiological example, if life is very infrequent on habitable worlds, Bayesian statistics can be informative to estimate the probability of detection success.
In essence, the Bayesian method updates beliefs in the probability of events, given the new observations of the event. With a large enough number of observations, the true probability of an event value will emerge that will either converge or diverge from the initial expected probability.
I hope it is clear that this Bayesian approach is well-suited to the announcement of detecting a biosignature on a planet, where detections to date have either been absent or controversial. Each detection, or lack of detection, in a survey will update our expectations of the frequency of life. At this time, the probability of life on a potentially habitable planet ranges from 0 (life is unique to Earth) to 1.0 (some form of life appears wherever it is possible). Beliefs that the abiogenesis of life is extremely hard due to its complexity push the probability of detecting life toward 0. Conversely, the increasing evidence that life emerges quickly on a new planet, such as within 100 million years on Earth [6], implies that the probability of a habitable planet having life is close to 1.0.
The Angerhausen et al. paper I am looking at today (citation below) considers a number of probability distributions depending on beliefs about the probability of life, rather than a single value for each belief. These are shown in Figure 1 and explained in Box 2. Note in particular the Kerman and Jeffreys distributions, which are bimodal with their highest likelihoods at the extremes; these reflect the “fine tuning” argument for life by Kipping et al. [2], explained in a Centauri Dreams post [3]: either life will be almost absent or ubiquitous, not something in between. In other words, the probability is either very close to 0 or close to 1.0, but unlikely to be some intermediate value. The paper relies on the Beta distribution [Box 3], whose two parameters track the counts of the two binary states of the event, e.g. life detected or not detected. This distribution can approximate the Binomial, but can also accommodate the different prior probability distributions.
Figure 1. The five different prior distributions as probability density functions (PDF) used in the paper and explained in Box 2. Note the Kerman and Jeffreys distributions that bias the probabilities at the extremes, compared to the “biased optimist” that has 3 habitable worlds around the sun (Venus, Earth, and Mars), but with only the Earth having life.
The Beta distribution is adjusted by the number of observations, the positive and negative detections of biosignatures. At this point, with actual observations relatively few, the positive and negative counts are set by the assumed prior distributions, which can take any values, from guesses to preliminary observational results. After all, we are still arguing over whether we have even detected biosignature molecules, let alone confirmed their detection. We then adjust those expectations with the new observations.
What happens when we start a survey and begin accumulating observations? Using the Jeffreys prior distribution, let us see the effect of accumulating up to 100 null biosignature observations.
Figure 2a. The effect of increasing the null observations on a skewed distribution that shows the increasing certainty of the low probability frequencies. While apparently the high probabilities also rise, the increase in null detections implies that the relative frequency of positives declines.
Figure 2b. The increasing certainty that the frequency of life on habitable planets tends towards 0 as the number of null biosignature detections increases. The starting value of 0.5 is taken from the Jeffreys prior distribution. The implied frequency is the new frequency of positives as the null detections reduce the frequency observed and push the PDF towards the lower bound of 0 (see Figure 1).
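The update behind Figures 2a and 2b is simple to reproduce. Here is a minimal sketch using scipy, assuming (for now) that every null observation is perfectly reliable: a Jeffreys Beta(1/2, 1/2) prior becomes Beta(1/2, 1/2 + N) after N nulls.

```python
from scipy.stats import beta

a0, b0 = 0.5, 0.5   # Jeffreys prior; note the starting value of 0.5 in Figure 2b

for n_null in (0, 10, 100):
    posterior = beta(a0, b0 + n_null)   # each null adds one "failure" count
    mean = posterior.mean()             # implied frequency of living worlds
    upper = posterior.ppf(0.999)        # 99.9% credible upper bound
    print(f"N={n_null:3d}  mean={mean:.3f}  99.9% upper bound={upper:.3f}")
```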
So far, so good. If we can be sure that the biosignature detection is unambiguous, so that the presence or absence of life can be inferred with certainty from the observations, then sampling up to 100 habitable worlds will indicate with high confidence whether life is rare or ubiquitous. If every star system had at least one habitable world, this sample would include most stars within 20 light years of Earth. In reality, if we limit our stars to spectral types F, G and K, which represent 5-10% of all stars, and assume half of these have at least one habitable world, then we need to search 2000-4000 star systems, all well within 100 light years, a tiny fraction of the galaxy.
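The arithmetic behind those survey numbers, as a quick check on the 2000-4000 figure:

```python
targets = 100   # habitable worlds we want in the sample

for fgk_fraction in (0.05, 0.10):               # F, G & K as 5-10% of all stars
    systems = targets / (fgk_fraction * 0.5)    # half assumed to host one world
    print(f"FGK fraction {fgk_fraction:.0%}: search ~{systems:.0f} systems")
# -> ~4000 and ~2000 star systems respectively
```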
The informed reader should now balk at the status of this analysis. Biosignatures are not unambiguous [4]. Firstly, detecting a faint trace of a presumed biosignature gas is not certain, as the phosphine detection on Venus and the DMS/DMDS detection on TOI-270d make clear. Both are controversial. In the case of Venus, we are not certain that the phosphine signal is real and correctly identified, nor that there is no abiogenic mechanism to create phosphine in Venus’ very different environment. As discussed in my post on the ambiguity of biosignatures, prior assumptions about biosignatures as unambiguous were reexamined, with the response that astrobiologists built a scale of certainties for assessing whether a planet is inhabited based on the contextual interpretation of biosignature data [4].
The authors of the paper allow for this by modifying the formula to accommodate both false-positive and false-negative biosignature detection rates, as well as uncertainty in the interpretation of a detected biosignature. They also calculate the 99.9% (about 3 sigma) upper bound on the frequency of living worlds. Figure 3 shows the effect of these uncertainties on the location and size of the maximal probability density function for the Jeffreys Bayesian prior.
Figure 3. The effects of sample and interpretation uncertainties, with best fit and 99.9% bounds, for null detections. As both sample and interpretation uncertainty increase, the expected number of positive detections increases. The Jeffreys prior distribution is used.
Figure 3 implies that with an interpretation uncertainty of just 10%, even after 100 null observations the calculated frequency of life increases two orders of magnitude, from 0.1% to 10%. The upper bound increases from less than 10% to between 20 and 30%. Therefore, even if 100 new observations of habitable planets yield no detected biosignatures, the frequency of inhabited planets could still be as high as one fifth to one third of habitable planets at this level of certainty. As one can see from the asymptotes, no amount of further observations will increase the certainty that life is absent in the population of stars in the galaxy. Uncertainty is the gift that allows astrobiologists to maintain hope that there are living worlds to discover.
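To see qualitatively how interpretation uncertainty blunts the power of null results, here is a simplified grid posterior of my own construction (not the authors’ exact formalism): assume a truly living world still returns a null with probability eps, so each null multiplies the posterior by 1 − f(1 − eps).

```python
import numpy as np

def upper_bound(n_null: int, eps: float, q: float = 0.999) -> float:
    """99.9% credible upper bound on f_life after n_null null detections,
    where eps is the chance a living world still yields a null."""
    f = np.linspace(1e-6, 1 - 1e-6, 100_000)
    prior = f**-0.5 * (1 - f)**-0.5              # Jeffreys Beta(1/2, 1/2)
    likelihood = (1 - f * (1 - eps)) ** n_null   # n_null independent nulls
    posterior = prior * likelihood
    cdf = np.cumsum(posterior) / posterior.sum()
    return f[np.searchsorted(cdf, q)]

for eps in (0.0, 0.1, 0.5):
    print(f"eps={eps:.1f}: 99.9% upper bound = {upper_bound(100, eps):.3f}")
```

The larger eps becomes, the less each null observation is worth, and the upper bound on the frequency of living worlds shrinks correspondingly more slowly.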
Lastly, the authors apply their methodology to two projects designed to discover habitable worlds: the Habitable Worlds Observatory [7] and the Large Interferometer for Exoplanets (LIFE) concepts [8]. The analyses are shown in Figure 4. The vertical lines indicate the expected number of positive detections by the conceptual methods and the expected frequencies of detections with their associated upper bounds due to uncertainty.
Figure 4. Given the uncertainties, the authors calculate the 99.9% (> 3 sigma) upper limit on the null hypothesis of no life and match it against data from two surveys by Morgan with the Habitable Worlds Observatory (HWO) and two by Kammerer with the Large Interferometer for Exoplanets (LIFE) [7, 8].
The authors note that it may be incorrect to use the term “habitable” if water is detected, or “living” if a biosignature is detected. They suggest it would be better to base the calculation on the detection method itself rather than on the implication of the detection; that is, the sample uncertainty, but not the interpretation uncertainty, is calculated. As we see in the popular press, if a planet in the habitable zone (HZ) has about an Earth-size mass and density, it is sometimes referred to as “Earth 2.0” with all the implications of the comparison to our planet. However, we know that our current global biosphere and climate are relatively recent in Earth’s history. The Earth has experienced different states, from an anoxic atmosphere to extremely hot and, conversely, extremely cold periods. It is even possible such a world may be a dry desert, like Venus, or conversely a hycean world with no land for terrestrial organisms to evolve.
However, even if life and intelligence prove rare and very sparsely distributed, should a single, unambiguous signature be detected, whether of a living world or a signal carrying information, its importance would be hard to overstate. As the authors state:
Last but not least we want to remind the reader here that, even if this paper is about null results, a single positive detection would be a watershed moment in humankind’s history.
In summary, Bayesian analysis of null detections against prior expectations of frequencies can provide some estimate of the upper limit frequency of living worlds, with many null detections reducing the frequencies and their upper limits. Using Fisherian statistics, many null detections would provide no such estimates, as all the data values would be 0 (null detections). The statistics would be uninformative other than that as the number of null detections increased, the expectation of the frequency of living worlds would qualitatively decrease.
While planetologists and astrobiologists would hope that they would observationally detect habitable and inhabited exoplanets, as the uncertainties are decreased and the number of observations continues to show null results, how long before such activities become a fringe, uneconomic activity that results in lost opportunity costs for other uses of expensive telescope time?
The paper is Angerhausen, D., Balbi, A., Kovačević, A. B., Garvin, E. O., & Quanz, S. P. (2025). “What if we Find Nothing? Bayesian Analysis of the Statistical Information of Null Results in Future Exoplanet Habitability and Biosignature Surveys”. The Astronomical Journal, 169(5), 238. https://doi.org/10.3847/1538-3881/adb96d
References
1. Wikipedia “Drake equation” https://en.wikipedia.org/wiki/Drake_equation. Accessed 04/12/2025
2. Kipping & Lewis, “Do SETI Optimists Have a Fine-Tuning Problem?” submitted to International Journal of Astrobiology (preprint). https://arxiv.org/abs/2407.07097
3. Gilster, P. “The Odds on an Empty Cosmos,” Centauri Dreams, Aug 16, 2024. https://www.centauri-dreams.org/2024/08/16/the-odds-on-an-empty-cosmos/
4. Tolley, A. “The Ambiguity of Exoplanet Biosignatures,” Centauri Dreams, Jun 21, 2024.
https://www.centauri-dreams.org/2024/06/21/the-ambiguity-of-exoplanet-biosignatures/
5. Foote, Searra, Walker, Sara, et al. “False Positives and the Challenge of Testing the Alien Hypothesis.” Astrobiology, vol. 23, no. 11, Nov. 2023, pp. 1189–201. https://doi.org/10.1089/ast.2023.0005.
6. Tolley, A. “Our Earliest Ancestor Appeared Soon After Earth Formed,” Centauri Dreams, Aug 28, 2024. https://www.centauri-dreams.org/2024/08/28/our-earliest-ancestor-appeared-soon-after-earth-formed/
7. Wikipedia “Habitable Worlds Observatory” https://en.wikipedia.org/wiki/Habitable_Worlds_Observatory. Accessed 05/02/2025
8. Kammerer, J. et al. “Large Interferometer For Exoplanets (LIFE). VI. Detecting rocky exoplanets in the habitable zones of Sun-like stars,” A&A 668 (2022), A52.
DOI: https://doi.org/10.1051/0004-6361/202243846
Unusual Skies: Optical Pulses & Celestial Bubbles
Finding unusual things in the sky should no longer astound us. It’s pretty much par for the course these days in astronomy, what with new instrumentation like JWST coming online and the Extremely Large Telescope family soon to arrive. Recently we’ve had planet-forming disks in the inner reaches of the galaxy and the discovery of a large molecular cloud (Eos by name) surprisingly close to our Sun at the edge of the Local Bubble, about 300 light years out.
So I’m intrigued to learn now of Teleios, which appears to be a remnant of a supernova. The name, I’m told, is classical Greek for ‘perfection,’ an apt description for this evidently perfect bubble. An international team led by Miroslav Filipović of Western Sydney University in Australia is behind this work and has begun to analyze what could have produced the lovely object in a paper submitted to Publications of the Astronomical Society of Australia (citation below). Fortunately for us, Teleios glows at radio wavelengths in ways that illuminate its origins.
Image: Australian Square Kilometre Array Pathfinder radio images of Teleios in Stokes parameters (a set of numbers used to describe the polarization state of electromagnetic radiation). Credit: Filipović et al.
I’m not going to spend much time on Teleios, although its wonderful symmetry sets it apart from most supernova remnants without implying anything other than a chance occurrence in an unusually empty part of space. Its lack of X-ray emissions is a curiosity, to be sure, as the authors point out:
We have made an exhaustive exploration of the possible evolutionary state of the SN based on its surface brightness, apparent size and possible distances. All possible scenarios have their challenges, especially considering the lack of X-ray emission that is expected to be detectable given our evolutionary modelling. While we deem the Type Ia scenario the most likely, we note that no direct evidence is available to definitively confirm any scenario and new sensitive and high-resolution observations of this object are needed.
Odd Optical Pulses
So there you are, a celestial mystery. Another one comes from Richard Stanton, now retired but for years a fixture at JPL, where he worked on Voyager, among other missions. These days he runs Shay Meadow Observatory near Big Bear, CA, where he deploys a 30-inch telescope coupled with a photometer of his own design for the task at hand: the search for optical SETI signals. Thus far the indefatigable retiree has observed more than 1300 stars in this quest.
Several unusual things have turned up in his data, and what they mean demands further study. The star HD 89389 produced “two fast identical pulses, separated by 4.4s,” according to the paper on his work. That was interesting, but even more so is the fact that looking back over his earlier data, Stanton realized that a pair of similar pulses had occurred in observations of the star HD 217014 taken four years before. In that earlier observation, the twin pulses were separated by 1.3 seconds, 3.5 times less than for the HD 89389 event. But Stanton notes that while the separation differs, the pulse shapes are very similar in both events.
Stanton’s angle into optical SETI differs from the norm, as he describes it in a paper in Acta Astronautica. The work is:
…different from that employed in many other optical SETI searches. Some [3,4] look for nanosecond pulses of sufficient intensity to momentarily outshine the host star’s light, as first suggested by Schwartz and Townes [5]. Others search optical spectra of stars for unusual features [6] or emission close to a star that could have been sent from an orbiting planet [7]. The equipment used here is not capable of making any of these measurements. Instead it relies on detecting unexplained changes in a star’s light as indications of intelligent activity. Starting with time samples of 100μs, the search is capable of detecting optical pulses of this duration and longer, and also of finding optical tones in the frequency range ∼0.01–5000Hz.
HD 89389 is an F-class star about 100 light years from the Solar System. Using the equipment Stanton has been working with, all kinds of things can present a problem, everything from an airplane blocking out starlight to satellites (a growing problem because of the increasing number of Internet access satellites), meteors and birds. Atmospheric scintillation and noise have to be accounted for as well. I’m simplifying here and send you to the paper, where all these factors are painstakingly considered. Stanton’s analysis is thorough.
Here is a photograph which shows the typical star-field during an observation of HD 89389, with the target star in the center of a field that is roughly 15 × 20 arcmin in size. The unusual pulses from this star occurred during this exposure.
Image: The HD 89389 star-field. “A careful examination was made of each photograph to detect any streaks or transitory point images that might have been objects moving through the field. Nothing was found in any of these frames, suggesting that the source of the pulses was either invisible, such as due to some atmospheric effect, or too far away to be detected.” Credit: Richard Stanton.
A closer look at these unusual observations: They consisted of two identical pulses, with the star rapidly brightening, then decreasing in brightness, then increasing again, all in a fraction of a single second. The second pulse followed 4.4 seconds later in the case of HD 89389, and 1.3 seconds later at HD 217014. According to Stanton, in over 1500 hours of searching he had never before seen a pulse like this, in which the star’s light is attenuated by about 25 percent.
Note this: “This is much too fast to attribute to any known phenomenon at the star’s distance. Light from a star a million kilometers across cannot be attenuated so quickly.” In other words, something on the scale of a star cannot partially disappear in a fraction of a second, meaning the cause of this effect is not as distant as the star. If the star’s light is modulated without something moving across the field of view, then what process could cause this?
The author argues that the starlight variation in each pulse itself eliminates all the common signals discussed above, from airplanes to meteors. He also notes that unlike what happens when an asteroid or airplane occultation occurs, the star never disappears during the event. The second event, in the light of the star HD 217014, was discovered later, although the data were taken four years earlier. Stanton runs through all the possibilities, including shock waves in the atmosphere, partial eclipses by orbiting bodies, and passing gravity waves.
One way of producing this kind of modulation, Stanton points out, is through diffraction of starlight by a distant body passing between us and the star. Keep in mind that we are dealing with two stars that have shown the same pattern, with similar pulses. Edge diffraction results when light is diffracted by a straight edge, producing ‘intensity ripples’ that correspond to the pulses. The author gives this phenomenon considerable attention, explaining how the pulses would change with distance, though he is unable to pin down a distance to the source here.
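A rough feel for the geometry comes from the Fresnel scale, sqrt(λd/2), which sets the width of the first ripple cast by a sharp edge at distance d. The wavelength, distance, and pulse duration below are illustrative assumptions, not values fitted by Stanton:

```python
import math

AU_M = 1.496e11   # meters per astronomical unit

def fresnel_scale_m(wavelength_m: float, distance_au: float) -> float:
    """Width of the first edge-diffraction ripple cast at distance d."""
    return math.sqrt(wavelength_m * distance_au * AU_M / 2)

# Visible light, an occulting edge at a Kuiper Belt-like 40 AU:
r_f = fresnel_scale_m(550e-9, 40.0)
print(f"Fresnel scale ~ {r_f / 1e3:.1f} km")              # ~1.3 km

# If such a ripple sweeps past in ~0.1 s, the implied transverse speed is:
print(f"transverse speed ~ {r_f / 0.1 / 1e3:.0f} km/s")   # ~13 km/s
```

A transverse speed of order 10 km/s is just what a solar-system object would show, which is one reason an occulter relatively near the Sun is an attractive reading of sub-second pulses.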
From his conclusion:
The fact that these pulses have been detected only in pairs must surely be a clue to their origin. How can the two detected events separated by years, and from seemingly random directions in the sky, be so similar to each other? Even if the diffraction theory is correct, these data alone cannot determine the object’s distance or velocity.
He goes on to produce a model that could explain the pulses, using the figure below.
This thin opaque ring, located somewhere in the solar system, would sequentially occult the star as it moved across the field. If anything like this were found, it would immediately raise the questions of where it came from and how it could survive millions of years of collisions with other objects. Alternatively, if the measured transverse velocity proved greater than that required to escape our solar system, a different set of questions would arise. Whatever is found, those speculating that our best chance of finding evidence of extraterrestrial intelligence lies within our own solar system [15], might have much to ponder!
If there is indeed some sort of occulting object, observations with widely spaced telescopes could potentially determine its size and distance. Meanwhile, a third double pulse event has turned up in Stanton’s data from January 18, 2025, where light from the star HD 12051 is found to pulse, with the pulses separated by 1.52 seconds. This last observation doesn’t make it into the paper other than as a footnote, but it’s an indication that Stanton may be on to something that is going to continue creating ripples. As in the case of Teleios, we have an unusual phenomenon that demands continued observation.
The paper on the unusual circular object is Filipović et al., “Teleios (G305.4-2.2) — the mystery of a perfectly shaped new Galactic supernova remnant,” accepted at Publications of the Astronomical Society of Australia and available as a preprint. The paper on the pulse phenomenon is Stanton, “Unexplained starlight pulses found in optical SETI searches,” Acta Astronautica Vol. 233 (August 2025), pp. 302-314. Full text. Thanks to Centauri Dreams readers Frank Henriquez and Antonio Tavani for the pointer to this work.