Insights from an AI model help astronomers propose a new theory for observing distant planets.

The Sun peeks above Earth’s horizon as the International Space Station (ISS) orbits above east China in this image captured by a crew member. Source: NASA

Machine learning models are increasingly used to augment human work, either by performing repetitive tasks faster or by offering a systematic second opinion that helps put human knowledge in context. After training a model to classify simulated gravitational microlensing events, astronomers at UC Berkeley found that it did both, leading to a new unified theory of the phenomenon.

When light from a far-off star or other celestial object passes a closer object sitting directly between it and the viewer, gravity bends the light around the nearer body, briefly producing a brighter, but distorted, image of the distant one. From how the light bends (and what we already know about the distant object), we can learn a great deal about the star, planet, or system it is bending around.
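For readers who want the mechanics: the brightening of a simple single-lens event follows a well-known formula, the Paczyński light curve, in which the magnification depends on the angular separation between lens and source. The sketch below is a minimal single-lens illustration with made-up parameter values, not the binary-lens modeling used in the actual research.

```python
# Minimal sketch of the standard single-lens ("Paczynski") microlensing
# light curve. The Berkeley work concerns more complex two-body lenses,
# but the basic brightening mechanism is the same. Parameter values here
# are illustrative only.
import numpy as np

def pspl_magnification(t, t0, tE, u0):
    """Point-source point-lens magnification A(t).

    t0: time of closest approach, tE: Einstein crossing time,
    u0: impact parameter in units of the Einstein radius.
    """
    u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)  # lens-source separation
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

t = np.linspace(-30, 30, 601)  # days around the peak
A = pspl_magnification(t, t0=0.0, tE=10.0, u0=0.1)
print(f"peak magnification: {A.max():.1f}x")  # roughly 10x for u0 = 0.1
```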

A brief spike in brightness, for example, suggests a planetary body crossing the line of sight, and readings of this sort have been used to discover hundreds of exoplanets. But more than one arrangement of star and planet can produce the same signal, and this kind of ambiguity is, somewhat confusingly, called a "degeneracy."

Because of the limits of observation, it's impossible to characterize these events and objects beyond a few fundamental properties, such as mass. And degeneracies have generally been seen to fall into one of two categories: the distant light passing closer to the star in a given system, or closer to the planet. The ambiguity is often resolved using other known facts; for instance, we may know from other sources that the planet is too small to produce the scale of distortion observed.
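To see what a degeneracy looks like in practice, note that even the simple single-lens model sketched above has one: trajectories passing on opposite sides of the lens (u0 versus -u0) produce exactly the same light curve, so the data alone cannot distinguish them. The two-body degeneracies at issue in this research are subtler, but the principle is the same: physically different configurations, identical observations. A minimal sketch:

```python
# A trivial but exact microlensing degeneracy: the single-lens light curve
# depends only on |u0|, so a source passing on either side of the lens is
# observationally identical. Distinct configurations, one light curve.
import numpy as np

def pspl_magnification(t, t0, tE, u0):
    u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

t = np.linspace(-30, 30, 601)
A_plus = pspl_magnification(t, 0.0, 10.0, u0=+0.1)   # passes one side of the lens
A_minus = pspl_magnification(t, 0.0, 10.0, u0=-0.1)  # passes the other side
print(np.allclose(A_plus, A_minus))  # True: the two solutions are degenerate
```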

Keming Zhang, a doctoral student at UC Berkeley, was researching a way to quickly analyze and categorize such lensing events, which are becoming more common as we survey the sky more often and in greater detail. He and his colleagues trained a machine learning model on data from known gravitational microlensing events with known causes and configurations, then turned it loose on a slew of less well-characterized ones.
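The team's actual model is a neural network trained on large numbers of simulated binary-lens events; the toy sketch below only shows the shape of that workflow (simulate labeled events, train a classifier, apply it to new ones) using a generic scikit-learn classifier and single-lens curves. Every name and parameter here is an illustrative assumption, not the authors' code.

```python
# Hedged, illustrative sketch of the train-on-simulations workflow:
# generate labeled synthetic light curves, train a classifier, then apply
# it to unlabeled events. All values below are made up for demonstration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
t = np.linspace(-30, 30, 200)

def pspl(t, t0, tE, u0):
    u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

def simulate(label, n=500):
    """Toy events: class 1 injects a brief bump, as a planet might cause."""
    X = []
    for _ in range(n):
        curve = pspl(t, rng.normal(0, 2), rng.uniform(5, 15), rng.uniform(0.05, 0.5))
        if label == 1:
            bump_t = rng.uniform(-10, 10)  # short planetary-style anomaly
            curve = curve + 0.5 * np.exp(-((t - bump_t) / 0.5) ** 2)
        X.append(curve + rng.normal(0, 0.01, t.size))  # photometric noise
    return np.array(X), np.full(n, label)

X0, y0 = simulate(0)
X1, y1 = simulate(1)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(np.vstack([X0, X1]), np.concatenate([y0, y1]))

X_new, _ = simulate(1, n=10)   # "observed" events of unknown class
print(clf.predict(X_new))      # mostly 1s: the anomaly is detected
```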

The results were unexpected: in addition to neatly estimating whether an observed event fell into one of the two known degeneracy classes, the model flagged a surprising number of events that fit neither.

"The two preceding degeneracy hypotheses deal with situations in which the background star seems to pass near to the foreground star or planet. In a Berkeley news release, Zhang noted, "The AI algorithm gave us hundreds of examples from not only these two circumstances but also situations when the star doesn't come close to either the star or the planet and cannot be described by either prior theory."

This might have been the result of a poorly tuned model, or one not confident enough in its own findings. But Zhang felt sure that the AI had spotted something human observers had systematically missed.

After some convincing (a grad student challenging established orthodoxy is permitted, though perhaps not encouraged), they ended up proposing a new, "unified" theory of how degeneracy in these observations can be explained, of which the two existing theories were simply the most common cases.

Diagram showing a simulation of a 3-lens degeneracy solution. Source: Zhang et al

They reexamined two dozen recent papers on microlensing events and found that astronomers had misclassified what they observed as one of the two known types, while the new explanation fit the data better than either.

"People were witnessing these microlensing events that were actually demonstrating this new degeneracy but didn't know it," says the author. "It was essentially simply machine learning looking at thousands of occurrences where it became hard to miss," said co-author and astronomy professor Scott Gaudi of Ohio State University.

To be clear, the AI did not come up with or suggest the new hypothesis; that was entirely human brainpower. But without the AI's tireless and confident computations, the simpler, less accurate theory would likely have persisted for years more. Just as people learned to trust calculators and then computers, we are learning to trust some AI models to report an interesting truth free of preconceptions and assumptions; that is, assuming we haven't already written our own preconceptions and assumptions into them.

The new theory, and the process that led to it, are described in a paper published in the journal Nature Astronomy. Astronomers among our readers are likely already aware of this development (it circulated as a preprint last year), but those working in machine learning and the sciences more generally may find it interesting.
