The world is buzzing, if not puzzled, about the increasing use of artificial intelligence. ChatGPT, an AI-powered language model "trained" on massive amounts of online data, is one of the most popular artificial intelligence (AI) products available to the public today. Having processed all of that information, ChatGPT can generate human-like text responses to a given prompt. It can answer questions, discuss a wide range of topics, and produce written material.
It’s not hard to envisage a robot wheeling and dealing on Mars’ surface, factory-wired with ChatGPT or a comparable artificial intelligence language model. This smart bot could be outfitted with a variety of scientific instruments. It might analyze what its sensors discover “on the fly,” perhaps even collating any evidence of past life it finds almost instantly.
That information might be digested, examined, appraised, and compiled in a scientific manner. The well-paginated result, complete with footnotes, might then be submitted straight from the robot to a scientific journal, such as Science or Nature, for publication. That manuscript would then be peer-reviewed, possibly by AI/ChatGPT reviewers. Does this seem far-fetched?
I contacted several leading researchers, presented this off-Earth, on-Mars scenario, and received a variety of responses.
Prone to hallucination: “It could be done, but there may be misleading information,” Sercan Ozcan, a Reader in Innovation and Technology Management at the University of Portsmouth in the United Kingdom, said. “ChatGPT is not 100% accurate, and it is prone to ‘hallucination.'”
Ozcan is unsure whether ChatGPT would be useful if there were no prior body of work for it to assess and replicate. “I believe that humans can still do better work than ChatGPT, even if it is slower,” he says.
His recommendation is to avoid using ChatGPT “in areas where we cannot accept any error.”
Humans in the loop: Steve Ruff is an associate research professor at Arizona State University’s School of Earth and Space Exploration in Tempe, Arizona.
“My immediate reaction is that ‘on-the-spot’ manuscripts are highly unlikely to be a realistic scenario given how the process involves team debates over observations and their interpretation,” Ruff said. “I’m skeptical that any AI trained on existing observations could be used to confidently interpret new observations without humans in the loop, especially with previously unavailable instrument datasets. Every such dataset necessitates laborious effort to sort out.”
In the short term, Ruff believes AI could be used for rover activities such as navigation and selecting targets to examine without involving humans.
First and foremost: What kind of world do we want to live in? That is possibly the most pressing question, according to Nathalie Cabrol, Director of the Carl Sagan Center for Research at the SETI Institute in Mountain View, California.
“First things first,” stated Cabrol. “AI is a formidable tool that should be used as such to assist humans in their work. We do that every day, in some form or another,” she noted, “and improved versions might make things better.”
On the other hand, like any human tool, such systems are double-edged and can occasionally lead people into “nonsense,” which Cabrol feels is the case here.
“I enjoy writing papers myself. It’s a great time because I can see my work coming to fruition and put my ideas on paper,” Cabrol said, describing this as a crucial aspect of her creative process.
“However, suppose for a moment that I let this algorithm write it for me. Then I’m told it’s fine since the paper will be evaluated,” Cabrol explained. “However, by whom? I would presume that if you let algorithms do the work for you, you believe they will be less prejudiced and perform better. That argument leads me to believe that a human is not qualified to review that document.”
Specters of “transhumanism”: Cabrol suspects that the next question will be, “Where do we stop?” What if every researcher asked AI to compose their research grant proposals? What if they did so and didn’t tell anyone?
“It depends on which world you want to live in and what part of humanity you want to leave,” Cabrol explained. “We are creative beings who are not perfect,” she went on, “but we learn from our mistakes and that is part of our evolution. Mistakes and learning are synonyms for ‘adaptation,’” she added.
By allowing AI to interfere with what makes us human, we are interfering with our own evolution, according to Cabrol, who sees hints of “transhumanism” in all of this. Transhumanism is a loose intellectual movement unified by the notion that the human race may grow beyond its existing physical and mental constraints, particularly via the use of science and technology.
“Of course, you may say that’s not a chip in our brain and it’s just a piece of paper. Unfortunately, it is part of a much larger, and really unsettling, conversation on the (mis)application of AI,” Cabrol said. “This is not a minor matter. It is more than simply a piece of paper. It is about who we truly want to become as a species. Personally, I regard AI as a tool, and I intend to use it as such.”
Knowledge limit: “How ironic that we’re still debating the definition of life as we know it, and we’re starting to use a tool in that search that also stretches the definition of life,” Amy Williams, assistant professor of Geological Sciences at the University of Florida in Gainesville, said. She is a scientist on the NASA Curiosity and Perseverance rover missions, which send robots to explore Mars.
In full disclosure mode, Williams reacted to the AI-ChatGPT off-world scenario. “The first time I used ChatGPT was to prepare for this response, when I asked it, ‘What organic molecules have the Mars rovers found?’ The question was based on my specific area of expertise,” she explained to Space.com.
“It was illuminating in the sense that it did a great job of providing me with statements that I would describe as robust and appropriate for a summary that I could give in an outreach talk to the general public about organic molecules on Mars,” Williams said.
However, it also revealed to Williams one of its restrictions: the model could only access data up to September 2021, its “knowledge cutoff.”
“As a result, its responses did not cover the full range of published results about organics on Mars that I am aware of since 2021,” she explained.
Although Williams is not an expert in AI or machine learning, she believes that future iterations of ChatGPT + AI will be able to incorporate more recent data and generate a thorough synthesis of the most recent discoveries from any given scientific exploration.
“However, I still see these as tools to be used in tandem with humans, rather than in place of humans,” Williams added. “Given the limitations in data uplink and downlink with our current Deep Space Network, it is difficult for me to see a way to upload the knowledge base for something as complex as, for example, the current and historical data and context for the sources, sinks, and fates of organic molecules on Mars so that the onboard AI could generate a manuscript for publication,” she explained.
Putting it into context: Williams believes that cutting-edge planetary study necessitates “retrospection, introspection, and prospection.” We push the boundaries of science by investigating ideas that have never been examined before, she noted.
“Right now, my experience with ChatGPT has shown me that it is excellent at conducting a literature search and converting the results into an annotated bibliography. It could undoubtedly save me time while looking for basic information. It told me what we already knew (and wrote it up well!), but it wasn’t anything a Mars organic geochemistry doctoral student couldn’t tell me.”
Finally, Williams stated that, while ChatGPT + AI is a powerful tool that can improve the process of conveying information and new discoveries, “I don’t see it replacing the human-driven process of synthesizing new information and putting it into context to generate new insights into science. However, if every AI sci-fi film I’ve seen predicts the future, I could be incorrect!”