ai @ work and the future of science, February 26
I’m a newsletter person. I like to read little snapshots of what’s going on in the world, different thoughts from different people delivered straight to my inbox, without having to seek them out. I'm subscribed to the New York Times Global for international news, the arXiv newsletter for new planetary science papers, and two daily newsletters: TL;DR (on the tech industry) and Payload (on the space industry).
Like everyone these days, the tech and space industries talk about AI frequently. TL;DR in particular is full of think pieces on how to use AI best, when and how AI will replace our jobs, and the latest AI research. I agree or disagree with these articles to varying degrees, but I’m consuming this content daily.
I personally feel obligated to try out all the latest tools. I’m a PhD student who wants to stay in academia, and I’m very aware of the reality of publish-or-perish. The faster I am at producing models and papers, the better.
At ETH (where I work), AI comes up far less frequently in conversations with colleagues than it does in my own head. They talk about using ChatGPT (no mention of other models or tools) to write or to code, but there’s little talk about the nuances of being replaced by AI, how AI will change the field, or how to use AI best. I know these discussions are happening at ETH, but the ones I’m privy to on the future of the field focus on future telescopes, how the field will mature with the discovery of thousands more exoplanets, and how to better integrate with geology, chemistry, and biology.
This disconnect is jarring. AI is constantly on my mind, and I want to understand what the scientists I work with daily think it means for the future of the field. In particular, I keep contemplating what it would mean to be replaced. To be an excellent researcher, I believe I need to work hard, be good at problem-solving, foster good collaborations, and have creative research ideas.
For now, AI is helping me work more efficiently by writing code and helping revise papers. It can even help me solve problems, although I still rely on my own experience and intuition for particularly intractable issues, and turn to experts and peers for conversation when that fails. On the problem-solving front, I do worry that using AI will slowly erode my brainpower. Working well with others and communicating scientific ideas still falls solely on me.
What concerns me most is whether AI will one day be able to generate creative research ideas. I know that if I want to become a professor one day, I will have to come up with a truly original idea and contribute something new to the field. I keep track of my ideas in a document, jotting down notes when I notice weird things in the data I work with, hear interesting talks, come across something in a paper, or when something just zaps into my brain.
Currently, I can describe the model I want to an AI and, with some iteration, it will build it; after all, AI is already creating entire apps. For the physics, I still need to read papers to find the equations, thermodynamics, and physical ideas, and to check the AI’s work; AI is not quite there yet (I’ve tried). When the work is done, I still rely on my own memory to recall useful papers and to interpret the data (I’ve tried using AI for this; it’s garbage so far). However, we are fast approaching a point where these ideas could be turned into models, and then into papers, by AI with less input from me.1
But I’m still the person who comes up with the idea. What happens if AI can come up with ideas, too? For now, I’m still the originator of the work, but I don’t know how long this will last or how it will change my abilities as a researcher.
1 We already have to worry about garbage papers; this problem will get worse.