Thus far, we have covered basic work with an agent, how to use it for real engineering development, and special considerations for scientists. But this leaves a gap: how do we humans work with agents on a daily basis? What does it feel like emotionally? And what is the bigger ethical picture?
What does flow state look like with agents?
I deeply value flow state / deep work. When it’s just me and a text editor, I know what that means: I close my eyes, take a few breaths, start the pomodoro timer, and drop into my happy place. Agents perturb that pattern. When they are working, what should we do?
For me it depends on the task.
If I am planning a major feature or refactor using Claude, I stay present.
I find this type of work mentally taxing, so the few seconds of breathing room while the model responds are welcome.
On the other hand, if it is working on a well-scoped task, I just leave it running on a dedicated monitor and keep an eye on it while I take care of something mundane.
This is enough for me. The internet is full of accounts of people running many agents simultaneously on independent parts of a codebase. More power to them! But whenever I have tried it, I have ended the day frazzled and holding a lot of bad code.
I say “please”
When I ask an agent to do something, I say “please.” I don’t think it results in any better performance; I do it for my own emotional state.
In the same way that I tell my older daughter that she will feel better if she treats her little sister with respect, I feel better if I am courteous to the model. Call me crazy.
Getting annoyed
Agents go off the rails sometimes. This is annoying.
In those cases I like to step back and think about how remarkable it is that any of this works.
Just how remarkable it is was highlighted for me by a recent incident in which Claude was giving incorrect outputs due to a mismatch in floating-point types, setting off a wider bug hunt.
And there you have it: these trillion-parameter probabilistic models are being deployed on heterogeneous hardware at absurdly large scale.
If Claude serves up Thai characters every once in a while, I can understand.
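
To make that concrete, here is a minimal sketch, assuming NumPy and entirely illustrative (the values and indices are made up, not taken from the real incident), of how a silent downcast between floating-point types can flip which token a model appears to choose:

```python
import numpy as np

# Purely illustrative: a silent downcast from float32 to float16 can change
# which of two nearly-tied logits is the maximum, i.e. which token gets emitted.
rng = np.random.default_rng(0)
logits_f32 = rng.normal(size=50_000).astype(np.float32)

# Two candidate tokens that are distinguishable in float32...
logits_f32[123] = np.float32(10.0000)
logits_f32[456] = np.float32(10.0005)

# ...collapse to the same value in float16 (spacing near 10 is ~0.0078),
# so the tie is broken by index order instead of by the true logits.
logits_f16 = logits_f32.astype(np.float16)

print(int(np.argmax(logits_f32)))  # 456 -- the genuinely larger logit
print(int(np.argmax(logits_f16)))  # 123 -- both rounded to 10.0; first index wins
```

Multiply that tiny ambiguity across trillions of parameters and a fleet of heterogeneous accelerators, and the occasional bizarre output starts to look almost inevitable.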
Don’t fall for sycophancy
At the opposite extreme, LLMs can sometimes make us feel so good that it becomes a problem. Enthusiasm from an LLM about an idea does not mean the idea is any good. Don’t be Travis Kalanick.
You are responsible for the design of your code. Agentic coding closes the gap between idea and implementation, but LLMs should not be doing your thinking for you. Read, think, and discuss with others.
The bigger picture
I’ve written thousands of words here, but have completely left out ethical concerns. These include doomsday scenarios, gradual disempowerment, environmental impact, unemployment, copyright, and bias in AI systems. I worry about all of these.
However, these concerns have not caused me to boycott AI use. For me, it comes down to:
- The purpose of my career is to advance scientific knowledge. I believe in the inherent value of science. I think AI can greatly accelerate our ability to learn about the world. If industry gets to use AI to develop apps that keep people glued to their smartphones, I want to be able to use it to advance science.
- If I want to be an active participant in the discussion around AI, especially regarding science, then I need to understand its capabilities from the inside.
I welcome questions, comments, or flames.
This is part 4 of a 4-part series on agentic coding:
- Agentic Coding from First Principles
- Agentic Git Flow
- Writing Scientific Code Using Agents
- The Human Experience of Coding with an Agent (this post)