On not knowing something

By Sam Ruston · 10 May 2026

I feel like in the last few years, the friction between not knowing something and knowing something has become so thin that it’s practically gone for me. If I ever don’t know something, I can type my malformed query into an LLM and it will give me exactly what I asked for. Good bot. The transaction is complete, my brain’s thread is unblocked, and I carry on. The more I do this, the more I dislike that I do this. It’s just too darn efficient, and I think I’m missing out on all the good mess in between. That good mess can take many forms.

There’s a phenomenon, and I’m sure it has a name, where mathematicians often do their best work before they’re 40. That’s because by the time they’re 40, they are cursed with knowledge. They’ve read enough papers, and gone down enough analytical alleys, that they know which routes not to take. When you’re young, you don’t know what you don’t know, and you can walk the same walk that the people before you did, but maybe you’ll find something for yourself along the way. I think that’s kind of how I feel here. ChatGPT has read everything, so it’s walking those paths for us, and we’re missing out on the good mess in between.

I guess I’m partially romanticising this; often finding the answer is difficult in uninteresting ways, e.g. hunting for it on a website that is 95% ad space. And if you pose the question correctly, LLMs could perhaps open up new questions you hadn’t thought of. I don’t know the answer here, I just don’t like how ingrained it has become for me to reach for a chatbot to answer a question.