On not knowing something
I feel like in the last few years, the friction between not knowing something and knowing something has become so thin that it’s practically gone for me. If I ever don’t know something, I can type my malformed query into an LLM and it will give me exactly what I asked for. Good bot. The transaction is complete, my brain’s thread is unblocked, and I carry on. The more I do this, the more I dislike that I do this. It’s just too darn efficient, and I think I’m missing out on all the good mess in between. That good mess can be:
- Finding the information myself (admittedly this can just be finding the right snippet on Wikipedia) and along the way finding out other, perhaps more interesting information. Such as Moonbow.
- Not finding out right now, and instead thinking about what it could be, often conversing with someone else. I find that the actual answer is often the most boring answer. Because now you know. The genie is out of the bottle. You have deprived yourself of all the imagining that could have been, and exchanged it for a little tidbit of information that you’ll likely cast from your mind because it has nothing of value to you now.
- Just deciding that the friction is too rough right now and then forgetting about it because I don’t actually care, and it’s not worth breaking the flow of conversation to look up this arbitrary information.
There’s a thing that happens, and I’m sure it has a name, where mathematicians often do their best work before they’re 40. That’s because by the time they’re 40, they are cursed with knowledge. They’ve read enough papers, and gone down enough analytical alleys, that they know which routes not to take. When you’re young, you don’t know what you don’t know, and you can walk the same walk that the people before you did, but maybe you’ll find something for yourself along the way. I think that’s kind of how I feel here. ChatGPT has read everything, so it’s walking those paths for us, and we’re missing out on the good mess in between.
I guess I’m partially romanticising this; often, finding the answer can be difficult in uninteresting ways, e.g. trying to find it on a website that is 95% ad space. And if you pose the question correctly, LLMs could perhaps open up new questions that you hadn’t thought of. I don’t know the answer here, I just don’t like how ingrained it has become for me to reach for a chatbot to answer a question.