Before the era of AI, if you needed to solve a problem, you would do a search, find documentation or a blog post or a StackOverflow answer, and start reading. Most of the time, you could find something that was kind of in the ballpark of the problem you were having, but not quite exactly what you were looking for. Maybe you could use that information to solve your problem. Maybe you needed to keep looking. But as you looked for a solution to your problem, you ran across a lot of solutions to other problems. Maybe it would immediately register with you: “Oh, that’s cool. I need to keep that in mind.” Maybe you’d get annoyed that it wasn’t quite what you needed right now, but it would still get tucked away in some dark corner of your brain.

That stuff you learned along the way? That was Collateral Knowledge. Stuff you weren’t looking for, but you learned as part of the adventure.

When you know how to ask the right questions, AI makes that quest for knowledge a thing of the past. You can describe your problem to AI, and if the solution was something you could have found on the Internet, it will explain the solution cleanly in the context of your specific problem. No more spending hours reading docs looking for the specific thing you needed. No more reading through blog posts that solve a problem that’s adjacent to the one you’re solving. It’s a huge time saver in the moment, but it means you missed out on the collateral knowledge you would have gained the old way.

Now, I’m not convinced this is a net loss. Most of the time, collateral knowledge came in handy when I had some new problem to solve, letting me skip most of the searching and reading I would have done otherwise. If I can get to solutions to my problems quickly with AI, I don’t need to rely as much on things I might have picked up during past searches.

But sometimes collateral knowledge leads to a paradigm shift in how I go about solving a problem. Rather than simply saving some time searching, it puts the problem in a fundamentally different framing that leads to a very different solution than I would have arrived at otherwise.

By way of example: When I started Rivet.cloud, there were a few other teams trying to scale Ethereum nodes by giving multiple nodes access to a shared database, separating compute from storage and avoiding the reprocessing of blocks. The problem these projects kept running into was that network latency in accessing the database made block processing too slow to keep up with the network, so nodes that took that approach fell behind. But because of some collateral knowledge I’d gained through a conversation with a friend about a different problem earlier, I thought it might work better to use streaming replication to process the data once and distribute it to a pool of servers that could handle requests. Variations on that solution served us well for years, and I’m not sure I would have reached it without the collateral knowledge of streaming replication.
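To make the pattern concrete, here is a minimal sketch of that process-once, stream-to-replicas idea. All of the names and data shapes here are illustrative inventions for this post, not Rivet.cloud’s actual implementation: a primary does the expensive block processing exactly once, then streams only the resulting state changes to a pool of replicas that serve reads.

```python
import copy

class PrimaryNode:
    """Processes each block exactly once, then streams the derived
    state changes to every attached replica. A toy illustration of
    the streaming-replication pattern, not a real node."""

    def __init__(self):
        self.state = {}      # e.g. account -> balance
        self.replicas = []

    def attach(self, replica):
        # New replicas start from a snapshot of current state.
        replica.state = copy.deepcopy(self.state)
        self.replicas.append(replica)

    def process_block(self, block):
        # The expensive work (executing transactions) happens once, here.
        changes = {}
        for account, delta in block:
            changes[account] = self.state.get(account, 0) + delta
        self.state.update(changes)
        # Replicas receive only the resulting changes; they never
        # re-execute the block, so latency stays off the hot path.
        for replica in self.replicas:
            replica.apply(changes)

class Replica:
    """Serves read requests; applying streamed changes is cheap."""

    def __init__(self):
        self.state = {}

    def apply(self, changes):
        self.state.update(changes)

    def get_balance(self, account):
        return self.state.get(account, 0)

primary = PrimaryNode()
replicas = [Replica() for _ in range(3)]
for r in replicas:
    primary.attach(r)

primary.process_block([("alice", 10), ("bob", 5)])
primary.process_block([("alice", -3)])
print(replicas[0].get_balance("alice"))  # 7
```

The contrast with the shared-database approach is that replicas only apply precomputed diffs, so no replica ever pays the cost of re-executing a block.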

That said, sometimes you can get a fundamentally different solution out of AI by asking the right question. Stack Overflow has long discussed the XY Problem: the notion that if you’re trying to solve problem X and you think the solution is Y, you should ask about how to solve X instead of asking about how to do Y. If Y is a good solution, someone will tell you how to do it. If there exists some better solution, Z, someone will tell you how to do that instead. But if you asked about Y, you were only going to get answers about Y, and never learn why Z is a better solution to X. This pattern applies to modern AI as well; you should ask AI how to solve the problem you’re facing, rather than asking it about the solution you’re assuming is best. It may still miss some of the more innovative solutions, but it will generally lead you to industry best practices for frequently solved problems.

I also find that sometimes AI leads to collateral knowledge anyway. I’ll be asking it about one particular topic and it will mention something tangentially related. If that seems interesting, I’ll ask for more detail about it just because it seems worth learning about.

By no means am I telling you not to use AI to find solutions. But be aware of the collateral knowledge you might be sacrificing as a result, and look for opportunities to supplement what you’re learning by digging deeper on things that may not seem immediately relevant; you never know when it might lead to your next innovation.