Technical Knowledge Both Illuminates And Blinds

I believe in knowing a few things very deeply: “depth over breadth”.

As one concrete example, over the years I've built up computing knowledge spanning high-level web applications, systems and network programming, low-level microcontroller code, and all the way down to logic-gate design.

I can say with full confidence that I do not know everything about any of these areas, but I know the fundamentals and the boundaries of what is possible in each. This has granted me all the pieces to understand the modern computer system stack at any level of detail necessary for the task at hand.

What one realizes is that the same fundamental problems from the lower levels recur, in different clothes, at the higher levels.

Caching (or indexing) versus recomputing is a storage-versus-latency trade-off present at almost every level.
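
A minimal sketch of that trade-off in Python, with Fibonacci standing in for any expensive computation (the standard library's functools.lru_cache plays the role of the cache):

    import functools

    # Recomputing: every call pays the full latency cost.
    def slow_fib(n):
        return n if n < 2 else slow_fib(n - 1) + slow_fib(n - 2)

    # Caching: spend memory on stored results to cut latency on repeated work.
    @functools.lru_cache(maxsize=None)
    def cached_fib(n):
        return n if n < 2 else cached_fib(n - 1) + cached_fib(n - 2)

The cached version answers repeated subproblems from storage instead of recomputing them; the price is the memory the cache occupies.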

Functional programming is conceptually the same model as combinational logic in digital RTL design: the outputs are a pure function of the inputs, with no hidden state.
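
To illustrate the correspondence, here is a one-bit full adder, which in RTL would be plain combinational logic, written as composed pure functions (a sketch in Python, with bits represented as the integers 0 and 1):

    # Pure functions: no state, the outputs depend only on the inputs,
    # exactly like gates in a combinational circuit.
    def half_adder(a, b):
        return a ^ b, a & b            # (sum, carry)

    def full_adder(a, b, carry_in):
        s1, c1 = half_adder(a, b)
        s2, c2 = half_adder(s1, carry_in)
        return s2, c1 | c2             # (sum, carry_out)

Composing the functions is wiring the gates.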

Likewise, keeping state and advancing it (e.g. over time, or when a signal is triggered) in sequential digital circuits is exactly the problem solved by almost every piece of application-level code.
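
A sequential circuit holds state in a register and computes the next state on each clock edge; a reducer-style update loop does the same with a pure step function and incoming events. As a sketch (the event names and state shape here are invented for illustration):

    # Register plus next-state logic, in application code: all state lives
    # in one place, and a pure function computes the next state from the
    # current state and an incoming event (the "clock edge").
    def step(state, event):
        if event == "increment":
            return {**state, "count": state["count"] + 1}
        if event == "reset":
            return {**state, "count": 0}
        return state

    state = {"count": 0}
    for event in ["increment", "increment", "reset", "increment"]:
        state = step(state, event)     # the register updates once per "tick"
    print(state)                       # {'count': 1}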

Knowing a few things very deeply has paid off by giving me ready knowledge of the well-known classical solutions from another context or another perspective.

But, as a counterpoint to that, I've largely become the embodiment of “to someone with a hammer, everything looks like a nail”.

Having a preconception of how something was solved in the past does give you a head start when the new solution is similar or derivative. In other ways, though, it funnels you away from novel, bleeding-edge explorations.

For one, I never saw computers being capable of writing intelligent-sounding prose or creating seemingly creative work, as Large Language Models (LLMs) and Generative AI (GenAI) are demonstrably able to do.

In my own mental model, computers just carry out well-specified instructions according to human-engineered code and the (often untrusted) input data.

And indeed, at the lowest level, that's all they do.

Notwithstanding, with just the right (and very large) set of model weights, this purely mechanistic object is able to produce intelligent writing, indistinguishable from human writing, alongside a superhuman breadth of knowledge.

I remain surprised and attentive to what the implications of that may be.