
LLMs, Intuition, and Working With Computers

I recently watched Simon’s talk on the practical use of LLMs (and took notes). This slide stood out:

For the best [prompt] results, combine:

  • Domain knowledge of the thing you're working on
  • Understanding how the models work
  • Intuition gained from playing around with them a lot

I am by no means on the leading edge of LLMs. However, one thing I’ve noticed from listening to people closer to the leading edge than I am is this idea: nobody quite knows why LLMs give the results they do, and the results can’t be reliably repeated either (which is why experience and intuition are key to using them effectively).

In science, you say you “understand” something when you can describe how it works and reliably predict (and even manipulate) its outcomes.

The term “computer science” makes sense in this context, but LLMs seem to be introducing a shift away from the kind of determinism I’m familiar with in computers.

In programming, you learn the rules that allow you to manipulate the computer and get consistent, repeatable (and debuggable) results.

In prompting, you play with the computer, see how it responds, and through experimentation and experience you learn how to get roughly what you want, but never in a repeatable way.
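
To make that contrast concrete, here’s a toy sketch. It isn’t any real LLM API; the `toy_llm` function, its candidate words, and its scores are all made up for illustration. The point is the mechanism: the first function is deterministic and returns the same answer every run, while the second samples from a probability distribution (the way LLMs pick tokens at nonzero temperature), so its output can vary run to run even with an identical prompt.

```python
import math
import random

def add(a: int, b: int) -> int:
    """Programming: same inputs, same output, every single time."""
    return a + b

def toy_llm(prompt: str, temperature: float = 0.8) -> str:
    """A stand-in for an LLM: picks the next word by sampling from a
    probability distribution, so repeated calls can differ."""
    # Pretend these are the model's scores (logits) for candidate next words.
    candidates = {"Paris": 3.2, "London": 1.1, "Rome": 0.7}
    # Temperature rescales the scores: higher means flatter, more random.
    weights = [math.exp(score / temperature) for score in candidates.values()]
    return random.choices(list(candidates), weights=weights, k=1)[0]

print(add(2, 3), add(2, 3))              # always "5 5"
prompt = "The capital of France is"
print(toy_llm(prompt), toy_llm(prompt))  # usually "Paris Paris", but not always
```

Real models have more sources of variation than this (sampling strategies, batching, floating-point nondeterminism), but temperature sampling alone is enough to break run-to-run repeatability.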

I almost find it ironic. We can be so “data-driven” in our approach to making software (“We can’t do anything unless we scientifically prove, with repeatable results, that this thing is a net gain”) but with LLMs we’re just like, “Nah, it’s ok that we don’t fully understand why they work the way they do, and that we can’t get consistent, repeatable results because of it. Go ahead and release it to the world.”