Linked List: January 2, 2025

Speaking of Simon Willison, I greatly enjoyed this post from last week, in which he shares some of the self-imposed principles he follows in writing his excellent eponymous blog. Amongst them:

  • I always include the names of the people who created the content I am linking to, if I can figure that out. Credit is really important, and it’s also useful for myself because I can later search for someone’s name and find other interesting things they have created that I linked to in the past. If I’ve linked to someone’s work three or more times I also try to notice and upgrade them to a dedicated tag. [...]
  • If the original author reads my post, I want them to feel good about it. I know from my own experience that often when you publish something online the silence can be deafening. Knowing that someone else read, appreciated, understood and then shared your work can be very pleasant.
  • A slightly self-involved concern I have is that I like to prove that I’ve read it. This is more for me than for anyone else: I don’t like to recommend something if I’ve not read that thing myself, and sticking in a detail that shows I read past the first paragraph helps keep me honest about that.

Every step of the way, I found myself nodding my head, thinking to myself, I do that too! — right down to creating tags for people after I’ve mentioned their work or simply credited their bylines a few times. (The difference is that Willison seemingly isn’t a procrastinator, and I am, so my decades of tagging aren’t yet exposed to anyone but me.)

Then I got to this:

There are a lot of great link blogs out there, but the one that has influenced me the most in how I approach my own is John Gruber’s Daring Fireball. I really like the way he mixes commentary, quotations and value-added relevant information.

And now it doesn’t seem quite as amazing that I was nodding my head in agreement with each of his guidelines. But, call me biased, it’s still a hell of a good start to a blogging rulebook.

‘Things We Learned About LLMs in 2024’

Simon Willison:

A lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments. [...]

I think telling people that this whole field is environmentally catastrophic plagiarism machines that constantly make things up is doing those people a disservice, no matter how much truth that represents. There is genuine value to be had here, but getting to that value is unintuitive and needs guidance.

Those of us who understand this stuff have a duty to help everyone else figure it out.

Nobody is doing a better job of that than Willison. I learned so much from reading this piece — I bet you will too.

Update: Anil Dash:

I think everyone who has an opinion, positive or negative, about LLMs, should read how @simonwillison has summed up what’s happened in the space this year. He’s the most credible, most independent, most honest, and most technically fluent person watching the space.

Couldn’t say it better myself.