

It’s Texas.
Guess NASA didn’t want them writing SMTP rules while circling the moon.
On this day of global reconciliation, they should split the difference and go for IPv5.



Remembered I had an unopened Pi5-8GB on the shelf.
The MacBook has 128GB of RAM, so it can run some beefy local models. The Qwen family of models is pretty good, but I use LM Studio to switch around. For experimenting, it's good to have a large SSD. None of the local models are as good or fast as the big/centralized ones, but the code stays on the machine, and you don't have to pay monthly or token fees.
For serious dev work, you’ll still want at least one of the big ones, in case the local one gets into a loop.
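One nice property of LM Studio is that it exposes an OpenAI-compatible HTTP endpoint (by default at http://localhost:1234/v1), so falling back from a local model to a hosted one can be mostly a matter of swapping the base URL. A minimal stdlib-only sketch; the model name and port here are placeholder assumptions, use whatever LM Studio shows for the model you actually have loaded:

```python
import json
import urllib.request

def build_chat_request(prompt, model="qwen2.5-coder-32b-instruct",
                       base_url="http://localhost:1234/v1"):
    """Build an OpenAI-style chat completion request for a local server.

    model and base_url are placeholders -- point them at whatever
    LM Studio (or a hosted provider) is actually serving.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    # Requires a model loaded and the LM Studio server running locally.
    req = build_chat_request("Write a haiku about RAM.")
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
        print(body["choices"][0]["message"]["content"])
```

Because the request shape is the same either way, the "local one got into a loop, retry against a big provider" fallback is just a second call with a different `base_url` and an API key header.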
The MacBook has slightly better speakers, but I mainly use Bluetooth headphones to listen to music or videos.
I used to develop iOS apps professionally. For a solid 5-6 years I used nothing more than a MacBook Air. It was light, easy to take to a coffee shop, and I could run VMware or Parallels on it for Linux and Windows development. It worked great, especially if you could connect an external monitor so each window could be its own OS. The two things you can't change later are RAM and built-in SSD storage. I'd take those as high as you can afford.
My current machine is a MacBook Pro, but that's because I run local LLMs and databases on it. If I were only doing mobile development, it would be way overkill. Not sure if the Neos have enough power to run Xcode.


It’s not just the compiler. It’s all the libraries.
Was jokingly going to suggest a WASM version. But then: https://github.com/ffmpegwasm/ffmpeg.wasm


EFF supporter for years. Have so many of their t-shirts (amazing designs, btw). Cindy Cohn is the real deal. Anyone online should go pay attention to them.

Everybody with a college course in Thermodynamics.


The problem with CAG isn't just that it hogs memory; to keep it fresh you also have to keep re-indexing. If the corpus is large and dynamic, it can easily fall out of date and, at runtime, blow out the context window.
GraphRAG has some promise. NVIDIA has a playbook for converting text into a knowledge graph: https://build.nvidia.com/spark/txt2kg
It'll probably have the same reindexing issues, but that will be a common problem until someone comes up with better incremental training/indexing.
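One low-tech way to soften the reindexing cost (not from the NVIDIA playbook, just a common pattern) is to fingerprint each document and only rebuild the entries whose content actually changed, rather than re-running the whole corpus. A hedged sketch:

```python
import hashlib

def fingerprint(text: str) -> str:
    """Stable content hash used to detect changed documents."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def plan_reindex(corpus: dict[str, str], index_state: dict[str, str]):
    """Diff the live corpus against the fingerprints from the last index run.

    corpus:      doc_id -> current text
    index_state: doc_id -> fingerprint recorded when it was last indexed
    Returns (to_index, to_delete); only these docs need touching.
    """
    to_index = [
        doc_id for doc_id, text in corpus.items()
        if index_state.get(doc_id) != fingerprint(text)
    ]
    to_delete = [doc_id for doc_id in index_state if doc_id not in corpus]
    return to_index, to_delete

# Example: one doc edited, one unchanged, one deleted since the last run.
state = {"a": fingerprint("old text"),
         "b": fingerprint("same"),
         "c": fingerprint("gone")}
corpus = {"a": "new text", "b": "same"}
print(plan_reindex(corpus, state))  # (['a'], ['c'])
```

This doesn't solve the knowledge-graph side (edge updates can still fan out well beyond the changed document), but it at least turns "reindex everything nightly" into "reindex the delta".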


Looks interesting. Will give it a whirl on my home server.
In this article, they talk about bringing up a local RAG system to let people run an LLM off a large document corpus: https://en.andros.dev/blog/aa31d744/from-zero-to-a-rag-system-successes-and-failures/
Wonder if this, connected to something like that and wrapped in an easy, end-user-friendly script or UI, could be a good combination for a local, domain-specific, grounded knowledge base?
There was a cheapo Japanese restaurant downtown. Plastic everything. Went there for lunch a while back. Worst Bento box ever.
Six months later. Hmm, Bento box sounds good. Go to this Japanese restaurant. Halfway through the awful meal, remember I’d been there! Swore never to go back. Again.
This cycle repeated SIX times.
What broke it was the whole building burning to the ground because of a grease fire.
Point is… hmm… Bento for lunch sounds good.



Not unusual, but it's surprising how many times it has come in handy at group gatherings.
Says so right on the box.
A U.S.-made robot, designed to play sports.