Warts, Data Models, and Why AI Won't Make You Rich: October's Reading List

Although I am still working my way through a rather dense philosophy book recommended to me two months ago, work led me to dive into three technical books on UI design, in the hope of fixing some shortcomings in my current project. I have already read two of them:

Refactoring UI - Adam Wathan and Steve Shoger
This book is highly rated and its formatting feels luxurious, yet the experience of reading it left me seriously wanting. It merely contains an endless series of “you must do this” and “you must not do this” with very little guidance as to “why”. Or, rather, the “why” given is “because our current aesthetic norms say so”, which feels arbitrary and short-sighted. Perhaps more annoyingly, none of the recommendations felt fresh compared to my own past experience! And thus I am left with doubts as to whether it would help me improve my designs.
Practical UI - Adham Dannaway
This book is slightly less well rated than the previous one, yet just as luxuriously formatted. I started reading it after being disappointed by the previous one, and the first few chapters felt like “more of the same”, so the disappointment grew further. I was so irritated! I started skimming, expecting to close the book midway. But then I reached chapter 5 (“typography”), and there I actually started learning really good stuff (somewhat surprisingly, since my typography skills were already above average). All the chapters after that were also pretty good (I had not expected major insights about something as mundane as button color and placement!), and I even took notes by hand to memorize things better. So this one is a clear recommendation from me.

❦❦❦

Perhaps surprisingly given how busy October was, I was able to squeeze a lot of interstitial reading into it.

Starting with some operational and technical insights.

I was happy to add Seeing like a software company by Sean Goedecke to the stuff I will probably recommend in my mentoring. The author translates Seeing like a State (James C. Scott) to the world of corporations and outlines how corporations seek legibility above other things that individuals may care about, like quality or productivity. I think of it as an eye-opener for folks selling their labor in exchange for money and psychological safety, but I think this angle is especially useful to foster the right mindset when selling a B2B product to a larger customer.

At a more technical level, it was fun to see in Memory access is O(N^[1/3]) (Vitalik Buterin) a very compact version of a fundamental idea I ran into during my PhD research: that latency in systems is a cube-root function of storage size because of basic topology. It's really obvious in hindsight (especially after you spend enough time looking at physical circuits), but it took me a few years to fully internalize, and I still come across seasoned experts who don't fully grasp why the “memory hierarchy” looks the way it does. The author also provides a few rules of thumb to use when designing systems, and I think these are accurate.
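
For intuition, here is the back-of-the-envelope derivation (my own reconstruction of the argument, not a quote from the post): if $N$ bits are stored at a fixed density somewhere in three-dimensional space, the volume they occupy grows linearly with $N$, so the distance a signal must travel to reach the farthest bit, and hence the round-trip latency, grows like the cube root:

$$
V \propto N, \qquad r \propto V^{1/3} \propto N^{1/3}, \qquad t_{\mathrm{latency}} \;\gtrsim\; \frac{2r}{c} \;\propto\; N^{1/3}.
$$

(On a flat chip the same argument gives $N^{1/2}$; either way, bigger memories are unavoidably slower, which is why the memory hierarchy has to exist.)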

Also, in My Approach to Building Large Technical Projects, Mitchell Hashimoto (who co-founded HashiCorp) presents the main tricks that he uses to keep focused and motivated when working on something big. In a nutshell, it boils down to “keep the feedback loop tight”. I like it because he clearly acknowledges our basic human instincts while keeping his advice very pragmatic and actionable. It also happens to be quite relevant to my work currently!

Finally, on a transversal note, in Emails for Humans, Carter Mark and Jordan Gonen teach us how to write messages that make people feel valued and excited to connect. I experienced reading this as a no-nonsense and empathetic demystification of cold networking.

❦❦❦

At a more strategic level, I feel the following helped.

In You Want Technology With Warts, Christoffer Stjernlöf points out that “warts” (irregularities, special cases) most commonly appear in systems in order to preserve some behavior that was important to a customer in the past. Therefore, observing warts in an established product is a sign that the product designer cares a lot about their customers, and so it is strategically advantageous to select products with warts. To this, I would personally add that excessive warts are also a sign of a product designer who does not care enough, but I do agree with the basic point, and I also feel intuitively that a product that looks “too polished” is likely not useful enough to be adopted.

At a separate level, in Your data model is your destiny, Matt Brown articulates the idea that a business's data model is a representation of how it understands the needs of its customers: a business is successful either when it develops a new data model that hadn't been envisioned before (it discovers/creates an untapped market), or when it extends a known data model with innovative operators (and competes effectively with other businesses in the same market). At first I felt confused reading this, because it seemed like the author was merely inventing language isomorphic to “you should try to deliver new value to your market, and barring that, cheaper value”, which is really strategy 101. The actually powerful angle is that the author also gives us some litmus tests:

If you’ve already built a product, you can audit how powerful and correct your data model is. Open your database schema and see which table has the most foreign keys pointing to it. Is that the atomic unit your customers actually think in? List your product’s core actions. Do they all strengthen one central object, or are you building a feature buffet? What would break if you deleted your second-most important table? If the answer is “not much,” you probably have the wrong data model.

Test whether your data model creates compound advantages. When you add a new feature or product, does it automatically become more powerful because of data you’re already capturing? If your answer is “we’d need to build that feature from scratch with no inherited context,” you don’t have a compounding data model—you have a product suite. The right model creates natural expansion paths that feel obvious in retrospect but were invisible to competitors.
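
The first litmus test is trivial to mechanize, for the curious. Here is a minimal sketch — mine, not the article's — that assumes a local SQLite database named app.db (adapt the schema introspection to your own engine):

```python
import sqlite3

# Count, for each table, how many foreign keys elsewhere in the schema
# point at it -- a rough proxy for the schema's "atomic unit".
conn = sqlite3.connect("app.db")
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]

inbound = {t: 0 for t in tables}
for t in tables:
    # PRAGMA foreign_key_list yields one row per outgoing FK column;
    # index 2 of each row is the referenced table's name.
    for fk in conn.execute(f"PRAGMA foreign_key_list('{t}')"):
        if fk[2] in inbound:
            inbound[fk[2]] += 1

for table, count in sorted(inbound.items(), key=lambda kv: -kv[1]):
    print(f"{table}: {count} inbound foreign keys")
```

If the table at the top of that list is not the thing your customers actually think in terms of, that is precisely the mismatch the author is warning about.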

❦❦❦

In the “AI science” category, I was especially intrigued by Poisoning Attacks on LLMs Require a Near-constant Number of Poison Samples by Alexandra Souly et al. Anthropic, which sponsored this research, published an article summarizing the finding: A small number of samples can poison LLMs of any size. In a nutshell, no matter how large and complex (transformer-based) language models become, just a few pages or paragraphs in their training set can “poison” their output; that is, trick them into producing particular outputs that contradict the rest of their training data.

This finding gels well with another article I reviewed last January, in which the author posited that we have a moral duty to produce content that influences LLMs through training. With the new science, it turns out that not much writing will be necessary to preserve our human impact!

❦❦❦

Meanwhile, I have not talked to anyone in the tech industry recently who is still reluctant to consider generative AI in their work. But I know such folks exist. To them, I would recommend Why you need to adopt AI (as a software engineer) by Jamie Lawrence. The gist is a game-theoretic argument: if you don't invest, you stand to gain little and lose big; if you do invest, you stand to lose little and gain big. Besides, “you cannot adequately question something you haven't spent the time trying to understand.”

On the other side of that coin, Simon Willison is a slow but confident adopter, and he shared two field notes this month that raised my eyebrows. In Embracing the parallel coding agent lifestyle, he explains the opportunities and benefits of letting multiple AI agents do work at the same time. Subsequently, he offers us a new term, “vibe engineering” (to contrast with “vibe coding”), designating the work of a seasoned expert who drives AI agents to deliver work from specifications (written by the human) and applies strict, industry-established quality controls to their output. I'm not sure I like the term, but I do agree we need a word for that.

❦❦❦

Transitioning to more macro-economic views.

One angle I took some time to try to understand was AI Will Not Make You Rich by Jerry Neumann (a retired investor). If I understand it correctly, the argument is articulated in two parts. The first part observes that although technical innovation always follows an S-curve, there are two kinds of S-curves: those that make a bunch of money for many investors (e.g. the invention of assembly-line production) and those that make a bunch of money for a few owner-operators and very little money for investors (e.g. railways, shipping containers). The author also points out that each of these two categories has “tells” that we can pattern-match while they are happening. The second part of the argument then points out that the current AI technology wave doesn't seem to have the properties required for it to belong to the first category. (Noah Smith—an economics professor—reaches similar conclusions in America's future could hinge on whether AI slightly disappoints, albeit in a more urgent / alarmist tone.)

Jerry Neumann then concludes that AI tech might make some entrepreneurs rich but it will fail to make investors rich. There is a lot of science linked at the bottom of that article that I am still studying.

On a relatively tangential note, it was amusing to watch If Not Bubble... Why Bubble Shaped? by the team behind the podcast/channel “How Money Works”. The argument here is essentially “sure, all these AI companies inflate their numbers by passing their money around in circles, but it's their own money that they had saved up 10 years ago, so it's not retail investor money, so it's maybe not a bubble”.

To me, it seems that the structure of the US retirement fund industry pokes holes in that argument (as some commenters under the video were also quick to point out), but the video's authors also suggest that if all this money sloshing around materializes into broadly useful infrastructure (e.g. data centers, nuclear plants), that infrastructure itself might get bailed out by the government in the end. This reminds me more or less of what happened with railways at the turn of the 20th century. The lesson here, if there is one to take away, would be to keep investments broad.

In a different direction, yet also tangentially related: OpenAI Says Hundreds of Thousands of ChatGPT Users May Show Signs of Manic or Psychotic Crisis Every Week (Louise Matsakis writing for Wired; archive link). The macro-economic story here is that the integration of LLM chatbots into society at large is creating new health hazards that we now need to deal with. Expect strain on the economy at large (like leaded petrol caused in the 1970s by literally making everyone dumber) and new pressure on the mental healthcare industry (which is already woefully inadequate). This is bound to erode many of the gains the AI industry may otherwise bring.

❦❦❦

Moving on and away from AI news: a large part of my volunteering time is spent helping others look at the world and identify trends—towards identifying in which ways the world remains the same over time, and in which ways things change between generations.

Towards this, I discovered a few gems. In The Decline of Deviance, Adam Mastroianni shows, through lots of data, how the amount of diversity in human civilization (as seen through the lenses of art, culture, science, innovation, etc.) has declined steadily over the last hundred years. The choice quote:

Whenever you notice some trend in society, especially a gloomy one, you should ask yourself: “Did previous generations complain about the exact same things?” If the answer is yes, you might have discovered an aspect of human psychology, rather than an aspect of human culture.

[...] So as far as I can tell, the decline of deviance is not just a perennial complaint. People worrying about their culture being dominated by old stuff—that’s new.

Overall, this author judges this evolution to be a “good thing”. Declining deviance also means declining crime, declining incurable diseases, and fewer major crises that upheave many people's lives at once.

However, he also points out that this may lead to a lower rate of innovation. Along the way, he links to The flight of the Weird Nerd from academia, a good piece by Ruxandra Teslo which resonates a lot with my personal experience and explains a point that is dear to me: the trend whereby emergent corporate forces capture all the “good” nerd intelligence into “bad” offices with golden handcuffs (or addictive algorithmic feeds). What this means, at the macro level, is that we might simply not be able to respond to our major crises in time, because smart people might not even notice them (or understand how they could help).

In a different direction, I was curious to read this bit of anthropological and political analysis: Addiction and Liberty. There, the author Matthew B. Lawrence identifies how addiction and liberty are fundamentally at odds (as I understand it, mainly because addiction robs individuals of free will, and also because addiction has increasingly become a weapon wielded by the wealthy to control the poor), and argues that we should therefore feel morally compelled to push for a right to freedom from addiction as a foundation for democracy. The “call to action”, should there be one, would be to push our elected representatives to develop legal protections suitable for this new digital age.

❦❦❦

There were also bits of very uplifting news this month.

A bit of medical news that attracted my attention was Intercellular communication in the brain through a dendritic nanotubular network (Minhyeok Chang et al.). This finding suggests that there is more communication happening in the brain than we thought possible (i.e. more than just through synapse-dendrite connections). If confirmed, this would be the first major shift in decades in our understanding of the basic functioning of the brain. In the authors' own words:

The discovery of this parallel neural network opens up entirely new avenues for research into brain connectivity, intercellular communication, and the fundamental processes driving neurological disease.

Meanwhile, a funny yet powerful political thing that happened was summarized in this Politico article: One-man spam campaign ravages EU ‘Chat Control’ bill (reporter: Sam Clark). In a nutshell, there was a bad thing happening at the EU level (ignorant politicians mistakenly trying to pass a law that would restrict individual freedoms), and “some guy” in Denmark made a web site that teaches EU citizens how to complain loudly about it to their representatives. A bunch of people used his advice, the representatives took notice, and the bad thing was postponed. Democracy: 1-0. Of course, some ire was initially directed at the web site and the Danish person behind it, but 1) that person was clever enough to remain anonymous, and 2) politically, this retaliation looks too much like “shooting the messenger” and so is unlikely to go anywhere. I find this story supremely inspiring, because it provides a template we can follow again and again if need be.

The last bit of news I'd share today, mostly because it kept a smile on my face for a full day, was the report Global Electricity Mid-Year Insights 2025 (Malgorzata Wiatros-Motyka, Kostantsa Rangelova). According to the think tank behind the analysis (Ember Energy), “Solar and wind outpaced demand growth in the first half of 2025, as renewables overtook coal’s share in the global electricity mix.” There were three points in particular that I was happy about:

  • the increase in solar and wind covered 109% of the increase in demand worldwide (i.e., renewable production is now growing faster than our need for more energy; see the worked numbers after this list).
  • as a result, renewables' share of global electricity rose to 34.3% (from 32.7%), while coal's share fell to 33.1% (from 34.2%).
  • CO₂ emissions from power generation fell slightly worldwide (by 12 Mt) in the first half of 2025—despite a large increase in energy demand.
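
To make the arithmetic linking these three bullets explicit (my own back-of-the-envelope, using only the 109% figure from the report): if worldwide demand grew by $\Delta D$, then

$$
\Delta_{\text{solar+wind}} = 1.09\,\Delta D \quad\Longrightarrow\quad \Delta_{\text{all other sources}} = \Delta D - 1.09\,\Delta D = -0.09\,\Delta D,
$$

i.e. everything that is not solar or wind had to shrink slightly in absolute terms, taken collectively—which is how coal's share could fall and CO₂ emissions could dip even as total demand rose.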

It is still far from what we need to achieve collectively, but it is something, and we now have good evidence to show the doomsayers who used to say “trying to reduce fossil fuel usage and CO₂ production for power generation is impossible and/or will be the doom of society.” We're doing it, and the sky is not falling.

❦❦❦

In closing, a few parting thoughts that can dignify our lives.

The one that most resonated with my mood this month was You Could Just Choose Optimism (Carter Mark and Jordan Gonen). The world has too many complainers, and I feel we are often stuck in place because we give complainers too big a platform. It really kills the mood and our collective ambitions. The authors' offer, if you choose to take it, is to keep upholding your standards but not vocalize them as much. To choose optimism. The upside? “It's a refreshing feeling to be anywhere (at work, traveling, at home, at a bar) around a group of people who are jolly and optimistic. The world feels new.”

Another thought I am keeping close to my heart is the one stated in That's what AI can't replace by Tom Youngs, reminding us of what AI won't take away from the next generation: “identity, community, and a place to feel like they belong bonded by values, beliefs and interests.” (The other “slides” in that IG post are also cute.) As long as we continue to foster these spaces, we will continue to feel empowered.

And one way to move in this direction is to take action today to reclaim our attention, to free it towards endeavors that involve other people. In Smartphones and being present, Herman Martinus teaches us how to reduce screen time from an average of 4-6 hours per day to 30 minutes or less, and how to become “a more present, less distracted, and more optimistic person.” Try it!

❦❦❦

References: