I'm currently reading multiple books at once, so progress on each is slow. There's just one that I more or less finished this month:
- Thinking, Fast and Slow - Daniel Kahneman
I read this because it showed up numerous times as a reference in other things I was reading last year. The main message is Daniel Kahneman's model of the mind: two “systems”, which he calls “system 1” and “system 2”, working together. The first thinks fast but is intuitive and easily misled; the second thinks slowly but is methodical and rigorous. The book presents many experiments that illustrate the relationship between the two systems, what we can expect from each, and suggestions on how best to use system 1 while limiting the mistakes it makes.
After reading this, I would say the model is intriguing, and some of the suggestions are perhaps worth investigating further. One thing that bothers me is that some of the key experiments the author uses to support his model have failed to replicate since. The book also cites relatively few outside studies to ground its claims. It feels somewhat less scientific than other things I've read about how the mind works.
Would I recommend reading this? The ideas from this book are assumed to be common knowledge in certain circles nowadays, so it arguably qualifies as “general culture”. Some of my friends speak highly of it. But perhaps reading an abridged version would be sufficient; I was skimming towards the end.
❦❦❦
Otherwise, there were a few things around the topic of business and operations that taught me something last month.
In The value of Pareto’s bottom 80%, Byron Sharp and Charles Graham point out that the research data shows something different from what people think they know about the Pareto ratio (80/20). It turns out that the bottom 80% of customers (sorted by revenue per customer) are responsible for about 40% of sales, not 20%! In the traditional reading of Pareto's law, the bottom 80% are not “worth” catering to, because it is believed (incorrectly, it turns out) that marketing properly to just the top 20% of customers will secure 80% of the revenue anyway.
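To convince myself, I put together a quick simulation. The model and every number in it are my own assumptions, not the authors' data: buying rates drawn from a gamma distribution and purchase counts from a Poisson, the classic NBD setup used in this line of research.

```python
import numpy as np

# Toy self-check (my assumptions, not the paper's data): heterogeneous
# buying rates from a gamma distribution, purchase counts from a
# Poisson -- the classic NBD setup. The gamma parameters are arbitrary.
rng = np.random.default_rng(42)
rates = rng.gamma(shape=0.7, scale=10.0, size=100_000)  # per-customer buying rate
sales = rng.poisson(rates)                              # purchases per customer

sales = np.sort(sales)[::-1]                            # heaviest buyers first
top20 = sales[: len(sales) // 5].sum() / sales.sum()
print(f"top 20% of customers: {top20:.0%} of sales")      # roughly 60%, not 80%
print(f"bottom 80%:           {1 - top20:.0%} of sales")  # roughly 40%, not 20%
```

Even with a heavily skewed distribution of buying rates, the bottom 80% end up contributing about the 40% of sales the authors report, far from the 20% the folklore predicts.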
Unrelatedly, in Strong Opinions Loosely Held Might be the Worst Idea in Tech, Michael Natkin denounces a behavior that seems prevalent (and matches my own experience): people stating opinions as fact in rooms full of diverse people, to the point that less experienced, or differently experienced, colleagues become uncomfortable. I had already developed the same approach as the one recommended in this article on my own, but I had felt isolated in this view, so I am relieved to see it spelled out by someone else.
Finally, in You Didn't See It Coming, Aishwarya Goel points out that we have lost the art of sharing authentic stories about what we do and how our achievements came to be. The truth gets edited after the fact to make it seem like the achievement was planned from the start; that we always wanted what we ended up obtaining; and that things turned out the way we had prepared for. All lies, of course, and insidious ones: they give younger or less experienced people a false picture of what a life path looks like, and lead them to develop unreasonable expectations for themselves.
This is connected to the idea of lemon markets. In Selling lemons, Frank Chimero explains how information asymmetry between sellers and buyers tends to pull the quality of products downwards (I sketch a toy version of this dynamic below the quote). Choice quote:
If a buyer can’t distinguish between good and bad, everything gets priced somewhere in the middle. If you’re selling junk, this is fantastic news—you’ll probably get paid more than your lemon [a bad old car] is worth. If you’re selling a quality used car, this price is insultingly low. As a result, people with good cars leave the market to sell their stuff elsewhere, which pushes the overall quality and price down even further, until eventually all that’s left on the market are lemons.
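Out of curiosity, I wrote a toy version of this spiral; the setup and numbers are mine, not Chimero's, and are only meant to make the mechanism visible:

```python
import numpy as np

# Toy lemons market (my own illustration): buyers can't observe
# quality, so they offer the average; sellers whose cars are worth
# more than the offer leave; repeat. All numbers are invented.
rng = np.random.default_rng(7)
quality = rng.uniform(0.0, 1.0, size=10_000)  # each seller's true car quality
for step in range(6):
    offer = quality.mean()                    # buyers pay the expected quality
    quality = quality[quality <= offer]       # above-average sellers walk away
    print(f"round {step}: offer={offer:.3f}, sellers left={len(quality)}")
```

Each round, the offer and the number of sellers roughly halve: the price and the quality ratchet downwards together, exactly the spiral described above.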
This relates to my point above as follows: in an online world where people craft appealing but inauthentic stories about themselves, and where others cannot tell the truth apart from the embellishment, the “better”, authentic relationships start developing outside of the online world, with no need for public online profiles. The “good” people use online profiles less and less, until the only profiles that remain are the fakest.
Oddly, this point resonates with two other pieces I came across last month. In The Case Against Social Media is Stronger Than You Think, Nathan Witkin explains in excruciating numerical detail how social media has pushed nuanced, constructive discourse out of the public online sphere. So much so that a lot of people are noticing, including the creators: in The Last Days Of Social Media, James O'Sullivan argues that the public platforms are currently trending down (literally!), because they still haven't figured out how to capture trust between people and the true quality of connections, and the people who care about the world no longer want to contribute to that problem.
❦❦❦
Moving on to the latest LLM science news.
In Beyond Orthogonality: How Language Models Pack Billions of Concepts into 12,000 Dimensions, Nicholas Yoder explains an important (and interesting) mathematical result: you can “compress” knowledge about data into far fewer dimensions than naive linear-algebra intuition (one orthogonal direction per concept) would suggest. The explanation is short and effective, and teaches us something about why LLMs “work” without being too ham-fisted about it.
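Out of curiosity, I ran a small sanity check of the underlying geometric fact (the numbers below are my own choices, not Yoder's): random unit vectors in a high-dimensional space are nearly orthogonal to one another, which is why an embedding space of roughly 12,000 dimensions can host far more than 12,000 almost-independent directions.

```python
import numpy as np

# Sanity check (my own numbers, not from the article): random unit
# vectors in high dimensions are nearly orthogonal. d is in the
# ballpark of large-model embedding widths; n is arbitrary.
rng = np.random.default_rng(0)
d, n = 12_288, 1_000
v = rng.standard_normal((n, d))
v /= np.linalg.norm(v, axis=1, keepdims=True)      # normalize to unit length

cos = v @ v.T                                      # all pairwise cosines
off_diag = np.abs(cos[~np.eye(n, dtype=bool)])
print(f"typical |cosine|: {off_diag.mean():.4f}")  # ~0.007, on the order of 1/sqrt(d)
print(f"worst pair:       {off_diag.max():.4f}")   # ~0.05, still nearly orthogonal
```

And if you tolerate a small deviation ε from exact orthogonality, the number of directions you can fit grows roughly exponentially in ε²·d, which is the article's central point.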
Meanwhile, in OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws, Gyana Swain shares some thoughts on a recent article by OpenAI researchers (Why Language Models Hallucinate, by Adam T. Kalai et al.). In a nutshell, the paper argues mathematically that hallucinations are a fundamental behavior of LLMs, which means we won't ever be able to remove them entirely. I still sometimes meet non-technical people who are optimistic that “someone, in the future, will fix hallucinations”, so I am glad to now have a link with which to teach that view away.
❦❦❦
Shifting view to LLM applications, I was curious to discover Getting AI to Work in Complex Codebases by Dexter Horthy. In it, the author suggests multiple strategies to compress the complexity of a large existing project into something that fits the limited abilities of agentic AI. Several of the suggestions match my own experience, so I will probably explore these findings further.
This also connects well to the ideas put forth in The AI coding trap by Chris Loy. It contains pretty, illuminating graphs that capture the zeitgeist of commonly held beliefs about “AI in tech” in 2025. The “trap” of the title is that people who practice “vibe coding” inevitably get blocked once their project reaches a certain degree of complexity. The suggested way out is to emphasize specifications, monitoring (and review), documentation, and modular design throughout the process. These are the mechanistic components of the strategies identified by Dexter Horthy above; I fully vouch for their effectiveness (I practice this already).
❦❦❦
I also want to regularly step back and reflect on how technology shapes our societies. I ran across two thought-provoking pieces on this topic.
In Reverse Centaurs, famous sci-fi author and renowned transparency activist Cory Doctorow uses a story of gross AI misuse in a corporate setting to teach us a lesson about “centaurs” (people who do stuff, assisted by technology) and “reverse centaurs” (technology that does stuff, assisted by people). This text shone all the brighter because I had just read Greatest irony of the AI age: Humans being increasingly hired to clean AI slop (Satyen K Bordoloi).
Cory Doctorow actually nailed the main choice that we face:
AI hucksters, desperate to keep their stock bubble inflated, will tell you that there is only one way that this technology can be used: to fire a whole ton of workers and make the survivors do their job at frantic Lucy-in-the-chocolate-factory cadence. While it’s true that this is the only way that their companies could possibly be worth the hundreds of billions of dollars that have been pumped into them (so far), there’s no iron law that says that investors in tech bubbles should always turn a profit (indeed, anyone who’s lived through this century knows that the opposite is far more likely).
The fact that the only way that AI investors can recoup their investment is by turning us all into reverse-centaurs is not our problem. We are under no obligation to arrange our affairs to ensure their solvency. In 1980, Margaret Thatcher told us, “There is no alternative.” In 1982, Bill Gibson refuted her thus: “The street finds its own uses for things.”
❦❦❦
At a more macro level, I enjoyed reading this mini essay by the famous and prolific mathematician Terence Tao. The part of his argument that most resonated with my lived experience is this one:
My tentative theory is that the systems, incentives, and technologies in modern world have managed to slightly empower (many) individuals, and massively empower large organizations, but at the significant expense of small organizations, whose role in the human societal ecosystem has thus shrunk significantly, with many small organizations either weakening in influence or transitioning to (or absorbed by) large organizations. While this imbalanced system does provide significant material comforts (albeit distributed rather unequally) and some limited feeling of agency, it has led at the level of the individual to feelings of disconnection, alienation, loneliness, and cynicism or pessimism about the ability to influence future events or meet major challenges, except perhaps through the often ruthless competition to become wealthy or influential enough to gain, as an individual, a status comparable to a small or even large organization.
This is one of the key problems that my current project is trying to tackle (more news on this later).
To close, I encourage you to watch the latest U.N. speech by Finland's president. It is a poignant yet extremely well-researched and well-argued explanation of how nations are currently battling over how power is divided in the world.
❦❦❦
References:
- Daniel Kahneman - Thinking, Fast and Slow
- Byron Sharp and Charles Graham - The value of Pareto’s bottom 80%
- Michael Natkin - Strong Opinions Loosely Held Might be the Worst Idea in Tech
- Aishwarya Goel - You Didn't See It Coming
- Frank Chimero - Selling lemons
- Nathan Witkin - The Case Against Social Media is Stronger Than You Think
- James O'Sullivan - The Last Days Of Social Media
- Nicholas Yoder - Beyond Orthogonality: How Language Models Pack Billions of Concepts into 12,000 Dimensions
- Gyana Swain - OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws
- Adam T. Kalai et al - Why Language Models Hallucinate
- Dexter Horthy - Getting AI to Work in Complex Codebases
- Chris Loy - The AI coding trap
- Cory Doctorow - Reverse Centaurs
- Satyen K Bordoloi - Greatest irony of the AI age: Humans being increasingly hired to clean AI slop
- Terence Tao - Mathstodon thread
- Alexander Stubb - Speech to the United Nations