What I Am Reading:
Who do you trust to tell you what's good?
We are drowning in recommendations and somehow more lost than ever. Algorithms have gotten very good at predicting what you might click next based on what you already clicked, but that is not the same thing as taste. This piece makes a compelling case that the most valuable signal is still just a real person with a genuine point of view sharing something they love, and that the personal context behind a recommendation is what makes it worth anything at all. It left me thinking about how much I trust the people I actually know over any platform that claims to know me.
Freedom is not the highest form of wealth
The author spent two years with genuine freedom and came back with a counterintuitive conclusion: meaning is the highest form of wealth, not freedom. His argument is that the pursuit of freedom provides its own meaning, but once freedom is actually achieved it loses that meaning entirely. What you're left with is an existential "now what" that nobody warned you about. I found this one genuinely thought-provoking, especially for people at a stage where they've already won the financial game and are figuring out what the next chapter is actually for.
The human work behind humanoid robots is being hidden
This MIT Technology Review piece should be required reading for anyone investing in physical AI. The argument is that the humanoid robotics industry is quietly obscuring how much human labor is still required to make these machines look autonomous, and the parallel to Tesla's early Autopilot branding is hard to ignore. Workers spend weeks in VR headsets and motion capture suits performing repetitive tasks just to generate training data, and remote operators step in when robots get stuck in ways the demo videos never show. The gap between what these companies are demonstrating and what is actually happening operationally is significant.
Morgan Stanley warns an AI breakthrough is coming in 2026 — and most of the world isn't ready
Morgan Stanley's "Intelligence Factory" report is one of the more substantive pieces of Wall Street research I've seen on AI. The core finding is that scaling laws are still holding, the compute accumulation at top labs is unprecedented, and models are already scoring at or above human expert level on economically valuable benchmarks. What I found most useful is the infrastructure analysis: the US is facing a 9 to 18 gigawatt power shortfall through 2028, and the people who figure out the energy problem first are going to have a significant competitive advantage. The AI buildout is a power story as much as it is a software story.
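The report's power numbers are easier to feel with some back-of-envelope arithmetic. A minimal sketch, assuming roughly 1 kW of facility power per deployed GPU (my assumption for illustration, not a figure from the report):

```python
# Back-of-envelope: what a 9 to 18 GW shortfall means in accelerator terms.
# The per-GPU figure below is an illustrative assumption, not from the report.

KW_PER_GPU = 1.0  # assumed facility power per GPU (chip + cooling + overhead), kW

def gpus_supportable(gigawatts: float, kw_per_gpu: float = KW_PER_GPU) -> int:
    """Number of GPUs a given amount of grid capacity could power."""
    return int(gigawatts * 1_000_000 / kw_per_gpu)  # 1 GW = 1,000,000 kW

low, high = gpus_supportable(9), gpus_supportable(18)
print(f"{low:,} to {high:,} GPUs")  # 9,000,000 to 18,000,000 GPUs
```

Even with a generous error bar on the per-GPU assumption, the shortfall is on the order of millions of accelerators that cannot be plugged in.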
Defense tech roadmap: Five frontiers for 2026
Bessemer's annual defense tech roadmap is always worth reading and this year's is particularly good. Their view is that the sector has advanced more in the past two years than in the previous three decades, and that we are now past the proof-of-concept phase. The five frontiers they lay out are autonomy moving from concept to actual combat deployment, AI permeating DoD workflows, advanced manufacturing to close the munitions gap, edge network resilience against jamming, and materials and energy independence. The fact that Palantir is trading at 65x forward revenue tells you that defense tech investors now believe you can build a real software business here.
Neutral Atom Quantum Computing: 2026's Big Leap
The race to build a fault-tolerant quantum computer is finally getting real. This IEEE Spectrum piece explains why neutral atom qubits are emerging as the architecture of choice, with Microsoft and Atom Computing targeting a 50-logical-qubit machine called Magne by early 2027 and QuEra already delivering a 37-logical-qubit machine to Japan's national research institute. The milestone that matters here is logical qubits with real error correction, not just raw qubit counts. Getting to level-two quantum computers is the bridge between science project and actual commercial application.
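The logical-versus-physical distinction is easier to see with a rough overhead estimate. A minimal sketch using the textbook surface-code count of 2d² − 1 physical qubits per logical qubit at code distance d; the actual codes and distances these neutral atom machines use may well differ:

```python
# Illustrative only: physical-qubit overhead for one logical qubit under a
# distance-d surface code (d*d data qubits plus d*d - 1 ancilla qubits).
# Neutral atom machines may use other codes; treat these as rough numbers.

def physical_per_logical(d: int) -> int:
    """Surface-code estimate of physical qubits per logical qubit."""
    return 2 * d * d - 1

def machine_size(logical_qubits: int, d: int) -> int:
    """Total physical qubits for a machine with the given logical count."""
    return logical_qubits * physical_per_logical(d)

for d in (5, 7):
    print(d, machine_size(50, d))  # 50 logical qubits: 2450 at d=5, 4850 at d=7
```

The point of the exercise: a 50-logical-qubit machine is a few-thousand-physical-qubit machine under these assumptions, which is why raw qubit counts alone tell you little.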
Consciousness researchers need Spinoza
I have been thinking about consciousness a lot lately, partly because of where AI is heading and partly because the question of what awareness actually is feels more urgent as the machines get better at simulating it. This piece argues that Spinoza's panpsychist framework, the idea that matter and mind are two aspects of the same underlying reality, is having a serious scientific comeback through integrated information theory. It connects 17th century philosophy directly to contemporary neuroscience in a way that actually holds up. Worth reading even if you bounce off the academic framing at first.
The framing shift here is important. The longevity science community is moving away from looking for a single silver bullet intervention and toward a systems biology view of aging as a loss of coordination between metabolic, immune, mitochondrial, and microbial systems. The question is no longer just "what breaks" but "how does the conversation between these systems break down, and can we preserve it." This is a more honest and probably more productive frame than the one-molecule approaches that have defined the field for the past decade.
Stoicism productivity: the ancient philosophy that actually fixes modern focus problems
This one surprised me. The essay makes a serious case that Stoicism is not a productivity hack but the actual operating system that cognitive behavioral therapy is built on, since Albert Ellis explicitly credited Epictetus when developing CBT. The most useful reframe for me was the argument that procrastination is an emotion regulation failure, not a time management failure. You don't need a better calendar. You need a better relationship with uncertainty and discomfort. The section on memento mori as an anti-procrastination tool is particularly sharp.
What I Am Listening to:
Your Brain | Podcast NOVA Remix
NOVA's audio remix of their "Your Brain" series is a genuinely unsettling listen in the best way. The central question is whether we are actually in control of our own minds, and the scientists they bring in make a compelling case that most of what we think of as conscious decision-making is a story the brain tells itself after the fact. The coverage of split-brain research, unconscious processing, and the neuroscience of memory is accessible without being dumbed down. There are more connections in the human brain than there are stars in the Milky Way. That fact keeps hitting differently the more I learn about what those connections are actually doing.
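The stars comparison holds up as order-of-magnitude arithmetic. A quick sketch using commonly cited estimates (roughly 100 trillion synapses, and 100 to 400 billion stars), not exact counts:

```python
# Rough comparison behind the "more connections than stars" line.
# Both figures are order-of-magnitude estimates, not exact counts.

SYNAPSES_IN_BRAIN = 1e14    # ~100 trillion synaptic connections (estimate)
STARS_IN_MILKY_WAY = 4e11   # upper end of the ~100-400 billion star range

ratio = SYNAPSES_IN_BRAIN / STARS_IN_MILKY_WAY
print(f"~{ratio:.0f}x more connections than stars")  # ~250x more connections than stars
```

Even against the most generous star count, the brain wins by a couple of orders of magnitude.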
Jensen Huang: NVIDIA — The $4 Trillion Company and the AI Revolution | Lex Fridman Podcast
This is the conversation I have been waiting for Lex to do with Jensen, and it delivered. Two hours on how NVIDIA evolved from chips to full-stack AI infrastructure, the geopolitics of compute, why Jensen has 60 direct reports and does not do one-on-ones, and his surprisingly direct claim that we have already achieved AGI if you define it as AI that can autonomously create billion-dollar businesses. The section on what he calls extreme co-design, building GPU, CPU, memory, networking, power, and cooling as a single integrated system rather than individual components, is the clearest explanation I have heard of why NVIDIA's moat is so hard to replicate.
The Best Vitality and Health Protocols | Dr. Rhonda Patrick — Huberman Lab
Three and a half hours with Dr. Rhonda Patrick on how to actually build a longevity protocol that works. The data point that got me was that just nine minutes per day of vigorous exercise broken into three-minute bouts is associated with a 40 percent reduction in all-cause mortality and a 50 percent reduction in cardiovascular mortality. The conversation also covers the supplement stack she uses personally, continuous glucose monitoring, sauna protocols, and why the biggest benefit she has noticed from training is not physical but the way it changes how her brain handles stress. Practical and evidence-based in a way a lot of wellness content pretends to be but isn't.
What I Am Watching:
Optimus 102: How To Train Your Optimus
Tesla's own deep dive into how they train the Optimus robot is fascinating for anyone paying attention to where physical AI is actually heading. The video shows the teleoperation approach where humans in motion capture suits perform tasks that get recorded and replicated, the sim-to-real pipeline where thousands of reinforcement learning iterations in digital environments get transferred to the physical robot, and how the same neural network architecture from Full Self-Driving is being repurposed for real-world manipulation and navigation. Watching this alongside the MIT Technology Review piece on hidden human labor gives you the full picture of where the gap between demo and reality still lives.
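The sim-to-real idea in the video rests on domain randomization: train across many randomized simulated environments so the policy does not overfit to any single simulator's physics. A toy sketch of that loop, with stand-in parameters and a placeholder update rule that are my illustration, not Tesla's actual pipeline:

```python
import random

# Toy sketch of domain randomization for sim-to-real transfer.
# Everything here is a stand-in: real pipelines randomize full physics and
# visuals and run reinforcement learning updates, not this toy estimator.

def randomized_env() -> dict:
    """Sample simulator parameters the robot should be robust to."""
    return {"friction": random.uniform(0.3, 1.0),
            "mass_kg": random.uniform(0.5, 2.0)}

def train_step(policy: dict, env: dict) -> dict:
    """Placeholder update: nudge the policy toward the sampled physics."""
    policy["friction_est"] += 0.01 * (env["friction"] - policy["friction_est"])
    return policy

policy = {"friction_est": 0.0}
for _ in range(5000):  # "thousands of iterations" across randomized worlds
    policy = train_step(policy, randomized_env())

print(round(policy["friction_est"], 2))  # settles near the randomized mean (~0.65)
```

The design point is that because no single environment repeats, what the policy learns is the distribution of worlds rather than one world's quirks, which is what lets it survive contact with the physical robot.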
The Internet Was Weeks Away From Disaster and No One Knew | Veritasium
This documentary-style Veritasium video has over 12 million views and deserves every one of them. It tells the story of the XZ Utils backdoor, a multi-year supply chain attack in which an attacker using the alias Jia Tan spent years building trust as an open source maintainer before injecting a backdoor that would have given unauthorized access to millions of servers running OpenSSH. The entire internet's server infrastructure was weeks away from being compromised and it was discovered by a Microsoft engineer who noticed a 500 millisecond delay in SSH logins. The implications for infrastructure security, the fragility of the open source ecosystem, and the sophistication of state-level cyber operations are all explored here with real depth.
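The detection story comes down to noticing a latency regression against an expected baseline. A hedged sketch of that kind of check, with made-up timing samples rather than the engineer's actual benchmarking setup:

```python
import statistics

# Illustrative sketch of latency-regression detection: the XZ backdoor was
# noticed partly because SSH logins suddenly took ~500 ms longer than usual.
# The sample timings below are invented for the example.

def flag_regression(baseline_ms, current_ms, threshold_ms=100.0):
    """Flag if median latency rose by more than threshold_ms over baseline."""
    delta = statistics.median(current_ms) - statistics.median(baseline_ms)
    return delta > threshold_ms, delta

baseline = [12.0, 13.0, 13.0, 14.0, 15.0]        # hypothetical normal logins (ms)
infected = [508.0, 510.0, 513.0, 515.0, 520.0]   # hypothetical backdoored logins

flagged, delta = flag_regression(baseline, infected)
print(flagged, round(delta))  # True 500
```

The half-second jump is enormous by systems standards, which is exactly why one attentive engineer benchmarking unrelated work could catch a state-level operation.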