AI Learning Digest

AI Learnings - January 8, 2026

Overview

Discussions spanning Claude Code & Workflows, AI Agents & Orchestration, Vibe Coding, Models & Capabilities, the Future of Development, and other highlights.

Claude Code & Workflows

  • @thekitze: "hell yeah bro my @clawdbot now has a dedicated phone number and can call me or text me for urgent things 🤓"
  • @0xidanlevin: "The state of AI right now: There’s a huge gap between what developers on the frontier are using - Claude Code, Opus 4.5 - and what the rest of the world is doing"
  • @DanielleFong: "what is happening is that people are embryonically making larger and larger repos that are effectively continual learning"
  • @ibab: "I suspect the reason Claude Code doesn’t work as well for large codebases is that they post-trained it mostly on smaller repos (big corp sized repos are rare)"
  • @belindmo: "Did you know that Claude Code is so powerful now that it can fine-tune models for you?"

AI Agents & Orchestration

  • @theodormarcu: "After 6 months of watching engineers use coding agents, I found the most productive users have something in common"
  • @levie: "A deeply under-appreciated economic benefit of AI agents is the ability to experiment and throw away things at near 0 cost"
  • @milichab: "In the last month, 100,000 people played IsoCity"
  • @doodlestein: "The big unlock was when I created Agent Mail in October and started using 10+ agents at the same time in a single project"
  • @koylanai: "Most agent designs try to model what people know"

Vibe Coding

  • @Yuchenj_UW: "Jevons paradox in coding: People thought AI would replace programmers, instead everyone is a coder now (hello, vibe coders)"

Models & Capabilities

  • @koylanai: "When Opus 4.5 approaches its context compaction limit, run Gemini 3 on the history document and improve the coding model's context without bloating it."

The Future of Development

  • @Devon_Eriksen_: "A vast number of humans, probably a majority, aren't people"
  • @JustJake: "If you're a stellar builder, the economics of your labor just changed"
  • @tekbog: "software engineers after automating software engineering and having no jobs https://t.co/72S3FHkPMf"

Other Highlights

  • @aarondfrancis: "2021: it can’t even autocomplete a line"
  • @wojakcodes: "when someone says 'My work is too complex for AI', I tell them to wait just a few more months."
  • @nurijanian: "every Product Manager must have a prompt library now"
  • @martinfowler: "Fragments: How AI is changing Anthropic's internal development, a detailed account of using an LLM to program a knowledge management tool"
  • @jarrodwatts: "claude is doing my groceries for me rn https://t.co/e26VhYCU3a"

Key Takeaways

1. Claude Code continues to reshape how developers approach coding

2. Agent orchestration patterns are maturing with new tools and frameworks

3. Vibe coding is evolving from meme to legitimate methodology

---

Curated from 20 posts

Source Posts

Aaron Francis @aarondfrancis
2021: it can’t even autocomplete a line
2022: it can’t even write a whole function
2023: it can’t even pass a coding interview
2024: it can’t even build an app
2025: it can’t even handle complex projects
2026: oh no
Yuchen Jin @Yuchenj_UW
Jevons paradox in coding: People thought AI would replace programmers, instead:
- everyone is a coder now (hello, vibe coders)
- people who stopped coding are coding again
- 10x engineers just became 100x engineers
Coding is more addictive than ever.
Igor Babuschkin @ibab
I suspect the reason Claude Code doesn’t work as well for large codebases is that they post-trained it mostly on smaller repos (big corp sized repos are rare). To perform really well at large codebases you probably also need continual learning or at least finetuning on your repo, otherwise RAG and manually reading files becomes a bottleneck. For now it helps to split code into smaller modules with clear API boundaries (which is good practice anyway).
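One way to picture the module-boundary advice: a minimal sketch with a hypothetical `billing` package, where an explicit export list gives an agent (or a RAG retriever) a single small file that states the module's contract.

```python
# billing/__init__.py (hypothetical package, for illustration only)
# An agent or retriever can read this one file to learn the module's
# contract instead of crawling every internal file in the package.
from billing.invoices import create_invoice, void_invoice
from billing.payments import charge_card

# The explicit export list is the API boundary; everything else in the
# package is an implementation detail that callers (and agents) can skip.
__all__ = ["create_invoice", "void_invoice", "charge_card"]
```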
Devon Eriksen @Devon_Eriksen_
A vast number of humans, probably a majority, aren't people. They are large language models. I'm not saying this as a generality, as a clever or funny way of saying "they are stupid". No. I mean something very concrete and specific, and there are a lot of people who appear very intelligent, maybe even win awards for writing good poetry or something, who are nevertheless not people, not fully sapient, just a large language model walking around in a human body.

First, you have to understand what a large language model is. It's a computer (organic or inorganic), which has been trained on a data set consisting solely of language (written or spoken), and rewarded for producing language that sounds like the data set, and is relevant to a prompt. That's all there is in there.

This is why ChatGPT and Grok lie to you constantly. It's not because they are somehow just indifferent to the truth — they actually do not understand the concept of "truth" at all. For something to be a "lie", or an "inaccuracy", there has to be a mismatch between the meaning of words, and the state of reality. And there's the critical difference. You see, in order to identify a mismatch between the state of reality, and the meaning of a sentence, you have to have a model of reality, not just a model of language. This is why Grok and ChatGPT hallucinate and tell you lies. Because, for them, everything is language, and there is no reality.

So when I say someone is a large language model, I do not mean he is "stupid". He might be very facile at processing language. He might, in fact, be eloquent enough to give great speeches, get elected president, win the Nobel Peace Prize, and so on. What I mean is that humans who are large language models do not have a robust world-object model to counterweight their language model. They are able to manipulate symbols, sometimes adroitly, but they are on far shakier ground when trying to imagine the objects those symbols represent.

Which brings us to this woman. Most conservatives understand her behavior in terms of concepts like "suicidal empathy", or "brainwashing", or an "information bubble", interpreted as reasons why she is delusional, but the truth is far worse than that. To be delusional is to have an object model of the world that is deeply and profoundly wrong. But to have an object model of the world that is deeply and profoundly wrong... you have to have one in the first place.

To sapient humans, words are symbols, grounded in an object model of reality, that we use to communicate ideas about that reality. We need those words because we don't come equipped with a hologram projector, or telepathic powers. But for another type of human, that object model isn't very large or robust at all. It consists only of a grass hut or two with a few sticks of furniture, and it can never be matched up with the palaces in the air which she weaves out of words. And so, to her, there is no reality. Or at least very little. Reality consists only of her and her immediate surroundings in time and space, and words referring to anything bigger or more complicated are not descriptions of reality... they are magic spells which will make other humans drop loot or give her social approval.

You cannot correct her worldview with contradictory evidence, because there is no worldview to correct. You cannot confront her with the logical inconsistencies in her worldview, because her object model doesn't actually have any, it's not complex enough for that.

The relevant parts of her world-object model can be summed up as follows: "If I say Goodthing, I get headpats and cookies from all the people like me." That model is simply not big or complicated enough to contain notions like self-defense or vehicular assault. She has no theory of mind for a man whose job includes violence. She cannot explain or predict his behavior. It is too far away from her daily experience to fit into her reality at all. And if she can't imagine things like these, how can she possibly imagine concrete meanings for vast and complex ideas like demographic replacement, culture shift, and western civilization?

This is not about intelligence or lack of it. This is about what her brain is trained to do. Her upbringing, education, and life did not force, or even encourage, her to develop a robust world-object model. It wasn't necessary for her to get safety, approval, or cookies. She just had to be glib. So it really didn't matter if she had an IQ of 125, or whatever, because if she did, then she was just an IQ-125-large-language-model, and only used that brain capacity for writing clever poetry, and saying things that aligned her to her local social matrix. She couldn't actually understand the world no matter how smart she was, because her brain was trained up wrong.

I don't know if this is correctable, or if there was some critical developmental phase that was missed, but it doesn't matter, because once the LLM-humans are adults, they won't sit still for corrective therapy, percussive or not. What's important is that they can't be taught things. They can be programmed to repeat stuff, and if you win a culture war, you can even program them to say the sensible stuff. But even then, they will just be saying it for headpats and cookies. They will never truly understand the sense of what they are repeating, because they don't understand things. They are just Large Language Models. And we have to figure out some way to take the vote away from them.
Lauren Chen @TheLaurenChen (the post quoted above)

I just figured out why the Minnesota ICE death is bothering me so much. This liberal woman was willing to take on federal agents, to disrupt ICE operations, in order to protect criminal Somalis. Obviously, she probably didn't imagine she would be killed. But surely, she must have known that, at the very least, she could be arrested. She has three kids. So she was willing to be separated from her kids to protect criminal Somalis. Speaking as a mother, this is insanity. This is not rational thinking. What it is, instead, is the result of liberal brainrot that convinces progressive women they have more of a duty to nurture and protect poor, brown (criminal!) strangers than their own country, and hell, even their own children. I am praying for this woman's soul and for her family. But I mean it when I say this type of thinking is almost wholly responsible for the decline of Western civilization.

idan levin @0xidanlevin
The state of AI right now: There’s a huge gap between what developers on the frontier are using - Claude Code, Opus 4.5 - and what the rest of the world is doing, which is still figuring out the most basic ChatGPT usage. This gap will take years to close. The average person will only start using the kinds of things developers are doing today with Claude Code - autonomous, complex tasks - years from now. 2026-27 for most of the world will be about making simple tasks work on AI platforms like ChatGPT. Commerce, embedding apps into it, adding basic UX elements. Using it for your medical data. Learning when you can trust it, and how much. Simple interactions, at scale.
Wojak Codes @wojakcodes
when someone says "My work is too complex for AI", I tell them to wait just a few more months.
George from 🕹prodmgmt.world @nurijanian
every Product Manager must have a prompt library now and here’s the most advanced prompt library for PMs 😘 🔗 https://t.co/vFGI8OTv6A https://t.co/PwoWAmUVAm
Jeffrey Emanuel @doodlestein
The big unlock was when I created Agent Mail in October and started using 10+ agents at the same time in a single project. And things continue to accelerate… this is January so far, after one week: https://t.co/K1zuc6eqDx
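The post doesn't say how Agent Mail is built, but the general idea of many agents coordinating asynchronously in one project can be sketched as a toy file-based mailbox; every name here is hypothetical, not the actual implementation.

```python
import json
import time
import uuid
from pathlib import Path

MAILROOT = Path("agent_mail")  # one inbox directory per agent

def send(to_agent: str, sender: str, subject: str, body: str) -> None:
    """Drop a message file into the recipient's inbox."""
    inbox = MAILROOT / to_agent
    inbox.mkdir(parents=True, exist_ok=True)
    msg = {"id": uuid.uuid4().hex, "from": sender, "subject": subject,
           "body": body, "ts": time.time()}
    (inbox / f"{msg['id']}.json").write_text(json.dumps(msg))

def poll(agent: str) -> list[dict]:
    """Drain the agent's inbox; each agent calls this between work steps."""
    inbox = MAILROOT / agent
    if not inbox.exists():
        return []
    messages = []
    for path in sorted(inbox.glob("*.json")):
        messages.append(json.loads(path.read_text()))
        path.unlink()  # consume the message
    return messages

send("reviewer", sender="builder-1", subject="PR ready",
     body="Branch feat/parser is ready for review.")
print(poll("reviewer"))
```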
Jake @JustJake
If you're a stellar builder, the economics of your labor just changed.
Any organization that isn't empowering you to move at "Agentic speed" will destroy your economic value.
You need to join orgs that are going all in and rebuilding all their internal processes around this.
kitze 🛳️ @thekitze
hell yeah bro my @clawdbot now has a dedicated phone number and can call me or text me for urgent things 🤓
poke would never
chatgpt would never
sama would never
satya would never
dario would never
everyone is toast
clawdbot is literally agi
https://t.co/g0fPcglVFk
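A bot with its own phone number could be wired up along these lines, using Twilio as one example SMS provider (the post doesn't say what clawdbot actually uses; the environment variables and numbers are placeholders).

```python
import os
from twilio.rest import Client

# Example only: Twilio is one common SMS provider; swap in whatever the
# bot actually uses. Credentials come from the environment, never code.
client = Client(os.environ["TWILIO_ACCOUNT_SID"],
                os.environ["TWILIO_AUTH_TOKEN"])

def notify_owner(message: str) -> None:
    """Text the owner when the agent decides something is urgent."""
    client.messages.create(
        to=os.environ["OWNER_PHONE"],    # your phone number
        from_=os.environ["BOT_PHONE"],   # the bot's dedicated number
        body=f"[clawdbot] {message}",
    )

notify_owner("Deploy finished; two tests are flaky and need a look.")
```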
Danielle Fong 🔆 @DanielleFong
what is happening is that people are embryonically making larger and larger repos that are effectively continual learning. "compound engineering", @danshipper called this; a great video here with the two founding engineers for claude code, @bcherny @_catwu. nonlinear gains by making the knowledge and tools accessible to the Agent itself. logging discussions. visual memory. access to all my tweets, annotated and placed in parallel indices. i've done it myself and been blown away by the potential. each new capability improves the ability to make the next ability. memory constraints are being stretched and transcended https://t.co/LFqg5tfU5J
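A minimal sketch of the "knowledge accessible to the agent itself" idea: persist lessons in a file each session reads on startup, so later runs inherit what earlier runs learned. The file name and format here are illustrative, not a convention from the post.

```python
from datetime import date
from pathlib import Path

LEARNINGS = Path("LEARNINGS.md")  # illustrative name; any repo file works

def record(lesson: str) -> None:
    """Append a dated lesson whenever a session discovers something."""
    with LEARNINGS.open("a") as f:
        f.write(f"- {date.today()}: {lesson}\n")

def load_context() -> str:
    """Prepended to the agent's prompt at the start of every session."""
    return LEARNINGS.read_text() if LEARNINGS.exists() else ""

record("The flaky auth test needs the Redis fixture, not the mock.")
print(load_context())
```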
Belinda @belindmo
Did you know that Claude Code is so powerful now that it can fine-tune models for you? We made a Claude Code skill using @thinkymachine's Tinker to fine-tune models ->
Martin Fowler @martinfowler
Fragments: How AI is changing Anthropic's internal development, a detailed account of using an LLM to program a knowledge management tool, obvious-easy-possible buckets for interfaces, specs can't be complete, & lightweight tools to work with LLMs https://t.co/ceaoUE6Dxp
Muratcan Koylan @koylanai
Most agent designs try to model what people know. The real unlock is capturing how they decide. Since I started sharing my work on context engineering skills and AI persona creation from tacit knowledge, I've met incredible people on X who are working on this from different fronts; from philosophy PhDs researching how to turn human knowledge into AI twins to engineers building solutions like 24/7 screen recording to understand decision-making patterns. What I keep seeing is that Skills formalize the "how" into something agents can actually use. We can map this into layers:
- Discovery layer (how agents find relevant skills)
- Context layer (what gets loaded into working memory)
- Execution layer (tool protocols)
- Learning layer (traces that improve future execution)
Right now most Skills implementations focus on the middle two. The discovery and learning layers are underspecified. This is why execution traces matter more than static knowledge bases. The next trend is activity/decision retrieval, understanding not just what exists, but what happened, in what order, and what changed between steps. It's all coming together after scaling LLMs to a level where they can use tools and evolve without any intervention: observation (connectors) → retrieval (indexes) → relationships (graphs) → persistence (memory). The stack is finally maturing. The builders who get this sequencing right will own the next generation of agents.
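The four layers can be read as a data structure. A sketch in Python, with all names illustrative rather than drawn from any real skills framework:

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    # Discovery layer: metadata an agent searches to find relevant skills.
    name: str
    description: str
    tags: list[str]

    # Context layer: what gets loaded into working memory on activation.
    instructions: str
    examples: list[str] = field(default_factory=list)

    # Execution layer: tool protocols the skill is allowed to invoke.
    tools: list[str] = field(default_factory=list)

    # Learning layer: execution traces (what happened, in what order,
    # what changed) that can improve future runs; per the post, this is
    # the layer most implementations leave underspecified.
    traces: list[dict] = field(default_factory=list)

def discover(skills: list[Skill], query: str) -> list[Skill]:
    """Naive discovery: match the query against names, tags, descriptions."""
    q = query.lower()
    return [s for s in skills
            if q in s.name.lower()
            or q in s.description.lower()
            or any(q in t.lower() for t in s.tags)]
```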
Jarrod Watts @jarrodwatts
claude is doing my groceries for me rn https://t.co/e26VhYCU3a
Aaron Levie @levie
A deeply under-appreciated economic benefit of AI agents is the ability to experiment and throw away things at near 0 cost. Most projects traditionally get stuck on a one way train based on initial decisions that get made early on. Restarting or testing multiple ideas early on is usually completely cost prohibitive. Now you can explore the solution space far more than you would have otherwise because there’s no cost to starting over. You’ll just have multiple agents running in parallel for most tasks and just choose the best work. This could be for coding, writing a legal briefing, building a marketing campaign, doing research, or anything else.
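A sketch of the fan-out-and-judge pattern this describes, assuming a hypothetical `run_agent` call in place of a real agent API:

```python
import asyncio

async def run_agent(prompt: str, variant: int) -> str:
    """Hypothetical stand-in for one agent attempt; swap in a real call."""
    await asyncio.sleep(0)  # placeholder for actual work
    return f"draft {variant} for: {prompt}"

def judge(drafts: list[str]) -> str:
    """Keep the best attempt. In practice: tests, an LLM judge, or a human."""
    return max(drafts, key=len)  # toy scoring rule

async def main() -> None:
    # Fan out: several agents attack the same task in parallel...
    drafts = await asyncio.gather(
        *(run_agent("build the feature", i) for i in range(4))
    )
    # ...then keep the best result; the discarded drafts cost almost nothing.
    print(judge(list(drafts)))

asyncio.run(main())
```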
Theodor Marcu @theodormarcu
After 6 months of watching engineers use coding agents, I found the most productive users have something in common.
The best ones use what I call "Socratic mode".
Instead of just telling the agent what to do, they start with questions that force it to load the right files and actually understand the abstractions. They keep going (and correcting the agent) until they're confident both they and the agent understand the shape of the problem and the goals.
The benefit here is that instead of guessing at a plan upfront, you're helping both yourself and the agent truly understand the codebase first before starting to make any changes.
By the time you ask it to do something, all the context is already "built" and the path forward is clear.
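A rough sketch of what "Socratic mode" could look like as a loop, assuming a hypothetical `agent.ask` method; the questions are examples of the kind that force the agent to load the right files.

```python
# `agent.ask` is hypothetical; substitute whatever your coding agent exposes.
EXPLORATION_QUESTIONS = [
    "Which modules implement the checkout flow, and how do they interact?",
    "What invariants does the Order state machine enforce?",
    "Where are the existing tests for this path, and what do they cover?",
]

def socratic_session(agent, task: str) -> str:
    # Phase 1: questions force the agent to open the right files and build
    # context; review each answer and correct misunderstandings early.
    for question in EXPLORATION_QUESTIONS:
        print(f"Q: {question}")
        print(f"A: {agent.ask(question)}\n")

    # Phase 2: only ask for changes once the shape of the problem is shared;
    # by now the context is already "built" and the path forward is clear.
    return agent.ask(f"Given everything above, implement: {task}")
```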
terminally onλine εngineer @tekbog
software engineers after automating software engineering and having no jobs https://t.co/72S3FHkPMf
Andrew Milich @milichab
In the last month, 100,000 people played IsoCity. Incl 5k active coop games. So, I built a new game: Age of Isos. You don't need friends. Play against agents, who get a JSON file with game state and tools to play. I used about 500m tokens per day - all on @cursor_ai. Often, I ran one task in parallel on 8 agents at once - judging how each would make visual changes - better lakes, sprawling mountains, dense forests, etc.
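The post doesn't publish the actual schema, but "a JSON file with game state and tools" plausibly looks something like the sketch below; every field name is a guess, and the tool manifest is written the way LLM tool-calling APIs generally expect (name plus parameters).

```python
import json

# Illustrative only: a guess at the per-turn payload an agent player
# might receive. The real Age of Isos schema is not public in the post.
game_state = {
    "turn": 42,
    "player": {"id": "agent-3", "resources": {"wood": 120, "gold": 35}},
    "map": {"size": [64, 64], "tiles": ["lake", "mountain", "forest"]},
    "visible_opponents": [{"id": "agent-1", "score": 910}],
}

# Tool manifest: the actions the agent is allowed to take this turn.
tools = [
    {"name": "build", "parameters": {"structure": "str", "tile": "[int, int]"}},
    {"name": "gather", "parameters": {"resource": "str", "amount": "int"}},
    {"name": "end_turn", "parameters": {}},
]

prompt = (
    "You are playing a city-building strategy game.\n"
    f"Game state:\n{json.dumps(game_state, indent=2)}\n"
    "Choose exactly one tool call for this turn."
)
```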
Muratcan Koylan @koylanai
I wish I had known this before. I'm a huge fan of SpecStory now. It's a plugin that automatically converts your conversation histories, including tool calls and reasoning traces, into Markdown format. When Opus 4.5 approaches its context compaction limit, run Gemini 3 on the history document and improve the coding model's context without bloating it.
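The workflow reads as: export the history, distill it with a second model, resume from the summary. A sketch with hypothetical helpers (SpecStory's real output format and the model APIs are stand-ins, not documented interfaces):

```python
def export_history_markdown(session) -> str:
    """Placeholder for the plugin's export step: conversation turns,
    tool calls, and reasoning traces rendered as Markdown."""
    ...

def distill(history_md: str, summarizer) -> str:
    """Ask a long-context model to compress the history into the facts,
    decisions, and open threads the coding model still needs."""
    return summarizer.complete(
        "Summarize this session for a coding agent about to resume work. "
        "Keep file names, decisions, and unresolved issues:\n" + history_md
    )

def resume(coding_model, summary: str, next_task: str) -> str:
    """Start a fresh context from the distilled summary, not the raw log."""
    return coding_model.complete(f"{summary}\n\nNext task: {next_task}")
```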