Seedance 2.0 Rattles the Film Industry While Agent Memory Systems Race Forward
Daily Wrap-Up
The two stories worth remembering from today sit at opposite ends of the creative spectrum. On the engineering side, the agent orchestration problem is crystallizing into something specific and solvable: how do you give a new agent session all the context it needs without manual handoff? Multiple people are shipping real solutions to this, from persistent context layers that sync across agent types to elaborate memory pipelines with vector databases and hourly summarization cron jobs. The fact that several independent builders converged on similar architectures on the same day suggests this is the next frontier that tooling will standardize around. If you're building anything with coding agents, context management is no longer optional.
On the creative side, Seedance 2.0 out of China demonstrated capabilities that genuinely spooked people. This wasn't the usual "look at this cool clip" moment. Users reported uploading scripts and getting back fully produced scenes with VFX, voice acting, sound effects, and music, all edited together. The ability to upload frames from existing films and generate new scenes in that style raises obvious legal questions, which may explain why it's not available outside China. Indie filmmakers there are already producing full AI-generated movies with it. The gap between "impressive demo" and "production tool" just collapsed for video in a way it hasn't before.
The recurring theme tying everything together was velocity. Multiple posts reflected on how the pace of progress is becoming difficult to even imagine, let alone keep up with. The practical advice that cut through the noise came from @backseats_eth: ignore the AI theater, skip the vibe coders who learned last month, and spend your time following experienced engineers who are updating their actual processes with AI tools. The most actionable takeaway for developers: invest time in building persistent context and memory systems for your AI workflows now, whether that's a tool like OneContext or a custom solution. The teams that solve agent memory first will compound their productivity advantages faster than everyone else.
Quick Hits
- @elonmusk announced SpaceX has shifted priority to building a self-growing Moon city, achievable in under 10 years, with Mars efforts beginning in 5-7 years. Moon launches every 10 days vs. Mars every 26 months means dramatically faster iteration.
- @SawyerMerritt summarized the SpaceX pivot: "The overriding priority is securing the future of civilization and the Moon is faster."
- @elonmusk shared the Starlink Super Bowl ad promoting affordable internet anywhere.
- @exolabs previewed what they call the future of local AI architecture: separate specialized chips for prefill and decode phases, splitting inference across purpose-built silicon.
- @inductionheads flagged RLMs (Reasoning Language Models) as a breakthrough worth understanding, without elaboration. One to watch.
- @thdxr announced improved traffic routing for Kimi K2.5 on Zen, calling the speed "something different." Rate limits may follow as they scale.
- @andrewmccalip teased a first acquisition offer for their app. No details, but building things that get acquisition interest remains a viable strategy.
- @ashebytes posted what every weekend agent builder was feeling: the mood when you're orchestrating agents at 11pm on a Saturday.
Agent Orchestration and Context Management
The single biggest pain point in agentic development right now isn't model capability. It's memory. Every developer running coding agents hits the same wall: you spin up a new session and it knows nothing about your project, your preferences, or what the last session accomplished. Today, multiple builders showed they're actively solving this in different ways.
@LLMJunky highlighted OneContext, a persistent context layer that sits above coding agents and automatically syncs context across sessions. The pitch is simple: "any new agent you spin up already knows everything about your project." It works across Claude Code, Codex, Gemini, and others, and even supports sharing context between team members via a link. The project earned its creator @JundeMorsenWu a cold email from Google's Gemini team, a sign that building in public and sharing your work still opens doors.
On the more DIY end, @PerceptualPeak shared a detailed breakdown of solving context transfer between pre-compacted and post-compacted states in their Clawdbot system. Their approach layers multiple strategies: hourly cron jobs maintaining running memory files, injection of 24 hours of memory summaries into post-compaction context, a persistent JSONL conversation log that survives compaction, and a vector database populated by a bi-hourly sub-agent that extracts and embeds learnings. They also built a user prompt submit hook that embeds queries and retrieves relevant memories in under 300ms before the model even starts processing. The result? "Literally ZERO noticeable knowledge loss."
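The retrieve-on-submit hook is the most transferable piece of that pipeline. Here is a minimal sketch of the pattern, not Clawdbot's actual code: all names are hypothetical, and a toy hash embedding stands in for a real embedding model. The shape is the same, though: embed the incoming prompt, rank stored memories by similarity, and prepend the best matches before the model sees anything.

```python
import hashlib
import math


def embed(text, dim=64):
    """Toy bag-of-words hash embedding (stand-in for a real embedding model)."""
    vec = [0.0] * dim
    for tok in text.lower().split():
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def cosine(a, b):
    # Vectors are already unit-normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))


class MemoryStore:
    """Append-only memory log with vector retrieval; a cron job or sub-agent
    would call add() with extracted learnings on a schedule."""

    def __init__(self):
        self.entries = []  # list of (embedding, text) pairs

    def add(self, text):
        self.entries.append((embed(text), text))

    def retrieve(self, query, k=3):
        q = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[0]), reverse=True)
        return [text for _, text in ranked[:k]]


def on_prompt_submit(store, user_prompt):
    """Hook: prepend relevant memories to the prompt before the model runs."""
    memories = store.retrieve(user_prompt)
    preamble = "\n".join(f"[memory] {m}" for m in memories)
    return f"{preamble}\n\n{user_prompt}" if memories else user_prompt
```

The sub-300ms budget mentioned in the post is plausible for this shape of hook because the expensive part, summarizing and embedding memories, happens on the background schedule, while the hot path is a single embed plus a similarity scan.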
"None of us want 1 agent now. We want 1 agent who runs teams of agents. Be interesting to see who solves and ships something with truly delightful UX." - @ryancarson
Ryan's observation captures the direction of the field. He argues the winning solution won't come from a single lab but will be "a clever mix of closed/open source models + deterministic orchestration." @ScriptedAlchemy's AI orchestration system getting recognition from Kent C. Dodds further validates that this space is heating up. The pattern is clear: the tooling layer above the models is where the real competitive advantage is being built right now.
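To make "deterministic orchestration" concrete: the idea is that plain code, not another LLM, decides which model handles which task, with an explicit fallback order for when one gets stuck. A hypothetical sketch of that control flow (task kinds, model names, and the `run_agent` callable are all illustrative, not any shipping product's API):

```python
# Deterministic routing above a mix of models: a fixed lookup table
# decides who handles what, and in what fallback order.
ROUTES = {
    "ml": ["codex", "opus"],        # primary first, fallback if it gets stuck
    "refactor": ["opus", "codex"],
    "bulk": ["k2.5", "opus"],       # cheaper model first for grunt work
}


def orchestrate(kind, prompt, run_agent):
    """Try each model for this task kind in order.

    run_agent(model, prompt) stands in for whatever harness actually
    drives the agent; it returns a result string, or None if stuck.
    """
    for model in ROUTES.get(kind, ["opus"]):
        result = run_agent(model, prompt)
        if result is not None:
            return model, result
    raise RuntimeError(f"no model completed task kind {kind!r}")
```

The appeal of this shape is that the orchestration layer is ordinary code you can read, test, and version, which is exactly where a "clever mix of closed/open source models + deterministic orchestration" would live.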
The Acceleration Narrative and Career Impact
A cluster of posts today reflected growing unease about the pace of AI progress and what it means for employment. The takes ranged from measured concern to near-panic, but the volume of people engaging with this topic on the same day is notable.
@chatgpt21 referenced Gabriel, described as one of the leads for Sora at OpenAI, warning that "this is the last time to get employment before the fast takeoff." The framing is provocative, but the advice boils down to: lock in your current position and prepare for rapid change. @kimmonismus echoed this with conviction: "2026 will be the year everything changes, the take-off will be felt by everyone."
"I spend quite a bit of time these days trying to imagine. Imagining AI progress continuing at its current velocity is already difficult. I must confess that imagining it relentlessly accelerating over the foreseeable future is almost beyond me." - @deredleritt3r
The career angle showed up from multiple directions. @cgtwts landed a sharp one-liner: "Engineers' worst nightmare has come true, they all have to become product managers." @hkarthik simply noted the implications of code costs falling to zero, leaving the conclusion to the reader. The counterpoint came from @backseats_eth, who offered the most grounded advice of the day: ignore the AI theater, skip content from vibe coders who learned last month, and focus on experienced engineers updating their real processes with AI. "My best improvements come from process articles, trying it to make something real." In a feed full of existential dread, that's the signal worth following.
Seedance 2.0 and the End of Traditional Film
The most visceral reactions today came from AI video, specifically Seedance 2.0. @EHuanglu posted two threads that together paint a picture of a tool that has leapfrogged everything else in the space.
"Seedance 2.0 is the only model that makes me so scared. Literally every job in film industry is gone. You upload a script, it generates scenes with VFX, voice, SFX, music all nicely edited. We may not even need editors anymore." - @EHuanglu
What makes Seedance different from previous video models isn't just quality. It's the workflow integration. Users can upload screenshots or storyboard frames from any movie and get back full scenes that feel like they came from the original production. Another feature lets you upload existing film clips and edit anything: swap characters, add VFX, change backgrounds, adjust color grading. The legal implications are staggering, which @EHuanglu suspects is exactly why it's not available outside China.
The impact is already visible. Chinese indie filmmakers have gone, as @EHuanglu put it, "FULL INSANE MODE" and started producing 100% AI-generated movies with the tool. Meanwhile, @SpecialSitsNews offered a lighter take, noting that Will Smith eating spaghetti remains the true benchmark for AI video progress, a callback to the infamous early text-to-video clip that became a meme. The gap between that meme and what Seedance 2.0 is producing tells you everything about how far video generation has come since.
Building with AI Agents
While the orchestration crowd debated architecture, several builders just shipped things, demonstrating what's possible when you sit down with current tools and grind.
@martin_casado posted an update after 8 hours of development time on what appears to be a multiplayer game with impressive scope: item layers, object interactions, multi-world portals, live world editing, persistent backends with NPC management, and reactive multiplayer state. Built with Cursor, Codex 5.2, and Opus 4.6. The detail that stands out is the live editing capability: admins can modify the world without restarts, and all changes propagate reactively to other players. That's a non-trivial architectural decision that the AI tools apparently handled well.
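The "changes propagate reactively" part is worth pausing on, since it's the architectural choice that makes restart-free live editing possible. A bare-bones sketch of the pattern (names are illustrative, not the actual project's code): all world mutations flow through one method, which fans each change out to every subscribed player session.

```python
class World:
    """Single source of truth for world state; every mutation notifies subscribers."""

    def __init__(self):
        self.state = {}
        self._subscribers = []

    def subscribe(self, on_change):
        """Register a per-player callback, e.g. one that pushes over a websocket."""
        self._subscribers.append(on_change)

    def edit(self, key, value):
        """Live-edit: apply the change without a restart, then fan it out."""
        self.state[key] = value
        for on_change in self._subscribers:
            on_change(key, value)
```

Because an admin calling `edit()` is indistinguishable from any other mutation, "admins can modify the world without restarts" falls out for free rather than being a special case.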
"After ripping through a billion tokens in 8 hours I can attest this is the future. Pay attention." - @garybasin
@steipete noted that even the AMP team (historically skeptical of IDE-integrated AI) has fallen for Codex, adding a pointed aside about VS Code agent sidebars: "I know exactly one guy that uses a VS Code agent sidebar. Burn it." The terminal-first, headless agent approach continues to win converts over sidebar-style copilots. The builders producing real output today aren't using AI as an autocomplete. They're using it as a junior team member that can burn through a billion tokens of iteration overnight.
Source Posts
OH MY FKING GODDDDDD 😱😱😱 indie filmmakers in china have already gone FULL INSANE MODE and started making movies using Seedance 2.0.. 100% AI https://t.co/ljUg7tTbjn
by popular demand, here are my agent coding tips and tricks that YOU MUST know or be LEFT BEHIND FOREVER:

1⃣ the best model is task dependent. codex 5.2/5.3 has been consistently much better at AI, pytorch, ML. opus 4.5/4.6 is more pragmatic and obviously fast. at your actual task, model capabilities and styles may be wildly different. figure out what works for you rapidly. given the above...

2⃣ dual wield two models in whatever harness works for you. come up with a workflow where you can shift between models easily when one gets stuck. for me this looks like claude code in the terminal and cursor with codex 5.3. don't sleep on cursor, it has a very good harness that is battle tested across models. at times where a third model (or a cheaper model like K2.5) is in the arena, it can be very helpful to be able to flip back and forth in a normal agentic chat environment. but also... if you're comfortable with what you use, stick with it. workflow optimization is the enemy of productivity. thus...

3⃣ minimize skills, mcp, rules as much as possible and add them slowly if at all. i use no skills, no mcp in any of my workflows. treat your context window like a life bar and have respect for the core competency of the models. over time tool use, capabilities will continue to improve and you'll be wasting time explaining skills or tools that can natively be used by the model. there are exceptions to this and sometimes its fun to experiment with a prompt someone else has made (this is all a skill is). in the long term, i can imagine skills being a great way to, for example, inject some of the latest updates and knowledge of the most recent next js capabilities into a model without that inherent knowledge. or to copy a prompt from someone who has had great results in a particular task. however, generally... avoid loading up here.

4⃣ have something to actually build. the more time you spend optimizing without a target the less effective you are. the most aggressive breakthrough moments for me were about obsessing over a problem. these are the times that your workflow gets rebuilt, but you will have an actual metric internal to build intuition against: is all of this actually helping me get things done faster or not?

5⃣ add measurement to kill noise. as "orchestration" methods and other "infinite agent loop" structures re-emerge, treat them all with suspicion. they may work very well for your use case and they can be super fun to try out esp for a side hobby project. but when you're working in production or on a serious goal, try to build some minimal measurement to keep yourself honest. it can feel like you're making progress in the short term very rapidly. this might be as simple as writing down how much time you're actually spending checking in / correcting the bot that's running "autonomously" versus if you just sat down and hand prompted over an hour. additionally, use straightforward, verifiable tests to better understand if your agents are making progress or not against the goal. very simple, nothing ground breaking but easy to get lost in the sauce with ralph loops etc. and then finally, most important:

🚨ignore the noise🚨 there will always be a HOT NEW TRICK to OPTIMIZE YOUR PRODUCTIVITY x2. ignore them. hate them. banish them. just do work. do more work. every minute you spend watching a youtube tutorial is a minute you could have been screaming at the computer to do its job better. the models will change, the behaviors will get trained in, orchestration will get trained in. the tips and the tricks of today are not always going to translate. build the intuition on what works personally for you now and then use YOUR criteria to judge the next new thing, not someone else's.
@jacobmparis @ScriptedAlchemy is creating the state of the art
Episode 10 of Raising An Agent with @sqs and @thorstenball is out! There's no better summary than this quote: "We will be killing our editor extension, the Amp VS Code extension. We're going to be killing it. And we're going to be killing it because we think it's no longer the future. We think the sidebar is dead. Let's walk through why."

Topics in this episode:
- The new deep mode in Amp
- Balancing developer experience for humans & agents
- Killing the VSCode extension & shift away from traditional editors
- Pi & OpenClaw, two wonderful projects
- Importance of reinventing yourself in AI

Enjoy! And happy hacking!

Timestamps:
01:00 Deep Mode
10:30 Optimizing the codebase for agents
15:00 Feature Preview: which Skills does your team use?
18:00 Balancing DX for humans & agents
21:35 Killing the Amp editor extension
28:00 The future of software and what it means
33:00 You need to stay agile
36:00 Pi & OpenClaw
39:00 Text editors holding companies back
44:00 Is manual context management coming to an end?
49:00 New concept for Threads
50:00 Amp, the business & the art installation
a new ai video model Seedance 2.0 is beta testing in china.. this is going to blow ur mind https://t.co/upBeN2SOOR
Introducing OneContext. I built it for myself but now I can’t work without it, so it felt wrong not to share. OneContext is an Agent Self-Managed Context Layer across different sessions, devices, and coding agents (Codex / Claude Code).

How it works:
1. Open Claude Code/Codex inside OneContext as usual, it automatically manages your context and history into a persistent context layer.
2. Start a new agent under the same context, it remembers everything about your project.
3. Share the context via link, anyone can continue building on the exact same shared context.

Install with: npm i -g onecontext-ai
And open with: onecontext

Give it a try!
LET ME COOK!! DGX Spark + Mac Studio + MBP + @exolabs + Mac mini M4 as my orchestrator. https://t.co/YwbVdCDLt6
One thing I’m missing from many takes on AI’s impact is that we’re still so, so early. I expect that we’ll continue to see dramatically better models, and even more dramatically better products on top of them. I believe the pace will surprise even many of those who are fully bought in and staying close to AI. Regardless of whether you think AI is overhyped or civilization-altering, I find it a useful exercise to imagine models that are 10x faster, smarter, and more capable in specific domains. Then, repeat the same exercise with the interfaces and products built on top of them. If nothing else, it’ll be super helpful in understanding the mindset of some of us at the frontier labs, even if you disagree with the speed we’re expecting. It’ll explain that weird 100-yard stare for 2026 some of us have.