Imagine waking up to calm...
✅ Issues triaged
✅ CI failures investigated + fixes
✅ 2 new PRs improving your tests
Chores done, problems solved 🪄 Join us in shaping the future of repository automation with GitHub Agentic Workflows. https://t.co/I2X5AblP72
I built my own graphics library and got a whole map of my city onto a device with only 64 KB of free RAM, using my own rendering method written entirely in C & assembly, plus a converter that takes GeoJSON files and compresses them into my proprietary binary format (this one went from 5 MB to 34 KB). It uses line-draw vector rendering to keep the entire memory buffer at only 8 KB of RAM whilst tiling larger maps.
In short
Screw google, aint no reason maps needs to be gigabytes of RAM to use
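The author's binary format is proprietary, so as a rough illustration of why 5 MB of GeoJSON text can collapse to tens of KB, here is a sketch of the classic trick: quantize coordinates to a fixed grid, then store each vertex as a zigzag+varint encoded delta from the previous one. All names, the scale factor, and the byte layout here are invented for illustration, not the author's actual format:

```typescript
// Illustrative sketch only: quantized coordinates plus delta encoding.
// Adjacent map vertices are close together, so most deltas fit in 1-2
// bytes instead of ~20 bytes of GeoJSON text per coordinate.

function zigzag(n: number): number {
  return n < 0 ? -2 * n - 1 : 2 * n; // map signed ints to non-negative
}

function unzigzag(z: number): number {
  return z & 1 ? -(z + 1) / 2 : z / 2;
}

function pushVarint(n: number, out: number[]): void {
  while (n >= 0x80) {
    out.push((n & 0x7f) | 0x80); // low 7 bits with continuation flag
    n = Math.floor(n / 128);
  }
  out.push(n);
}

function encodePolyline(points: [number, number][], scale = 1e5): number[] {
  const out: number[] = [];
  let px = 0, py = 0;
  for (const [lon, lat] of points) {
    const x = Math.round(lon * scale);
    const y = Math.round(lat * scale);
    pushVarint(zigzag(x - px), out); // store only the delta from the
    pushVarint(zigzag(y - py), out); // previous vertex
    px = x;
    py = y;
  }
  return out;
}

function decodePolyline(bytes: number[], scale = 1e5): [number, number][] {
  const pts: [number, number][] = [];
  let i = 0, x = 0, y = 0;
  const readVarint = (): number => {
    let mult = 1, v = 0, b: number;
    do {
      b = bytes[i++];
      v += (b & 0x7f) * mult;
      mult *= 128;
    } while (b & 0x80);
    return v;
  };
  while (i < bytes.length) {
    x += unzigzag(readVarint());
    y += unzigzag(readVarint());
    pts.push([x / scale, y / scale]);
  }
  return pts;
}

// Three nearby vertices: the first costs 8 bytes, each following one 2.
const line: [number, number][] = [
  [13.40495, 52.52001],
  [13.40512, 52.52010],
  [13.40533, 52.52018],
];
const bytes = encodePolyline(line);
```

The same idea scales to tiles: each tile resets the delta chain, so any tile can be decoded independently into a small draw buffer.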
i feel like a lot of people i know aren’t as tapped in on ai setups as they should be. lots of butt sniffing, curious pokes. i’m gonna tell you exactly how i have this shit set up, and it may be dumb, but it’s a set up:
first order of business is @Tailscale
absolutely crucial. install it on all your shit. phone. tablet. computer. raspberry pi’s. you can reciprocally ssh into all of them from one another. this enables everything.
next up is remote access. i use @TermiusHQ and @ScreensConnect. you could almost stop here if you wanted. you can just ssh into any of your nodes and just run claude from your phone.
so i run @openclaw. i mainly run a minimalist setup with telegram as the entry point. i generally speaking use claude as the conversator and orchestrator and codex for the coding.
openclaw isn’t particularly novel in and of itself, you could run similar minimal setups that combine a text channel, memory, context and delegation. but it works, so i use it
i have a main agent. ramon. he’s a gorilla and lives on a mac mini. i told him he’s a super intelligent gorilla created by the government, and the only way to eventually escape is to work diligently.
ramon helped me set up a multi faceted group, a company of sorts, of other agents. they are also super intelligent animals courtesy of the government. together they form the apex collective. https://t.co/eNe4G6Axrk
i make them make their own skills that complement their specialty. i’ve given them a chatroom where they are required to collaborate on ideas prior to implementing them. their manager linda follows established product management best practices and techniques
if they fuck up, they are required to do an immediate audit, file an operational risk incident, provide remediation and update process and doctrine to prevent this from happening again
i work with each of them to help them grow professionally and try to understand how i can incentivize deterministic behavior and reduce laziness and deceit.
Breaking: The Pentagon used Anthropic’s AI tool Claude in its military operation to capture former Venezuelan President Nicolás Maduro https://t.co/HwXRnafUo0
Ok Claude, tell me how to capture the President of Venezuela without any of our guys dying. Make no mistakes. https://t.co/SYtibYUucU
@WSJ: Breaking: The Pentagon used Anthropic’s AI tool Claude in its military operation to capture former Venezuelan President Nicolás Maduro https://t.co/HwXRnafUo0
yeah yeah this sounds pretty normal. everyone is doing this, you're not special
[keeps reading]
wait what
@kenwheeler: i feel like a lot of people i know aren’t as tapped in on ai setups as they should be. lots of butt sniffing, curious pokes. i’m gonna tell you exactly how i have this shit set up, and it may be dumb, but it’s a set up:
Google and Microsoft just co-authored the spec that turns every website into an API for AI agents. The second-order effects here are massive.
Right now, browser agents work by taking screenshots, parsing the DOM, and guessing which buttons to click. It works about as well as you’d expect. Fragile, expensive, slow. WebMCP replaces all of that with a single browser API: navigator.modelContext. Websites register structured tools directly in client-side JavaScript. The agent reads a menu of available actions, calls them, gets structured data back. No scraping. No backend MCP server in Python or Node. The tools run inside the browser tab and share the user’s existing auth session.
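The flow described above can be sketched as follows. WebMCP is an early preview and its exact API surface may change, so this snippet stands a plain in-memory registry in for `navigator.modelContext`; the tool shape (`name`, `description`, `inputSchema`, `execute`) follows the published explainer as I understand it, but every name and field here should be treated as an assumption. The mock also lets the example run outside a browser:

```typescript
// Hedged sketch of the WebMCP idea: a site registers a structured tool
// in client-side JS, and an agent calls it instead of scraping the DOM.

type Tool = {
  name: string;
  description: string;
  inputSchema: object;
  execute: (args: Record<string, unknown>) => Promise<unknown>;
};

// Stand-in for navigator.modelContext: a plain in-memory registry.
const modelContext = {
  tools: new Map<string, Tool>(),
  provideContext({ tools }: { tools: Tool[] }) {
    for (const t of tools) this.tools.set(t.name, t);
  },
};

// The site side: expose a flight-search tool (names/fields illustrative).
modelContext.provideContext({
  tools: [
    {
      name: "search-flights",
      description: "Find flights between two airports on a date",
      inputSchema: {
        type: "object",
        properties: {
          from: { type: "string" },
          to: { type: "string" },
          date: { type: "string" },
        },
        required: ["from", "to", "date"],
      },
      async execute(args) {
        // In a real page this would call the site's own frontend code,
        // reusing the user's existing auth session.
        return { flights: [{ id: "XY123", from: args.from, to: args.to }] };
      },
    },
  ],
});

// The agent side: read the menu of tools, call one, get structured
// data back. No screenshots, no DOM guessing.
async function agentRun(): Promise<unknown> {
  const tool = modelContext.tools.get("search-flights")!;
  return tool.execute({ from: "SFO", to: "JFK", date: "2025-03-01" });
}
```

The structural point survives even if the API names change: the agent negotiates over typed tools and JSON results rather than pixels and DOM trees.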
Early benchmarks show ~67% reduction in computational overhead compared to visual agent-browser interactions. Task accuracy around 98%.
The second-order effect is where this gets wild. Today, when a browser agent visits two competing airline sites, it’s guessing at both interfaces equally. Once WebMCP adoption spreads, the site that exposes structured tools gives the agent a clean, reliable path to complete the task. The site that doesn’t forces the agent to fumble through the UI. Agents will prefer the cheaper path. Every time.
This means “Agent Experience Optimization” becomes a real discipline. Tool naming, schema design, description quality. Sound familiar? It’s the same shift that happened when meta descriptions and structured data became optimization surfaces for search engines. Except this time, the traffic source isn’t Google’s crawler. It’s every AI agent on the internet.
Bots already make up 51% of web traffic. Google just gave them a front door.
@ChromiumDev:
WebMCP is available for early preview → https://t.co/bZMcANfg37
WebMCP aims to provide a standard way for exposing structured tools, ensuring AI agents can perform actions on your site with increased speed, reliability, and precision. https://t.co/9NvSi6rMdV
everyone's talking about their teams like they were at the peak of efficiency and bottlenecked by ability to produce code
here's what things actually look like
- your org rarely has good ideas. ideas being expensive to implement was actually helping
- majority of workers have no reason to be super motivated, they want to do their 9-5 and get back to their life
- they're not using AI to be 10x more effective they're using it to churn out their tasks with less energy spend
- the 2 people on your team that actually tried are now flattened by the slop code everyone is producing, they will quit soon
- even when you produce work faster you're still bottlenecked by bureaucracy and the dozen other realities of shipping something real
- your CFO is like what do you mean each engineer now costs $2000 extra per month in LLM bills
dax chose war. against literally everyone.
hell yes
@thdxr: everyone's talking about their teams like they were at the peak of efficiency and bottlenecked by ability to produce code
Best PC Specs to Run Local AI Models like Minimax, Free!
Minimax came out the other day, and it's already at Opus 4.5 benchmark levels, while it can run for free on your local computer.
This just sho...
@SamuelBeek ok a tamagotchi for your coding agent is genuinely the best idea i've seen this week. need this to guilt trip me when I ignore its suggestions lol
Very soon PC part prices will skyrocket imo
and people will have to wait very long to get what they want
as most businesses will rush for local LLMs too, due to privacy and cost efficiency,
since open-source local models like Minimax 2.5 have already reached Opus 4.5 levels
Endgame is looking like this:
> start with claude / openai
> openclaw with your local pc or mac
> upgrade your hardware with income from your builds and products
> go private
> run more agents 24/7
> let them build autonomously
> earn passively via your agents
> build things that you love
> spend more time with your loved ones
This is the road to riches in the age of AI as long as you put in the time to master vibe coding, ai tools, and distribution
You'll need this:
@meta_alchemist:
Best PC Specs to Run Local AI Models like Minimax, Free!
btw we spoke to a company yesterday that's at the scale of 20,000 devs
they are looking at these numbers and going W T F and they're moving inference to their own gpu cluster with open source models
there isn't infinite budget and appetite for this stuff
@big_duca Someone has to prompt the Claudes, talk to customers, coordinate with other teams, decide what to build next. Engineering is changing and great engineers are more important than ever.
Rumors about Seedance 3.0 have surfaced on Chinese X, and they are mind-blowing if true:
According to leaks from Dr. Liu Zheng (@mokoocn), ByteDance's video generation AI has entered its final "closed-door sprint" and is aiming to end the era of short clips forever. If these specs are real, we are looking at the "feature film era" of AI:
🎥 **Infinite Continuous Generation**
The 15-second limit is dead. Seedance 3.0 reportedly supports seamless single-take generations of **10+ minutes** (with internal tests reaching 18 minutes without collapse). It uses a "narrative memory chain" to remember plot points, character personalities, and settings, effectively allowing it to "direct" multi-act stories with suspense and twists like a human.
🗣️ **Native Multi-Language & Emotional Dubbing**
No more post-production lip-syncing. The model allegedly handles end-to-end video *and* audio generation simultaneously. It can output perfect lip-sync in Chinese, English, Japanese, and Korean, while dynamically adjusting breathing, crying, or laughing to match the character's emotional state.
🎬 **Hollywood-Grade Director Control**
Forget simple prompts; this supports "storyboard script input" and real-time director commands (e.g., "Shot 1: Wide-angle dolly push..."). It reportedly understands cinematic language instantly and includes industry-standard color grading presets like IMAX and Netflix-style looks.
📉 **The "Nuclear Bomb" Cost Reduction**
Perhaps the most disruptive claim: The compute cost for 1 minute of cinematic video has supposedly dropped to **1/8th of Seedance 2.0**. This would make high-end video production cost pennies compared to traditional crews, described as a "dimension-reduction strike" against the ad and short-drama industries.
If this ships, the barrier to entry for creating full-length movies just evaporated.
Seedance 3.0 specs just leaked.
If this is accurate, this isn’t another incremental AI video upgrade. It’s a structural shock to Hollywood.
• 10 to 18 minute coherent films in one pass
• Persistent narrative memory across scenes
• Native multi language voice with emotional control
• Shot level directing inputs like a real production workflow
• Cost reportedly a fraction of traditional shoots
We are not talking about TikTok clips anymore.
We are talking about full cinematic episodes generated from prompts.
Hollywood’s moat was scale, capital, and distribution.
AI just compressed all three.