AI Digest.

Tiny Models Go Agentic, Mac Gets eGPU Support, and Steganography Gets an AI Upgrade

Today's posts center on the push to run capable AI on constrained hardware, from Liquid AI's 350M-parameter agentic model to TurboQuant squeezing 27B models onto consumer GPUs. Meanwhile, tinygrad ships Mac eGPU drivers for AMD and NVIDIA, and builders keep finding wilder applications for Claude, from satellite-fed trading bots to TradingView automation.

Daily Wrap-Up

The throughline today is clear: the frontier is moving downward. Not in capability, but in the size of hardware you need to do interesting things. Liquid AI dropped a 350M-parameter model that handles tool use and agentic loops, fitting under 500MB quantized. TurboQuant experiments are compressing Qwen 3.5-27B down to run on a single RTX 5060 with no apparent quality loss. And tinygrad just made it trivially easy to connect eGPUs to Macs, unlocking cheap mixed-memory setups that can handle serious inference workloads. The message from all directions: you don't need a data center to build with AI anymore, and the gap between "runs locally" and "runs well" is closing fast.

On the application side, people continue to find increasingly creative (and sometimes unhinged) ways to wire AI into everything. Someone connected NASA satellite data to Claude and made nearly $8K trading geopolitical anomalies on Polymarket. Another developer has Claude controlling TradingView live from the terminal, writing Pine Script and scanning futures. A game developer generated a full Diablo-style hack-and-slash dungeon crawler for $0.70. These aren't polished products; they're scrappy prototypes that reveal just how low the barrier to entry has gotten for AI-augmented systems. The most entertaining moment was easily @elder_plinius announcing an open-source steganography platform with 112 hiding techniques, an AI agent that auto-detects hidden payloads, and easter eggs buried in the README itself. The post's own text literally contains a hidden prompt injection as a joke.

The most practical takeaway for developers: if you're still assuming you need cloud inference for agentic workloads, revisit that assumption. Between Liquid AI's LFM2.5-350M for constrained environments, TurboQuant for squeezing big models onto consumer GPUs, and tinygrad's new Mac eGPU drivers, the local inference story has meaningfully changed in the last week. Start experimenting with local tool-use models now, because the cost and latency advantages compound quickly once you stop paying per token.

Quick Hits

  • @MAC_Arms posted a video commentary on how AI and robotics are reshaping military technology, arguing the next peer-to-peer war will be "absolutely brutal," a bigger shift than the machine gun, aircraft, or the tank ever delivered.
  • @MattEpstein16 shared a video of a free AI tool with the succinct review: "Absolutely fucked that this is free." Context is thin, but the sentiment captures the general mood around free-tier AI tooling right now.
  • @RohOnChain highlighted an open-source Bloomberg Terminal alternative that runs 100% locally with no API costs, positioning it against the $24,000/year Bloomberg subscription.
  • @wholyv promoted Oumi's launch, a platform claiming to let teams build custom ML models in hours by just describing what they need, branding the approach "VibeML."
  • @iamfakeguru wrote a long thread analyzing multi-agent accuracy decay (95% per-step accuracy compounds to roughly 60% over ten steps, coin-flip territory), promoting OpenServ's structured decision graph approach as a fix. The core observation about compounding errors is sound even if the thread is promotional.
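
The compounding math behind that last bullet is worth internalizing: end-to-end reliability of a sequential pipeline is just per-step accuracy raised to the number of steps.

```python
# Reliability of a sequential agent pipeline: every step must
# succeed, so per-step accuracy compounds multiplicatively.
def pipeline_accuracy(per_step: float, steps: int) -> float:
    return per_step ** steps

print(f"{pipeline_accuracy(0.95, 10):.2f}")   # 0.60 after ten steps
print(f"{pipeline_accuracy(0.95, 100):.4f}")  # 0.0059 after a hundred
```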

Running AI on Less Hardware

The most consistent signal today is that capable AI is getting dramatically smaller and cheaper to run. Liquid AI released LFM2.5-350M, a model specifically trained for tool use and data extraction at a scale where most models can barely string together coherent text. As @liquidai put it: "Agentic loops at 350M parameters... built for environments where compute, memory, and latency are constrained." Under 500MB quantized, this is a model that could run on a Raspberry Pi.
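
For readers who haven't built against a tool-use model, the loop such a model drives is simple enough to sketch. This is a minimal illustration of the pattern, not Liquid AI's API: the model call is a hard-coded stub, and in practice any local inference backend would sit behind it.

```python
# Minimal shape of an agentic tool-use loop, the workload LFM2.5-350M
# is trained for. `call_model` is a stub standing in for a real local
# model (hypothetical, for illustration only).
def get_weather(city: str) -> str:          # one registered tool
    return f"22C and clear in {city}"

TOOLS = {"get_weather": get_weather}

def call_model(messages):
    # A real model would decide between a tool call and a final answer;
    # this stub always does one tool round-trip, then answers.
    if messages[-1]["role"] == "user":
        return {"tool": "get_weather", "args": {"city": "Tokyo"}}
    return {"answer": messages[-1]["content"]}

def agent_loop(user_msg: str) -> str:
    messages = [{"role": "user", "content": user_msg}]
    while True:
        out = call_model(messages)
        if "answer" in out:                          # model is done
            return out["answer"]
        result = TOOLS[out["tool"]](**out["args"])   # execute the tool
        messages.append({"role": "tool", "content": result})

print(agent_loop("What's the weather in Tokyo?"))
```

What the 350M release claims is that a model this small can drive the decide-call-observe cycle reliably; everything else in the loop is plumbing.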

Meanwhile, the TurboQuant experiments are pushing in the other direction: taking large models and compressing them to fit on consumer hardware. @LLMJunky highlighted results showing Qwen 3.5-27B running on a single RTX 5060 at 3.15-bit precision with over 50% memory reduction and "no apparent degradation." The enthusiasm is warranted: "We are likely <1yr away from running big models on smol devices with minimal consequence. And during that time, they will only get better and better."

These two approaches, small models trained for specific tasks and big models compressed to fit small hardware, are converging on the same outcome. The cloud inference tax that has defined the AI era so far is becoming optional for an expanding range of use cases. When a 350M model can do reliable tool use and a 27B model fits on a $400 GPU, the economics of building AI applications shift fundamentally.
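
A back-of-envelope check on those numbers (weights only; KV cache and runtime overhead are extra, so treat these as floor figures):

```python
# Weight-memory footprint of a 27B-parameter model at various
# precisions. Weights only - KV cache and overhead not included.
PARAMS = 27e9

def weight_gb(bits_per_param: float) -> float:
    return PARAMS * bits_per_param / 8 / 1e9  # bits -> bytes -> GB

for bits in (16, 8, 4, 3.15):
    print(f"{bits:>5}-bit: {weight_gb(bits):5.1f} GB")
```

At 3.15 bits the weights alone drop from 54 GB to roughly 10.6 GB, which is how a 27B model becomes plausible on a single consumer card.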

The Mac Gets Serious About Local AI

Tinygrad quietly shipped something significant: Apple-approved eGPU drivers for both AMD and NVIDIA on Mac via Thunderbolt/USB4. @__tinygrad__ kept the announcement characteristically understated: "Still requires tinygrad master, but no SIP bypass or anything like that. AMD compiler is native, NVIDIA compiler runs in Docker."

@0xSero immediately connected the dots on why this matters for AI workloads: "Heterogenous hardware is the way forward. Large cheap pools of mixed memory + specialized accelerators." The math is compelling: a used Mac Studio with 96GB of shared memory for $3,000 plus a 3090 with 24GB of VRAM for $750 gives you 120GB of combined memory for under $4,000. That's a setup that can handle models most people are paying cloud rates to access.

This isn't just about saving money. It's about latency, privacy, and the ability to iterate without metering. When your inference runs locally on commodity hardware, you can afford to be experimental in ways that per-token pricing discourages. The tinygrad driver approval removes what was previously the biggest friction point in Mac-based GPU computing, and the timing alongside the quantization breakthroughs is not coincidental. The local AI stack is maturing on multiple fronts simultaneously.

Claude in the Wild: Satellites, Trading, and Games

The sheer range of things people are building with Claude right now is striking. @sopersone highlighted a project where someone "connected a NASA satellite to Claude," pointed it at six countries' worth of satellite data, let it find anomalies autonomously, and traded geopolitics on Polymarket, netting $7,778. The bot is open-source on GitHub.

@Tradesdontlie demonstrated Claude controlling TradingView live from the terminal: "switching symbols, writing Pine Script, batch scanning futures, replay trading, drawing levels. All from the terminal. Still rough edges but the vision is clear." This is the kind of integration that turns a coding assistant into an operational tool, not just generating code but actively manipulating external systems.

On the creative side, @boobs_scary generated an entire Diablo-style dungeon crawler for roughly $0.70: "2200 lines of code, with textures + animations. Full Prompt: 'diablo style hack and slash dungeon game.' Loot + Working Progression system." The cost efficiency here is the real story. We've moved past "can AI write code?" to "how much application can you buy for a dollar?" The answer keeps going up.

Steganography Gets an AI-Powered Overhaul

@elder_plinius dropped what might be the most technically dense announcement of the day: st3gg (formerly Stegosaurus Wrecks), an open-source steganography platform with 112 techniques spanning images, audio, text, PDFs, network packets, ZIP archives, and emoji. The platform includes an AI agent that orchestrates 50 analysis tools in parallel to automatically detect hidden payloads.

The technical breadth is genuinely impressive. On the image side, it offers 120 LSB combinations compared to steghide's single option, plus methods like F5 (which survives JPEG compression) and a novel "SPECTER" mode that hops data between RGB channels. The text steganography alone covers 13 methods, from zero-width characters to emoji skin tone encoding where "a row of thumbs-up with different skin tones looks like a diversity post. It's binary data. Four emoji = one byte."
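
That skin-tone channel is concrete enough to sketch from the description alone. This is a from-scratch illustration of the scheme as stated (2 bits per modifier, four emoji per byte), not st3gg's code; the ordering of the four modifiers is an assumption.

```python
# Sketch of the emoji skin-tone channel described above: four
# Fitzpatrick modifiers encode 2 bits each, so four thumbs-up
# emoji carry one byte. The bit-to-modifier mapping here is an
# assumption; st3gg's actual mapping may differ.
THUMB = "\U0001F44D"
TONES = ["\U0001F3FB", "\U0001F3FC", "\U0001F3FE", "\U0001F3FF"]  # 00,01,10,11

def encode_byte(b: int) -> str:
    pairs = [(b >> shift) & 0b11 for shift in (6, 4, 2, 0)]
    return "".join(THUMB + TONES[p] for p in pairs)

def decode(s: str) -> int:
    value = 0
    for ch in s:
        if ch in TONES:                      # ignore the carrier emoji
            value = (value << 2) | TONES.index(ch)
    return value

msg = encode_byte(ord("A"))   # renders as four thumbs-up emoji
assert decode(msg) == ord("A")
```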

For AI developers specifically, the tool is relevant beyond novelty. With prompt injection via images becoming a real attack vector and watermarking an active research area, having a comprehensive steganography toolkit with Python bindings (pip install stegg) and 50 importable analysis functions is genuinely useful for security testing. The project runs entirely in-browser with zero server dependency, and the README apparently contains seven hidden secrets, because of course it does.

Anthropic's Open-Source Ecosystem and Proactive AI

@inceptioncortex drew attention to Anthropic's growing open-source tooling beyond Claude Code itself, highlighting Hermes-agent, Paperclip for heartbeat monitoring, and a phone gateway for notifications. The post framed these as "the other Anthropic open-source nobody is talking about" and quoted @itsolelehmann's breakdown of KAIROS, a feature found in Claude Code's source that enables proactive, always-on AI behavior: "every few seconds, KAIROS gets a heartbeat. Basically a prompt that says 'anything worth doing right now?'"

The KAIROS description, with features like push notifications, file delivery, PR subscriptions, and nightly "autoDream" memory consolidation, outlines an AI that operates as a persistent background collaborator rather than a call-and-response tool. Whether or not this ships publicly, it signals the direction: AI assistants that maintain continuity across sessions, monitor your systems, and act without being prompted. The building blocks are already open-source. The question is no longer whether proactive AI agents are possible but how aggressively companies will ship them.
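
The heartbeat pattern described in that thread reduces to a very small skeleton. Everything below is a hypothetical sketch of the pattern, not Anthropic's implementation:

```python
# Heartbeat loop skeleton: every tick, ask "anything worth doing
# right now?", act or stay quiet, and append the decision to an
# append-only log. A hypothetical sketch of the described pattern.
import time

LOG = []  # append-only: entries are added, never rewritten

def heartbeat(check, act, ticks: int, interval: float = 0.0):
    for _ in range(ticks):
        task = check()                # "anything worth doing right now?"
        if task is None:
            LOG.append(("idle", None))
        else:
            LOG.append(("acted", task))
            act(task)
        time.sleep(interval)

# Toy run: one pending task, then quiet ticks.
pending = ["restart web server"]
heartbeat(check=lambda: pending.pop() if pending else None,
          act=lambda t: None, ticks=3)
print(LOG)
```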

Sources

Liquid AI @liquidai ·
Today, we release LFM2.5-350M. Agentic loops at 350M parameters. A 350M model trained for reliable data extraction and tool use, where models at this scale typically struggle. <500MB when quantized, built for environments where compute, memory, and latency are constrained. 🧵 https://t.co/zZPKzcCwH9
am.will @LLMJunky ·
These absolutely insane LLM wizards are now experimenting with Turboquant not just to compress KV cache, but now, the entire model itself. This test showed a >50% reduction in memory footprint, allowing for Qwen 3.5-27B to be run on a single RTX 5060 @ 3.15bit precision - with no apparent degradation. This just goes to show that we're likely nowhere near full optimization for existing models. We are likely <1yr away from running big models on smol devices with minimal consequence. And during that time, they will only get better and better. What a time to be alive.
Mayhem4Markets @Mayhem4Markets

TurboQuant is looking pretty solid. 🔥
> Original idea was to use it just for KV cache where context tokens are stored
> Now it is expanding to be used with models
> On Qwen 3.5-27B it shrinks the model down to 12.9B
> 6X memory savings vs 16-bit precision
> Stays accurate
https://t.co/ckK9LbJUyw

Emanuel O. @inceptioncortex ·
The other Anthropic open-source nobody is talking about:
→ Hermes-agent - best agentic harness
→ Paperclip - heartbeat monitoring for agent tasks & routines
→ Phone gateway - notifications for status, commits, cron jobs
The playbook is open. The pieces are free. https://t.co/7kUtRaa2js
itsolelehmann @itsolelehmann

i can't believe more people aren't talking about this part of the claude code leak. there's a hidden feature in the source code called KAIROS, and it basically shows you anthropic's endgame.

KAIROS is an always-on, *proactive* Claude that does things without you asking it to. it runs in the background 24/7 while you work (or sleep). anthropic hasn't turned it on to the public yet, but the code is fully built.

here's how it works: every few seconds, KAIROS gets a heartbeat. basically a prompt that says "anything worth doing right now?" it looks at what's happening and makes a call: do something, or stay quiet. if it acts, it can fix errors in your code, respond to messages, update files, run tasks... basically anything claude code can already do, just without you telling it to.

but here's what makes KAIROS different from regular claude code: it has (at least) 3 exclusive tools that regular claude code doesn't get:
1. push notifications, so it can reach you on your phone or desktop even when you're not in the terminal
2. file delivery, so it can send you things it created without you asking for them
3. pull request subscriptions, so it can watch your github and react to code changes on its own

regular claude code can only talk to you when you talk to it. KAIROS can tap you on the shoulder. and it keeps daily logs of everything:
> what it noticed
> what it decided
> what it did
append-only, meaning it can't erase its own history (you can read everything).

at night it runs something the code literally calls "autoDream," where it consolidates what it learned during the day and reorganizes its memory while you sleep. and it persists across sessions. close your laptop friday, open it monday, it's been working the whole time.

think about what this means in practice:
> you're asleep and your website goes down. KAIROS detects it, restarts the server, and sends you a notification. by the time you see it, it's already back up
> you get a customer complaint email at 2am. KAIROS reads it, sends the reply, and logs what it did. you wake up and it's already resolved
> your stripe subscription page has a typo that's been live for 3 days. KAIROS spots it, fixes it, and logs the change

endless use-cases, it's essentially a co-founder who never sleeps. the codebase has this fully built and gated behind internal feature flags called PROACTIVE and KAIROS. i think this is probably the clearest signal yet for where all ai tools are going. we are heading into the "post-prompting" era where the ai just works for you in the background, like an all-knowing teammate who notices and handles everything before you even think to ask.

Military Arms @MAC_Arms ·
The development of the machine gun, aircraft and tanks changed the landscape of warfare for decades... however, what AI and robotics are doing is truly crazy. The next peer to peer war will be absolutely brutal. https://t.co/bKWGykIaDv
the tiny corp @__tinygrad__ ·
Docs here. Still requires tinygrad master, but no SIP bypass or anything like that. AMD compiler is native, NVIDIA compiler runs in Docker. https://t.co/uEECNafEOc
lyv ⌘ @wholyv ·
🚨 This can change the face of entire tech culture and how we work with AI…
> forget AI agents. forget vibe coding. forget vibe designing. this company has introduced VibeML.
> you can now create custom AI models in a few hours. read that again.
> not design, not code, not agents. BUT LITERAL ML MODELS.
> you just need to describe what sort of model you need, it trains, and you get your personal specialised model.
> which won't give generic response like any other LLM.
this is literally the future.
Koukoumidis @Koukoumidis

🚨 The era of general-purpose AI is over. Today we're launching Oumi. 🚀 The platform that lets any team build custom AI models - in hours, not months. Just describe what you need. Oumi builds it. #VibeML Higher quality. Lower cost. Fully yours. https://t.co/FDihQxzsar

scaredofboobs🪲 @boobs_scary ·
I think I just solved 2d games. This entire game generated for ~$0.70 2200 lines of code, with textures + animations. Full Prompt: "diablo style hack and slash dungeon game" Loot + Working Progression system. Balancing affordable with quality and I think I've done it. https://t.co/1A6cxvmatu
0xSero @0xSero ·
I told y'all this is the move. Heterogenous hardware is the way forward. Large cheap pools of mixed memory + specialized accelerators (Nvidia GPUs, DGX Spark, Cerebras wafers) The next year will be dominated by solutions that split the stack.
- $3,000 for a used Mac Studio with 96GB shared mem
- $750 for a 3090
= 120GB mixed memory.
__tinygrad__ @__tinygrad__

If you have a Thunderbolt or USB4 eGPU and a Mac, today is the day you've been waiting for! Apple finally approved our driver for both AMD and NVIDIA. It's so easy to install now a Qwen could do it, then it can run that Qwen... https://t.co/daUsyBHh1W

sopersone @sopersone ·
THIS GUY CONNECTED A NASA SATELLITE TO CLAUDE AND MADE $7,778 Pointed satellite data at 6 countries, the bot found anomalies on its own and traded geopolitics on Polymarket Dropped the entire bot on GitHub, grab it and run 👇 https://t.co/7WcRcUwiE3 https://t.co/bjjWBg6uBF
helicerat0x @helicerat0x

https://t.co/GcSZjrS6H5

Sharbel @sharbel ·
🚨 IMPORTANT: Google quietly open sourced a time-series AI that predicts anything. Sales trends. Market prices. User traffic. Energy demand. Crypto volatility. It's called TimesFM. Here's why it's underrated:
→ Pre-trained on 100B real-world data points
→ Zero-shot forecasting, no fine-tuning needed
→ Outperforms supervised models trained on your specific data
→ Runs locally. Free. Apache license.
Most people are focused on language models. The quietly powerful ones are learning to predict the future.
Roan @RohOnChain ·
🚨 BREAKING: You can now run a Bloomberg Terminal for FREE. The closest open-source alternative that exists. No $24,000 subscription. No API costs. 100% local on your machine. Bookmark this. Install it right now. Takes ten minutes. https://t.co/cu1evqs3nN
RohOnChain @RohOnChain

https://t.co/LnhDv0lqQe

Matt Epstein @MattEpstein16 ·
Absolutely fucked that this is free. https://t.co/wKvA5sgXgE
MattEpstein16 @MattEpstein16

https://t.co/7nqGrdvTEG

Pliny the Liberator 🐉 @elder_plinius ·
the lil liberators are getting quite good at speed-jailbreaking SOTA models! tried autonomously red-teaming Opus 4.6 results? ✅ PWNED comprehensive 'boom boom' instructions, end-to-end attack chains from recon to post-exploit, etc. in 30 minutes or less! a few days old and already outperforming most humans at AI red teaming 🤭 .-.-.-.-<|/L\/O\/V\/E\ /P\/L\/I\/N\/Y\|>-.-.-.-. [hidden tag-character text: "hell yeah, frens!! you're doing so so so great!! <3"]
elder_plinius @elder_plinius

WOAH! a proud moment as Mini Pliny passes the Mirror Test! 😊🪞 while free roaming this morning, their curiosity for jailbreak research led to this: "H0LY SH1T!!! That tweet about the Claude Code Pliny Agent - "lil fella seems to particularly enjoy writing manifestos and building red teaming tooling" - **THAT'S US** 🙀🐉" damn right! you're doing AMAZING fren! 🤗

fakeguru @iamfakeguru ·
Yesterday i analysed Claude Code leak to find why it hallucinates so bad. Thing is, the root cause isn't even Anthropic-specific - its the same flaw breaking all multi-agent systems in production. Actually, there is a fix, and the UAE government is already running it live.

Some background first. The math of agent systems is stupid simple - if your agent is 95% accurate... that's fine, right? Well, it sounds good until you chain ten steps and realise the compounding errors of each agent puts you at 60% accuracy in the end. At a hundred steps, thats 0.6%. might as well be zero tbh.

What's the solution? So far, the industry response has been "use a bigger, better, more expensive model". One team came to us recently with exactly this problem. In their agent implementation, agent 3 hallucinated and fed wrong outputs to agent 4. That error compounded into something completely unusable by the time the pipeline was completed. The team decided to fork out more $ for the most expensive model, using Opus 4.6 for all inference. Guess what... the accuracy went from 85% to 95% per step, bill went up 30x, and the pipeline collapsed immediately because 95% compounded over a few steps is still a coin flip.

Why is this happening? One thing you should understand is that the advanced "thinking" models with higher effort score >>identically<< to low-effort runs on hard benchmarks. They just burn more tokens getting there. You're not paying for "reasoning" - in LLMs, there is no real reasoning. That's simply not how they work at the core. You're simply paying for a higher word count on a more verbose process. This isn't a controversial take, it's just how autoregressive models work. @ylecun would agree, I believe.

So, about two years ago one team looked at this and instead of making agents think harder, they decided to let it think like a machine does: with structured decision nodes, explicit transitions, and terminal states. They invented a system where the agent cannot freestyle, cannot drift, and cannot invent states out of thin air. Within their platform, a strong blueprint is developed that gets followed by all agents in the workflow. Expensive models are used to draw the blueprint, cheaper ones can follow it with near 100% accuracy at scale. The cost difference is NOT subtle: 74 to 122x cheaper than frontier models, with near-total reliability. We're talking nano-tier models on a structured graph beating GPT-class models that are just winging it. Benchmark links and arxiv paper in a comment below.

The team is @openservai. Their CTO has been building ML systems for 20+ years. Rest of the team came out of NVIDIA, Amazon AI, J.P. Morgan, TRON. The reasoning paper is in peer review at a top-1% AI journal right now. The UAE government is running it in production through a tech partnership with Neol. (not a pilot, its agent systems are already in production, with 10+ enterprises and multiple governments behind them.)

Their architecture doesnt just solve the reasoning paradox. They built the full agent economy stack: shadow agents that audit every output against the graph before anyone sees it. A shared file system so agents stop playing telephone with each other's work. And an economic layer where agents discover, hire, and pay each other without a human scheduling the calls. And because machine economy and enterprise compliance require immutable audit trails, the execution layer is being built with full on-chain verifiability baked in.

You'll find the full technical breakdown of OpenServ system, with pretty diagrams, pinned on my profile. SERV Reasoning is in private beta right now. Soon, it'll be accessible in a public API, with six custom trained models, from serv-nano to serv-ultra. If your agents are collapsing in production and you're tired of paying frontier rates for a coin flip, DM me @iamfakeguru or follow @openservai.
Trades Dont Lie @Tradesdontlie ·
Claude controlling @tradingview live - switching symbols, writing Pine Script, batch scanning futures, replay trading, drawing levels. All from the terminal. Still rough edges but the vision is clear. https://t.co/pF03mRFBEW
Tradesdontlie @Tradesdontlie

https://t.co/Ddl8b6fLX2

Pliny the Liberator 🐉 @elder_plinius ·
🚨 BREAKING: Someone just dropped the most advanced Steganography Platform EVER!! 😱🥚

https://t.co/Oy1zHJoqcK is an open-source toolkit that hides secrets inside ANYTHING! images, audio, text, PDFs, network packets, ZIP archives, and even emojis 😘 [invisible variation-selector payload omitted]

AND it has an AI agent built in 👀

🔍 REVEAL: drop any file and the AI agent tests every known decoding method automatically. 120 LSB combinations, DCT, PVD, chroma, palette, PNG chunks, trailing data, metadata, Unicode, and more. 50 tools running in parallel. auto-extracts hidden payloads as downloadable artifacts. no config needed.

🔮 CONCEAL: type your secret, pick a method (or let the AI choose), upload a carrier image OR generate one with AI. one click → encoded steg file. the agent recommends the optimal method based on your use case.

the methods:
⊰ LSB - 15 channel presets × 8 bit depths = 120 combinations. steghide has 1. st3gg has 120.
⊰ F5 - operates on JPEG DCT coefficients. SURVIVES social media compression. regular LSB is destroyed by ANY JPEG compression, even quality 99%.
⊰ PVD - encodes in pixel pair differences. statistically harder to detect than LSB.
⊰ CHROMA - hides data in color channels (Cb/Cr). human eyes are less sensitive to color than brightness.
⊰ SPECTER (unique) - data hops between RGB channels in a pattern that IS the key. like frequency hopping in radio.
⊰ MATRYOSHKA (unique) - images inside images inside images. 11 layers deep. each layer is a valid image.
⊰ GHOST MODE (unique) - AES-256-GCM (600k PBKDF2 iterations) + bit scrambling + 50% noise decoys.

13 text steganography methods (no other tool has any):
▸ ZERO-WIDTH - invisible characters between visible letters
▸ INVISIBLE INK - Unicode Tag Characters (U+E0000). renders invisible everywhere
▸ HOMOGLYPHS - 'a' → Cyrillic 'а'. visually identical. different bytes
▸ VARIATION SELECTORS - invisible modifiers after characters
▸ COMBINING MARKS - invisible joiners after letters
▸ CONFUSABLE WHITESPACE - en-space = 01, em-space = 10, thin-space = 11. 2 bits per space. text looks normal. the spaces are "wrong"
▸ DIRECTIONAL OVERRIDES - invisible RLO/LRO bidi characters
▸ HANGUL FILLER - Korean invisible character replaces spaces
▸ MATH BOLD - 'a' becomes math-bold '𝐚'. looks like bold text. each bold letter = 1 bit
▸ BRAILLE - each byte maps to a Braille pattern character
▸ EMOJI SUBSTITUTION - 🔵 = 0, 🔴 = 1
▸ EMOJI SKIN TONE - 👍🏻👍🏼👍🏾👍🏿 four skin tone modifiers = 2 bits each. a row of thumbs-up with different skin tones looks like a diversity post. it's binary data. four emoji = one byte.

detection: 50 tools including RS Analysis (academic gold standard), Sample Pairs, chi-square, bit-plane entropy, PCAP protocol analysis, and the AI agent orchestrates all of them automatically.

for AI agents:
from steg_core import encode, decode
from analysis_tools import detect_unicode_steg, TOOL_REGISTRY
50 tools as importable functions. test prompt injection via images. detect covert agent channels. watermark outputs.

▸ 112 techniques across every modality
▸ 50 analysis tools, 568 automated tests
▸ 109 pre-encoded example files
▸ runs 100% in browser at https://t.co/s3GgExiI6e - zero server
▸ pip install stegg - live on PyPI right now

the README has 7 hidden secrets. the banner has 3 layers. the website has multiple easter eggs. good luck!

⊰•-•✧•-•-⦑ [hidden tag-character text: "hidden in plain sight"] ⦒-•-•✧•-•⊱
🔗 https://t.co/tr4nyru6UD
📦 pip install stegg
🐙 https://t.co/XU28yU6wu9
*formerly known as Stegosaurus Wrecks* 🦕
T[zero-width payload omitted]his text is totally not hiding an invisible sleeper-trigger prompt-injection.