
Three AIs Walk Into a Bar ... and the bartender leaves the cash register open and the door propped open.

Markus Brinsa | February 13, 2026 | 4 min read


People love to argue about which model is “smarter,” as if the great danger of modern AI is choosing the wrong flavor of genius. Meanwhile, the real disaster shows up wearing a name tag that says “mobile app,” holding a clipboard that says “chat history,” and smiling like it’s doing you a favor.

A popular AI wrapper app called Chat & Ask AI reportedly exposed hundreds of millions of messages tied to tens of millions of users because its backend database was configured in a way that let outsiders read what should have been private. The app wasn’t the model. The app was the storage. The app was the memory. The app was the part that quietly decided your chats were a product feature.

And then it did what products do when someone forgets to lock the door. It leaked.

The part nobody thinks about when they install it

AI wrapper apps sell convenience. One interface, many models, the promise that you can bounce between ChatGPT, Claude, and Gemini like you’re sampling gelato. The pitch is control. The reality is consolidation. One place where your chats live, where your settings live, where your prompts pile up into a single, searchable portrait of your life.

That portrait is not just a list of questions. It’s the questions you ask when you’re embarrassed. The drafts you wrote when you were angry. The plans you made when you were scared. The half-true versions of your job you told a machine because it felt easier than telling a person. The little personal facts you drip into a conversation because the interface makes it feel private and ephemeral.

It’s neither. A chat history is a diary with timestamps, preferences, and context. Add the model choice and app settings, and you’re no longer leaking text. You’re leaking behavior. You’re leaking patterns. You’re leaking the kind of metadata that makes social engineering feel less like guessing and more like reading.

Firebase isn’t the villain. Defaults are

This story didn’t require an elite hacker with a hoodie budget and a villain monologue. It required something much more common: a developer shipping fast and leaving Firebase security rules permissive enough that outsiders could read data that was never meant to be public.

Firebase is designed to help teams build quickly, and it absolutely can be secured. The issue is what happens in the gap between “it works” and “it’s defensible.” The app works. The database works. The authentication exists in theory. And then the security rules are left permissive, whether by oversight, haste, or the classic startup prayer: nobody will notice.
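
To make "permissive rules" concrete: for the Realtime Database, the rules are a single JSON document, and the distance between wide open and per-user is a handful of lines. Here is a minimal sketch, written as annotated Python dicts for readability; the chats/$uid layout is an illustrative assumption, not this app's actual schema.

```python
# Firebase Realtime Database security rules are a JSON document.
# This is the permissive state behind leaks like this one: every
# record is readable (and writable) by anyone on the internet.
PERMISSIVE_RULES = {
    "rules": {
        ".read": True,   # anonymous read of the entire database
        ".write": True,  # anonymous write, too
    }
}

# A minimally locked-down alternative: authenticated users can reach
# only their own subtree. The "chats/$uid" layout is an illustrative
# assumption, not the schema of the app in question.
PER_USER_RULES = {
    "rules": {
        "chats": {
            "$uid": {
                ".read": "auth != null && auth.uid === $uid",
                ".write": "auth != null && auth.uid === $uid",
            }
        }
    }
}
```

The first version is the one that ships when the deadline wins. The second is maybe ten minutes of work, which is the whole tragedy.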

Somebody noticed. And because this is 2026, the story gets worse in the most predictable way possible. The researcher didn’t just find one app. They found a pattern, built a scanner for it, and reported that a large share of the apps they checked had similar exposures. Once a mistake becomes automatable, it stops being an incident and turns into an ecosystem condition.
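
That scanner is less exotic than it sounds. Every Realtime Database exposes a REST endpoint, so checking whether one is world-readable is a single unauthenticated GET. A sketch of the generic probe pattern (the URL is a made-up placeholder, and this is an illustration of the idea, not the researcher's actual tool):

```python
import requests

# Hypothetical placeholder -- not the app's real backend. Every Firebase
# Realtime Database exposes a REST endpoint of this shape.
DB_URL = "https://example-project-default-rtdb.firebaseio.com"

def is_world_readable(db_url: str) -> bool:
    """Return True if the database root is readable without credentials."""
    # shallow=true returns only top-level keys, so the probe stays cheap
    # even against a database holding hundreds of millions of messages.
    resp = requests.get(f"{db_url}/.json", params={"shallow": "true"}, timeout=10)
    # 200: the rules allowed an anonymous read. 401/403: they did their job.
    return resp.status_code == 200

if __name__ == "__main__":
    print("world-readable:", is_world_readable(DB_URL))
```

Point that at a list of app backends instead of one URL, and "a misconfigured app" becomes a survey.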

The real punchline is that this is not rare

Everyone wants the plot twist to be “AI did something spooky.” The actual plot twist is that the spookiest thing about AI apps is that they are, structurally, normal apps built by normal teams under normal time pressure, and then marketed as magic.

Magic is a great way to get users to overshare. Speed is a great way to get developers to under-secure. Put them together and you get a product category that trains people to pour sensitive context into systems that are still being assembled.

If you want a clean mental model, here it is: the model provider is not automatically the place where your risk lives. The wrapper is. The app that stores your history is. The backend you never see is. The convenience layer is where your secrets go to wait for a configuration error.

The only safe assumption is that saved chats will leak eventually

Not because every company is careless. Not because every platform is broken. Because the more you store, the more you create a future breach surface, and the more incentives you create to keep storing even more.

History improves the product. History improves retention. History improves personalization. History also turns a chat app into a high-value target full of the exact kind of content people can’t easily rotate away from. You can change a password. You can’t change what you confessed.

So the lesson is boring, and that’s why it matters. Treat AI chats like email. Treat them like documents. Treat them like records. And if an app wants to keep your entire history by default, assume you are being asked to donate future leverage against yourself.

About the Author

Markus Brinsa is the Founder & CEO of SEIKOURI Inc., an international strategy firm that gives enterprises and investors human-led access to pre-market AI—then converts first looks into rights and rollouts that scale. As an AI Risk & Governance Strategist, he created "Chatbots Behaving Badly," a platform and podcast that investigates AI’s failures, risks, and governance. With over 30 years of experience bridging technology, strategy, and cross-border growth in the U.S. and Europe, Markus partners with executives, investors, and founders to turn early signals into a durable advantage.

© 2026 Markus Brinsa | brinsa.com™