Linubra

Your Life Is Not Training Data

What data sovereignty actually means in 2026

Part 4 of the series: Building Linubra

Patrick Lehmann · 6 min read
A sealed concrete cube surrounded by glass spheres that never touch it — representing data sovereignty and protected personal data

Most AI products are honest about what they are, if you read carefully enough.

The terms of service tell you that your inputs may be used to improve the model. The privacy policy explains that your data is stored on shared infrastructure in a jurisdiction you didn’t choose. The marketing page says “private by design” and means “we use HTTPS.”

For a tool that stores your grocery list or your movie preferences, this is an acceptable trade. The data is low-stakes. The convenience is high. The bargain makes sense.

A Reasoning Memory Engine is not that kind of tool.

TL;DR: 82% of consumers see losing control of their data to AI systems as a serious personal threat, and 76% would switch brands for transparency — even at higher cost (Relyance AI, 2025). When your AI tool handles meeting commitments, health observations, and relationship dynamics, the standard “free in exchange for data” model doesn’t hold. Data sovereignty isn’t a feature — it’s the foundation.


Why Is Personal AI Data a Different Category?

A Reasoning Memory Engine captures meetings where budget commitments were made. Medical observations about your children. Injury logs. Family conflicts. Strategic decisions about your company. The names and relationship dynamics of people you trust.

This is not a notes app. It is a comprehensive record of your cognitive life — the decisions you made, the patterns in your health, the commitments others made to you, the private context behind every relationship that matters to you.

A tool of this sensitivity cannot operate under the standard “free in exchange for your data” business model of the AI industry. Not because that model is dishonest — it is often disclosed clearly — but because the data it would consume is categorically different from what most AI tools handle.

If your reasoning memory engine is also your model’s training set, you don’t have a reasoning memory engine. You have a data harvesting product with a useful interface.

This distinction matters. It shapes every architectural decision we’ve made.


What Does Data Sovereignty Actually Mean?

A December 2025 survey of over 1,000 U.S. consumers found that 82% see losing control of their data to AI systems as a serious personal threat — with 43% calling it “very serious.” In the same survey, 81% suspect their data is being used for AI training without disclosure (Relyance AI, 2025). The suspicion is often justified. But data sovereignty isn’t a marketing phrase. It’s a set of specific technical and operational commitments.

Your data trains nothing. Linubra processes your inputs using Gemini via Vertex AI under a contract that prohibits the use of customer data for model training. Your memories are not federated into a shared model. Your patterns do not inform improvements served to other users. The intelligence the system builds belongs to your Knowledge Graph, not to a foundation model.
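
To make this concrete, here is a rough sketch of what an inference call through Vertex AI looks like from a TypeScript backend. The project ID, region, model name, and prompt are illustrative placeholders, not our production configuration; the relevant property is that each call is stateless and runs under terms that exclude customer data from training.

```typescript
// Sketch: routing inference through Gemini on Vertex AI.
// Project ID, region, and model name are illustrative placeholders.
import { VertexAI } from '@google-cloud/vertexai';

const vertexAI = new VertexAI({
  project: 'example-project', // placeholder, not our real project
  location: 'europe-west4',   // placeholder region
});

const model = vertexAI.getGenerativeModel({ model: 'gemini-1.5-pro' });

// Each call is stateless: the prompt and response stay within the
// customer project and are excluded from foundation-model training
// under the Vertex AI data-governance terms.
export async function extractCommitments(memoryText: string): Promise<string> {
  const result = await model.generateContent(
    `Extract people, commitments, and dates from this note:\n\n${memoryText}`
  );
  return result.response.candidates?.[0]?.content.parts[0]?.text ?? '';
}
```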

Your data lives in dedicated storage. Every user’s data is isolated in its own Google Cloud Storage bucket, per environment. There is no shared storage layer where a misconfiguration could expose one user’s data to another. This is more expensive to operate than shared infrastructure. We consider it a baseline requirement.
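
A minimal sketch of what per-user isolation can look like, assuming Google Cloud’s Node.js storage client. The bucket-naming scheme and helper functions below are hypothetical illustrations, not our actual code; the point is that every read and write resolves to a bucket owned by exactly one user.

```typescript
// Sketch: one dedicated bucket per user, per environment.
// The naming scheme and helpers are hypothetical illustrations.
import { Storage } from '@google-cloud/storage';

const storage = new Storage();

type Env = 'dev' | 'staging' | 'prod';

// No shared storage layer: a misconfigured ACL or a buggy prefix
// filter on one bucket cannot expose another user's data.
function userBucketName(userId: string, env: Env): string {
  return `linubra-${env}-user-${userId}`; // hypothetical scheme
}

export async function storeMemory(
  userId: string,
  env: Env,
  memoryId: string,
  payload: object
): Promise<void> {
  const bucket = storage.bucket(userBucketName(userId, env));
  await bucket
    .file(`memories/${memoryId}.json`)
    .save(JSON.stringify(payload), {
      metadata: { contentType: 'application/json' },
    });
}
```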

Authentication is built for the threat model. The web application uses HttpOnly cookies for session management — not localStorage, not URL-embedded tokens. HttpOnly cookies are not accessible to JavaScript, which means they can’t be exfiltrated by XSS attacks. For a tool that handles sensitive personal and professional data, this is the correct architecture. It’s also less convenient to implement than the alternatives. We made that trade deliberately.
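
For readers who build web apps, the difference is a handful of flags set when the session is created. A minimal sketch, assuming an Express backend for illustration; the cookie attributes, not the framework, are what matter.

```typescript
// Sketch: HttpOnly session cookie, assuming an Express backend.
import express from 'express';
import { randomUUID } from 'node:crypto';

const app = express();

app.post('/login', (req, res) => {
  // ...credential verification omitted...
  const sessionToken = randomUUID(); // hypothetical: persisted server-side

  res.cookie('session', sessionToken, {
    httpOnly: true,     // invisible to JavaScript, so injected scripts can't read it
    secure: true,       // sent over HTTPS only
    sameSite: 'strict', // never attached to cross-site requests
    maxAge: 7 * 24 * 60 * 60 * 1000, // 7 days, in milliseconds
  });
  res.sendStatus(204);
});

// The convenient alternative, localStorage.setItem('token', ...),
// leaves the token readable by any script that runs on the page.
```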

We do not sell access to your data. There is no advertising model. There is no data brokerage. There is no “anonymised aggregate insights” product that is, in practice, less anonymised than claimed. This is a subscription product. The revenue model is aligned with user value, not data volume.

What Consumers Do When AI Companies Lack Transparency — 57% stop using entirely, 27% restrict usage, 16% continue. 84% take action. Source: Relyance AI, December 2025


How Honest Is the Technical Picture?

We won’t claim that any system is perfectly secure. We will claim that our architecture reflects the actual threat model of a tool handling sensitive personal data, rather than the threat model of a SaaS product handling task lists.

The global average cost of a data breach reached $4.88 million in 2024, a 10% increase year-over-year, based on analysis of 604 real-world breaches (IBM, 2024). For a tool that stores the kind of information a Reasoning Memory Engine handles — health data, financial discussions, relationship dynamics — the consequences of a breach are not just financial. They’re personal.

The attack surface for a reasoning memory engine is different from the attack surface for a notes app. The appropriate response isn’t to add a compliance checkbox — it’s to build the security model into the architecture from the beginning, before there is any legacy infrastructure to work around.

That is what we’ve done. You can read more about our approach in our privacy policy.


Why Is This a Product Decision, Not a Marketing Decision?

76% of consumers say they would switch brands for AI transparency, even at higher cost (Relyance AI, 2025). Meanwhile, 70% of Americans have very little or no trust in companies to use AI responsibly (Pew Research Center, 2023). The trust deficit is growing, not shrinking.

There is a version of this product that would be easier to build and cheaper to operate if we relaxed these constraints. We could use shared storage. We could use localStorage for session tokens. We could participate in model improvement programmes in exchange for reduced API costs. We could build an advertising product on top of the behavioural patterns in the Knowledge Graph.

We haven’t done any of these things, and we won’t.

This isn’t because we’re naive about business models. It’s because the core value proposition — that a Reasoning Memory Engine should function as an autonomous Chief of Staff for your entire life — is only credible if the vault is actually closed.

You can’t build a comprehensive reasoning memory engine and ask people to trust it with their most sensitive data if that data isn’t genuinely protected. The product collapses without the trust. The trust requires the architecture. The architecture requires the constraints.

Data Sovereignty is not a feature. It is the foundation.


What Does This Mean for You?

If you use this system, your inputs are processed to build your Knowledge Graph. That graph is yours. It is not shared, not used for training, not monetised.

If you stop using it, your data can be exported and deleted. We’re building the export tooling now and will ship it before general availability. Portability is part of sovereignty.
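
To make portability concrete, here is a hypothetical sketch of what a complete export could contain. This is an illustration, not a committed format; the actual schema is part of the tooling still being built.

```typescript
// Hypothetical export shape, for illustration only: the real
// format is still being designed as part of the export tooling.
interface LinubraExport {
  exportedAt: string; // ISO 8601 timestamp
  rawLogs: Array<{
    id: string;
    capturedAt: string;
    content: string; // the original input, unmodified
  }>;
  knowledgeGraph: {
    nodes: Array<{ id: string; type: string; label: string }>;
    edges: Array<{ from: string; to: string; relation: string }>;
  };
}
```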

If you have specific compliance requirements — GDPR, HIPAA, or otherwise — contact us. We’ll tell you exactly what we can and cannot support, without ambiguity.

Your life is not training data. That isn’t a slogan. It is the constraint the entire product is built around. And it’s why the knowledge graph builds itself from your experience, not from a system that demands your constant maintenance.


Frequently Asked Questions

How concerned are consumers about AI companies using their personal data?

82% of consumers see losing control of their data to AI systems as a serious personal threat, with 43% rating it “very serious.” A further 81% suspect their data is being used for AI training without adequate disclosure (Relyance AI, 2025). A separate Deloitte study found that consumer privacy concerns jumped from 60% to 70% year-over-year, with fewer than half believing online service benefits outweigh privacy risks (Deloitte, 2025).

What happens to my data if I stop using the service?

Your data can be exported in standard formats and permanently deleted upon request. We’re building full data portability tooling ahead of general availability. Portability is a core component of data sovereignty — owning your data means being able to leave with it. Read our full privacy policy for details.

Does the AI model learn from my personal data?

No. All AI processing uses Gemini via Vertex AI under contractual terms that prohibit the use of customer data for model training. Your memories, patterns, and knowledge graph are never federated into a shared model or used to improve outputs for other users. The intelligence belongs to your graph, not the foundation model.

How much does a data breach actually cost?

The global average cost of a data breach reached $4.88 million in 2024 — a 10% year-over-year increase — based on IBM’s analysis of 604 real-world breaches (IBM, 2024). For personal data of the sensitivity a Reasoning Memory Engine handles (health observations, financial discussions, relationship dynamics), the individual consequences extend well beyond financial cost.

Would consumers pay more for privacy-respecting AI tools?

Yes. 76% of consumers say they would switch brands for AI transparency, even at higher cost (Relyance AI, 2025). A 2025 ExpressVPN survey found that 21% would pay for platforms that don’t use their data for AI training, with another 50% open to it depending on features and price (ExpressVPN, 2025).


Linubra is a Reasoning Memory Engine built on a foundation of Data Sovereignty. It captures raw life logs and builds a private Knowledge Graph — one that belongs entirely to you.


Written by Patrick Lehmann

Software Architect & AI Engineer

Founder of Linubra. Building tools that capture reality and retrieve wisdom. Software architect with a passion for AI-powered knowledge systems and the intersection of memory science and technology.

