Linubra
Product K in The Linubra Journal

Why We Built Linubra

Most 'second brain' tools make you do all the work. We built one that thinks.

Part 1 of the series: Building Linubra

Patrick Lehmann
· 4 min read
[Image: Abstract visualization of neural connections forming a knowledge graph]

Every knowledge worker has experienced the same frustration. You had a conversation three weeks ago — maybe it was a lunch meeting, a phone call, or a quick hallway chat — and now you need to recall a specific detail. A name. A number. A commitment someone made.

You open your note-taking app. Nothing. You search your email. Close, but not quite. You dig through your calendar, trying to reconstruct the context. Twenty minutes later, you either find a fragment or give up entirely.

This is the problem Linubra was built to solve.

The Second Brain Paradox

Tools like Obsidian, Notion, and Roam Research popularized the idea of a “second brain” — a digital extension of your memory where you capture, organize, and retrieve knowledge. The concept is powerful. The execution has a fatal flaw.

You have to do all the work.

Every insight must be manually typed. Every connection must be manually linked. Every piece of context must be manually tagged. The “second brain” is really just a fancy filing cabinet that requires constant maintenance.

For the rare disciplined note-taker, this works beautifully. For the rest of us — the ones with back-to-back meetings, who think while walking, who have their best ideas in the shower — the second brain stays empty.

The Life Log Gap

On the opposite end of the spectrum, tools like Rewind and Limitless take a passive approach. They record everything — screen activity, ambient audio, meeting transcripts — creating an exhaustive log of your digital life.

The problem? Retrieval is shallow. You get keyword search over transcripts. You get timestamped recordings. What you don’t get is understanding.

Ask these tools “What did Mark say about the Q3 budget?” and you’ll get a transcript snippet. Ask “How has Mark’s position on the Q3 budget changed over the last month?” and you get silence. They capture data. They don’t build knowledge.

The Missing Layer: Reasoning

Linubra occupies the space between these two approaches. Like life-logging tools, it makes capture effortless — speak into your phone, share a link, jot a quick note. No manual organization required.

But unlike life-logging tools, Linubra doesn’t just store what you said. It reasons about it.

When you record a voice note about a lunch meeting, Linubra’s AI engine:

  1. Extracts structured data — people mentioned, action items, decisions made, sentiment, location, and time
  2. Resolves entities — recognizing that “Dr. Schmidt,” “Patricia,” and “the neuroscientist from Berlin” are the same person
  3. Embeds the memory semantically — placing it in a high-dimensional vector space where similar concepts cluster together
  4. Detects contradictions — flagging when new information conflicts with what you’ve previously recorded
  5. Connects to your knowledge graph — linking people to events, projects to organizations, commitments to deadlines

The result isn’t a transcript. It’s a knowledge graph — a living, queryable model of your world that grows smarter with every interaction.

What This Means in Practice

Instead of searching through transcripts, you can ask Linubra:

  • “Brief me on everything related to Project Aurora before tomorrow’s meeting”
  • “When is Sarah’s birthday? I think she mentioned it last month”
  • “What commitments did I make to the engineering team this quarter?”
  • “How has my running pace changed over the past 6 weeks?”

Linubra doesn’t just find relevant recordings. It synthesizes an answer, cites its sources, and surfaces connections you might have missed.
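The retrieval half of this rests on semantic embeddings: memories and questions live in the same vector space, and relevance is just distance. Here's a toy sketch with 3-dimensional vectors standing in for real high-dimensional embeddings — the vectors and memory texts are made up for illustration:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy embeddings -- real ones have hundreds of dimensions.
memories = {
    "Lunch with Patricia about the Q3 budget": [0.9, 0.1, 0.0],
    "Sarah mentioned her birthday is in May": [0.1, 0.8, 0.2],
    "Morning run, 5k at 5:30/km": [0.0, 0.2, 0.9],
}

# Embedding of the query "What did we decide on the Q3 budget?"
query_vec = [0.85, 0.15, 0.05]

best = max(memories, key=lambda m: cosine(memories[m], query_vec))
print(best)  # -> Lunch with Patricia about the Q3 budget
```

Note that no keyword in the query needs to match the memory text — the match happens in vector space. Synthesis then runs over the retrieved memories, not the whole corpus.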

The Technical Bet

Building Linubra required a technical bet: that large language models with long-context windows could serve as genuine reasoning engines over personal data, not just chatbots with memory.

Specifically, we bet on Google’s Gemini 3 Pro and its ability to:

  • Process long audio files directly (no separate transcription step)
  • Extract structured data from unstructured speech with high fidelity
  • Maintain consistency across hundreds of memories in a single context window
  • Resolve ambiguous entity references across conversations separated by weeks

So far, the bet is paying off. But there’s much more to build.

What’s Next

This post is the first in a series documenting how Linubra works under the hood. In upcoming posts, we’ll cover:

  • The Knowledge Graph architecture — PostgreSQL, pgvector, and why we chose a property graph over a triple store
  • Entity resolution at scale — how Linubra decides that “Mark” and “Marcus from accounting” are the same person
  • The Nightly Consolidator — how Linubra defragments your memory graph while you sleep
  • Hybrid search — combining semantic similarity with keyword matching for reliable retrieval
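As a taste of the hybrid-search post: one common way to fuse a semantic ranking with a keyword ranking is reciprocal rank fusion (RRF). This is a sketch of the general technique, not necessarily the exact recipe we ship:

```python
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal rank fusion: merge several ranked lists into one.

    Each document scores 1 / (k + rank) in every list it appears in;
    documents ranked well by multiple retrievers rise to the top.
    """
    scores: dict[str, float] = {}
    for ranked in rankings:
        for rank, doc in enumerate(ranked, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=lambda doc: scores[doc], reverse=True)

semantic = ["memo-12", "memo-7", "memo-3"]   # nearest neighbours by embedding
keyword  = ["memo-7", "memo-31", "memo-12"]  # keyword/full-text hits

fused = rrf([semantic, keyword])
print(fused[0])  # -> memo-7 (ranked high by both retrievers)
```

The appeal of RRF is that it needs no score normalization across retrievers — only ranks — which makes it robust when the semantic and keyword scorers live on completely different scales.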

If you’re building in the AI-augmented knowledge space, or if you’re simply tired of losing important details to the void between your meetings and your notes, we’d love to hear from you.


Linubra is currently in development. Follow our journey as we build the reasoning memory engine we’ve always wanted.


Written by

Patrick Lehmann

Software Architect & AI Engineer

Founder of Linubra. Building tools that capture reality and retrieve wisdom. Software architect with a passion for AI-powered knowledge systems and the intersection of memory science and technology.
