

There's a question we grapple with constantly at Keeqe: What does it mean to build software that remembers?
When you tell a friend about your career anxieties, your health concerns, or your family dynamics, there's an implicit understanding. They might remember. They might forget. They won't broadcast it. They'll use good judgment about when to bring things up.
AI doesn't work like that by default. It can remember everything. It doesn't forget unless instructed. And without careful design, it has no judgment about what's appropriate.
This creates a profound responsibility.
The Memory Paradox
The more an AI knows about you, the more useful it becomes. An AI that remembers your goals, preferences, and context can genuinely help in ways a memoryless system cannot. This is the core value proposition of personal AI—it's personal.
But the more an AI knows about you, the more potential for harm. Data can be breached. Algorithms can draw unwelcome inferences. Memories can be used against you.
We call this the memory paradox, and we don't think it has a simple solution. What it requires is principled navigation.
Our Principles
1. Memory Serves You, Not Us
Every piece of information Keeqe remembers exists to help you. We don't mine your data for insights to sell. We don't use your personal patterns to train models that benefit other users. We don't build advertiser profiles.
This isn't just policy—it's architecture. Your data is siloed. Your memories are yours.
2. You Control What's Remembered
At any moment, you can:
- Ask Keeqe what it knows about any topic
- Delete specific observations
- Clear entire categories of memory
- Export everything we have about you
- Delete your account and all associated data
These aren't buried options. They're first-class features because control shouldn't require a law degree.
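To make the controls above concrete, here is a minimal sketch of what a user-facing memory store could look like. Everything here is hypothetical: the `Observation` and `MemoryStore` classes, their method names, and the in-memory list are illustrative stand-ins, not Keeqe's actual implementation.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class Observation:
    id: int
    category: str  # e.g. "health", "career" (illustrative categories)
    text: str

@dataclass
class MemoryStore:
    """Hypothetical sketch of the five controls: inspect, delete one,
    clear a category, export everything, delete everything."""
    observations: list = field(default_factory=list)

    def what_do_you_know(self, topic: str) -> list:
        # "Ask Keeqe what it knows about any topic."
        return [o for o in self.observations
                if topic in o.category or topic in o.text]

    def delete_observation(self, obs_id: int) -> None:
        # "Delete specific observations."
        self.observations = [o for o in self.observations if o.id != obs_id]

    def clear_category(self, category: str) -> None:
        # "Clear entire categories of memory."
        self.observations = [o for o in self.observations
                             if o.category != category]

    def export_all(self) -> str:
        # "Export everything we have about you," as portable JSON.
        return json.dumps([asdict(o) for o in self.observations], indent=2)

    def delete_everything(self) -> None:
        # "Delete your account and all associated data."
        self.observations.clear()
```

The point of the sketch is that each control maps to a single, obvious operation, which is what "first-class features" means in practice.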
3. Inference Transparency
When Keeqe makes a recommendation or surfaces relevant context, it should be clear why. "I noticed you mentioned wanting to exercise more, and you said mornings work best for you" is accountable AI. "Here's what you should do" with no explanation is not.
We're building toward complete inference transparency—the ability to trace any AI decision back to the observations and reasoning that produced it.
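One way to picture inference transparency is a recommendation object that carries its own provenance. This is a toy sketch under assumed names (`SourceNote`, `Recommendation`, `recommend_morning_workout` are all invented for illustration): the suggestion cannot exist without the observations that justify it.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SourceNote:
    id: str
    text: str

@dataclass(frozen=True)
class Recommendation:
    suggestion: str
    sources: tuple  # the observations this suggestion traces back to

    def explain(self) -> str:
        # Every recommendation can answer "why?" by citing its sources.
        cited = "; ".join(f'you said "{o.text}"' for o in self.sources)
        return f"{self.suggestion} (because {cited})"

def recommend_morning_workout(memory: list) -> Recommendation:
    # Toy rule: surface the suggestion together with the exact
    # observations that produced it, so the reasoning is recoverable.
    relevant = tuple(o for o in memory
                     if o.id in ("goal-exercise", "pref-mornings"))
    return Recommendation("Try scheduling a workout tomorrow morning.",
                          relevant)
```

The design choice worth noting: `sources` is a required field, so an unexplained "here's what you should do" is unrepresentable by construction.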
4. No Surprise Memories
Keeqe only remembers information you've directly shared in conversation. We don't scrape your emails, analyze your location history, or infer from connected apps unless you explicitly enable these features and understand what they mean.
When we do offer integrations, the data flow is clear. We tell you exactly what we'll access and what we'll remember.
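A clear data flow could be expressed as a manifest shown before the user opts in. The integration name, fields, and wording below are invented for illustration; the idea is simply that access, retention, and exclusions are spelled out in one place.

```python
# Hypothetical integration manifest: what a connection will access,
# what it will remember, and what it will never read.
CALENDAR_INTEGRATION = {
    "name": "calendar",
    "accesses": ["event titles", "event times"],
    "remembers": ["recurring commitments you confirm"],
    "never_reads": ["attendee lists", "event descriptions"],
    "enabled_by": "explicit opt-in",
}

def describe(manifest: dict) -> str:
    """Render the manifest as the plain-language disclosure a user
    would see before enabling the integration."""
    return (
        f"Enabling {manifest['name']} lets Keeqe read "
        f"{', '.join(manifest['accesses'])} and remember "
        f"{', '.join(manifest['remembers'])}. It will never read "
        f"{', '.join(manifest['never_reads'])}."
    )
```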
5. The Right to Be Forgotten
This goes beyond GDPR compliance (though we're compliant). It's a philosophical commitment to the idea that forgetting is a human right.
If you want Keeqe to forget that you ever mentioned a failed relationship, a health scare, or a career misstep—it forgets. Completely. Not archived, not hidden. Gone.
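"Not archived, not hidden. Gone" has a concrete engineering meaning: deletion must purge the record and every derived index entry, with no soft-delete flag or tombstone left behind. Here is a minimal sketch of that property, with invented names (`ForgettableStore`, `remember`, `forget`) standing in for the real system.

```python
class ForgettableStore:
    """Sketch of hard deletion: no archive, no soft-delete flag,
    no tombstone. Forgetting removes the record itself and every
    index entry that could be used to reconstruct it."""

    def __init__(self):
        self._records = {}      # record id -> text
        self._topic_index = {}  # topic -> set of record ids

    def remember(self, rec_id: str, text: str, topics: list) -> None:
        self._records[rec_id] = text
        for t in topics:
            self._topic_index.setdefault(t, set()).add(rec_id)

    def forget(self, rec_id: str) -> None:
        # Purge the record and all index entries; topics that become
        # empty disappear entirely, leaving no trace of the memory.
        self._records.pop(rec_id, None)
        for t in list(self._topic_index):
            self._topic_index[t].discard(rec_id)
            if not self._topic_index[t]:
                del self._topic_index[t]
```

The contrast with a soft delete is the whole point: after `forget`, no flag, backup pointer, or index entry survives for anything downstream to resurrect.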
What We Won't Do
Principles are also about limits. Here's what we commit to never doing:
- Selling data or insights derived from your conversations
- Training foundation models on your personal information
- Sharing memories with other users, even aggregate patterns
- Using dark patterns to retain data you want deleted
- Building psychological profiles for manipulation or advertising
The Hard Questions
We don't pretend to have all the answers. Some questions are genuinely hard:
When should AI proactively surface sensitive memories? If you mentioned depression six months ago, should Keeqe bring that up if you seem to be struggling? There's a real tension between helpful awareness and inappropriate presumption.
How do we handle information about other people? When you tell Keeqe about a friend's job loss, whose data is that? We currently err toward minimal retention of third-party information.
What happens when users want AI to remember things they shouldn't? If someone wants their AI to help track an ex-partner's social media, that's a misuse of memory. But the line between legitimate use and misuse isn't always clear.
A Living Document
Our ethics aren't fixed. Technology changes. Social norms evolve. Our understanding deepens. We commit to publicly updating our position as we learn more about what it means to build AI that remembers.
If you have questions, concerns, or perspectives we haven't considered, we genuinely want to hear them. This is a conversation, not a proclamation.