Responsible AI
Our commitment to building AI systems that are safe, transparent, and respectful of the people who use them.
Our principles
AI should serve human intention
Every AI feature we ship is evaluated against a simple question: does this help the user do what they intended, or does it pull them somewhere else? We build for clarity, not engagement metrics.
User Control
Users decide what to track, what to share with the AI, and what to keep private. AI provides suggestions, never mandates. Every piece of data can be viewed, edited, or deleted.
Data Boundaries
Each user's data is isolated in their own account with strict access controls. We do not train models on user data. We do not share personal data with third parties for any purpose.
Transparency
Users can see what the AI remembers about them and correct it. We are explicit about what our AI can and cannot do. No hidden profiling, no opaque decision-making.
Safety
Designed with wellbeing in mind
- Crisis detection. Our products include bilingual keyword detection for concerning language. When such language is detected, supportive resources and crisis helplines are presented immediately, in accordance with App Store guidelines (§1.3).
- Not medical advice. Our AI companions are productivity and self-tracking tools. They are explicitly not replacements for professional medical, mental-health, or financial advice. This is made clear to users in the product and in our Terms of Service.
- Age-appropriate. Our products are intended for users aged 13 and over. We do not knowingly collect data from children under 13.
- No dark patterns. We do not use manipulative UI techniques to increase usage: no FOMO notifications, no guilt-based streaks, no forced engagement loops.
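The crisis-detection approach described above can be sketched as a simple bilingual keyword matcher. This is a hypothetical illustration only; the keyword lists, language pair, and resource messages below are assumptions, not BaseOrbit's actual implementation:

```python
# Hypothetical sketch of bilingual keyword-based crisis detection.
# Keyword lists, languages, and resource text are illustrative assumptions.
import re

CRISIS_KEYWORDS = {
    "en": ["want to die", "kill myself", "self harm"],
    "es": ["quiero morir", "hacerme daño"],
}

CRISIS_RESOURCES = [
    "You are not alone. Please consider reaching out for support.",
    "Crisis helpline: 988 (US) or your local emergency number.",
]

def detect_crisis(message: str) -> bool:
    """Return True if the message contains a crisis keyword in any supported language."""
    text = message.lower()
    return any(
        re.search(r"\b" + re.escape(kw) + r"\b", text)
        for keywords in CRISIS_KEYWORDS.values()
        for kw in keywords
    )

def respond(message: str) -> list[str]:
    """Surface supportive resources immediately when crisis language is detected."""
    return CRISIS_RESOURCES if detect_crisis(message) else []
```

Word-boundary matching keeps false positives down (e.g. "diet" does not trigger "die"); a production system would pair this with human-reviewed phrase lists per language.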
AI memory
Memory that users control
Our AI companion maintains context from your interactions to provide personalised, relevant responses. This context is stored entirely within your account:
- Visible. You can see what the AI remembers about you at any time: your preferences, goals, and key facts you've shared.
- Editable. Correct or update what the AI knows. You're always in control of your own context.
- Deletable. Delete your conversation history, reset the AI's memory, or delete your entire account at any time. When deleted, data is removed from all systems.
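The visible/editable/deletable guarantees above amount to a simple contract over per-user memory. A minimal sketch of that contract, with hypothetical names (not BaseOrbit's actual storage layer):

```python
# Hypothetical sketch of user-controlled AI memory: every remembered fact
# is visible, editable, and deletable by its owner. Names are illustrative.
class UserMemory:
    def __init__(self) -> None:
        self._facts: dict[str, str] = {}  # key -> remembered fact

    def view(self) -> dict[str, str]:
        """Visible: inspect everything the AI remembers."""
        return dict(self._facts)

    def edit(self, key: str, value: str) -> None:
        """Editable: correct or update any fact."""
        self._facts[key] = value

    def delete(self, key: str) -> None:
        """Deletable: remove a single fact."""
        self._facts.pop(key, None)

    def reset(self) -> None:
        """Deletable: wipe the AI's memory entirely."""
        self._facts.clear()
```

The point of the sketch is that every read path goes through `view()` and every write path through `edit()`, so nothing the AI retains is hidden from the user.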
Data practices
What we do and don't do with data
We do
- Store data in isolated, encrypted cloud storage linked to your account
- Encrypt data in transit and at rest
- Provide account deletion in-app
- Enforce strict per-user data isolation
- Apply automatic data lifecycle limits
We don't
- Sell or share personal data
- Train AI models on user data
- Use third-party analytics or ad trackers
- Profile users for advertising
- Retain data after account deletion
Technology
AI infrastructure
BaseOrbit develops the application interface, AI persona, and safety systems. The underlying language model is provided by Google (Gemini API). While we implement safeguards including crisis detection and content guidelines, AI responses may occasionally be inaccurate or inappropriate.
User messages are sent to Google's AI API for processing in real time and are subject to Google's API terms. Google does not use Gemini API data to train its models.
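Data minimisation at this boundary means the request sent to the model provider carries only what is needed to generate a reply. A hypothetical sketch of such a payload builder (field names are illustrative, not the Gemini API's actual schema):

```python
# Hypothetical sketch: build a model-API request containing only the
# persona instructions and message text, never account identifiers or
# unrelated user data. Field names are illustrative, not the real schema.
def build_request(messages: list[str], persona_prompt: str) -> dict:
    """Forward only what the model needs to generate a reply."""
    return {
        "system_instruction": persona_prompt,
        "contents": [{"role": "user", "text": m} for m in messages],
    }
```

Keeping the payload to these two fields is one concrete way the "no profiling, no data sharing" commitments above constrain the integration.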
Infrastructure is hosted on Google Cloud, providing enterprise-grade security, encryption, and compliance.
Questions about our AI practices?
We welcome questions, concerns, and feedback about how we use AI.
hello@baseorbit.ai