Life OS Mobile
Offline-first mobile companion for Life OS with sub-5-second task capture via text, voice, and photo OCR. Built with React Native, Expo, and an AI chat assistant.
I built the mobile companion to Life OS, optimized for capturing tasks in under 5 seconds through text with natural language parsing, voice recording with Whisper transcription, and photo capture with OCR. It features a BYOK (Bring Your Own Key) AI assistant, native iOS/Android widgets, push notifications, and bidirectional sync with the desktop app. The app is 100% feature-complete.
The Design Philosophy
Desktop Life OS is built for deep work — planning, knowledge management, long writing sessions. But most task creation happens away from a desk: walking, commuting, cooking, in meetings. I needed a mobile app that wasn't a miniaturized desktop — it needed to be purpose-built for capture speed and glanceable information.
The core insight: mobile productivity isn't about doing the work, it's about capturing the intention. Get the thought out of your head and into the system as fast as possible, then do the actual work on desktop later.
Everything in the mobile app is designed around this principle. The primary metric is time-to-capture: how many seconds from "I have an idea" to "it's saved."
Multi-Modal Capture
Quick Add — Text with NLP
A floating action button opens a modal with auto-focused input. Natural language parsing handles the rest: "Buy groceries tomorrow !high #personal" creates a task due tomorrow, high priority, tagged personal. The parse preview shows exactly what will be created before you save.
The parser handles:
- Smart dates: "tomorrow", "next week", "monday", "in 3 days"
- Priority extraction: !high, !medium, !low
- Tag parsing: #work, #personal, #shopping
- Auto-defaults: No date specified? Defaults to today.
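The real parser isn't shown here, but a minimal sketch of the approach looks something like this (names like parseQuickAdd are illustrative, and weekday parsing such as "monday" is omitted for brevity):

```typescript
// Illustrative quick-add parser sketch; not the app's actual implementation.
interface ParsedTask {
  title: string;
  dueDate: Date;
  priority: "high" | "medium" | "low" | null;
  tags: string[];
}

function addDays(base: Date, days: number): Date {
  const d = new Date(base);
  d.setDate(d.getDate() + days);
  return d;
}

export function parseQuickAdd(input: string, now = new Date()): ParsedTask {
  let text = input;

  // Priority extraction: !high, !medium, !low
  const priorityMatch = text.match(/!(high|medium|low)\b/i);
  const priority = priorityMatch
    ? (priorityMatch[1].toLowerCase() as ParsedTask["priority"])
    : null;
  if (priorityMatch) text = text.replace(priorityMatch[0], "");

  // Tag parsing: #work, #personal, #shopping
  const tags = [...text.matchAll(/#(\w+)/g)].map((m) => m[1]);
  text = text.replace(/#\w+/g, "");

  // Smart dates (a few representative patterns); default is today.
  let dueDate = now;
  if (/\btomorrow\b/i.test(text)) dueDate = addDays(now, 1);
  else if (/\bnext week\b/i.test(text)) dueDate = addDays(now, 7);
  else {
    const inDays = text.match(/\bin (\d+) days?\b/i);
    if (inDays) dueDate = addDays(now, parseInt(inDays[1], 10));
  }
  text = text.replace(/\b(tomorrow|next week|in \d+ days?)\b/gi, "");

  return { title: text.replace(/\s+/g, " ").trim(), dueDate, priority, tags };
}

// "Buy groceries tomorrow !high #personal" →
// { title: "Buy groceries", dueDate: <tomorrow>, priority: "high", tags: ["personal"] }
```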
Haptic feedback confirms creation. A toast notification with undo appears. Total time: ~3 seconds.
Voice Capture
Hold the mic button, speak your task, release. Whisper transcribes it, the NLP parser extracts structure, and the task is created. Hands-free capture for when you're walking, driving, or cooking.
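Sketched under a few assumptions (expo-av for recording, OpenAI's hosted Whisper endpoint for transcription, and createTask standing in for the app's own save helper), the hold-to-record flow could look like this:

```typescript
// Hold-to-record voice capture sketch; helper names are assumptions.
import { Audio } from "expo-av";

// Assumed app helpers (parseQuickAdd is sketched in the Quick Add section).
declare function parseQuickAdd(input: string): unknown;
declare function createTask(parsed: unknown): Promise<void>;

let recording: Audio.Recording | null = null;

// Pressed down on the mic button: start recording.
export async function startVoiceCapture(): Promise<void> {
  await Audio.requestPermissionsAsync();
  await Audio.setAudioModeAsync({
    allowsRecordingIOS: true,
    playsInSilentModeIOS: true,
  });
  const result = await Audio.Recording.createAsync(
    Audio.RecordingOptionsPresets.HIGH_QUALITY
  );
  recording = result.recording;
}

// Released: stop, transcribe with Whisper, run the same NLP parser, save.
export async function finishVoiceCapture(apiKey: string): Promise<void> {
  if (!recording) return;
  await recording.stopAndUnloadAsync();
  const uri = recording.getURI();
  recording = null;
  if (!uri) return;

  // React Native's FormData accepts a { uri, name, type } file descriptor.
  const form = new FormData();
  form.append("file", { uri, name: "capture.m4a", type: "audio/m4a" } as any);
  form.append("model", "whisper-1");

  const res = await fetch("https://api.openai.com/v1/audio/transcriptions", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}` },
    body: form,
  });
  const { text } = await res.json();

  await createTask(parseQuickAdd(text));
}
```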
Photo OCR Capture
Point your camera at a receipt, whiteboard, or business card. Google Cloud Vision extracts the text, and you confirm what becomes a task. Useful for turning physical artifacts into digital action items.
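Under similar assumptions (expo-image-picker for the camera, the Cloud Vision images:annotate REST endpoint for OCR), a sketch of the extraction step:

```typescript
// Photo OCR sketch: snap a photo, send it to Cloud Vision, return raw text.
import * as ImagePicker from "expo-image-picker";

export async function captureTextFromPhoto(
  visionApiKey: string
): Promise<string | null> {
  await ImagePicker.requestCameraPermissionsAsync();
  const result = await ImagePicker.launchCameraAsync({ base64: true, quality: 0.7 });
  if (result.canceled || !result.assets[0].base64) return null;

  const res = await fetch(
    `https://vision.googleapis.com/v1/images:annotate?key=${visionApiKey}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        requests: [
          {
            image: { content: result.assets[0].base64 },
            features: [{ type: "TEXT_DETECTION" }],
          },
        ],
      }),
    }
  );
  const data = await res.json();
  // fullTextAnnotation carries the full recognized text block, if any.
  return data.responses?.[0]?.fullTextAnnotation?.text ?? null;
}
```

The returned text then goes through the confirm-then-create step described above.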
Today View
The primary screen shows exactly what matters right now:
- Overdue section (red) — tasks that slipped past their due date
- Today section (blue) — what's due today
- Inbox section (gray) — unscheduled items to triage
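The bucketing itself is simple. A minimal sketch, with an assumed Task shape:

```typescript
// Group open tasks into the three Today View sections; field names assumed.
interface Task {
  id: string;
  title: string;
  dueDate: Date | null; // null means unscheduled (inbox)
  completed: boolean;
}

export function groupForTodayView(tasks: Task[], now = new Date()) {
  const today = new Date(now.getFullYear(), now.getMonth(), now.getDate());
  const tomorrow = new Date(today);
  tomorrow.setDate(tomorrow.getDate() + 1);

  const open = tasks.filter((t) => !t.completed);
  return {
    overdue: open.filter((t) => t.dueDate !== null && t.dueDate < today),
    today: open.filter(
      (t) => t.dueDate !== null && t.dueDate >= today && t.dueDate < tomorrow
    ),
    inbox: open.filter((t) => t.dueDate === null),
  };
}
```

The lengths of those three arrays are also what a count badge would read from.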
Task count badges give you a glanceable sense of your day's load. Pull-to-refresh syncs with the cloud. Completed tasks get a strikethrough with a confetti animation — because small celebrations matter.
Interactions: tap to complete, long-press for context menu, swipe-to-delete. Every touch has haptic feedback.
AI Chat Assistant
BYOK AI Integration
The app supports OpenAI, Anthropic, and Groq — but with a privacy-first twist: Bring Your Own Key. Your API keys are stored locally on-device only, never sent to any server. The AI can manage tasks and events through function calling, answer questions about your productivity patterns, and help you plan your day.
I chose BYOK over building a backend because:
- Privacy: Your data and keys never leave your device
- Cost control: Users pay their own API costs, no markup
- Flexibility: Switch providers anytime
- Compliance: No user data flowing through third-party AI services
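Keeping keys on-device is straightforward with the platform keychain. A sketch assuming expo-secure-store (Keychain on iOS, Keystore-backed storage on Android):

```typescript
// BYOK key storage sketch; key naming is illustrative.
import * as SecureStore from "expo-secure-store";

type Provider = "openai" | "anthropic" | "groq";

export async function saveApiKey(provider: Provider, key: string): Promise<void> {
  // Written to the platform's secure store; never synced to any server.
  await SecureStore.setItemAsync(`byok_${provider}`, key);
}

export async function loadApiKey(provider: Provider): Promise<string | null> {
  return SecureStore.getItemAsync(`byok_${provider}`);
}
```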
Native Platform Features
iOS & Android Widgets
Small, medium, and large widgets for both platforms plus an iOS Lock Screen widget. They show today's tasks with priority color indicators and refresh every 15 minutes. Tap to open the app directly to the relevant task.
Smart Notifications
Three notification types: task reminders (1 hour before due), event reminders (15 minutes before), and a morning digest at 8am summarizing your day. Fully customizable — disable what you don't want.
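A sketch of two of those schedules, assuming expo-notifications (and that notification permissions were already granted):

```typescript
import * as Notifications from "expo-notifications";

// Task reminder: fires one hour before the due date.
export async function scheduleTaskReminder(title: string, dueDate: Date) {
  const fireAt = new Date(dueDate.getTime() - 60 * 60 * 1000);
  if (fireAt <= new Date()) return; // due within the hour; nothing to schedule
  await Notifications.scheduleNotificationAsync({
    content: { title: "Task due soon", body: title },
    trigger: fireAt, // one-shot date trigger
  });
}

// Morning digest: repeats daily at 8:00.
export async function scheduleMorningDigest() {
  await Notifications.scheduleNotificationAsync({
    content: { title: "Good morning", body: "Here's your day at a glance." },
    trigger: { hour: 8, minute: 0, repeats: true }, // daily calendar trigger
  });
}
```

Event reminders follow the same pattern with a 15-minute offset.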
Cross-Platform Sync
Both apps share the same Supabase backend with real-time subscriptions. The sync architecture handles a key challenge: the desktop app uses owner_id while mobile uses user_id (a legacy naming difference). Rather than force a migration, I built automatic field translation at the adapter layer.
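The translation is a thin, pure mapping. A simplified sketch:

```typescript
// Adapter-layer field translation; row shapes simplified for illustration.
type MobileTask = { id: string; user_id: string; title: string };
type DesktopTask = { id: string; owner_id: string; title: string };

// Outbound: mobile row → desktop schema.
export function toDesktop({ user_id, ...rest }: MobileTask): DesktopTask {
  return { ...rest, owner_id: user_id };
}

// Inbound: desktop row → mobile schema.
export function toMobile({ owner_id, ...rest }: DesktopTask): MobileTask {
  return { ...rest, user_id: owner_id };
}
```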
Conflict resolution uses last-write-wins — simple and sufficient for a single-user app. CRDT-based sync is on the roadmap for when (if) I add collaboration.
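Last-write-wins fits in a single function, assuming every write stamps an ISO-8601 UTC updated_at (which compares correctly as a string):

```typescript
interface Versioned {
  updated_at: string; // ISO-8601 UTC timestamp, set on every write
}

// Keep whichever version was written last; ties go to the local copy.
export function resolveConflict<T extends Versioned>(local: T, remote: T): T {
  return local.updated_at >= remote.updated_at ? local : remote;
}
```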
Design Details
- Dark-first: Near-black (#0A0A0A) for OLED screens, where darker pixels draw less power
- 44px minimum touch targets: Every tappable element meets accessibility guidelines
- NativeWind: Tailwind CSS for React Native, so the styling language matches the desktop app almost 1:1
- 60fps animations: React Native Reanimated for native-thread animations (see the sketch after this list)
- Platform-specific: iOS shadows vs Android elevation, respecting each platform's conventions
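As an example of the animation approach, here's a sketch of a completion bounce running on the native thread, assuming Reanimated and a NativeWind setup (which provides the className prop); the component shape is illustrative:

```typescript
import React from "react";
import { Pressable, Text } from "react-native";
import Animated, {
  useSharedValue,
  useAnimatedStyle,
  withSpring,
} from "react-native-reanimated";

export function TaskRow({ title, onComplete }: { title: string; onComplete: () => void }) {
  const scale = useSharedValue(1);

  // Evaluated on the UI thread, so it holds 60fps even when JS is busy.
  const animatedStyle = useAnimatedStyle(() => ({
    transform: [{ scale: scale.value }],
  }));

  return (
    <Animated.View style={animatedStyle}>
      <Pressable
        onPress={() => {
          // Quick press-in bounce, then spring back to rest.
          scale.value = withSpring(0.95, {}, () => {
            scale.value = withSpring(1);
          });
          onComplete();
        }}
      >
        <Text className="text-white text-base">{title}</Text>
      </Pressable>
    </Animated.View>
  );
}
```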
What I Learned
I initially planned to port the desktop UI to mobile. That was wrong. Mobile isn't a small desktop — it's a different context with different needs. The desktop app is for planning and deep work. The mobile app is for capturing and glancing. Once I embraced that distinction, the design got dramatically better.
A text field with natural language parsing is faster than any combination of date pickers, priority dropdowns, and tag selectors. Users type how they think. The parser adapts to them, not the other way around.
Adding haptic feedback to task creation, completion, and deletion made the app feel more reliable. There's a subtle but real psychological effect: physical feedback confirms that the system received your intent. Without it, the app felt "loose."
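The whole haptic vocabulary fits in a few lines. A sketch assuming expo-haptics, with an illustrative mapping from action to feedback type:

```typescript
// Distinct feedback per action, so create/complete/delete each feel different.
import * as Haptics from "expo-haptics";

export const haptic = {
  created: () => Haptics.impactAsync(Haptics.ImpactFeedbackStyle.Light),
  completed: () => Haptics.notificationAsync(Haptics.NotificationFeedbackType.Success),
  deleted: () => Haptics.notificationAsync(Haptics.NotificationFeedbackType.Warning),
};
```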