Know Your Energy. Protect Your Light.
Voice patterns. Body signals. The triggers you choose. Woven into a living portrait of your wellness — private, on-device, and never medical. AuraLLM combines voice energy analysis, habit tracking, and self-reported triggers into a single energy portrait powered by on-device ML, a dual-ring aura dial, chakra-mapped guidance, and an open AR glasses schema.
AuraLLM occupies a unique position in the $7B+ wellness app market by combining voice-derived energy analysis, structured habit tracking, and an open AR glasses schema into a single privacy-first experience. Unlike fitness trackers that focus on physical metrics or meditation apps limited to guided sessions, AuraLLM reflects how users feel and how they show up through a dual-ring energy dial that separates internal state from outward impact.
The core pipeline runs entirely on-device. Voice samples are processed in-memory and immediately discarded — only numeric scores persist. Habit entries stay in the local Room database. The 3-day rolling analysis window runs via WorkManager every six hours, generating reflective insights through a pluggable LLM interface that currently uses a deterministic template engine with a clear roadmap to on-device TFLite and cloud providers. This architecture means the product delivers full value at zero marginal cost per user while the freemium model ($4.99/month Aura+) gates advanced analytics, custom analysis windows, and AR visualization without restricting core privacy guarantees.
The AR glasses integration — built around an open JSON schema for DialState — positions AuraLLM ahead of the curve as smart glasses move from niche hardware into mainstream consumer adoption. The pulsing aura ring, energy score, and chakra bar render on any compatible HUD, creating a persistent wellness overlay that no competing app currently offers. Early integration with this form factor builds brand association and switching costs before the market matures.
No other wellness app offers this depth of personal energy reflection while keeping every byte on your device. AuraLLM weaves voice analysis, habit tracking, and self-reported triggers into a single, living energy portrait.
With explicit, revocable consent, AuraLLM captures a brief voice sample and extracts derived features entirely on-device. Raw audio is processed in-memory and immediately discarded — only numeric scores persist: Speaking Rate (words-per-minute cadence), Tone Warmth (emotional warmth 0-100), and Vocal Energy (estimated energy 0-100). Future TFLite prosody model integration will deepen analysis without changing the privacy guarantees.
Log the habits that shape your energy every day across six categories: Meal, Sleep, Exercise, Hydration, Meditation, and Social. Each entry is rated 1-5 for quality. A rolling 3-day average drives the Habit Bonus: excellent habits (4-5) add +10 to +15 points to Others Impact, good habits (3-4) add +5 to +10, building habits (1-3) add +0 to +5. Consistent positive actions compound visibly over days.
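As a minimal Kotlin sketch, the bonus bands described above might be computed like this; the function names, the linear interpolation inside each band, and the 100-point cap are illustrative assumptions, not the shipped formula.

// Illustrative sketch only: maps a 3-day rolling average habit quality (1.0-5.0)
// to the Habit Bonus bands described above. The interpolation inside each band
// and the 100-point cap are assumptions, not the shipped formula.
fun habitBonus(rollingAverageQuality: Double): Int = when {
    rollingAverageQuality >= 4.0 ->                      // excellent habits: +10 to +15
        (10 + (rollingAverageQuality - 4.0) * 5).toInt()
    rollingAverageQuality >= 3.0 ->                      // good habits: +5 to +10
        (5 + (rollingAverageQuality - 3.0) * 5).toInt()
    else ->                                              // building habits: +0 to +5
        ((rollingAverageQuality - 1.0) * 2.5).toInt().coerceIn(0, 5)
}

// Others Impact is Self Energy plus the habit bonus (see the dial section below).
fun othersImpact(selfEnergy: Int, rollingAverageQuality: Double): Int =
    (selfEnergy + habitBonus(rollingAverageQuality)).coerceAtMost(100)

The design intent is visible in the tiers themselves: the bonus is the only lever that lifts Others Impact above Self Energy, so consistent habits compound into how you show up.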
Tag what's influencing your energy today and let the model surface connections you feel but can't name. Six tag categories — Sleep, Exercise, Social, Work, Nature, and Creative — capture the dimensions of daily life that shape your energy portrait. The LLM weaves these triggers into personalized reflective narratives, revealing patterns across days and weeks that manual journaling would miss.
At the heart of AuraLLM is a concentric visualization that separates how you feel from how you show up. Two rings, one aura score — a reflective lens, not a clinical verdict.
The outer ring captures Self Energy — how you feel internally, derived from self-report or voice analysis. The inner ring captures Others Impact — how you show up to the world, calculated from self-energy plus a habit bonus that rewards consistent positive actions.
Every energy level maps to a DMT-palette colour that pulses through the dial and the glasses HUD.
Energy maps to a structured framework for micro-interventions. Each chakra level delivers three specific, actionable suggestions. The Red Zone is a conscious user action — never algorithmically inferred.
Crown: Purpose & stillness. Sit in stillness for 3-5 minutes. Offer a kind act. Name one purpose to embody today.
Throat: Expression & communication. Speak one honest sentence kindly. Hydrate and take slow, open breaths. Write a short note of appreciation.
Heart: Gratitude & connection. Place a hand on your chest and breathe in 4, out 6. List three things you're grateful for.
Solar Plexus: Clarity & action. Identify one thing you can control right now. Do it with intention.
Sacral: Creativity & movement. Move your body for 5 minutes. Create something small. Connect with water.
Root: Grounding & survival. Take a slow grounding walk. Exhale longer than you inhale. Choose one steadying habit.
The Red Zone is never algorithmically inferred. AuraLLM does not claim to detect distress, diagnose conditions, or assess risk. Red Zone activation is a conscious user action: an honest self-report that says "I'm not okay right now." When activated, both energy scores are set to the minimum (5/5), the aura label shifts to Red Zone, the chakra drops to Root with grounding suggestions, the dial pulses red as a visible reminder, and the state is recorded with the trigger tag self-report:<reason>.
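A minimal sketch of what activation could look like in code, assuming the DialState model from the AR glasses section is a Kotlin data class; the function name, the red hex choice, and the returned pairing with the trigger tag are illustrative.

// Illustrative sketch: Red Zone is an explicit call made by the user, never an
// inference. Uses the DialState model from the AR glasses section; the function
// name and the Pair return are assumptions for illustration.
fun activateRedZone(current: DialState, reason: String): Pair<DialState, String> {
    val grounded = current.copy(
        selfEnergy = 5,                // both scores drop to the documented minimum
        othersImpact = 5,
        label = "Red Zone",
        auraColorHex = "#F7093B",      // pulsing red, drawn from the app palette
        isRedZone = true
        // the chakra block would also be swapped to Root-level grounding suggestions
    )
    return grounded to "self-report:$reason"   // trigger tag recorded with the reading
}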
The app meets you where you are. It doesn't judge. It doesn't diagnose. It just reflects.
The chakra system also applies a self-centred penalty when Self Energy exceeds Others Impact by 20+ points, along with a Red Zone penalty, ensuring the system rewards outward awareness, not just personal highs.
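For illustration only, those penalties might enter a chakra-level calculation roughly like this; every threshold and penalty size below is invented for the sketch, and the level ordering is an assumption consistent with the schema example later on.

// Illustrative only: band thresholds and penalty sizes are invented for this
// sketch; the 20-point self-centred trigger is the documented condition.
fun chakraLevel(selfEnergy: Int, othersImpact: Int, isRedZone: Boolean): Int {
    if (isRedZone) return 0                              // Red Zone always grounds to Root
    var score = (selfEnergy + othersImpact) / 2
    if (selfEnergy - othersImpact >= 20) score -= 10     // self-centred penalty (assumed size)
    return when {
        score >= 85 -> 5   // Crown
        score >= 70 -> 4   // Throat
        score >= 55 -> 3   // Heart
        score >= 40 -> 2   // Solar Plexus
        score >= 25 -> 1   // Sacral
        else        -> 0   // Root
    }
}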
A single snapshot tells you where you are. A rolling window tells you where you're headed. Background analysis runs every 6 hours via WorkManager, analyzing a 3-day rolling window of energy readings and habit data.
"Your energy has been consistently high. Keep nurturing what fuels you."
"Nice upward momentum — small positive changes are compounding."
"Your energy has been tapering. A short reset activity might help."
"Your energy is strong but shows a slight dip — consider what may have shifted."
"Low overall but the direction is improving. Gentle consistency matters."
"It looks like a challenging stretch. Be kind to yourself — rest is productive too."
All insights are appended with: (For wellness reflection only.)
A pluggable LLM architecture generates personalized wellness narratives from your energy data. The LlmProvider interface accepts energy readings and habit summaries, returning reflective insights. Integration is completely optional — your energy calculations, habit tracking, and dial work perfectly without it.
Template engine (current): Deterministic, instant, zero-cost. No network required. Pattern-matched insights from energy data.
On-device TFLite (planned): Private, fast, no API key. Runs entirely on device for maximum privacy with richer prosody analysis.
Local GGUF models (planned): Maximum quality, fully local. Quantized language models running natively for the deepest on-device narratives.
Cloud API (optional, user-configured): Deepest narratives. Only aggregated scores are sent, never raw audio or identifiable data.
interface LlmProvider {
    /** Returns a short reflective insight for the given readings, or null if none can be generated. */
    suspend fun generateInsight(
        readings: List<EnergyReading>,
        habitSummary: String? = null
    ): String?
}
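As a usage sketch, a deterministic template provider along the lines described above could implement this interface as follows; the six messages are the documented templates, while the averaging, trend heuristic, thresholds, and class name are assumptions rather than the shipped engine.

// Illustrative sketch of a deterministic template provider. Thresholds and the
// trend heuristic are assumptions; the messages are the documented templates.
class TemplateLlmProvider : LlmProvider {
    override suspend fun generateInsight(
        readings: List<EnergyReading>,
        habitSummary: String?
    ): String? {
        if (readings.isEmpty()) return null
        // habitSummary could be woven into the narrative; omitted in this sketch.
        val avg = readings.map { it.selfEnergy }.average()
        val trend = readings.last().selfEnergy - readings.first().selfEnergy
        val body = when {
            avg >= 70 && trend >= 0 -> "Your energy has been consistently high. Keep nurturing what fuels you."
            avg >= 70               -> "Your energy is strong but shows a slight dip — consider what may have shifted."
            avg >= 40 && trend > 0  -> "Nice upward momentum — small positive changes are compounding."
            avg >= 40               -> "Your energy has been tapering. A short reset activity might help."
            trend > 0               -> "Low overall but the direction is improving. Gentle consistency matters."
            else                    -> "It looks like a challenging stretch. Be kind to yourself — rest is productive too."
        }
        return "$body (For wellness reflection only.)"
    }
}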
A minimal heads-up display shows your energy state at a glance — colour-coded, distraction-free, always private. Compatible with the open DialState schema for any smart glasses platform.
{
  "selfEnergy": 78,
  "othersImpact": 68,
  "auraColorHex": "#9C1AE7",
  "label": "Radiant",
  "chakra": {
    "level": 4,
    "label": "Throat",
    "colorHex": "#3AE5E7",
    "suggestions": [
      "Speak one honest sentence kindly.",
      "Hydrate and take slow, open breaths.",
      "Write a short note of appreciation."
    ]
  },
  "isRedZone": false,
  "timestampMillis": 1738800000000
}
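On the app side, a minimal Kotlin model for this payload might look like the following, assuming kotlinx.serialization; the field names mirror the JSON above, while the chakra level ordering noted in the comment is an assumption.

import kotlinx.serialization.Serializable
import kotlinx.serialization.json.Json

// Sketch of the DialState payload, assuming kotlinx.serialization.
// Field names mirror the published JSON schema above.
@Serializable
data class ChakraState(
    val level: Int,                    // 0 = Root ... 5 = Crown (assumed ordering)
    val label: String,
    val colorHex: String,
    val suggestions: List<String>
)

@Serializable
data class DialState(
    val selfEnergy: Int,               // outer ring, 0-100
    val othersImpact: Int,             // inner ring, 0-100
    val auraColorHex: String,
    val label: String,
    val chakra: ChakraState,
    val isRedZone: Boolean,
    val timestampMillis: Long
)

// Exporting a frame for a glasses HUD is then a single call.
fun DialState.toHudJson(): String = Json.encodeToString(DialState.serializer(), this)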
AuraLLM features a DMT neon aesthetic on deep midnight — a design language inspired by bioluminescence, energy fields, and visual consciousness.
Palette: #9C1AE7, #3AE5E7, #2CD27E, #F5CF1D, #F7093B, #0F0B1A, #1A1528, #F0ECF7.
Illustrations created with FAL (gpt-image-1.5) at 1024x1024. Each illustration bridges DMT-inspired energy art with clean mobile interface design.
Typography: Cormorant (display serif) + Outfit (geometric sans)
Your wellness data is deeply personal. AuraLLM was engineered from the ground up with privacy as a core architectural principle — not a feature, not an afterthought.
User taps "Enable Voice"
→ SpeechConsentManager.grant()
→ DataStore: consent = true
→ SpeechFeatureExtractor available
User taps "Revoke"
→ SpeechConsentManager.revoke()
→ DataStore: consent = false
→ Feature extraction disabled
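A hedged sketch of how this consent gate could be wired with Jetpack Preferences DataStore; the store name, preference key, and class shape are assumptions for illustration.

import android.content.Context
import androidx.datastore.preferences.core.booleanPreferencesKey
import androidx.datastore.preferences.core.edit
import androidx.datastore.preferences.preferencesDataStore
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.map

// Sketch of a DataStore-backed consent gate. Store and key names are illustrative.
private val Context.consentStore by preferencesDataStore(name = "speech_consent")
private val CONSENT_KEY = booleanPreferencesKey("voice_consent_granted")

class SpeechConsentManager(private val context: Context) {
    // Emits the current consent state; the extractor is only enabled while this is true.
    val isGranted: Flow<Boolean> =
        context.consentStore.data.map { prefs -> prefs[CONSENT_KEY] ?: false }

    suspend fun grant() {
        context.consentStore.edit { prefs -> prefs[CONSENT_KEY] = true }
    }

    suspend fun revoke() {
        context.consentStore.edit { prefs -> prefs[CONSENT_KEY] = false }
    }
}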
Clean Architecture with MVVM. Three core data entities in Room, a WorkManager-driven analysis pipeline, and a Compose UI layer connected through ViewModel state flows.
User Input / Voice ──► SpeechFeatureExtractor ──► SpeechFeature (Room)
                                                           │
Manual Check-in ──► EnergyReading (Room) ◄─────────────────┘
        │
EnergyRepository (3-day rolling window)
        │
AnalysisWorker (WorkManager, every 6h)
        │
LlmProvider.generateInsight()
        │
EnergyReading.insight (written back)
        │
EnergyDialViewModel ──► UI (Compose)
        │
DialState ──► AR Glasses (JSON export)
Table: energy_readings
selfEnergy, othersImpact, triggers, insight, source, timestamp
Table: speech_features
speakingRate, toneWarmth, vocalEnergy, sampleDurationSec
Table: habit_entries
category, label, quality (1-5), timestamp
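As an illustration, the energy_readings table above might map to a Room entity and DAO roughly like this; the id column, type choices, and the rolling-window query shape are assumptions layered on the documented column names.

import androidx.room.Dao
import androidx.room.Entity
import androidx.room.Insert
import androidx.room.PrimaryKey
import androidx.room.Query

// Sketch of the energy_readings table as a Room entity and DAO.
// Column names follow the schema listed above; everything else is illustrative.
@Entity(tableName = "energy_readings")
data class EnergyReading(
    @PrimaryKey(autoGenerate = true) val id: Long = 0,
    val selfEnergy: Int,
    val othersImpact: Int,
    val triggers: String,          // e.g. a comma-separated tag list (assumed encoding)
    val insight: String?,          // written back by AnalysisWorker
    val source: String,            // e.g. manual check-in vs. voice-derived (assumed values)
    val timestamp: Long
)

@Dao
interface EnergyReadingDao {
    @Insert
    suspend fun insert(reading: EnergyReading): Long

    // 3-day rolling window consumed by the analysis pipeline.
    @Query("SELECT * FROM energy_readings WHERE timestamp >= :sinceMillis ORDER BY timestamp")
    suspend fun readingsSince(sinceMillis: Long): List<EnergyReading>
}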
SpeechFeatureExtractor processes audio in-memory with zero disk writes. Consent-gated via DataStore, the pipeline extracts speaking rate, tone warmth, and vocal energy. Raw audio discarded immediately. Future TFLite prosody model integration planned.
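A sketch of what that in-memory contract could look like: a PCM buffer goes in, only numbers come out, and nothing is written to disk. The type name, the RMS heuristic, and the placeholder scores are stand-ins, not the shipped analysis.

import kotlin.math.sqrt

// Sketch of the extraction contract. Only derived numbers leave this function;
// the RMS-based energy estimate and placeholder scores are illustrative.
data class SpeechFeatureScores(
    val speakingRate: Double,      // words-per-minute cadence
    val toneWarmth: Int,           // 0-100
    val vocalEnergy: Int,          // 0-100
    val sampleDurationSec: Double
)

fun extractFeatures(pcm: ShortArray, sampleRateHz: Int): SpeechFeatureScores {
    val durationSec = pcm.size.toDouble() / sampleRateHz
    // Rough loudness (RMS) as a stand-in for vocal energy, scaled to 0-100.
    val rms = sqrt(pcm.sumOf { it.toDouble() * it } / pcm.size.coerceAtLeast(1))
    val energy = ((rms / Short.MAX_VALUE) * 100).toInt().coerceIn(0, 100)
    // The buffer is never persisted; only the numeric scores below are returned.
    return SpeechFeatureScores(
        speakingRate = 0.0,        // placeholder: cadence estimation omitted in this sketch
        toneWarmth = 50,           // placeholder: warmth scoring omitted in this sketch
        vocalEnergy = energy,
        sampleDurationSec = durationSec
    )
}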
Room database stores HabitEntry records across six categories with 1-5 quality ratings. A 3-day rolling average drives the Habit Bonus algorithm that transparently influences Others Impact scores. Entries never leave the device.
LlmProvider interface accepts energy readings and habit summaries. Currently backed by deterministic template engine (zero cost). Planned: on-device TFLite, GGUF, and optional cloud API. All insights append wellness reflection disclaimer.
Serializable JSON object with selfEnergy, othersImpact, auraColorHex, label, chakra level and suggestions. In-app HUD preview with pulsing aura ring. Camera passthrough for AR preview. BLE broadcast and companion SDK planned.
Local-first Room database. No raw audio on disk or network. SpeechConsentManager with explicit grant/revoke. No analytics SDK, no advertising profiles, no data brokers. Full offline capability. Red Zone is self-report only.
AnalysisWorker runs every six hours consuming a 3-day rolling window. Feeds aggregated scores to LlmProvider.generateInsight(), writes reflective insight back to the reading. Battery-aware scheduling respects Doze and App Standby.
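Scheduling a worker on that cadence with WorkManager might look roughly like this; the unique work name and the battery constraint are illustrative choices, while Doze and App Standby deferral come from WorkManager itself.

import android.content.Context
import androidx.work.Constraints
import androidx.work.ExistingPeriodicWorkPolicy
import androidx.work.PeriodicWorkRequestBuilder
import androidx.work.WorkManager
import java.util.concurrent.TimeUnit

// Sketch of the 6-hour periodic schedule. The unique work name and the battery
// constraint are illustrative choices, not the shipped configuration.
fun scheduleAnalysis(context: Context) {
    val request = PeriodicWorkRequestBuilder<AnalysisWorker>(6, TimeUnit.HOURS)
        .setConstraints(
            Constraints.Builder()
                .setRequiresBatteryNotLow(true)   // defer work when the battery is low
                .build()
        )
        .build()

    WorkManager.getInstance(context).enqueueUniquePeriodicWork(
        "aura_analysis",                          // assumed unique work name
        ExistingPeriodicWorkPolicy.KEEP,          // keep an existing schedule if present
        request
    )
}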
Not a feature — an architectural guarantee. Zero raw audio persistence, consent-gated features, no analytics SDK, no data brokers. In a market where wellness apps routinely harvest sensitive data, AuraLLM's privacy posture is a genuine competitive advantage.
The open DialState JSON schema and glasses HUD preview position AuraLLM to ride the smart glasses adoption curve. Building the integration layer now establishes brand association and switching costs before mainstream hardware arrives.
No competing app combines voice-derived energy, structured habit tracking, and self-reported triggers into a single composite score. The dual-ring dial creates a defensible product category rather than competing in an existing one.
Revenue scales with user value, not data exploitation. Premium features enhance the experience without gating core privacy protections.
Production-grade mobile architecture with measurable engineering outcomes and a clear path from core product to premium monetization and AR hardware integration.
Marginal cost per user: $0
Premium monthly revenue (Aura+): $4.99
Offline capable: 100%
AI-generated illustrations (FAL, gpt-image-1.5)
Core pipeline runs at zero marginal cost — no cloud inference, no per-user storage, no API fees. Revenue scales with premium conversions, not data exploitation.
Deliberate positioning avoids regulatory complexity. "Reflective wellness insights" framing serves users who want structured self-reflection without clinical framing.
Open DialState schema positions for smart glasses adoption. Early integration builds brand association and switching costs before the AR wellness category matures.
Applicable across consumer wellness, corporate wellbeing programs, wearable hardware partnerships, and white-label licensing for health platforms.
Direct-to-consumer Android app with freemium conversion to Aura+ subscription. Privacy-first positioning drives word-of-mouth in communities increasingly sceptical of data-harvesting wellness apps.
White-label deployment for enterprise wellness programs. The local-first architecture means employers never touch personal wellness data — they provide the tool, employees own the data.
Smart glasses manufacturers get a ready-made HUD implementation. Ring and watch manufacturers can ingest DialState for ambient energy display. Early partnerships create distribution before the AR wellness category is crowded.
AuraLLM brings together voice analysis, habit tracking, self-reported triggers, a dual-ring energy dial, chakra guidance, and AR glasses integration into one privacy-first Android experience.
Know your energy. Protect your light.
For personal wellness reflection only. Not a medical or diagnostic tool.