I will not promote. I actually mean that. It’s not a disclaimer for me; it’s kind of the whole point of the post. I’ll get to that.

**How this started**

February 20, 2026. I’m in my unfinished basement in Colorado, where I’ve been building this company for the last 17 months. Five boys asleep upstairs, twins included. I opened a partnership agreement from Garmin Health on my laptop.

I didn’t tell anyone for a while. I just sat with it. This is a multibillion-dollar S&P 500 company with 23,000 employees, and they’re choosing to extend a partnership with me, a guy working from a shitty wooden desk I built years ago between diaper changes. No pitch deck. No investor intro. No impressive growth numbers.

What actually made it happen was the opposite of what startup culture tells you to do.

**Why this exists**

I’m a dad of five boys, all under 7 years old (the twins are the youngest), and I’m an engineer. My wife and I were drowning in health data. Wearables tracking everything, lab results from different providers, medical records scattered across systems that don’t talk to each other.

I got tired of sitting in the doctor’s office nodding along to things I didn’t fully understand. My wife would get our kids’ test results back and just look defeated because she couldn’t make sense of them. And the internet? Don’t get me started. Every search made us more anxious.

So I started building a health AI platform. Honestly, it was just for us. I connected wearable data (13+ health streams), lab results, medical records, and imaging to an AI that reasons across all of it using peer-reviewed research. My wife and I were our own test subjects for months. Other people wanting to use it was a complete accident.

**The five decisions that actually mattered**

If you’re a founder, I think the decisions are way more interesting than the partnership itself. This is what I’d want to know if I were you reading this:

**1. No VC.** This was on purpose. Every subscription dollar goes back into the platform.
I take no salary. No employees. No advisory board telling me to “move fast and break things” with people’s health data. I’ve had investors ask me, “How do you make money?” and my answer is always awkward: you pay us, that’s it. No hidden revenue. No data sales. No “anonymized” licensing. It’s embarrassingly simple.

I know that sounds stubborn. It is, a little. But when you’re handling medical data for families, taking VC money shifts your incentives from “protect this data at all costs” to “grow at all costs.” I couldn’t do that. Plus, I really liked my day job, and keeping it forced me to step away from the constant flood of new ideas while still letting me take care of the kids, my wife, the house, etc. It was a nights-and-weekends gig, built when the kids napped, slept, or were being annoying (funny but true, man).

**2. Saying no to features users asked for.** This one still feels crazy to say out loud. People asked for searchable chat history. I said no. Conversation exports? No. Training the AI on community interactions? No way.

The reason is that every single one of those “convenient” features is a security vulnerability waiting to happen. A searchable chat history is basically a honeypot. You’ve now got a database full of people’s medical conversations, perfectly indexed, easy to export, and really easy to “accidentally” use for training. Once you build the infrastructure for bulk data retrieval, you’ve also built the infrastructure for bulk data exploitation. Those are the same thing.

What I built instead is a memory system. It remembers you like a good friend does, not like a filing cabinet. No transcripts, no logs you can search through. Just understanding that carries across conversations. The downside is you can’t export your history to a CSV. I can live with that.

**3. Security and privacy baked into the architecture.** This is probably the most important one, so bear with me.

Privacy: there’s no “master table” or index that has everything about you in one place.
The AI pulls context together when you ask a question, uses it, and then it’s gone. We don’t train on user data. Period. Not anonymized, not aggregated, none of that “de-identified with best practices” bullshit. We improve our architecture through peer-reviewed literature and VPC’d LLM APIs internal to our cloud infrastructure. No external API calls. As those foundation models get better, so do we. If you carefully architect and secure the context protocol, that’s more than half your battle. Don’t waste your time fine-tuning or trying to be clever with model weights.

Security is the other half. I couldn’t afford a security team. At my stage, most founders just cross their fingers and hope nothing bad happens. What I did instead was take the same AI infrastructure I’d already built for medical reasoning and repurpose it. I deployed AI agents to run a 24/7 Security Operations Center. They monitor threats being mitigated at the WAF and its rulesets, analyze them, remediate, and generate reports.

And then I published those reports. Directly to users. Real threat intelligence, real attack data, real remediation. That made some people uncomfortable, but my thinking was: if you’re going to claim you’re secure, you should be able to prove it, not just say it on a landing page. Most companies keep their security operations behind closed doors. I wanted mine visible to the users who pay me to secure their health data.

**4. The business model.** “How do you make money?” You pay us. “No but seriously, how do you make money?” You. Pay. Us.

That’s it. No advertising, no data licensing, no partnerships that involve sharing user info. Just money in exchange for the platform. It limits everything. Growth is slower. I can’t hire. Feature development takes longer when it’s just me. But here’s what I’ve learned: when a multibillion-dollar company is deciding whether to put their brand next to yours, they’re not looking at your MRR.
They’re looking at your incentive structure and value proposition. A company that monetizes user data is a risk to partner with. A company that structurally can’t, because the architecture won’t allow it, is a different conversation entirely. Also, it turns out users wearing kick-ass Garmin wearables with all that telemetry really want to understand it in the context of their lab results, health records, and so on.

**5. Constraints forced the good stuff.** I built this between diaper changes and bedtime stories. Between my day job and the weekend chaos of five boys under 7 who think “quiet time” is a myth. I didn’t plan to build in the margins of my life, but it ended up mattering. When you’re that stretched for time yourself, you build differently. You build for people who also don’t have time to figure things out.

Also, I built it for my wife. She’s the best and I love her. I know she still sees that twenty-something kid in front of a baseball tee who just can’t step away. That’s why she supports it (some nights she tolerates it).

I didn’t do this because I had some grand vision for constraint-driven innovation. I did it because I literally had no other choice. After a few people started paying, I felt a responsibility to keep the lights on; then they started reaching out asking about additional functionality, and I felt a responsibility to help and keep building. It just kind of spiraled from there. Now it feels a lot more like a sixth child in the house that my wife and I want to take care of.

**How the Garmin thing actually happened**

November 2025: I integrated with Garmin’s Health API through their Developer Program. I was just hoping their team would take a meeting with me. They did. What happened after that surprised me. I expected a transactional, arm’s-length thing. Big company, small founder, lots of paperwork, minimal contact. Instead it was collaborative.
Their Health team actually understood what I was trying to do. I wasn’t building a fitness app; I was trying to connect wearable telemetry with medical intelligence. They got that and leaned in instead of walking away.

Every feature I shipped after that deepened the trust. One night my wife asked, “How was my stress while I slept?” and it completely broke our architecture. That question seems simple, but it requires pulling stress data and sleep data simultaneously, aligning the timestamps, and understanding that she wanted stress during sleep, not two separate answers about stress and sleep.

Rebuilding that context protocol was partly driven by Garmin’s team pushing us to think harder about what correlations their 13+ data streams could actually reveal, while I thought hard about how to get the most out of lightweight, low-latency LLMs for modeling user intent and mapping it to the right tool calls on the backend. Playing around with query routing, data schemas, tool calling, and classification tasks was a ton of fun and brought me back to some of my most intellectually engaging times in graduate school.

February 2026: the partnership expanded beyond data access into actual devices. Exclusive savings for members on Garmin wearables, scales, and blood pressure monitors. Not a coupon code floating around the internet; a private benefit.

They didn’t have to do any of this. Garmin’s brand doesn’t need a solo founder in Colorado. But the security and privacy stuff I described above, the architecture decisions that made everything slower and harder, turned out to be exactly what made them comfortable putting their name next to mine.

**What I got wrong**
- The first architecture couldn’t handle questions that spanned multiple data streams. “How was my stress while I slept?” seems simple, but it broke everything. I had to rebuild the entire context protocol from scratch.
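The cross-stream failure above is easier to see in code. Here is a minimal sketch of what answering “stress while I slept” actually requires: fetch the sleep window, then filter the stress series to that window, instead of answering “stress” and “sleep” as two independent queries. The function name and data shapes are hypothetical illustrations, not the real Garmin payloads or my actual context protocol.

```python
from datetime import datetime

def stress_during_sleep(sleep_window, stress_samples):
    """Average only the stress samples recorded inside the sleep window.

    sleep_window:   (start, end) datetimes of the night's sleep
    stress_samples: list of {"ts": datetime, "level": int} readings
    """
    start, end = sleep_window
    in_sleep = [s["level"] for s in stress_samples if start <= s["ts"] <= end]
    return sum(in_sleep) / len(in_sleep) if in_sleep else None

sleep = (datetime(2026, 2, 20, 22, 30), datetime(2026, 2, 21, 6, 0))
samples = [
    {"ts": datetime(2026, 2, 20, 21, 0), "level": 60},  # awake, excluded
    {"ts": datetime(2026, 2, 21, 2, 0), "level": 18},
    {"ts": datetime(2026, 2, 21, 4, 0), "level": 22},
]
print(stress_during_sleep(sleep, samples))  # → 20.0
```

The hard part isn’t this filter; it’s getting the model to recognize that the question needs both streams and a time join in the first place.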
- Garmin stores data in UTC. When you work out at 5 PM in Denver, it lands on the next calendar day in UTC. My first version would miss that workout entirely if you asked about “today.” Technically correct, completely useless.
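The fix for the UTC issue above is boring but easy to get wrong: convert every timestamp into the user’s timezone before deciding which calendar day it belongs to, and only then filter on “today.” A minimal sketch using Python’s `zoneinfo` (the function and variable names are mine, not anything from Garmin’s API):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def local_date(utc_iso: str, tz: str = "America/Denver") -> str:
    """Map a UTC timestamp onto the user's local calendar date."""
    dt = datetime.fromisoformat(utc_iso.replace("Z", "+00:00"))
    return dt.astimezone(ZoneInfo(tz)).date().isoformat()

# A 5 PM workout in Denver on Feb 20 is stored as midnight UTC on Feb 21.
print(local_date("2026-02-21T00:00:00Z"))  # → 2026-02-20
```

Grouping and filtering on that local date, rather than the raw UTC date, is what keeps a 5 PM workout inside “today.”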
- Building features I thought users needed instead of talking to them every day and knowing for sure. This isn’t a statistical-significance thing; just intuition. When you’re really listening to your users, you quickly pick up on the patterns and can triage based on what feels most urgent. Trust your users and yourself. Strike a balance. In general, the more painful the feedback, the more you should listen and reflect. This kind of sucks balls, but it’s true.
- Solo founding is lonely yet familiar in ways I didn’t anticipate. Some nights I question my sanity. Most nights I question my competence. The internal dialogue is always the same: “whatever it takes.” Any former or current athletes reading this can relate. I was a baseball player, and what comes to mind is being in the batting cage late at night by myself, thinking “I’ve had enough cuts,” yet for some reason continuing to put the ball on the tee, moving it around to different sides of the plate and different heights. Continuing to take more cuts. Even when your hands bled from the blisters. There is something really rewarding about doing hard things when no one is around. As a kid you do this with a sense of wonder or hope. You’re proud of yourself. At my stage, I wonder if it’s worth it, but my brain won’t let me let go.

**Why I’m posting this**

I’m not telling you to do what I did. My path is specific to healthcare, my family, and a set of convictions that probably don’t map to your situation. But I do think the default startup playbook (raise money, grow fast, monetize data, optimize engagement) isn’t the only way to build something real. For certain kinds of products, especially ones touching people’s most personal data, it might be the wrong way entirely.

Every decision that made this partnership happen also made my startup harder to build, slower to grow, and less interesting to investors. Those same decisions are what made a multibillion-dollar company comfortable enough to trust me with their brand. Sometimes the features you refuse to build matter more than the ones you ship.

Happy to talk about any of this. Architecture, business model, bootstrapping in healthcare AI, the security stuff. Whatever. I’m an open book.
Originally posted by u/LongjumpingGoose4771 on r/ArtificialInteligence
