The choices that shaped the platform. Seventeen decisions where the path forked and a direction was chosen.
Chose: Leave physics research to build production software systems
Over: Continue in academia or take a traditional SWE role
The instinct was that understanding systems from first principles — the way a physicist would — could produce better infrastructure than following conventional paths. Not studying software engineering, but building the same things over and over until the abstractions surfaced naturally.
Fourteen months of foundational work produced the patterns, tools, and intuition that made everything after possible.
Chose: Spend months building frameworks, server patterns, and island hydration before any product
Over: Jump straight to a product and iterate on infrastructure later
The belief that the quality of the product is bounded by the quality of the tools. Static web generators, island hydration frameworks, event sourcing prototypes — none of these shipped, but they all taught something that mattered later.
staticweb, standard_islands, and kameo_es became the conceptual foundation for every production system that followed.
Chose: SolidJS as the frontend reactive framework
Over: React, Vue, Svelte
Fine-grained reactivity without a virtual DOM. Signals and effects map cleanly to how data actually flows through a UI. React's reconciliation model felt like solving a problem that shouldn't exist. SolidJS proved itself during the QMS iterations — reactive state without the overhead.
Every frontend from QMS through Assist uses SolidJS. The bet paid off in bundle size, performance, and developer experience.
Chose: Astro with static site generation and island hydration
Over: Next.js, Remix, or full SPA frameworks
Ship zero JavaScript by default. Hydrate only the interactive parts. Astro's island architecture matched the mental model from standard_islands exactly — the framework work wasn't wasted; it was research that validated the right production tool.
Astro handles every static frontend. Combined with SolidJS islands, pages load fast and stay interactive where they need to be.
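In practice the pattern looks like this — a hypothetical page (component name and path are illustrative) where everything is static HTML except one SolidJS island, hydrated via Astro's real `client:visible` directive only when it scrolls into view:

```astro
---
// Static by default: this page ships no JavaScript except the island below.
import Counter from "../components/Counter"; // a SolidJS component (illustrative path)
---
<h1>Mostly static page</h1>
<p>All of this renders to plain HTML at build time.</p>
<Counter client:visible /> <!-- hydrates only when scrolled into view -->
```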
Chose: Pivot from queue management to AI receptionist for dental offices
Over: Continue polishing the kiosk product
Five QMS iterations over three months weren't wasted — they were an apprenticeship. Building check-in, triage, provider routing, and patient flow meant learning the dental office from the inside. The kiosk wasn't the business. The domain expertise was.
The receptionist project started with deep workflow knowledge that no competitor had. Understanding when to route, when to escalate, and what questions to ask came from building QMS, not from studying dental software.
Chose: Build a custom voice pipeline with Deepgram STT, Cartesia TTS, and Fireworks LLM
Over: Continue using Vapi's managed voice AI platform
Vapi got the receptionist to a working demo fast, but it was a black box. Latency was unpredictable, customization was limited, and debugging meant guessing. The Golf iteration proved Vapi works for prototyping. Hotel proved pure third-party AI pipelines don't give you enough control for production.
The Deepgram/Cartesia/Fireworks stack emerged from elimination, not planning. Each component was chosen because the alternatives were tried and failed for specific, documented reasons.
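The shape of the custom pipeline is a three-stage async composition. The adapter interfaces below are hypothetical (the real vendor SDKs have their own signatures); the point is the control this structure buys — each stage is swappable, measurable, and debuggable in isolation, which is exactly what the managed platform hid:

```typescript
// Hypothetical adapter interfaces; the vendor SDKs sit behind these seams.
interface SttAdapter { transcribe(audio: Uint8Array): Promise<string>; }   // Deepgram's role
interface LlmAdapter { complete(prompt: string): Promise<string>; }        // Fireworks' role
interface TtsAdapter { synthesize(text: string): Promise<Uint8Array>; }    // Cartesia's role

// One conversational turn: audio in, audio out, each stage observable.
async function handleTurn(
  audio: Uint8Array,
  stt: SttAdapter,
  llm: LlmAdapter,
  tts: TtsAdapter,
): Promise<Uint8Array> {
  const transcript = await stt.transcribe(audio);
  const reply = await llm.complete(transcript);
  return tts.synthesize(reply);
}
```

With the seams explicit, latency can be timed per stage and any one vendor replaced without touching the other two — the property the black-box platform couldn't offer.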
Chose: Name each iteration, document why it died, and start fresh each time
Over: Incrementally improve a single codebase
Ten named iterations from Alpha through voice_stream. Each one taught something specific that couldn't have been learned by patching the previous version. Alpha was too coupled. Bravo's state machine was too rigid. Charlie proved SolidJS. Delta proved the booking flow. Each kill was a decision, not a failure.
The tenth iteration isn't the first iteration done ten times. It's a fundamentally different architecture informed by nine specific lessons.
Chose: Build a custom identity provider from scratch
Over: Continue migrating between third-party auth systems
Five auth systems in six months: SuperTokens was too opinionated, Ory was too complex to operate, Authentik was too heavy, Zitadel's multi-tenancy model didn't fit. Each migration taught a specific lesson about identity — session management, token lifetimes, OIDC flows, PKCE. By the fifth migration, the understanding was deep enough to build exactly what was needed.
The custom IDP handles OIDC, PKCE, JWT signing, session management via NATS KV, and integrates with the deploy CLI for automated provisioning. It does exactly what's needed and nothing more.
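Of the lessons listed, PKCE is the most mechanical and shows how little magic is involved once the flows are understood. This is the standard RFC 7636 S256 computation (real algorithm, sketched here with Node's built-in crypto — helper names are mine):

```typescript
import { createHash, randomBytes } from "node:crypto";

// High-entropy code_verifier (RFC 7636 §4.1): 32 random bytes, base64url.
function makeVerifier(): string {
  return randomBytes(32).toString("base64url");
}

// code_challenge = BASE64URL(SHA-256(verifier)) for the S256 method (§4.2).
function s256Challenge(verifier: string): string {
  return createHash("sha256").update(verifier).digest("base64url");
}
```

The client sends the challenge with the authorization request and the verifier with the token request; the IDP recomputes the hash and compares — no shared secret needed for public clients.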
Chose: CloudNativePG on Kubernetes with automated backups and HA
Over: Continue with SQLite in production or use managed database services
SQLite worked for prototyping but couldn't handle concurrent writes from multiple services. Managed databases meant depending on a provider. CNPG gave PostgreSQL with automated failover, backup to SeaweedFS, and full control — running on our own hardware.
Every service connects to CNPG clusters. Migrations run embedded in the deploy CLI. Backups are automated. No external database dependency.
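A CNPG cluster declaration is compact — roughly the shape below (names, sizes, and the SeaweedFS endpoint are illustrative, not the production manifest; `barmanObjectStore` is CNPG's real backup mechanism, targeting any S3-compatible store):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: app-db
spec:
  instances: 3            # one primary, two replicas; failover is automatic
  storage:
    size: 20Gi
  backup:
    barmanObjectStore:    # S3-compatible target; SeaweedFS exposes this API
      destinationPath: s3://backups/app-db
      endpointURL: http://seaweedfs-s3.storage.svc:8333
```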
Chose: Migrate all services to a 5-node Kubernetes cluster
Over: Continue with SSH-and-script deployment
SSH deployment worked for single services but couldn't handle service discovery, rolling updates, secrets management, or networking between services. The cluster wasn't adopted because it's fashionable — it was adopted because the platform had outgrown manual orchestration.
Single-day k8s launch on March 12, 2026. Every service deployed with health checks, resource limits, and automated restarts. The deploy CLI wraps kubectl so no one touches manifests directly.
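What "health checks, resource limits, and automated restarts" means concretely is a Deployment fragment like this per service — illustrative only (the `/healthz` path, port, and limits are placeholders; the deploy CLI generates the real manifests):

```yaml
# Illustrative fragment of what the deploy CLI generates per service.
containers:
  - name: svc
    image: harbor.internal/platform/svc:latest   # placeholder image ref
    resources:
      limits:
        cpu: 500m
        memory: 256Mi
    livenessProbe:                 # failed probes trigger automated restarts
      httpGet:
        path: /healthz
        port: 8080
    readinessProbe:                # gates traffic during rolling updates
      httpGet:
        path: /healthz
        port: 8080
```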
Chose: Treat the agent framework as architectural R&D, not a shipping product
Over: Ship the agent framework as the core product immediately
Ten days, five iterations. The Yulier line explored intent clarification, trust calibration, operator steering, and a 12-model evaluation. It proved the architecture for how an AI assistant should reason about tasks, know when to escalate, and earn trust over time. But it also revealed the infrastructure couldn't support it yet.
The agent architecture feeds directly into Assist's design. The concepts — trust calibration, escalation protocols, operator steering — are in the platform. The standalone agent framework is archived, its lessons extracted.
Chose: Run all infrastructure on owned hardware with no SaaS dependencies
Over: Use managed services (AWS RDS, managed Kubernetes, Auth0, etc.)
Every SaaS dependency is a price increase waiting to happen, a terms-of-service change, an API deprecation. Self-hosting means the platform's cost structure is fixed hardware, not variable API calls. It also means understanding every layer — when something breaks, the answer is in our logs, not a vendor's status page.
CNPG PostgreSQL, NATS JetStream, SeaweedFS, Harbor registry, custom IDP, VictoriaMetrics, Grafana — all self-hosted on a 5-node cluster behind a NetBird overlay network.
Chose: NetBird mesh VPN for all internal service communication
Over: Expose services publicly with API keys or use Tailscale/Cloudflare Tunnel
Internal applications should not be reachable from the public internet at all. NetBird creates a WireGuard mesh between nodes where services communicate on private IPs. No port exposure, no public DNS for internal endpoints, no attack surface.
Every internal service — Harbor, Grafana, NATS, admin panels — is accessible only through the NetBird overlay. Public-facing services go through Traefik with Cloudflare DNS.
Chose: NATS JetStream with JWT/NKey auth for all inter-service messaging
Over: RabbitMQ, Redis Pub/Sub, or direct HTTP between services
NATS gives pub/sub, request/reply, key-value storage, and object storage in one system. JetStream adds persistence and exactly-once delivery. JWT/NKey auth means each service has scoped credentials. The KV store replaced Redis for session caching. One dependency instead of three.
Session storage, event streaming, service-to-service RPC, and real-time WebSocket fan-out all run through NATS. The Pylon introspection system uses NATS subjects for service registration.
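The session-caching role NATS KV took over from Redis is just a keyed store with expiry. The sketch below is an in-memory stand-in to make the interface concrete — in production this sits behind the NATS KV client (`kv.put` / `kv.get` on a bucket with a TTL), not a local Map:

```typescript
// In-memory stand-in for a TTL session store; production uses NATS KV.
class SessionCache {
  private store = new Map<string, { value: string; expires: number }>();

  constructor(private ttlMs: number) {}

  put(key: string, value: string): void {
    this.store.set(key, { value, expires: Date.now() + this.ttlMs });
  }

  get(key: string): string | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) {
      this.store.delete(key); // lazy eviction on read
      return undefined;
    }
    return entry.value;
  }
}
```

Because NATS already provides this alongside pub/sub and streams, the Redis dependency could be dropped entirely — one credential, one connection, one operational surface.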
Chose: Bitnami SealedSecrets for Kubernetes secret management
Over: Continue using sops for encrypted secrets in git
sops works for encrypting files but doesn't integrate with Kubernetes natively. SealedSecrets encrypts with the cluster's public key — the encrypted form can live in git, and only the cluster can decrypt it. No key distribution, no decryption step in CI.
All secrets are SealedSecrets committed to the infrastructure repo. The deploy CLI creates SealedSecrets automatically during provisioning.
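The sealing step the CLI automates is the standard `kubeseal` flow — shown here with placeholder names; the plaintext never touches disk or git:

```shell
# Encrypt with the cluster controller's public key; the sealed form is safe to commit.
kubectl create secret generic db-creds \
  --from-literal=password=s3cr3t \
  --dry-run=client -o yaml \
| kubeseal --format yaml > db-creds-sealed.yaml

kubectl apply -f db-creds-sealed.yaml   # only the in-cluster controller can decrypt
```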
Chose: Rebrand from dental receptionist to multi-channel AI communication platform
Over: Stay focused on the dental vertical
The receptionist solved dental phone calls, but the architecture — voice pipeline, booking engine, knowledge retrieval, agent framework — was general. The episode/session/artifact model works for any business communication channel. Staying dental-only would have artificially constrained the platform.
Assist handles voice, chat, and eventually email and SMS. The dental receptionist becomes the first Assist integration, not the whole product.
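The generality of the episode/session/artifact model can be sketched as types — these field names are illustrative, not the production schema; the point is that nothing in the shape is dental-specific:

```typescript
// Hypothetical sketch of the episode/session/artifact model.
type Channel = "voice" | "chat" | "email" | "sms";

interface Artifact {        // durable output: transcript, booking, note
  kind: string;
  payload: unknown;
}

interface Session {         // one contiguous stretch of conversation
  id: string;
  startedAt: Date;
  artifacts: Artifact[];
}

interface Episode {         // a complete interaction, possibly multi-session
  id: string;
  channel: Channel;
  sessions: Session[];
}
```

A dental phone call, a retail chat thread, and an SMS follow-up all instantiate the same shape — only the channel and the artifact kinds differ.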
Chose: Build a Rust CLI that handles init, provision, and deploy for every service
Over: Use Helm, ArgoCD, or manual kubectl workflows
Helm charts are too abstract. ArgoCD adds another system to maintain. The deploy CLI knows about our specific infrastructure — it creates namespaces, provisions databases, generates NATS credentials, creates SealedSecrets, builds containers, and deploys with health checks. One command, zero manual steps.
Every service deploys with `deploy up`. New services bootstrap with `deploy init`. The CLI is the single source of truth for how code becomes a running service.