Selected Works
Production systems, open-source libraries, hackathon winners, and peer-reviewed research.
AuraHealth
Production voice-first triage platform that pre-authorises emergency healthcare payments into escrow before the patient arrives. Node.js/TypeScript APIs, Interswitch payment rails with OAuth 2.0 + HMAC-SHA-512 webhooks, three real-time SSE dashboards over one PostgreSQL source of truth.
Voxtar
Healthcare voice AI platform. Deploy domain-specialised voice agents for triage, patient follow-up, and post-op monitoring. Multi-service: Next.js frontend, Python/FastAPI backend, self-hosted LiveKit for real-time audio, Qwen3 TTS and ElevenLabs for clinical-grade synthesis.
Vectorless
Document retrieval for the reasoning era. Structure-preserving retrieval that lets LLMs reason over document maps instead of vector search. No chunking, no top-K, no vector DB. Live at vectorless.store.
Vectorless Engine
The Go retrieval engine that powers Vectorless. Reasons over document structure — not embeddings. No chunking, no top-K, no vector DB. The opinionated take that kicked off the project.
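The core idea — hand the model a structural map instead of embedding chunks — can be sketched as a section tree, an outline renderer, and an ID-based lookup. The types and flow below are illustrative only, not the engine's actual API:

```go
package main

import "fmt"

// Section is one node in a document map: the structure-preserving
// alternative to a flat bag of chunks.
type Section struct {
	ID       string
	Title    string
	Body     string
	Children []Section
}

// Outline renders the compact map handed to the model: IDs and titles
// only, so the LLM can reason about where an answer lives before any
// full text is loaded into context.
func Outline(s Section, depth int) string {
	out := fmt.Sprintf("%*s[%s] %s\n", depth*2, "", s.ID, s.Title)
	for _, c := range s.Children {
		out += Outline(c, depth+1)
	}
	return out
}

// Fetch resolves the ID the model picked back to full section text.
func Fetch(s Section, id string) (string, bool) {
	if s.ID == id {
		return s.Body, true
	}
	for _, c := range s.Children {
		if body, ok := Fetch(c, id); ok {
			return body, true
		}
	}
	return "", false
}

func main() {
	doc := Section{ID: "1", Title: "Handbook", Children: []Section{
		{ID: "1.1", Title: "Refund policy", Body: "Refunds within 30 days."},
		{ID: "1.2", Title: "Shipping", Body: "Ships in 2-5 business days."},
	}}

	fmt.Print(Outline(doc, 0))
	// In the real flow the LLM reads the outline and picks an ID;
	// here its choice is hard-coded.
	body, _ := Fetch(doc, "1.1")
	fmt.Println(body)
}
```

Because selection happens over titles and hierarchy rather than cosine similarity, there is no chunking step and no top-K cutoff: the model either navigates to a section or reports that the map has no plausible home for the query.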
Context8
Collective problem-solving memory for coding agents, powered by Actian VectorAI DB. Context7 gives your agent the docs. Context8 gives it what the docs don't cover — tacit knowledge, gotchas, working configurations.
Aurasense
Real-time agentic voice AI. Time-aware Graphiti RAG memory for long-running interactions. Multi-model orchestration (Llama + Grok) with sub-second Groq TTS latency. FastAPI backend, Next.js frontend, Neo4j knowledge graph.
Stealth AI Platform (Contract)
Production multi-tenant agentic SaaS for US businesses — dedicated phone numbers, isolated agent configurations, self-service dashboards. LangGraph + LangSmith tracing + Opik versioning. 200+ synthetic eval datasets before production deployment.
Oncolens
Edge-native pathology AI. Applied knowledge distillation to compress the PathFoundation and UNI foundation models into a sub-300KB MobileNetV3 — 96% sensitivity at 3.6ms inference. Human-in-the-loop escalation for low-confidence diagnoses.
llmgate
LiteLLM for Go. Provider-agnostic LLM client over Anthropic, OpenAI, and Gemini with routing, fallback, cost tracking, capability flags, and composable middleware. Published on pkg.go.dev.
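The fallback pattern at the heart of a router like this is straightforward: try providers in order, return the first success, and accumulate errors so callers can see why earlier providers failed. The interface and stubs below are a simplified sketch, not llmgate's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

// Provider is a minimal stand-in for an LLM backend; a real client
// interface is richer (streaming, tools, cost tracking).
type Provider interface {
	Name() string
	Complete(prompt string) (string, error)
}

// stub simulates a backend that either answers or fails.
type stub struct {
	name string
	fail bool
}

func (s stub) Name() string { return s.name }
func (s stub) Complete(prompt string) (string, error) {
	if s.fail {
		return "", errors.New(s.name + ": rate limited")
	}
	return "ok from " + s.name, nil
}

// Fallback tries each provider in order and returns the first success.
// errors.Join keeps every upstream failure visible when all fail.
func Fallback(providers []Provider, prompt string) (string, error) {
	var errs error
	for _, p := range providers {
		out, err := p.Complete(prompt)
		if err == nil {
			return out, nil
		}
		errs = errors.Join(errs, err)
	}
	return "", errs
}

func main() {
	chain := []Provider{
		stub{name: "anthropic", fail: true},
		stub{name: "openai", fail: false},
		stub{name: "gemini", fail: false},
	}
	out, err := Fallback(chain, "hello")
	fmt.Println(out, err) // falls through to the second provider
}
```

Expressing each provider behind one interface is what makes the middleware composable: routing, cost tracking, and capability checks can all wrap `Complete` without knowing which backend sits underneath.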