I build production agentic AI systems and lead 0→1 AI products that ship.
9+ years architecting multi-agent pipelines, enterprise RAG, and LLM platforms at scale. Led AI product strategy driving $2M+ ARR — from LangGraph cybersecurity agents processing 100K+ events to RAG serving 500+ users across 50K+ documents. I own the roadmap, build the systems, and ship what works.
Quantified results from production AI deployments, product launches, and team leadership.
Each project ships to production with real trade-offs considered — not toy demos.
Security teams manually sifting through daily alerts — slow, error-prone, burning out analysts. Built multi-agent pipeline: cleaning → validation → enrichment → threat detection → reporting.
Trade-off: Higher infrastructure cost and pipeline complexity in exchange for speed and accuracy at scale.
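The staged flow above (cleaning → validation → enrichment → threat detection → reporting) amounts to passing each event through an ordered chain of agents, any of which may filter it out. A minimal sketch with hypothetical stage functions — not the production implementation:

```python
def run_pipeline(event, stages):
    """Pass a raw alert through ordered agent stages; any stage may reject it."""
    for stage in stages:
        event = stage(event)
        if event is None:  # stage filtered the event out of the pipeline
            return None
    return event

# Hypothetical stages mirroring the ones named above.
clean = lambda e: {**e, "text": e["text"].strip()}
validate = lambda e: e if e["text"] else None
enrich = lambda e: {**e, "severity": "high" if "breach" in e["text"] else "low"}

print(run_pipeline({"text": "  possible breach  "}, [clean, validate, enrich]))
```

Keeping stages as plain callables is what makes it cheap to add a new detection step without touching the rest of the chain.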
50K+ documents scattered across systems — employees couldn't find answers fast. Built production RAG with semantic chunking, hybrid search (dense + sparse), serving 500+ users.
Trade-off: One-time embedding/indexing cost exchanged for eliminated manual retrieval and sub-second latency.
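Hybrid search needs a way to merge the dense and sparse result lists into one ranking; reciprocal rank fusion is one common choice. A minimal sketch under that assumption (doc IDs hypothetical):

```python
def rrf_fuse(dense_ranked, sparse_ranked, k=60):
    """Reciprocal rank fusion: merge two ranked lists of doc IDs into one."""
    scores = {}
    for ranked in (dense_ranked, sparse_ranked):
        for rank, doc_id in enumerate(ranked):
            # Each list contributes 1/(k + rank); k dampens top-rank dominance.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Dense retriever favors "a", sparse favors "b"; fusion surfaces both near the top.
print(rrf_fuse(["a", "b", "c"], ["b", "d", "a"]))
```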
Commerce backends couldn't coordinate intelligently — every integration was hard-coded. Built protocol-first multi-agent system with A2A coordination, LLM reasoning, and human-in-the-loop safeguards.
Trade-off: Protocol-first design adds up-front complexity and LLM latency — in exchange for near-zero cost for each future integration and human-in-the-loop reliability in production.
Every new AI agent was built from scratch — inconsistent, slow to ship, not reusable. Created scalable framework with pre-built tools, memory, and modular plug-and-play components.
Trade-off: AWS Bedrock vendor lock-in limits portability — but delivers managed scaling, built-in security, and zero infrastructure management.
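The plug-and-play idea above usually comes down to a shared registry that agents pull pre-built tools from by name. A minimal sketch — the registry API and tool are hypothetical, not the framework's actual interface:

```python
class ToolRegistry:
    """Registry so agents share pre-built tools by name instead of re-implementing them."""

    def __init__(self):
        self._tools = {}

    def register(self, name):
        # Decorator: registers the function under `name` and returns it unchanged.
        def wrap(fn):
            self._tools[name] = fn
            return fn
        return wrap

    def get(self, name):
        return self._tools[name]

registry = ToolRegistry()

@registry.register("lookup_order")
def lookup_order(order_id):
    return {"order_id": order_id, "status": "shipped"}

print(registry.get("lookup_order")("A-123"))
```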
Standard RAG loses relational context between documents. Combined vector search with graph traversal on Neo4j for multi-hop reasoning and knowledge discovery.
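In production the traversal runs as a graph query against Neo4j; the retrieve-then-expand step can be sketched in memory with a plain adjacency map (doc IDs and graph hypothetical):

```python
def expand_with_graph(seed_ids, graph, hops=2):
    """Expand vector-search hits via graph neighbors for multi-hop context."""
    frontier, seen = set(seed_ids), set(seed_ids)
    for _ in range(hops):
        # Follow edges from the current frontier, skipping already-seen nodes.
        frontier = {n for node in frontier for n in graph.get(node, [])} - seen
        seen |= frontier
    return seen

# "doc1" is the vector hit; two hops pull in the related "doc2" and "doc3".
graph = {"doc1": ["doc2"], "doc2": ["doc3"], "doc3": []}
print(sorted(expand_with_graph(["doc1"], graph)))
```

The vector index finds what is semantically similar; the hop expansion recovers the relational context that similarity alone misses.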
Manual invoice processing was slow and error-prone across 10K+ monthly documents. Built embedding-based search with automated classification and extraction.
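Embedding-based classification of documents like these can be as simple as nearest-centroid matching: embed the document, compare against per-class centroid vectors. A minimal sketch with toy 2-D vectors (labels and values hypothetical):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def classify(doc_vec, centroids):
    """Assign a document embedding to the class with the nearest centroid."""
    return max(centroids, key=lambda label: cosine(doc_vec, centroids[label]))

centroids = {"invoice": [0.9, 0.1], "receipt": [0.1, 0.9]}
print(classify([0.8, 0.2], centroids))  # -> invoice
```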
E-commerce needs natural language interfaces. Built voice-first AI agent with real-time STT/TTS, LLM reasoning, and agent orchestration for seamless shopping.
Dermatology screening needs scalable AI assistance. Built medical AI with MedGemma on Google Cloud for production screening — Kaggle competition entry.
Production agents need visual workflow orchestration. Combined AWS Bedrock with LangGraph Studio for enterprise-grade agent deployment and management.
Production-tested across the full AI/ML stack — from model selection to deployment at scale.
From software engineer to AI/ML product lead — building systems that drive real business outcomes.
Creator and maintainer of open-source AI tools used by developers.
Organized by category — agentic AI, RAG pipelines, cloud, ML, voice, and more.
Technical content, conference talks, and hands-on workshops — sharing production AI knowledge.
Open to AI/ML Engineering, Product Management, and Leadership roles at companies pushing the frontier.