About
I am a Principal Scientist at Snowflake, working on making data and AI systems reliable and trustworthy. My work focuses on integrating formal methods and rigorous software testing techniques into the development process.
Previously, I was a Principal Applied Scientist at Amazon Web Services, where I led a team that built tools for model checking, scalable fuzzing, property-based testing, and runtime monitoring of distributed systems. These tools were adopted by 80+ service teams and 600+ developers across AWS, finding critical bugs in flagship services including S3, DynamoDB, EBS, Aurora, EC2, and CloudFront. This work contributed to a cultural shift in software engineering practices at AWS, summarized in the Communications of the ACM article "Systems Correctness Practices at AWS."
I hold a PhD in Computer Science from UC Berkeley (2019), where I was co-advised by Prof. Sanjit Seshia and Dr. Shaz Qadeer. My PhD work created the P programming framework for reasoning about the correctness of distributed systems, which is used across industry (AWS, Microsoft, DeepSeek) and academia.
Before graduate school, I spent 2+ years at Microsoft Research India, working on scalable model checking for concurrent systems. During my time at IIT Kanpur, I led the on-board computer team for JUGNU, India's first nanosatellite, which was successfully launched on 12 October 2011.
Research Interests
My research focuses on combining techniques from programming languages, formal methods, and software engineering to help programmers build reliable software systems. I have designed tools and methodologies for building reliable systems across domains including device drivers, fault-tolerant distributed systems, databases, robotics, and cyber-physical systems.
My current interests include applying formal methods to reason about the correctness of distributed services and AI systems, ranging from lightweight automated approaches such as model checking, to systematic testing and fuzzing, to runtime monitoring in production.
Recent Highlights
Feedback-guided Adaptive Testing of Distributed Systems Designs
NSDI 2026 — Ao Li, Ankush Desai, Rohan Padhye
Specy: Learning Specifications for Distributed Systems from Event Traces
OOPSLA 2026 (to appear) — Mike He, Ankush Desai, Aishwarya Jagarapu, Doug Terry, Sharad Malik, Aarti Gupta
Trace-Guided Synthesis of Effectful Test Generators
arXiv 2026 — Zhe Zhou, Ankush Desai, Benjamin Delaware, Suresh Jagannathan
Ranking Formal Specifications using LLMs
LMPL 2025 — Mike He, Zhendong Ang, Ankush Desai, Aarti Gupta
Checking Observational Correctness of Database Systems
OOPSLA 2025 — Lauren Pick, Amanda Xu, Ankush Desai, Sanjit A. Seshia, Aws Albarghouthi
Systems Correctness Practices at Amazon Web Services
Communications of the ACM, June 2025 — Marc Brooker, Ankush Desai