Maitreya Patel
Ph.D. Student, School of Computing & AI, Arizona State University.

I am a senior Ph.D. student at Arizona State University (ASU), working with Yezhou Yang and Chitta Baral. I closely collaborate with Tejas Gokhale and Changhoon Kim.
My research focuses on the theoretical foundations of visual generative models
and their applications in conditional sampling, including image/video editing, inverse problems, and personalization. I am also interested in representation learning, large-scale multimodal foundation models, and inference-time steering
to enhance the controllability and reliability of generative models. I believe true World Models must be generalizable, efficient, controllable, responsible, and grounded in physical laws.
Alongside my research, I am writing The Stochastic Journey, a blog series that explores the mathematical foundations of generative models, tracing their roots to stochastic calculus, probability theory, and differential equations.
I am always looking for self-motivated students who want to focus on either fundamental problems or the responsibility aspects of Generative AI. If you have prior experience in related fields and are interested, feel free to reach out.
News
Jan 22, 2025 | Voilà has been accepted at ICLR’25.
Nov 29, 2024 | 🚀 Releasing FlowChef, “Steering Rectified Flow Models in the Vector Field for Controlled Image Generation,” for training-, inversion-, and gradient-free controlled image generation.
Oct 31, 2024 | λ-ECLIPSE, the resource-efficient multi-subject text-to-image model, has been accepted at TMLR.
Sep 20, 2024 | One paper (lead author) accepted at NeurIPS (main conference).
Sep 20, 2024 | One paper accepted at EMNLP (Findings).
Selected Publications
- ECLIPSE: A Resource-Efficient Text-to-Image Prior for Image Generations
  In CVPR, 2024