Welcome
Welcome to my personal website! Here you'll find information about me, my research, and my cats.

Patrick Pynadath

PhD Student @ Purdue University
Based in West Lafayette, IN

About Me

Hello! I'm Patrick Pynadath, a passionate PhD student with a strong interest in probabilistic machine learning and discrete generative modeling. I am currently pursuing my PhD at Purdue University under the guidance of Prof. Ruqi Zhang.

In the past, I have worked on fundamental statistical methods, applications of sampling techniques to LLMs, and LLM safety. Currently, I am very excited about discrete diffusion.

Feel free to reach out if you'd like to connect or discuss potential collaborations!

My Cats

Missy
Boba

Meet my two research assistants. Missy specializes in Supervised Keyboard Tuning (SKT), which she demonstrates whenever I am coding, while Boba is our resident expert in discrete nap optimization. They are a crucial component of every research project.

Research Projects

Preprint

🍭CANDI: Hybrid Discrete-Continuous Diffusion Models

Authors: Patrick Pynadath, Jiaxin Shi, Ruqi Zhang

We figure out why continuous diffusion has struggled on discrete data, and introduce CANDI, a principled solution.

Overview

Continuous diffusion does extremely well on images, but struggles on discrete data. In theory, learning a continuous score function should enable coordinated refinement and easy external guidance. In practice, however, recent work has shifted towards purely discrete diffusion methods, which lack the continuous geometry of Gaussian diffusion yet tend to outperform continuous approaches on discrete data.

We introduce token identifiability, a framework for studying the effect of Gaussian noise on discrete data. Using this analysis, we discover a temporal dissonance between discrete identity corruption (whether an incorrect token is closest to the noisy latent) and continuous rank degradation (how many incorrect tokens are closer to the noisy latent than the correct token). Both are vital for continuous diffusion to work well on discrete data, yet they become severely misaligned as the number of categories increases.
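For a concrete feel for these two quantities, here is a minimal NumPy sketch (my own toy illustration, not code from the paper) that checks identity corruption and counts rank degradation for a single token corrupted with Gaussian noise among V category embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, sigma = 1000, 64, 0.8   # vocabulary size, embedding dimension, noise scale (all illustrative)

E = rng.standard_normal((V, d))                    # one embedding per category
correct = 7                                        # index of the true token
z = E[correct] + sigma * rng.standard_normal(d)    # noisy continuous latent

dists = np.linalg.norm(E - z, axis=1)              # distance from the latent to every embedding

# Discrete identity corruption: has some incorrect token become the nearest embedding?
identity_corrupted = int(dists.argmin()) != correct

# Continuous rank degradation: how many incorrect tokens sit closer to z than the correct one?
rank_degradation = int((dists < dists[correct]).sum())

print(identity_corrupted, rank_degradation)
```

Sweeping sigma and V in a setup like this is one way to watch how the two quantities evolve at different rates as the number of categories grows.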

Given this, we introduce CANDI: Continuous and Discrete Diffusion. We disentangle the two forms of corruption by introducing an explicit masking schedule that directly controls discrete identity corruption. Paradoxically, decoupling the two forms of corruption is what allows them to be coordinated with each other. We demonstrate that this brings the benefits of continuous diffusion to discrete spaces.
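As a rough illustration of the decoupling (my own sketch under simplifying assumptions, not the exact CANDI forward process), imagine a hybrid corruption in which an explicit mask rate sets how many tokens lose their discrete identity while the Gaussian noise scale is chosen independently:

```python
import numpy as np

rng = np.random.default_rng(0)

def hybrid_corrupt(tokens, E, mask_rate, sigma):
    """Toy hybrid corruption: the mask rate sets discrete identity loss, sigma sets continuous noise."""
    L, d = len(tokens), E.shape[1]
    masked = rng.random(L) < mask_rate                     # explicit discrete identity corruption
    z = E[tokens] + sigma * rng.standard_normal((L, d))    # continuous perturbation of the embeddings
    z[masked] = 0.0                                        # masked positions carry no identity information
    return z, masked

E = rng.standard_normal((50, 16))                          # toy embedding table: 50 categories, 16 dims
tokens = rng.integers(0, 50, size=8)                       # a short toy sequence
z, masked = hybrid_corrupt(tokens, E, mask_rate=0.3, sigma=0.5)
```

The point of the sketch is only that the two dials can be set separately, which is what makes coordinating them possible.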

TL;DR:

  • We introduce token identifiability as a framework for understanding how Gaussian noise corrupts discrete data.
  • We discover a temporal dissonance between discrete identity corruption and continuous rank degradation, which results in continuous diffusion underperforming discrete methods.
  • We introduce CANDI: Continuous and Discrete Diffusion, which resolves the temporal dissonance and brings the benefits of continuous diffusion to discrete spaces.
NeurIPS 2025

🔓VERA: Variational Inference Framework for Jailbreaking Large Language Models

Authors: Anamika Lochab, Lu Yan, Patrick Pynadath, Xiangyu Zhang, Ruqi Zhang

We use variational inference to introduce a scalable and effective framework for jailbreaking/red-teaming LLMs.

Overview

Most powerful language models today are only accessible through APIs—you can't see their internals. This makes it crucial to develop effective "black-box" methods to test their safety vulnerabilities. Current approaches rely on genetic algorithms that need carefully curated starting prompts and must be re-run from scratch for every new test case. They can't give us a broad view of where models are actually vulnerable.


We introduce VERA: a framework that treats jailbreak generation as a probabilistic inference problem. Instead of optimizing individual prompts, we train a small attacker model to learn the distribution of adversarial prompts that work against a target model. Once trained, VERA can instantly generate diverse, natural-sounding jailbreaks without extra optimization loops.
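As a toy caricature of this framing (my own simplification; this is not VERA's objective, attacker model, or training procedure), the loop below learns a distribution over a handful of made-up prompt templates by rewarding the ones that get past a stand-in target:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins, not the paper's setup: five candidate prompt templates and a black-box
# "target" that only rewards two of them. In reality the reward would come from a judge
# scoring the target model's response.
prompts = ["p0", "p1", "p2", "p3", "p4"]
def attack_reward(i):
    return 1.0 if i in (1, 4) else 0.0

logits = np.zeros(len(prompts))            # the "attacker": a categorical distribution over prompts
for step in range(300):
    probs = np.exp(logits) / np.exp(logits).sum()
    i = rng.choice(len(prompts), p=probs)  # sample a candidate attack from the attacker
    reward = attack_reward(i)
    grad = -probs.copy()                   # score-function gradient of the sampled prompt's log-prob
    grad[i] += 1.0
    logits += 0.5 * reward * grad          # push probability mass towards prompts that succeed

print(np.round(np.exp(logits) / np.exp(logits).sum(), 2))
```

Once such a distribution is learned, drawing new attacks is a single sample rather than a fresh optimization run.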


Our experiments show that VERA consistently succeeds across different target models, demonstrating that probabilistic inference offers a principled and scalable approach to discovering model vulnerabilities.

TL;DR:

  • Black-box jailbreaking is crucial for understanding model vulnerabilities in real-world use cases.
  • Current methods rely on genetic algorithms, which need well-designed starting prompts and must be re-run for each test case.
  • We introduce VERA, which can generate diverse, natural-sounding jailbreaks without needing re-optimization.
ICLR 2025

🎛️Controlled LLM Decoding via Discrete Auto-regressive Biasing

Authors: Patrick Pynadath, Ruqi Zhang

We use gradient-based discrete sampling to enable plug-and-play control over LLM generation.

Overview

As LLMs become ubiquitous, we increasingly need ways to control their outputs—enforcing constraints like sentiment, safety, or specific keywords. Current methods use energy-based decoding, which combines multiple constraints into a weighted score. But these approaches struggle to balance fluency with actually satisfying the constraints, even with careful tuning.


We identify the core issue: these methods sample in continuous space, but text is fundamentally discrete tokens. We introduce Discrete Auto-regressive Biasing, a controlled decoding algorithm that leverages gradients while staying entirely in the discrete text domain. Our key insight is to define a joint distribution over both the generated text and an auxiliary bias sequence, then sample from it using gradient-based discrete MCMC within a Langevin-within-Gibbs framework.
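To make the Langevin-within-Gibbs structure concrete, here is a toy sketch of the alternation on a made-up joint energy (my own illustration; exact conditionals stand in for the gradient-based discrete proposals the actual algorithm uses):

```python
import numpy as np

rng = np.random.default_rng(0)
V, L = 5, 6                              # toy vocabulary size and sequence length
W = rng.standard_normal((V, V))          # toy "fluency" scores between adjacent tokens
keyword = 3                              # toy constraint: we would like token 3 to appear

def energy(y, b):
    """Toy joint energy over text y and bias b: fluency plus bias-weighted constraint satisfaction."""
    fluency = sum(W[y[i], y[i + 1]] for i in range(L - 1))
    constraint = sum(b[i] * (y[i] == keyword) for i in range(L))
    return -(fluency + 2.0 * constraint)

y = rng.integers(0, V, size=L)           # generated text sequence
b = np.zeros(L, dtype=int)               # auxiliary bias sequence

for sweep in range(50):
    # Gibbs step 1: resample each bias position given the current text (exact, since b is binary)
    for i in range(L):
        delta = energy(y, np.where(np.arange(L) == i, 1, b)) - energy(y, np.where(np.arange(L) == i, 0, b))
        b[i] = rng.random() < 1.0 / (1.0 + np.exp(delta))
    # Gibbs step 2: resample each text position given the bias; enumerating candidates here
    # stands in for the gradient-informed discrete proposals used in the real algorithm
    for i in range(L):
        cand = np.array([energy(np.where(np.arange(L) == i, v, y), b) for v in range(V)])
        probs = np.exp(-(cand - cand.min()))
        y[i] = rng.choice(V, p=probs / probs.sum())
```

The alternation is the key structural idea: the bias sequence pulls generation towards the constraint, and the text update stays entirely in discrete token space.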


The result? Significantly better constraint satisfaction with comparable or better fluency—and lower computational cost. We demonstrate these benefits across sentiment control, detoxification, and keyword-guided generation.

TL;DR:

  • Current controlled text generation methods struggle to balance fluency and constraint satisfaction because they sample in continuous space rather than discrete token space.
  • We introduce Discrete Auto-regressive Biasing (DAB), which uses gradient-based discrete MCMC to control LLM outputs while staying entirely in the discrete text domain.
  • Our method achieves better constraint satisfaction, comparable fluency, and lower computational cost across sentiment control, detoxification, and keyword generation tasks.
NeurIPS 2024

🚲Gradient-based Discrete Sampling with Automatic Cyclical Scheduling

Authors: Patrick Pynadath, Riddhiman Bhattacharya, Arun Hariharan, Ruqi Zhang

We enable discrete gradient-based sampling methods to handle multimodal distributions by introducing automatically tuned cyclical schedules.

Overview

Discrete distributions in deep models are highly multimodal—full of peaks and valleys due to inherent discontinuities. Gradient-based samplers work well for exploring these distributions, but they have a critical flaw: they get trapped in local modes, missing the broader landscape.


We introduce automatic cyclical scheduling for efficient multimodal sampling. The key idea is simple: large steps discover new modes, small steps exploit each mode thoroughly. We combine this with a cyclical balancing schedule that ensures efficient proposals and an automatic tuning scheme that adapts to different datasets without manual hyperparameter tweaking.
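A minimal sketch of the cyclical idea, purely illustrative (the cosine shape and the constants are my assumptions; the paper's automatic tuning picks the actual schedule):

```python
import numpy as np

def cyclical_step_size(t, total_steps, n_cycles, max_step, min_step):
    """Step size that restarts large at the start of each cycle and decays within the cycle."""
    cycle_len = total_steps // n_cycles
    pos = (t % cycle_len) / cycle_len               # position within the current cycle, in [0, 1)
    return min_step + 0.5 * (max_step - min_step) * (1 + np.cos(np.pi * pos))

# Large steps early in a cycle encourage jumps to new modes; small steps late in a cycle
# let the sampler exploit the mode it has found.
steps = [cyclical_step_size(t, total_steps=1000, n_cycles=5, max_step=1.0, min_step=0.05)
         for t in range(1000)]
```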


We provide non-asymptotic convergence guarantees and show through extensive experiments that our method significantly outperforms existing approaches at sampling complex multimodal discrete distributions.

TL;DR:

  • Discrete distributions in deep models are highly multimodal, and gradient-based samplers often get stuck in local modes instead of exploring the full distribution.
  • We propose automatic cyclical scheduling with three components: large steps to discover new modes and small steps to exploit them, balanced proposals for efficient sampling, and automatic hyperparameter tuning that adapts across datasets.
  • We prove non-asymptotic convergence guarantees and demonstrate superior performance in sampling complex multimodal discrete distributions with minimal manual tuning.

About Me

Background, education, and how to connect

🎓Education

My academic background

PhD in Computer Science

Purdue University • Ongoing

MS in Computer Science

Northwestern University • December 2022

BA in Mathematics

Northwestern University • June 2022

📬Get In Touch

I'm always interested in connecting with like-minded people and exploring new opportunities.

Whether you're looking to collaborate on a project, have a question about my work, or just want to say hello, I'd love to hear from you!

Let's Connect