RL researcher · AI safety engineer · systems builder
I study how reward signals shape agent behavior, and what happens when that behavior meets the real world. I've published empirical RL research and built production AI systems serving hundreds of millions of users.
I'm a published reinforcement learning researcher with hands-on experience in AI safety, production LLM systems, and autonomous robot certification. I hold an accelerated MS in Computer Science from the University of Texas at Arlington, where I also served as a teaching assistant.
I'm a first-generation woman in tech. Every door I've walked through, I built the key myself.
I've watched people — especially younger generations — hand themselves over to AI willingly, because it's easy and feels like connection. That vulnerability is real. The only responsible answer isn't less AI. It's better AI — AI that deserves the trust people are already giving it.
Two cited, IEEE-published papers on autonomous robot navigation. Empirical work: designed environments, built reward structures, ran experiments, published results.
Designed and trained a DQN agent for autonomous navigation in dynamic environments. Built the simulation from scratch, engineered the reward function, ran ablation studies, and analyzed failure modes.
Key insight: a reward signal that works perfectly in one environment produces unexpected behavior when conditions shift. That's the alignment problem at small scale, and it shapes how I think about AI safety broadly.
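To make that concrete, here's a toy version of the kind of shaped navigation reward I'm describing. The weights, radii, and the freeze failure mode are illustrative assumptions, not the reward function from the published work.

```python
import numpy as np

def navigation_reward(pos, prev_pos, goal, obstacles, collision_radius=0.5):
    """Toy shaped reward: progress toward the goal, penalties near obstacles."""
    progress = np.linalg.norm(prev_pos - goal) - np.linalg.norm(pos - goal)
    reward = 10.0 * progress                    # reward movement toward the goal
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if d < collision_radius:
            reward -= 100.0                     # hard collision penalty
        elif d < 2 * collision_radius:
            reward -= 1.0 / d                   # soft proximity penalty
    return reward

# With static obstacles this trains fine. Make the obstacles move and the
# proximity term can dominate the progress term, so the agent learns to
# freeze in open space: the small-scale alignment failure described above.
```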
Used NVIDIA Sionna to model real-world RF propagation environments as the physics layer for an RL agent optimizing robot navigation paths.
Key insight: physics-informed simulation narrows the sim-to-real gap; safer real-world deployment starts with faithful environment modeling.
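For flavor, here's a minimal sketch of how that coupling can look, using the Sionna 0.x ray-tracing API from NVIDIA's tutorials (the 1.x releases moved ray tracing into the separate sionna-rt package with a different solver interface). The scene, positions, and reward weighting are placeholders, not my actual setup.

```python
import numpy as np
import sionna
from sionna.rt import load_scene, Transmitter, Receiver, PlanarArray

scene = load_scene(sionna.rt.scene.munich)      # built-in example scene
scene.tx_array = PlanarArray(num_rows=1, num_cols=1,
                             vertical_spacing=0.5, horizontal_spacing=0.5,
                             pattern="iso", polarization="V")
scene.rx_array = scene.tx_array
scene.add(Transmitter(name="tx", position=[8.5, 21.0, 27.0],
                      orientation=[0.0, 0.0, 0.0]))

def rf_reward(robot_position):
    """Ray-traced path gain at the robot's position, folded into the RL reward."""
    scene.add(Receiver(name="rx", position=robot_position,
                       orientation=[0.0, 0.0, 0.0]))
    paths = scene.compute_paths()                 # full propagation simulation
    a, _ = paths.cir()                            # complex channel coefficients
    gain = float(np.sum(np.abs(a.numpy()) ** 2))  # received-power proxy
    scene.remove("rx")
    return 10.0 * np.log10(gain + 1e-12)          # dB scale, placeholder weight
```

The design point: the agent's reward comes from simulated physics rather than a hand-tuned proxy, so behavior learned in simulation has a better chance of surviving contact with real hardware.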
AI systems in production. Safety-critical. At scale.
Scale: hundreds of millions of active users
Built an autonomous incident response agent from scratch using LLMs, RAG, and FAISS. When an incident triggers, the agent retrieves runbooks, analyzes system state, proposes remediation steps, and generates post-mortem timelines. Production AI with real consequences.
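A minimal sketch of the retrieval core, with loud assumptions: embed() is a stand-in stub for whatever embedding model the agent actually uses, and the runbook snippets are invented.

```python
import numpy as np
import faiss

def embed(texts):
    """Stub: replace with a real sentence-embedding model."""
    rng = np.random.default_rng(abs(hash(tuple(texts))) % 2**32)
    return rng.standard_normal((len(texts), 384)).astype("float32")

runbooks = ["restart the stuck consumer group",
            "roll back the last deploy",
            "drain the overloaded shard"]

index = faiss.IndexFlatIP(384)          # exact inner-product index
vecs = embed(runbooks)
faiss.normalize_L2(vecs)                # unit vectors, so IP = cosine similarity
index.add(vecs)

def retrieve(incident_description, k=2):
    """Return the top-k runbooks to feed the LLM as context."""
    q = embed([incident_description])
    faiss.normalize_L2(q)
    _, ids = index.search(q, k)
    return [runbooks[i] for i in ids[0]]
```

Everything downstream, from state analysis to remediation proposals to post-mortem timelines, is the LLM working over what this step retrieves.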
Built ML models for AI safety certification of autonomous robots — determining whether a system is safe enough to operate near people. Defined what "safe" means, built models to measure it, created certification standards. Applied AI safety work, not theoretical.
Built user-facing features on a social connection platform. That work taught me how software shapes human behavior from the inside: the feedback loops between design choices and user psychology. People form connections through software, not with it, and that informs how I think about AI systems that millions interact with daily.
I'm actively exploring research opportunities at the intersection of reinforcement learning and AI safety. If you're thinking about the same problems, I'd love to connect.