Research
I have been working as a research assistant advised by Mahdi Soltanolkotabi at the AI Foundations for the Sciences Center at USC since 2018. The lab focuses on the intersection of the theoretical foundations of learning and efficient deep learning algorithms for scientific and medical imaging applications. We have published papers at top-tier machine learning conferences such as ICML and NeurIPS, as well as IEEE conferences such as EUSIPCO.
I had the opportunity to work on a diverse set of projects during my PhD.
- My primary focus is tackling the challenges of adapting AI algorithms to computer vision problems in medical and computational imaging. I address issues such as training with scarce data (scientific data is costly!), robustness to distribution shifts, and the compute efficiency of deep learning, all key to enabling wider adoption of AI in scientific applications. We have proposed techniques for fast and accurate nano-scale imaging, MRI reconstruction with limited training data, and other state-of-the-art medical image reconstruction methods. My current research interests are (1) diffusion models for image reconstruction, with a focus on flexibility and efficiency, and (2) exploring the potential of vision-language foundation models for zero-shot recognition and for various multimodal medical applications, such as medical report generation.
- One of our lab’s main focuses is gaining a better fundamental understanding of deep learning. I had the chance to work on transfer learning generalization bounds based on a notion of semantic distance, and on a Jacobian-based theory of neural network generalization.
- I also worked on DARPA’s FastNICs project under the supervision of Salman Avestimehr, Mahdi Soltanolkotabi, and Murali Annavaram. The goal of this project is to rethink distributed training of machine learning models in the extreme-bandwidth regime, where communication cost is no longer the bottleneck. We developed novel second-order distributed optimization techniques that utilize the extra bandwidth to achieve faster convergence than traditional first-order methods.
Prior to joining USC, I was a research assistant under the supervision of Se Young Yoon at the University of New Hampshire, working on intelligent robotic swarm control algorithms. We published papers on adjustable swarm autonomy and the control of balanced leader-follower swarms at CDC, the top controls conference.
Industry research
Microsoft - Research Intern
I spent the summer of 2023 at the AI for Good Lab, working on leveraging multimodal foundation models to advance wildlife conservation efforts under the mentorship of Zhongqi Miao. We developed a novel pipeline for robust recognition of animal species in the Amazon rainforest, and we continue to collaborate on exploring vision-language models for camera trap imagery.
Amazon - Applied Scientist Intern
I interned at Alexa Perceptual Technologies over the summer of 2022 under the mentorship of Rajath Kumar and Joe Wang. I designed and implemented novel data augmentation techniques for speech data and evaluated semi-supervised learning techniques for training wake word verification models.
Professional service
I have served as a reviewer for the following machine learning, signal processing, and controls conferences:
- Neural Information Processing Systems (NeurIPS) 2020, 2021, 2022, 2023
- International Conference on Machine Learning (ICML) 2021, 2022 (Outstanding Reviewer)
- International Conference on Learning Representations (ICLR) 2020, 2021, 2022, 2023, 2024
- International Conference on Computer Vision (ICCV) 2023
- Sampling Theory and Applications (SampTA) 2019
- IEEE Conference on Decision and Control (CDC) 2016, 2017
Awards and achievements
- Ming Hsieh Institute PhD Scholar, 2021-2022
- Annenberg PhD Fellow, 2017-2020
- Undergraduate Academic Scholarship, 2010-2014