I am a graduate student in Computer Science at Brown University, interested in Artificial Intelligence and Applied Mathematics. I obtained my undergraduate degree from BITS Pilani, India, where I studied Computer Science and Data Science.
My current interests (in no particular order) lie in Optimization, Machine Learning, Computer Vision, and Algorithmic Game Theory, with a focus on building more generalizable, robust, and efficient learning algorithms.
Currently, I am a researcher in the Serre Lab at Brown University, where I work on Self-Supervised Learning and Mental Simulation.
Previously, I was an Analyst at Standard Chartered and spent a summer at IBM Research.
As an undergraduate at BITS Pilani, I was affiliated with APPCAIR, where I worked in collaboration with TCS Research.
Self-SLAM: A Self-Supervised Learning-Based Annotation Method to Reduce Labelling Overhead. Accepted at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2024). Supervisors: Snehanshu Saha, Surjya Ghosh. [ Paper ]
SSLAM is a self-supervised deep learning framework designed to generate labels while minimizing the overhead associated with tabular data annotation. SSLAM learns valuable representations from unlabeled data, which are then applied to the downstream task of label generation, by utilizing two pretext tasks with a novel log-cosh loss function.
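For context, here is a minimal sketch of the standard log-cosh regression loss; the novel variant used in the paper may differ from this textbook form, so treat it purely as an illustration.

```python
import math
import torch
import torch.nn.functional as F

def log_cosh_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Standard log-cosh loss: mean of log(cosh(pred - target)).

    Uses the identity log(cosh(x)) = x + softplus(-2x) - log(2)
    to stay numerically stable for large residuals.
    """
    diff = pred - target
    return torch.mean(diff + F.softplus(-2.0 * diff) - math.log(2.0))
```

Compared to mean squared error, log-cosh behaves quadratically near zero but grows roughly linearly for large residuals, making it less sensitive to outliers.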
Tackling Drift in Neural Responses in the Spinomotor Pathway. Bachelor's Thesis. Supervisor: Dr. Thomas Serre. [ Thesis ]
We aim to implement a robust and efficient machine learning-based solution for restoring motor function in patients with spinal cord injury using Epidural Electrical Stimulation. This thesis examines the problem of neural drift and the possibility of using meta-learning to build an adaptive algorithm that compensates for it.
DetAIL: A Tool to Automatically Detect and Analyze Drift in Language. Accepted at the Annual Conference on Innovative Applications of Artificial Intelligence, co-located with AAAI '23. Supervisors: Nishtha Madaan, Harivansh Kumar. [ Paper ]
We propose to measure the data drift that occurs as new data arrives, so that models can be adaptively re-trained whenever re-training is required. In addition, we generate sentence- and dataset-level explanations to capture why a given payload text has drifted.
The Abstraction and Reasoning Corpus (ARC) was introduced by François Chollet as a benchmark for measuring AI skill acquisition on unknown tasks, with the constraint that only a handful of demonstrations are available to learn a complex task. We show how modern neural networks fail on ARC and test the idea of using meta-learning to learn priors useful for solving it.
This work explores the contexts associated with errors in the automatic transcription of spontaneous speech and their effects on current state-of-the-art methods for dementia detection. We also attempt to build a purely acoustic solution for dementia detection based on Vision Transformers.
We develop a model mimicking the visual hierarchy observed in animals to decode fMRI data for image classification. We show that hierarchical processing of fMRI data from different parts of the animal visual system aids classification.
Out-of-Distribution Detection for Skin Lesion Classification [ Code ]
In this work, we study the challenges posed by OOD data in skin cancer classification. We compare the performance of Virtual Outlier Synthesis (VOS), a recent work published at ICLR 2022, with other state-of-the-art OOD detection methods. We further show how using a different inference method helps achieve better results than VOS, and we study the effects of different types of OOD data on our method.