Hi, I'm Gabe Sarch
I'm a first-year Ph.D. student at Carnegie Mellon University in the joint program between Neural Computation and Machine Learning under the supervision of Dr. Mike Tarr and Dr. Katerina Fragkiadaki. My work is supported by the National Science Foundation Graduate Research Fellowship. Previously, I received a B.S. in Biomedical Engineering from the University of Rochester, where I studied the marmoset visual system under Dr. Jude Mitchell.
Self Supervision and Active Interaction in Computer Vision
Animals use prediction and active interaction to make sense of sensory inputs without large amounts of explicit labels or instructions. However, most state-of-the-art computer vision systems require millions of human annotations and cannot generalize previously learned knowledge to novel inputs or tasks. Much of my research focuses on developing computer vision systems that can improve their understanding of the world in a self-supervised or unsupervised way, drawing inspiration from the psychological and neuroscientific literature.
Using AI to understand the human brain
Computer vision architectures, such as convolutional neural networks, have been shown to correlate strikingly well with measures of brain activity, such as fMRI and electrophysiology. By modeling human representations and behaviors with AI systems optimized for different tasks and inputs, we can better understand how humans process and interpret their environment.
Publications and Preprints
Sarch, G., Fang, Z., Jain, A., Harley, A. W., & Fragkiadaki, K. (2020). Move to See Better: Towards Self-Supervised Amodal Object Detection. arXiv preprint arXiv:2012.00057.
See my full list of publications here