Optical interrogation and manipulation have been crucial tools for advancing systems neuroscience, but they are inherently limited by the difficulty of optical access to deep brain regions. My dissertation introduces a technique employing hundreds or thousands of microfibers that splay through the tissue to optically access deep brain regions while preserving local connectivity and minimizing tissue response.
I study brains, specifically optical ways to record and manipulate neural activity. My partner and I advocate for a more compassionate society by helping count the voices of protesters. Below is a sampling of projects, writing and code from the last several years.
Consensus contours eliminate structurally unstable components in the spectral representation of audio by finding consensus across timescales. The algorithm was originally described by Yoonseob Lim, Barbara Shinn-Cunningham and Tim Gardner. This code relies on vectorized instructions and a minimal memory footprint to achieve the first implementation capable of real-time audio processing.
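As a rough illustration of the idea (not the published algorithm or this implementation), the sketch below computes magnitude spectrograms at several window lengths with plain NumPy, marks local spectral peaks at each timescale, and keeps only the time-frequency bins where every timescale agrees. All function names, window lengths, and parameters here are my own choices for the sketch.

```python
import numpy as np

def stft_mag(x, n):
    # Magnitude STFT with a Hann window and 75% overlap (hop = n // 4)
    frames = np.lib.stride_tricks.sliding_window_view(x, n)[::n // 4]
    return np.abs(np.fft.rfft(frames * np.hanning(n), axis=1)).T

def consensus_mask(audio, wins=(256, 512, 1024), min_votes=3):
    # Vote for time-frequency bins that are spectral peaks at every timescale
    votes = None
    for n in wins:
        mag = stft_mag(audio, n)
        # local maxima along the frequency axis (np.roll wraps at the
        # edges, which is acceptable for a sketch)
        peaks = (mag >= np.roll(mag, 1, axis=0)) & (mag >= np.roll(mag, -1, axis=0))
        if votes is None:
            ref = peaks.shape
            votes = peaks.astype(int)
        else:
            # nearest-neighbor resample onto the reference time-frequency grid
            fi = np.linspace(0, peaks.shape[0] - 1, ref[0]).astype(int)
            ti = np.linspace(0, peaks.shape[1] - 1, ref[1]).astype(int)
            votes += peaks[np.ix_(fi, ti)]
    return votes >= min_votes
```

A stable component, such as a steady tone, produces peaks at the same bin across all timescales and survives the vote; unstable structure appears at only some timescales and is discarded.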
As engineers and scientists with a keen interest in civic responsibility and public policy, Tommy and I set out to find the unusual and the extraordinary in the last six years of Hubway data. Please join us on our adventure through five million Hubway trips as we share what we've asked, what we've learned, and what we've built.
A record of demonstrations from across the country. These are the times when, together, we spoke out to stand for love, respect, and the earth. We collect data by crawling close to 3,000 local news sources for articles that reference protests, marches or rallies. Using neural networks trained on previously annotated articles, we apply natural language processing techniques to identify relevant articles and extract key information. Finally, articles are manually reviewed and published nightly.
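The relevance-classification step can be illustrated with a much simpler stand-in than the neural networks described above: a bag-of-words logistic regression in plain NumPy. Everything here — the vocabulary, helper names, and toy articles — is hypothetical, intended only to show the shape of such a classifier.

```python
import numpy as np

def bow(texts, vocab):
    # Bag-of-words counts over a small fixed vocabulary
    return np.array([[t.lower().split().count(w) for w in vocab] for t in texts], float)

def train_classifier(X, y, lr=0.5, steps=500):
    # Logistic regression via plain gradient descent
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted relevance probability
        g = p - y                             # gradient of the log loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def is_relevant(text, vocab, w, b):
    p = 1 / (1 + np.exp(-(bow([text], vocab)[0] @ w + b)))
    return p > 0.5
```

A real pipeline would use a learned vocabulary and richer features, but the flow — featurize, train on labeled examples, score new articles — is the same.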
A Swift application that uses Core Audio and Accelerate to process real-time audio spectrograms through neural networks, detecting specific “syllables” with low latency (2.8 ms) and low CPU load (15%).
In order to identify near-duplicate videos, this project uses a Caffe neural network to generate descriptive feature vectors for video frames. These feature vectors provide a unique signature for videos that can be quickly queried to find similar or near-duplicate videos, even after applying various distortions, including resizing, rotating and cropping the original video. The code is available in the video-similarity repository.
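The querying step can be sketched in a few lines: average the per-frame feature vectors into one unit-length signature, then compare signatures by cosine similarity, which for unit vectors is just a dot product. This is a simplified stand-in, assuming the frame features come from a network like the one above; the function names and threshold are my own.

```python
import numpy as np

def video_signature(frame_features):
    # Average per-frame feature vectors into one unit-length signature
    sig = np.asarray(frame_features, float).mean(axis=0)
    return sig / np.linalg.norm(sig)

def near_duplicates(query, library, threshold=0.95):
    # Cosine similarity between unit signatures reduces to a dot product
    return [name for name, sig in library.items() if float(query @ sig) >= threshold]
```

Mild distortions such as resizing or cropping perturb the frame features only slightly, so a distorted copy keeps a near-1 similarity to the original while unrelated videos score near 0.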
Video Capture is a Mac application that interfaces with the existing FinchScope miniature microscope to record and process, in real time, video and audio from the optophysiology rig. Low-latency feature extraction enables potential brain-machine interface applications by triggering feedback or external actions based on recorded activity.
We introduce, describe and evaluate a novel hyperspectral anomaly detector, comparing performance and speed against implementations of existing detectors. The novel detector uses gaps in the principal component representation, with potential applications for search and rescue. The code, including MATLAB implementations of both the novel and existing hyperspectral anomaly detectors, is available on GitHub.
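As a simplified illustration — not the published detector, which exploits gaps in the principal component representation — the sketch below scores each pixel's spectrum by its reconstruction error after projection onto the scene's top principal components, so spectra that fall outside the background subspace score high. The function name and parameters are assumptions for the sketch.

```python
import numpy as np

def pca_anomaly_scores(pixels, n_components=3):
    # pixels: (num_pixels, num_bands) array of spectra
    X = pixels - pixels.mean(axis=0)
    # Principal axes of the scene act as a background model
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:n_components]
    # Reconstruction error: the part of each spectrum the background
    # subspace cannot explain
    recon = X @ basis.T @ basis
    return np.linalg.norm(X - recon, axis=1)
```

Background pixels lie close to the low-dimensional subspace and score near zero; an anomalous spectrum retains a large residual.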
A simple reusable Python implementation for performing feature selection and linear regression on a large data set. Initially used to submit an entry to the MIT Big Data Challenge predicting taxi demand. Since then, the code has been standardized and generalized.
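A minimal sketch of that two-step pattern, assuming a simple correlation-based filter for the selection step (the repository's actual method may differ); function names and the choice of `k` are my own.

```python
import numpy as np

def select_features(X, y, k):
    # Rank features by absolute Pearson correlation with the target
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    corr = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
    return np.argsort(corr)[::-1][:k]

def fit_predict(X_train, y_train, X_test, k=5):
    # Fit ordinary least squares (with an intercept) on the selected features
    idx = select_features(X_train, y_train, k)
    A = np.column_stack([X_train[:, idx], np.ones(len(X_train))])
    coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    B = np.column_stack([X_test[:, idx], np.ones(len(X_test))])
    return B @ coef
```

Filtering to the `k` most correlated features before fitting keeps the regression cheap and stable on wide data sets at the cost of missing interactions between features.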
Written as part of graduate school applications for programs in systems neuroscience. Following applications, I ultimately attended the Graduate Program for Neuroscience at Boston University.
Written as part of graduate school applications for interdisciplinary programs focusing on complexity at a societal scale.
An essay written during my Peace Corps service, reflecting on my interactions with and observations of the HIV/AIDS epidemic affecting the region and the communities where I lived.