Hi! I'm Aya Oshima,
an AI engineer based in NYC
Highlighted Work:
- Implemented tensor component analysis (TCA), a statistical modeling method, to decompose high-dimensional neuron-firing data from the paraventricular nucleus (PVN) of mice, identifying structured neural patterns underlying maternal behavior learning (see the sketch below)
- Discovered that active maternal learning strengthens neural synchrony and precise temporal firing, demonstrating that oxytocin-related PVN circuits drive adaptive plasticity in social learning
Statistical Modeling · PCA · TCA · Computational Neuroscience · Data Analysis · Dimension Reduction · MATLAB · Python
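A minimal sketch of how a TCA decomposition could look in Python, assuming a neurons × time × trials firing-rate tensor and the tensorly library; the rank and the synthetic data are illustrative, not the study's actual settings.

```python
# Minimal TCA (CP/PARAFAC) sketch on a synthetic neurons x time x trials tensor.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
data = rng.poisson(2.0, size=(50, 100, 30)).astype(float)  # neurons x time x trials (synthetic)

# Each component is a (neuron, temporal, trial) factor triple; trial factors
# can reveal learning-related changes in the population across training.
weights, factors = parafac(tl.tensor(data), rank=5, n_iter_max=200)
neuron_factors, time_factors, trial_factors = factors
print(neuron_factors.shape, time_factors.shape, trial_factors.shape)  # (50, 5) (100, 5) (30, 5)
```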


Neural-Symbolic VQA: Multi-Modal AI with NLP, Computer Vision, and Symbolic Reasoning
- Developed a Neural-Symbolic VQA system integrating computer vision (CNN), NLP, and symbolic reasoning, trained and evaluated on the Sort-of-CLEVR dataset of 10k images and 200k questions (see the pipeline sketch below).
- Achieved 88% accuracy on relational questions and 99% on non-relational questions.
Multi-Modal · NLP · Computer Vision · CNN · PyTorch · Symbolic Reasoning · Python
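A minimal, hypothetical sketch of the neural-symbolic idea: in the real system a CNN parses Sort-of-CLEVR images into a structured scene and an NLP module parses the question, while a symbolic program answers over the scene. Here the scene and the relational query are hard-coded for illustration; `Obj` and `nearest` are invented names.

```python
# Hypothetical symbolic-reasoning stage of a neural-symbolic VQA pipeline.
from dataclasses import dataclass

@dataclass
class Obj:
    color: str
    shape: str
    x: float
    y: float

def nearest(scene: list[Obj], target: Obj) -> Obj:
    """Symbolic relational step: find the object closest to `target`."""
    others = [o for o in scene if o is not target]
    return min(others, key=lambda o: (o.x - target.x) ** 2 + (o.y - target.y) ** 2)

# In the full system this scene would come from a CNN over a Sort-of-CLEVR image.
scene = [Obj("red", "square", 0.1, 0.2), Obj("blue", "circle", 0.8, 0.9),
         Obj("green", "circle", 0.2, 0.3)]
red = next(o for o in scene if o.color == "red")
print(nearest(scene, red).shape)  # answers "what shape is nearest the red object?" -> circle
```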
Object Tracking Under Occlusions: CNN Detection with Kalman Prediction
- Leveraged a pre-trained CNN (ResNet) together with the MS COCO dataset to detect and classify the ball, and integrated the detections with a Kalman filter for motion prediction, enabling accurate tracking of occluded ball trajectories (see the Kalman-filter sketch below)
Mathematical Modeling · Kalman Filter · Object Tracking · CNN · ResNet · Python
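A minimal sketch of the motion-prediction piece, assuming a constant-velocity state model in plain NumPy; the matrices and noise values are illustrative, not the project's tuned parameters.

```python
# Constant-velocity Kalman filter: predict through occlusions, update on detections.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],   # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # only position is observed (from the CNN detector)
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-2           # process noise
R = np.eye(2) * 1.0            # measurement noise
x = np.zeros(4)                # initial state
P = np.eye(4)                  # initial covariance

def step(z=None):
    """One predict/update cycle; pass z=None when the detector loses the ball."""
    global x, P
    x = F @ x                              # predict
    P = F @ P @ F.T + Q
    if z is not None:                      # update only when a detection exists
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
    return x[:2]                           # estimated position

# e.g. step(np.array([1.0, 2.0])) after a detection, step(None) during occlusion
```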
Experience
AI Neuroscience Researcher
Kiani Lab, NYU Center for Neural Science
- Developed DeepDream-based image generation algorithms using CNNs (ResNet/VGG-16) to create structured visual stimuli for primate decision-making experiments, in which monkeys learn decision-making through reinforcement learning and reward-driven adaptation (see the sketch below)
- Contributed 5K+ eye-tracking data points as a subject in human decision-making research
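A minimal sketch of DeepDream-style gradient ascent with a pre-trained torchvision VGG-16; the layer index, step size, and iteration count are assumptions for illustration, not the lab's actual pipeline.

```python
# DeepDream-style sketch: maximize a chosen layer's activations via gradient ascent.
import torch
import torchvision.models as models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
layer = 20  # index into vgg16.features; chosen arbitrarily for illustration

for _ in range(50):
    act = img
    for i, m in enumerate(model):
        act = m(act)
        if i == layer:
            break
    loss = act.norm()            # amplify whatever this layer responds to
    loss.backward()
    with torch.no_grad():
        img += 0.05 * img.grad / (img.grad.norm() + 1e-8)  # normalized ascent step
        img.grad.zero_()
```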
AI Robotics Engineer
Nihon Business Data Processing Center
- Developed and deployed a voice-to-motion robotics system showcased to 10k+ exhibition visitors, leading AI development as the sole AI engineer on a team of 5.
- Integrated voice recognition (Julius/SRILM) and speech synthesis (JACK/OpenJTalk) into the robot, enabling seamless human-robot interaction.
- Researched NLP for a future voice-chatbot product, fine-tuning pre-trained models (BERT/GPT-2/Blender), developing original small models (LSTM), and preprocessing a messy 400 KB corpus with MeCab/NLTK (see the preprocessing sketch below).
- Played a project-management role, leading cross-functional collaboration, timelines, and deployment strategies.
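A minimal sketch of the Japanese-text preprocessing step with MeCab; the sample sentence, cleanup rule, and wakati-mode tokenization are illustrative assumptions, not the product's actual pipeline.

```python
# Japanese corpus cleanup and tokenization sketch with MeCab.
import re
import MeCab

tagger = MeCab.Tagger("-Owakati")  # wakati mode: space-separated surface tokens

def preprocess(text: str) -> list[str]:
    text = re.sub(r"\s+", " ", text).strip()   # collapse messy whitespace
    return tagger.parse(text).split()          # tokenize into surface forms

print(preprocess("今日はいい天気ですね。"))
# e.g. ['今日', 'は', 'いい', '天気', 'です', 'ね', '。'] (exact split depends on dictionary)
```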