Career Profile

I am an AI resident at Facebook Artificial Intelligence Research (FAIR), working on the Open Catalyst Project. I received my master’s degree from the Georgia Institute of Technology with a specialization in Machine Learning.


AI Resident

Aug 2021 - Present
Facebook Artificial Intelligence Research (FAIR)

Working on the Open Catalyst Project, which aims to use AI to discover new catalysts for renewable energy storage and help address climate change. Duties include (i) exploring efficient machine learning models for this task and (ii) scaling up the training of graph neural networks (e.g., DimeNet) across many nodes.

Software Development and Machine Learning Engineer

Nov 2019 - Aug 2021
Inovar Health, LLC

Built a mobile application that matches users with similar activity interests within an organization. Developed the matchmaking algorithm, which uses data science and machine learning to optimize the matches made. Customers include Georgia State University, Mercer University, and Georgia Power.

Graduate Teaching Assistant

Aug 2020 - May 2021
Georgia Institute of Technology

Assisted Dr. Merrick Furst in teaching the Automata and Complexity course to 125 students. Wrote and graded homework assignments. Held weekly office hours to help students.

Student Research Assistant

May 2019 - May 2021
High Performance Architecture Lab at Georgia Institute of Technology

Under Dr. Hyesoon Kim’s supervision, worked on the following research projects:

  • Creating a lossy delta-based compression scheme for neural network weights that reduces memory usage by up to 85% with negligible accuracy loss and decompression overhead.
  • Optimizing execution of visual SLAM on the Raspberry Pi to achieve a 5× speedup in total processing time.
  • Designing an intelligent context-aware scheduling algorithm for dynamically allocating CPU resources, achieving a 42% speedup over the Linux scheduler.
  • Exploiting sparsity of the SLAM algorithm to create an efficient, low-power SLAM implementation on the FPGA, which consumes 2.5× less power and is 7.4× faster than the state-of-the-art.
  • Creating a secure and verified location-aware communication mechanism for autonomous vehicles.

Student Research Assistant

Jan 2020 - Aug 2020
Cyber Forensics Innovation (CyFI) Lab at Georgia Institute of Technology

Under Dr. Maria Konte’s supervision, developed social media data collection pipelines for classifying malicious online behavior. Analyzed the collected data to create a preemptive cyber-attack detection framework driven by live social media data.

Under Dr. Brendan Saltaformaggio’s supervision, maintained and developed a concolic analysis tool that uses heuristics to induce (through symbolic execution) and detect malware behavior. Additionally, analyzed popular malware samples (e.g., WannaCry and Stuxnet) to create heuristics for detecting common malware behaviors.

Software Engineering Intern

May 2017 - May 2018
Ciena Corporation

Developed software to interface with network devices (e.g., Cisco Meraki) for orchestration; the software was deployed to over 150 customers worldwide, including 15 tier-1 service providers. Maintained the CI/CD pipeline for the application build process.


ProSay - ProSay is a legal service delivery platform that has been accepted into Georgia Tech's CREATE-X program. Get your legal questions answered for as little as $10!


  • Transfer Learning using Attentions across Atomic Systems with Graph Neural Networks (TAAG)
  • Adeesh Kolluru, Nima Shoghi, Muhammed Shuaibi, Siddharth Goyal, Abhishek Das, Lawrence Zitnick, Zachary W Ulissi
    J. Chem. Phys.
    Proposes an attention-based transfer learning technique that takes models originally trained on a data-rich catalyst discovery dataset (OC20) and fine-tunes them for small-molecule datasets (e.g., MD17). Demonstrates, through extensive experiments, that for these tasks the initial layers of GNNs learn general representations that hold across domains, whereas the final layers learn more task-specific features.
  • SmaQ: Smart Quantization for DNN Training by Exploiting Value Clustering (2021)
  • Nima Shoghi, Andrei Bersatti, Moinuddin Qureshi, Hyesoon Kim
    Proposes a data encoding scheme that exploits the observed normal distribution of neural network weights, gradients, gradient maps, and optimizer state to quantize inlier values to 6 bits and outlier values to 8 bits. Reduces memory usage during training by up to 6.7× with minor drops in accuracy.
  • Quantifying the Design-Space Tradeoffs in Autonomous Drones (2021)
  • Ramyad Hadidi, Bahar Asgari, Sam Jijina, Adriana Amyette, Nima Shoghi, Hyesoon Kim
    ASPLOS 2021
    Formalizes fundamental drone subsystems and finds how computations can impact this design space. Presents a design-space exploration of autonomous drone systems. Releases a fully customizable open-source drone hardware and software stack.
  • Context-Aware Task Handling in Resource-Constrained Robots with Virtualization (2021)
  • Ramyad Hadidi, Nima Shoghi, Bahar Asgari, Hyesoon Kim
    Proposes a fast context-aware task-handling technique with three components: (i) a dynamic time-sharing mechanism, coupled with (ii) event-driven task scheduling using a reactive programming paradigm to mindfully use the limited resources; and (iii) a lightweight virtualized execution environment to easily integrate functionalities and their dependencies.
  • Secure Location Aware Authentication and Communication Protocol for Autonomous Systems (2020)
  • Nima Shoghi, Ramyad Hadidi, Jaewon Lee, Jun Chen, Arthur Siqueria, Rahul Rajan, Shaan Dhawan, Pooya Shoghi, Hyesoon Kim
    Develops a secure and verified location-aware communication protocol for autonomous vehicles that uses asymmetric encryption to broadcast signed messages between vehicles, with encryption keys shared visually (e.g., via QR codes).
  • Neural Network Weight Compression with NNW-BDI (2020)
  • Nima Shoghi, Andrei Bersatti, Hyesoon Kim
    MEMSYS 2020
    Proposes a compression mechanism for neural network weights that uses techniques such as quantization, downscaling, randomized base selection, and base-delta-configuration adjustment. Reduces memory usage by up to 85% without any inference accuracy reduction.
  • PISCES: Power-Aware Implementation of SLAM by Customizing Efficient Sparse Algebra (2020)
  • Bahar Asgari, Ramyad Hadidi, Nima Shoghi
    DAC 2020
    Implements a power-efficient SLAM algorithm on the FPGA by exploiting the sparsity of SLAM algorithms. Consumes 2.5× less power and is 7.4× faster than the state-of-the-art.
  • Understanding the Software and Hardware Stacks of a General-Purpose Cognitive Drone (2020)
  • Sam Jijina, Adriana Amyette, Nima Shoghi, Ramyad Hadidi
    ISPASS 2020
    Analyzes ArduCopter's performance under different workloads and studies the effect of area-specific applications on the flight stack.
  • SLAM Performance on Embedded Robots (2019)
  • Nima Shoghi, Ramyad Hadidi, Hyesoon Kim
    ESWEEK 2019 (3rd Place Award)
    Measures and optimizes the performance of stereo-camera SLAM on the Raspberry Pi. Shows that the proposed optimizations speed up the algorithm’s runtime by about 5× with minor impact on accuracy. Awarded 3rd place at the 2019 ACM Student Research Competition at ESWEEK.