Publications
(* denotes equal contribution)
2025
RoFt-Mol: Benchmarking Robust Fine-Tuning with Molecular Graph Foundation Models
Shikun Liu, Deyu Zou, Nima Shoghi, Victor Fung, Kai Liu, and Pan Li
Introduces RoFt-Mol, a benchmark evaluating robust fine-tuning methods for molecular graph foundation models, and proposes DWiSE-FT, a method combining post-hoc weight interpolation with weight ensemble fine-tuning to improve performance across both regression and classification tasks.
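For intuition, a rough sketch of the post-hoc weight interpolation ingredient (a generic WiSE-FT-style blend, not the paper's DWiSE-FT implementation; the function name and alpha value are illustrative assumptions):

    import torch

    def interpolate_state_dicts(pretrained, finetuned, alpha=0.5):
        # Blend each parameter tensor; alpha=0 keeps the pre-trained
        # weights, alpha=1 keeps the fine-tuned weights.
        return {name: (1 - alpha) * pretrained[name] + alpha * finetuned[name]
                for name in finetuned}

    # model.load_state_dict(interpolate_state_dicts(pre_sd, ft_sd, alpha=0.7))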
MatterTune: An Integrated, User-Friendly Platform for Fine-Tuning Atomistic Foundation Models to Accelerate Materials Simulation and Discovery
Lingyu Kong, Nima Shoghi, Guoxiang Hu, Pan Li, and Victor Fung
Introduces MatterTune, a modular platform that enables fine-tuning of pre-trained atomistic foundation models for materials science applications.
2024
From Molecules to Materials: Pre-training Large Generalizable Models for Atomic Property Prediction
Nima Shoghi, Adeesh Kolluru, John Kitchin, Zachary Ulissi, C. Lawrence Zitnick, and Brandon Wood
Introduces Joint Multi-domain Pre-training (JMP), a supervised pre-training strategy that leverages diverse data to advance atomic property prediction across chemical domains, achieving state-of-the-art performance on 34 out of 40 downstream tasks.
Distribution Learning for Molecular Regression
Nima Shoghi, Pooya Shoghi, Anuroop Sriram, and Abhishek Das
Introduces Distributional Mixture of Experts (DMoE), a robust method for molecular property regression that outperforms baselines on multiple datasets and architectures.
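To make the distributional idea concrete, a minimal mixture-density regression head that predicts a distribution over the target rather than a point estimate (the layer shapes and component count are assumptions; this is not the paper's DMoE architecture):

    import torch
    import torch.nn as nn

    class MixtureDensityHead(nn.Module):
        # Maps a feature vector to a K-component Gaussian mixture
        # over a scalar target.
        def __init__(self, hidden_dim, k=4):
            super().__init__()
            self.proj = nn.Linear(hidden_dim, 3 * k)  # weights, means, log-stds

        def forward(self, h):
            logits, means, log_std = self.proj(h).chunk(3, dim=-1)
            mix = torch.distributions.Categorical(logits=logits)
            comp = torch.distributions.Normal(means, log_std.exp())
            return torch.distributions.MixtureSameFamily(mix, comp)

    # Training minimizes the negative log-likelihood:
    #   loss = -head(features).log_prob(targets).mean()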
2023
Context-Aware Task Handling in Resource-Constrained Robots with Virtualization
Ramyad Hadidi, Nima Shoghi, Bahar Asgari, and Hyesoon Kim
Develops a fast context-aware technique that enables resource-constrained robots to handle multiple tasks simultaneously with improved timeliness, demonstrating a 42% speedup in execution time compared to standard scheduling approaches.
The Open Catalyst 2022 (OC22) Dataset and Challenges for Oxide Electrocatalysts
Richard Tran, Janice Lan, Muhammed Shuaibi, Brandon M. Wood, Siddharth Goyal, Abhishek Das, Javier Heras-Domingo, Adeesh Kolluru, Ammar Rizvi, Nima Shoghi, Anuroop Sriram, Félix Therrien, Jehad Abed, Oleksandr Voznyy, Edward H. Sargent, Zachary Ulissi, and C. Lawrence Zitnick
Introduces the Open Catalyst 2022 (OC22) dataset, consisting of 62,331 DFT relaxations, to accelerate machine learning for oxide electrocatalysts and establish benchmarks for the field.
2022
Transfer Learning Using Attentions Across Atomic Systems with Graph Neural Networks (TAAG)
Adeesh Kolluru, Nima Shoghi, Muhammed Shuaibi, Siddharth Goyal, Abhishek Das, C. Lawrence Zitnick, and Zachary Ulissi
Introduces a transfer learning approach using Graph Neural Networks to generalize models across domains in molecular and catalyst discovery, reducing the need for large, domain-specific datasets.
Open Challenges in Developing Generalizable Large-Scale ML Models for Catalyst Discovery
Adeesh Kolluru, Muhammed Shuaibi, Aini Palizhati, Nima Shoghi, Abhishek Das, Brandon Wood, C. Lawrence Zitnick, John Kitchin, and Zachary Ulissi
Discusses the challenges and potential of developing generalizable machine learning models for catalyst discovery, highlighting the importance of large-scale datasets like the Open Catalyst 2020 (OC20) dataset.
2021
SmaQ: Smart Quantization for DNN Training by Exploiting Value Clustering
Nima Shoghi, Andrei Bersatti, Moinuddin Qureshi, and Hyesoon Kim
Introduces SmaQ, a quantization scheme that leverages the normal distribution of neural network data structures to efficiently quantize them, addressing the memory bottleneck in single-machine training of deep networks.
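A toy sketch of the underlying idea, quantizing a roughly normally distributed tensor against its mean and standard deviation (the 8-bit grid and clipping range are illustrative assumptions, not SmaQ's exact format):

    import torch

    def quantize_by_zscore(x, bits=8, clip=4.0):
        # Express values as z-scores, clip the tails, and round onto a
        # signed integer grid; mean/std are kept for dequantization.
        mean, std = x.mean(), x.std()
        scale = clip / (2 ** (bits - 1) - 1)
        q = torch.round(((x - mean) / std).clamp(-clip, clip) / scale)
        return q.to(torch.int8), mean, std, scale

    def dequantize(q, mean, std, scale):
        return q.float() * scale * std + mean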
Quantifying the Design-Space Tradeoffs in Autonomous Drones
Ramyad Hadidi, Bahar Asgari, Sam Jijina, Adriana Amyette, Nima Shoghi, and Hyesoon Kim
Formalizes the subsystems of autonomous drones and quantifies the complex tradeoffs in their design space to enable optimized solutions for diverse applications.
2020
Neural Network Weight Compression with NNW-BDI
Nima Shoghi, Andrei Bersatti, and Hyesoon Kim
Introduces NNW-BDI, a neural network weight compression scheme that reduces memory usage by up to 85% without sacrificing inference accuracy on an MNIST classification task.
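For flavor, a toy base-delta encoder over a block of quantized weights (the block handling and one-byte delta width are assumptions; NNW-BDI's actual format differs):

    import numpy as np

    def base_delta_encode(block):
        # Store the block as one base value plus narrow per-element
        # deltas; fall back to raw storage when deltas need full width.
        base = np.int32(block[0])
        deltas = block.astype(np.int32) - base
        if np.abs(deltas).max() < 128:  # deltas fit in one byte each
            return ("compressed", base, deltas.astype(np.int8))
        return ("raw", block)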
Pisces: Power-Aware Implementation of SLAM by Customizing Efficient Sparse Algebra
Bahar Asgari, Ramyad Hadidi, Nima Shoghi, and Hyesoon Kim
Introduces Pisces, a power-aware SLAM implementation that consumes 2.5x less power and executes 7.4x faster than the state of the art by customizing efficient sparse algebra on FPGAs.
Secure Location-Aware Authentication and Communication for Intelligent Transportation Systems
Nima Shoghi, Ramyad Hadidi, Jaewon Lee, Jun Chen, Arthur Siqueira, Rahul Rajan, Shaan Dhawan, Pooya Shoghi, and Hyesoon Kim
Introduces a scalable, infrastructure-independent, location-aware authentication protocol for intelligent transportation systems, providing trustworthy communication and efficient sender localization using visual authentication beacons.
Understanding the Software and Hardware Stacks of a General-Purpose Cognitive Drone
Sam Jijina, Adriana Amyette, Nima Shoghi, Ramyad Hadidi, and Hyesoon Kim
Conducts an in-depth analysis of the hardware and software components of autonomous drones, characterizing the performance of the ArduCopter flight stack and providing insights to optimize flight controllers and increase drone range.
2019
SLAM Performance on Embedded Robots
Nima Shoghi, Ramyad Hadidi, and Hyesoon Kim
Demonstrates the feasibility of running ORB-SLAM2 in real time on the Raspberry Pi 3B+ for embedded robots through optimizations that achieve a 5x speedup with minor impact on accuracy.