About me
This is a page not in the main menu
Nima Shoghi, Ramyad Hadidi, Hyesoon Kim
Published in Student Research Competition at Embedded System Week (SRC ESWEEK), 2019
Demonstrates that with appropriate optimizations, the ORB-SLAM2 algorithm can run in real time on a Raspberry Pi 3B+ for embedded robotics applications, achieving a 5x speed increase with minimal impact on mapping accuracy.
Andrei Bersatti, Nima Shoghi, Hyesoon Kim
Published in Proceedings of the International Symposium on Memory Systems, 2020
Introduces NNW-BDI, a memory compression technique for neural network weights that reduces memory usage by up to 85% without sacrificing accuracy.
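As a rough sketch of the base-delta idea behind BDI-style weight compression (this is not the NNW-BDI implementation; the toy quantization step, block size, and fallback rule are assumptions made for illustration), each block of values can be stored as a single wide base plus narrow integer deltas:

```python
import numpy as np

def base_delta_compress(weights, block_size=32, delta_dtype=np.int8):
    """Toy base-delta encoding: each block keeps one wide base value and
    narrow deltas; blocks whose deltas do not fit are stored uncompressed."""
    q = np.round(np.asarray(weights, dtype=np.float32) * 127).astype(np.int16)
    info = np.iinfo(delta_dtype)
    blocks = []
    for i in range(0, len(q), block_size):
        block = q[i:i + block_size]
        deltas = block - block[0]
        if deltas.min() < info.min or deltas.max() > info.max:
            blocks.append(("raw", block))                      # fallback: store as-is
        else:
            blocks.append(("bd", block[0], deltas.astype(delta_dtype)))
    return blocks

def base_delta_decompress(blocks):
    parts = [blk[1] if blk[0] == "raw" else blk[1] + blk[2].astype(np.int16)
             for blk in blocks]
    return np.concatenate(parts).astype(np.float32) / 127
```

Because neural network weights cluster tightly around zero, the deltas almost always fit in the narrow type, so most blocks shrink to one base value plus single-byte deltas.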
Bahar Asgari, Ramyad Hadidi, Nima Shoghi, Hyesoon Kim
Published in 2020 57th ACM/IEEE Design Automation Conference (DAC), 2020
Introduces Pisces, a power-efficient implementation of SLAM that exploits data sparsity to reduce power consumption by 2.5× and increase processing speed by 7.4× for autonomous systems.
Nima Shoghi, Ramyad Hadidi, Jaewon Lee, Jun Chen, Arthur Siqueria, Rahul Rajan, Shaan Dhawan, Pooya Shoghi, Hyesoon Kim
Published in arXiv preprint arXiv:2011.08936, 2020
Presents a novel security protocol for autonomous vehicles that integrates message authentication with visual localization, enabling vehicles to simultaneously verify messages and identify sender locations without additional computational costs or infrastructure requirements.
Sam Jijina, Adriana Amyette, Nima Shoghi, Ramyad Hadidi, Hyesoon Kim
Published in 2020 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), 2020
Characterizes a widely-used open source flight stack to understand the performance requirements of autonomous drones, revealing that optimizing the flight controller software can dramatically increase the drone's flying range.
Ramyad Hadidi, Bahar Asgari, Sam Jijina, Adriana Amyette, Nima Shoghi, Hyesoon Kim
Published in Proceedings of the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, 2021
Explores and quantifies the design-space tradeoffs in autonomous drone systems, revealing that optimizing SLAM algorithms on FPGA hardware is particularly beneficial while also providing an open-source customizable drone platform.
Nima Shoghi, Andrei Bersatti, Moinuddin Qureshi, Hyesoon Kim
Published in IEEE Computer Architecture Letters, 2021
Introduces a smart quantization technique that reduces memory usage during neural network training by up to 6.7x while maintaining accuracy by exploiting the normal distribution properties of neural network values.
Adeesh Kolluru, Muhammed Shuaibi, Aini Palizhati, Nima Shoghi, Abhishek Das, Brandon Wood, C. Lawrence Zitnick, John Kitchin, Zachary Ulissi
Published in ACS Catalysis, 2022
Examines the challenges in developing machine learning models that work across different chemical systems for catalyst discovery, highlighting recent progress with the Open Catalyst 2020 Dataset and identifying critical areas for future research.
Adeesh Kolluru, Nima Shoghi, Muhammed Shuaibi, Siddharth Goyal, Abhishek Das, C. Lawrence Zitnick, Zachary Ulissi
Published in The Journal of Chemical Physics, 2022
Introduces TAAG, an attention-based transfer learning approach for graph neural networks that effectively transfers knowledge across diverse atomic systems, improving performance for out-of-domain datasets while achieving up to 4× speedup in model training.
Ramyad Hadidi, Nima Shoghi, Bahar Asgari, Hyesoon Kim
Published in 2023 IEEE International Conference on Edge Computing and Communications (EDGE), 2023
Develops a fast context-aware technique that enables resource-constrained robots to handle multiple tasks simultaneously with improved timeliness, demonstrating a 42% speedup in execution time compared to standard scheduling approaches.
Richard Tran, Janice Lan, Muhammed Shuaibi, Brandon Wood, Siddharth Goyal, Abhishek Das, Javier Heras-Domingo, Adeesh Kolluru, Ammar Rizvi, Nima Shoghi, et al.
Published in ACS Catalysis, 2023
Develops the Open Catalyst 2022 (OC22) dataset to fill a critical gap in machine learning training data for oxide electrocatalysts, demonstrating improved prediction accuracy and establishing benchmarks for future research in renewable energy materials.
Nima Shoghi, Pooya Shoghi, Anuroop Sriram, Abhishek Das
Published in arXiv preprint arXiv:2407.20475, 2024
Introduces a novel approach called Distributional Mixture of Experts (DMoE) for molecular property prediction that improves accuracy by training models to predict probability distributions rather than single values, demonstrating significant performance gains across multiple datasets and model architectures.
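The paper's DMoE architecture is more involved than this, but the core shift it relies on, predicting a distribution instead of a single value, can be sketched with a simple PyTorch head that outputs a mean and a variance and is trained with a Gaussian negative log-likelihood (the layer sizes and data below are placeholders):

```python
import torch
import torch.nn as nn

class DistributionalHead(nn.Module):
    """Toy regression head that predicts a mean and a variance per target,
    rather than a single point estimate."""
    def __init__(self, in_dim, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.SiLU())
        self.mean = nn.Linear(hidden, 1)
        self.log_var = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mean(h), self.log_var(h).exp()   # variance must stay positive

# One training step with a Gaussian negative log-likelihood loss.
model = DistributionalHead(in_dim=64)
criterion = nn.GaussianNLLLoss()
x, y = torch.randn(8, 64), torch.randn(8, 1)
mean, var = model(x)
loss = criterion(mean, y, var)
loss.backward()
```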
Nima Shoghi, Adeesh Kolluru, John Kitchin, Zachary Ulissi, C. Lawrence Zitnick, Brandon Wood
Published in International Conference on Learning Representations, 2024
Introduces a multi-domain pre-training strategy for molecular property prediction that learns simultaneously from diverse chemical datasets, demonstrating substantial improvements over previous methods and advancing the ability to accurately predict properties across molecules and materials.
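A minimal sketch of what joint multi-domain pre-training looks like in practice, written against hypothetical per-domain data loaders and loss functions (the domain names, uniform sampling rule, and call signatures are assumptions, not the paper's exact recipe):

```python
import random

def joint_pretraining_epoch(model, optimizer, domain_loaders, loss_fns):
    """One epoch of joint training: each step draws a batch from one chemical
    domain (e.g. "small_molecules", "catalysts", "materials" -- placeholder
    names) and updates the shared backbone on that domain's loss."""
    iters = {name: iter(loader) for name, loader in domain_loaders.items()}
    names = list(domain_loaders)
    steps = sum(len(loader) for loader in domain_loaders.values())
    for _ in range(steps):
        name = random.choice(names)              # simple uniform domain sampling
        try:
            batch = next(iters[name])
        except StopIteration:                    # restart exhausted domains
            iters[name] = iter(domain_loaders[name])
            batch = next(iters[name])
        optimizer.zero_grad()
        loss = loss_fns[name](model(batch), batch)
        loss.backward()
        optimizer.step()
```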
Lingyu Kong, Nima Shoghi, Guoxiang Hu, Pan Li, Victor Fung
Published in arXiv preprint arXiv:2504.10655, 2025
Introduces MatterTune, a modular platform that enables fine-tuning of pre-trained atomistic foundation models for materials science applications, allowing researchers to overcome data limitations and seamlessly integrate advanced machine learning into materials discovery workflows.
Published:
This talk presents the seminal Transformer paper by Vaswani et al. (2017) and discusses its impact on the field of natural language processing. The Transformer architecture has revolutionized the field by introducing self-attention mechanisms that can model long-range dependencies in sequences, enabling parallelization and scalability in training.
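For reference, the self-attention operation the talk centers on is scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V; a minimal single-head sketch (no masking, projections, or multi-head splitting):

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    """Core self-attention from Vaswani et al. (2017):
    softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    weights = torch.softmax(scores, dim=-1)
    return weights @ v

# Example: a batch of 2 sequences, 5 tokens each, 16-dim heads.
q = k = v = torch.randn(2, 5, 16)
out = scaled_dot_product_attention(q, k, v)   # shape (2, 5, 16)
```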
Published:
This talk presents our work on a transformer-based encoder-decoder architecture for abstractive legal text summarization. The model combines PEGASUS's pre-training objective (Zhang et al., 2020) with Longformer's dilated attention mechanism (Beltagy et al., 2020), allowing it to handle extremely long input sequences when generating summaries of legal documents, and achieves state-of-the-art summarization performance on the BIGPATENT dataset.
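For readers who want to try a comparable setup, the publicly released Longformer Encoder-Decoder (LED) checkpoint in Hugging Face Transformers can serve as a rough stand-in; note that LED pairs a BART-style pre-training objective, not PEGASUS's, with Longformer attention, and the checkpoint name, input path, and generation settings below are illustrative:

```python
from transformers import AutoTokenizer, LEDForConditionalGeneration

# Public LED checkpoint used as a stand-in, not the model from the talk.
name = "allenai/led-base-16384"
tokenizer = AutoTokenizer.from_pretrained(name)
model = LEDForConditionalGeneration.from_pretrained(name)

document = open("long_legal_document.txt").read()   # placeholder input path
inputs = tokenizer(document, return_tensors="pt", truncation=True, max_length=16384)

# Longformer-style models mix sparse local attention with a few global tokens;
# giving the first token global attention is the usual setup for summarization.
global_attention_mask = inputs.input_ids.new_zeros(inputs.input_ids.shape)
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs.input_ids,
    global_attention_mask=global_attention_mask,
    num_beams=4,
    max_length=256,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```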
Published:
This talk introduces the Smart Quantization (SmaQ) technique for DNN training. SmaQ is a novel quantization scheme that exploits the observed, approximately normal distribution of values in DNNs to quantize weights, gradients, feature maps, gradient maps, and optimizer state, reducing memory usage during training by up to 6.7x with no loss in accuracy.
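A simplified sketch of that idea (not the paper's exact scheme; the bit width and clipping range below are assumptions): record a tensor's mean and standard deviation, then store only low-precision z-scores, which works well precisely because the values are roughly normally distributed:

```python
import torch

def smaq_style_quantize(x, num_bits=8, clip_sigmas=4.0):
    """Illustrative mean/std-based quantization: keep the tensor's mean and
    standard deviation in full precision and the per-element z-scores in a
    few bits."""
    mean, std = x.mean(), x.std().clamp_min(1e-8)
    z = ((x - mean) / std).clamp(-clip_sigmas, clip_sigmas)
    scale = (2 ** (num_bits - 1) - 1) / clip_sigmas
    q = torch.round(z * scale).to(torch.int8)
    return q, mean, std, scale

def smaq_style_dequantize(q, mean, std, scale):
    return q.float() / scale * std + mean

w = torch.randn(1024, 1024)
q, mean, std, scale = smaq_style_quantize(w)
w_hat = smaq_style_dequantize(q, mean, std, scale)   # ~4x smaller than fp32 storage
```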
Published:
This talk introduces Joint Multi-Domain Pre-training (JMP), a robust supervised pre-training approach which simultaneously trains on data from multiple chemical domains. JMP demonstrates state-of-the-art results on key small molecule, large molecule, and materials datasets and offers insights into the influence of pre-training strategies on fine-tuning.
Published:
This talk introduces Joint Multi-Domain Pre-training (JMP), a robust supervised pre-training approach which simultaneously trains on data from multiple chemical domains. JMP demonstrates state-of-the-art results on key small molecule, large molecule, and materials datasets and offers insights into the influence of pre-training strategies on fine-tuning.
Published:
This talk introduces Joint Multi-Domain Pre-training (JMP), a robust supervised pre-training approach which simultaneously trains on data from multiple chemical domains. JMP demonstrates state-of-the-art results on key small molecule, large molecule, and materials datasets and offers insights into the influence of pre-training strategies on fine-tuning.
Published:
This talk introduces Joint Multi-Domain Pre-training (JMP), a robust supervised pre-training approach which simultaneously trains on data from multiple chemical domains. JMP demonstrates state-of-the-art results on key small molecule, large molecule, and materials datasets and offers insights into the influence of pre-training strategies on fine-tuning.
Published:
This talk explores the potential of pre-training methods to accelerate discovery in chemistry by learning general-purpose representations from large, diverse datasets. Building upon the speaker’s previous work on Joint Multi-domain Pre-training (JMP), which achieved state-of-the-art performance on a wide range of atomistic prediction tasks, the talk dives into key challenges and opportunities such as handling vast chemical space with limited data, developing pre-training objectives that leverage abundant simulation data, and scaling models to billions of parameters.
Published:
This talk explores the potential of pre-training methods to accelerate discovery in chemistry by learning general-purpose representations from large, diverse datasets. Building upon the speaker’s previous work on Joint Multi-domain Pre-training (JMP), which achieved state-of-the-art performance on a wide range of atomistic prediction tasks, the talk dives into key challenges and opportunities such as handling vast chemical space with limited data, developing pre-training objectives that leverage abundant simulation data, and scaling models to billions of parameters.