Sitemap

A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.

Pages

Posts

Portfolio

Publications

SLAM Performance on Embedded Robots

Nima Shoghi, Ramyad Hadidi, Hyesoon Kim

Published in Student Research Competition at Embedded System Week (SRC ESWEEK), 2019

Examines the effectiveness of the ORB-SLAM2 algorithm on the Raspberry Pi for real-time use in embedded robots; finds that the Pi alone is too slow for real-time operation, but proposes optimizations that nearly quintuple its speed with minimal precision loss, enabling real-time operation.

Neural Network Weight Compression with NNW-BDI

Andrei Bersatti*, Nima Shoghi*, Hyesoon Kim

Published in The International Symposium on Memory Systems, 2020

Introduces NNW-BDI, a specialized memory compression scheme for neural network weights, successfully decreasing memory usage by up to 85% without diminishing inference accuracy.
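
As a rough illustration of the base-delta idea behind NNW-BDI (not the paper's exact format), a compressor stores one base value per block of weights plus narrow per-element deltas; blocks whose values cluster tightly compress well. A minimal Python sketch, where the block size and 4-bit delta width are illustrative assumptions:

    import numpy as np

    def bdi_compress_block(block, delta_bits=4):
        """Base-delta compression of one block of quantized weights:
        store the first element as the base and the rest as narrow deltas.
        Returns (base, deltas) on success, or None if the deltas don't fit."""
        base = int(block[0])
        deltas = block.astype(np.int32) - base
        lo, hi = -(1 << (delta_bits - 1)), (1 << (delta_bits - 1)) - 1
        if deltas.min() < lo or deltas.max() > hi:
            return None  # incompressible at this delta width; store raw
        return base, deltas.astype(np.int8)

    # Quantized weights that cluster near a common value compress well:
    # 8 x 8-bit values -> one 8-bit base + 8 x 4-bit deltas (plus metadata).
    weights = np.array([120, 118, 121, 119, 122, 120, 117, 121], dtype=np.int8)
    print(bdi_compress_block(weights))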

PISCES: Power-Aware Implementation of SLAM by Customizing Efficient Sparse Algebra

Bahar Asgari, Ramyad Hadidi, Nima Shoghi, Hyesoon Kim

Published in 2020 57th ACM/IEEE Design Automation Conference (DAC), 2020

Introduces PISCES, a method that optimizes power consumption and latency for simultaneous localization and mapping (SLAM). By exploiting sparse data and reducing memory accesses, it achieves a 2.5x power reduction and 7.4x faster execution than contemporary methods.

Understanding the Software and Hardware Stacks of a General-Purpose Cognitive Drone

Sam Jijina, Adriana Amyette, Nima Shoghi, Ramyad Hadidi, Hyesoon Kim

Published in 2020 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), 2020

Conducts a detailed analysis of drone operation and efficiency by exploring the hardware and software stacks, using ArduCopter as an example, and shows that optimizing specific aspects of these components can significantly increase drone flight range.

Secure Location-Aware Authentication and Communication for Intelligent Transportation Systems

Nima Shoghi, Ramyad Hadidi, Jaewon Lee, Jun Chen, Arthur Siqueria, Rahul Rajan, Shaan Dhawan, Pooya Shoghi, Hyesoon Kim

Published in arXiv preprint arXiv:2011.08936, 2020

Introduces a novel location-aware protocol for secure communication in Intelligent Transportation Systems, leveraging in situ visual localization (such as QR codes) for efficient message verification. The method is efficient, scalable, and infrastructure-independent, providing a more trustworthy and widely applicable solution than previous approaches.

Quantifying the design-space tradeoffs in autonomous drones

Ramyad Hadidi, Bahar Asgari, Sam Jijina, Adriana Amyette, Nima Shoghi, Hyesoon Kim

Published in Proceedings of the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, 2021

Examines the inherent design complexities in autonomous drones—especially the trade-offs between compute, energy, and electromechanical resources—and proposes a systematic exploration of the drone design space. The study emphasizes the benefits of optimizing the SLAM process on FPGA platforms and introduces a customizable, open-source drone.

SmaQ: Smart Quantization for DNN Training by Exploiting Value Clustering

Nima Shoghi*, Andrei Bersatti*, Moinuddin Qureshi, Hyesoon Kim

Published in IEEE Computer Architecture Letters, 2021

Introduces Smart Quantization (SmaQ), a quantization scheme that leverages the normal distribution properties of neural network data structures, leading to a memory usage reduction of up to 6.7x during training, with minimal impact on accuracy.
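
The core observation can be sketched in a few lines: if a tensor's values are roughly normally distributed, a per-tensor mean and standard deviation plus a few bits per element suffice. A minimal sketch, assuming a simple uniform grid over +/- 3 standard deviations (the paper's actual format and bit allocation differ):

    import numpy as np

    def quantize_normal(x, bits=4, sigmas=3.0):
        """Map values clustered around a normal distribution to small
        integer codes on a uniform grid spanning +/- `sigmas` std devs."""
        mu, sd = x.mean(), x.std() + 1e-8
        levels = (1 << bits) - 1
        z = np.clip((x - mu) / sd, -sigmas, sigmas)
        codes = np.round((z + sigmas) / (2 * sigmas) * levels).astype(np.uint8)
        return codes, mu, sd

    def dequantize_normal(codes, mu, sd, bits=4, sigmas=3.0):
        levels = (1 << bits) - 1
        z = codes.astype(np.float32) / levels * (2 * sigmas) - sigmas
        return z * sd + mu

    w = np.random.randn(1024).astype(np.float32)  # roughly normal, as SmaQ observes
    codes, mu, sd = quantize_normal(w)            # 32 bits/value -> 4 bits/value
    print(np.abs(w - dequantize_normal(codes, mu, sd)).mean())  # small error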

Transfer learning using attentions across atomic systems with graph neural networks (TAAG)

Adeesh Kolluru, Nima Shoghi, Muhammed Shuaibi, Siddharth Goyal, Abhishek Das, C. Zitnick, Zachary Ulissi

Published in The Journal of Chemical Physics, 2022

Introduces TAAG, a novel attention-based transfer learning approach for Graph Neural Networks that significantly improves performance on out-of-domain datasets and speeds up model training, demonstrating the potential for generalizing important aspects across different atomic-system domains.

The Open Catalyst 2022 (OC22) dataset and challenges for oxide electrocatalysts

Richard Tran, Janice Lan, Muhammed Shuaibi, Brandon Wood, Siddharth Goyal, Abhishek Das, Javier Heras-Domingo, Adeesh Kolluru, Ammar Rizvi, Nima Shoghi, Anuroop Sriram, Félix Therrien, Jehad Abed, Oleksandr Voznyy, Edward Sargent, Zachary Ulissi, C. Zitnick

Published in ACS Catalysis, 2023

Introduces the open-source OC22 dataset, containing relaxations across various oxide materials, coverages, and adsorbates, to improve machine learning models for oxide electrocatalysts. The paper also establishes clear benchmarks for future efforts in this area, opens the data and models for community development, and introduces a public leaderboard.

Context-Aware Task Handling in Resource-Constrained Robots with Virtualization

Ramyad Hadidi, Nima Shoghi, Bahar Asgari, Hyesoon Kim

Published in IEEE International Conference on Edge Computing and Communications, 2023

Presents a new context-aware approach for handling tasks in real-time on resource-constrained robots, achieving increased execution speed by integrating a dynamic time-sharing mechanism, event-driven scheduling, and lightweight virtualization.

From Molecules to Materials: Pre-training Large Generalizable Models for Atomic Property Prediction

Nima Shoghi, Adeesh Kolluru, John Kitchin, Zachary Ulissi, C. Zitnick, Brandon Wood

Published in arXiv preprint arXiv:2310.16802, 2023

Introduces Joint Multi-domain Pre-training (JMP), an approach that advances atomic property prediction by training on multiple datasets across diverse chemical domains simultaneously, significantly improving accuracy and setting or matching the state-of-the-art in many tasks.

Talks

Attention is All You Need

Introduces the Transformer, a sequence-to-sequence model by Vaswani et al. (2017) that uses the attention mechanism to learn dependencies between input and output tokens. Goes in depth into the self-attention and multi-head attention mechanisms and discusses the advantages of the Transformer over recurrent neural networks.
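
For reference, the scaled dot-product attention at the heart of the Transformer is Attention(Q, K, V) = softmax(QK^T / sqrt(d_k))V. A minimal single-head numpy sketch with toy shapes:

    import numpy as np

    def attention(Q, K, V):
        """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                 # token-pair affinities
        scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
        return weights @ V                              # weighted sum of values

    rng = np.random.default_rng(0)
    Q, K, V = (rng.standard_normal((5, 8)) for _ in range(3))  # 5 tokens, d=8
    print(attention(Q, K, V).shape)  # (5, 8)

Multi-head attention runs several such heads in parallel on learned projections of Q, K, and V and concatenates the results.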

Legal Text Summarization Using Transformer Models

Develops a transformer-based encoder-decoder architecture for abstractive legal text summarization. Combines the PEGASUS pre-training objective (Zhang et al., 2020) with Longformer's dilated sliding-window attention (Beltagy et al., 2020) to build a model that can handle extremely long input sequences and generate summaries of legal documents. Achieves state-of-the-art summarization performance on the BIGPATENT dataset.
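
The key to handling long inputs is replacing full quadratic self-attention with a local sliding-window (optionally dilated) pattern, as in Longformer. A minimal sketch of such an attention mask; the window size and dilation here are illustrative, not the talk's settings:

    import numpy as np

    def sliding_window_mask(n_tokens, window=2, dilation=1):
        """Local attention mask: token i may attend only to positions
        i + k*dilation for k in [-window, window], so cost grows linearly
        with sequence length instead of quadratically."""
        mask = np.zeros((n_tokens, n_tokens), dtype=bool)
        for i in range(n_tokens):
            for k in range(-window, window + 1):
                j = i + k * dilation
                if 0 <= j < n_tokens:
                    mask[i, j] = True
        return mask

    print(sliding_window_mask(6, window=1).astype(int))  # band-diagonal pattern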

SmaQ: Smart Quantization for DNN Training by Exploiting Value Clustering

Introduces the Smart Quantization (SmaQ) technique for DNN training. SmaQ is a novel quantization scheme that exploits the observed (normally distributed) value clustering in DNN data structures to quantize weights, gradients, feature maps, gradient maps, and optimizer state. SmaQ reduces memory usage during training by up to 6.7x with no loss in accuracy.

Teaching