Nima Shoghi Ghalehshahi, Ramyad Hadidi, Hyesoon Kim
Published in Student Research Competition at Embedded System Week (SRC ESWEEK), 2019
Demonstrated the feasibility of running ORB-SLAM2 in real time on the Raspberry Pi 3B+ for embedded robots through optimizations that achieved a 5× speedup with minor impact on accuracy.
Andrei Bersatti, Nima Shoghi Ghalehshahi, Hyesoon Kim
Published in 2020 (venue not listed)
Developed NNW-BDI, a neural network weight compression scheme that reduces memory usage by up to 85% without sacrificing inference accuracy on an MNIST classification task.
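For intuition, here is a toy Python sketch of base-delta encoding, the general idea behind BDI-style compression; the block size, bit widths, and fallback behavior are illustrative assumptions, not NNW-BDI's actual format.

```python
# Hedged sketch of base-delta encoding, the idea behind BDI-style compression
# (NOT the paper's exact NNW-BDI format): values in a block of, say, 8-bit
# weights often cluster near a base value, so each block can store one base
# plus narrow per-value deltas.
import numpy as np

def bdi_encode_block(block, delta_bits=4):
    """Try to encode a block of uint8 weights as (base, small deltas).

    Returns (base, deltas) if every delta fits in `delta_bits` signed bits,
    else None (the block stays uncompressed).
    """
    base = int(block[0])
    deltas = block.astype(np.int16) - base
    limit = 1 << (delta_bits - 1)
    if np.all((-limit <= deltas) & (deltas < limit)):
        return base, deltas.astype(np.int8)
    return None

def bdi_decode_block(base, deltas):
    return (base + deltas.astype(np.int16)).astype(np.uint8)

block = np.array([130, 128, 131, 127, 129, 130, 128, 132], dtype=np.uint8)
enc = bdi_encode_block(block)
assert enc is not None
base, deltas = enc
assert np.array_equal(bdi_decode_block(base, deltas), block)
# 8 bytes -> 1 base byte + 8 four-bit deltas = 5 bytes for this block.
```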
Bahar Asgari, Ramyad Hadidi, Nima Shoghi Ghaleshahi, Hyesoon Kim
Published in 2020 57th ACM/IEEE Design Automation Conference (DAC), 2020
Developed Pisces, a power-aware SLAM implementation that consumes 2.5× less power and executes 7.4× faster than the state of the art by customizing efficient sparse algebra on FPGAs.
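Pisces's gains come from customizing sparse linear algebra; for readers unfamiliar with that kernel class, below is a plain-Python CSR sparse matrix-vector product. This is purely illustrative background: the paper's contribution is an FPGA implementation, not this code.

```python
# Illustrative sketch only: a CSR sparse matrix-vector product, the class of
# kernel that sparse-algebra accelerators like Pisces target.
import numpy as np

def csr_spmv(data, indices, indptr, x):
    """y = A @ x for A stored in Compressed Sparse Row (CSR) form.

    data    -- nonzero values, row by row
    indices -- column index of each nonzero
    indptr  -- indptr[i]:indptr[i+1] delimits row i's nonzeros
    """
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        start, end = indptr[i], indptr[i + 1]
        # Dot product of row i's nonzeros with the matching entries of x.
        y[i] = data[start:end] @ x[indices[start:end]]
    return y

# Tiny example: a 3x3 matrix with 4 nonzeros.
data = np.array([10.0, 20.0, 30.0, 40.0])
indices = np.array([0, 2, 1, 2])
indptr = np.array([0, 2, 3, 4])  # rows: [10, 0, 20], [0, 30, 0], [0, 0, 40]
x = np.array([1.0, 2.0, 3.0])
print(csr_spmv(data, indices, indptr, x))  # [ 70.  60. 120.]
```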
Nima Shoghi Ghalehshahi, Ramyad Hadidi, Jaewon Lee, Jun Chen, Arthur Siqueria, Rahul Rajan, Shaan Dhawan, Pooya Shoghi Ghalehshahi, Hyesoon Kim
Published in arXiv preprint arXiv:2011.08936, 2020
Developed a scalable, infrastructure-independent, location-aware authentication protocol for intelligent transportation systems, providing trustworthy communication and efficient sender localization using visual authentication beacons.
Sam Jijina, Adriana Amyette, Nima Shoghi, Ramyad Hadidi, Hyesoon Kim
Published in 2020 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), 2020
Conducted an in-depth analysis of the hardware and software components of autonomous drones, characterizing the performance of the ArduCopter flight stack and providing insights to optimize flight controllers and increase drone range.
Ramyad Hadidi, Bahar Asgari, Sam Jijina, Adriana Amyette, Nima Shoghi, Hyesoon Kim
Published in 2021 (venue not listed)
Formalized the subsystems of autonomous drones and quantified the complex tradeoffs in their design space to enable optimized solutions for diverse applications.
Nima Shoghi, Andrei Bersatti, Moinuddin Qureshi, Hyesoon Kim
Published in IEEE Computer Architecture Letters, 2021
Introduced SmaQ, a quantization scheme that leverages the normal distribution of neural network data structures to efficiently quantize them, addressing the memory bottleneck in single-machine training of deep networks.
Adeesh Kolluru, Muhammed Shuaibi, Aini Palizhati, Nima Shoghi, Abhishek Das, Brandon Wood, C Lawrence Zitnick, John R Kitchin, Zachary W Ulissi
Published in ACS Catalysis, 2022
Discussed the challenges and potential of developing generalizable machine learning models for catalyst discovery, highlighting the importance of large-scale datasets like the Open Catalyst 2020 (OC20) dataset.
Adeesh Kolluru, Nima Shoghi, Muhammed Shuaibi, Siddharth Goyal, Abhishek Das, C Lawrence Zitnick, Zachary Ulissi
Published in The Journal of Chemical Physics, 2022
Developed a transfer learning approach using Graph Neural Networks to generalize models across domains in molecular and catalyst discovery, reducing the need for large, domain-specific datasets.
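The transfer-learning setup in this line of work follows a standard pattern: reuse a pre-trained backbone and fine-tune it on the target domain. The PyTorch sketch below shows that generic pattern only; the Backbone class, dimensions, learning rates, and checkpoint path are placeholder assumptions, not the paper's setup.

```python
# A generic transfer-learning pattern (a sketch, not the paper's exact setup):
# reuse a pre-trained backbone, attach a fresh head for the target property,
# and fine-tune the backbone with a smaller learning rate than the head.
import torch
import torch.nn as nn

class Backbone(nn.Module):                        # stand-in for a pre-trained GNN
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16, dim), nn.SiLU(), nn.Linear(dim, dim))
    def forward(self, x):
        return self.net(x)

backbone = Backbone()
# backbone.load_state_dict(torch.load("pretrained.pt"))  # hypothetical checkpoint
head = nn.Linear(64, 1)                           # new head for the target property

optimizer = torch.optim.AdamW([
    {"params": backbone.parameters(), "lr": 1e-5},  # gentle updates to reused weights
    {"params": head.parameters(),     "lr": 1e-3},  # the new head learns faster
])

x, y = torch.randn(32, 16), torch.randn(32, 1)    # toy batch
loss = nn.functional.mse_loss(head(backbone(x)), y)
loss.backward()
optimizer.step()
```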
Ramyad Hadidi, Nima Shoghi Ghaleshahi, Bahar Asgari, Hyesoon Kim
Published in 2023 IEEE International Conference on Edge Computing and Communications (EDGE), 2023
Developed a context-aware task handling technique for resource-constrained mobile robots, enabling concurrent execution of critical tasks with improved real-time performance.
Richard Tran, Janice Lan, Muhammed Shuaibi, Brandon M Wood, Siddharth Goyal, Abhishek Das, Javier Heras-Domingo, Adeesh Kolluru, Ammar Rizvi, Nima Shoghi, Anuroop Sriram, Félix Therrien, Jehad Abed, Oleksandr Voznyy, Edward H Sargent, Zachary Ulissi, C Lawrence Zitnick
Published in ACS Catalysis, 2023
Developed the Open Catalyst 2022 (OC22) dataset, consisting of 62,331 DFT relaxations, to accelerate machine learning for oxide electrocatalysts and establish benchmarks for the field.
Nima Shoghi, Pooya Shoghi, Anuroop Sriram, Abhishek Das
Published in arXiv preprint arXiv:2407.20475, 2024
Developed Distributional Mixture of Experts (DMoE), a robust method for molecular property regression that outperforms baselines on multiple datasets and architectures.
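As an illustration of what a distributional mixture head for regression can look like, here is a small PyTorch sketch in that spirit: several "experts" each predict a Gaussian and a gating network mixes them. The Gaussian parameterization and expert count are my own illustrative choices, not necessarily the paper's architecture.

```python
# Illustrative sketch of a distributional mixture head for regression. Each of
# K experts predicts a Gaussian; a gate mixes them; training minimizes the
# negative log-likelihood of the target under the predicted mixture.
import torch
import torch.nn as nn

class GaussianMixtureHead(nn.Module):
    def __init__(self, dim, n_experts=4):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts)      # mixture weights (logits)
        self.mean = nn.Linear(dim, n_experts)      # per-expert mean
        self.log_std = nn.Linear(dim, n_experts)   # per-expert spread

    def forward(self, h, y):
        """Negative log-likelihood of targets y under the predicted mixture."""
        log_w = torch.log_softmax(self.gate(h), dim=-1)
        dist = torch.distributions.Normal(self.mean(h), self.log_std(h).exp())
        log_p = dist.log_prob(y.unsqueeze(-1))     # per-expert log-density
        return -torch.logsumexp(log_w + log_p, dim=-1).mean()

head = GaussianMixtureHead(dim=64)
h, y = torch.randn(8, 64), torch.randn(8)          # toy embeddings and targets
loss = head(h, y)
loss.backward()
```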
Nima Shoghi, Adeesh Kolluru, John R Kitchin, Zachary W Ulissi, C Lawrence Zitnick, Brandon M Wood
Published in International Conference on Learning Representations, 2024
Developed Joint Multi-domain Pre-training (JMP), a supervised pre-training strategy that leverages diverse data to advance atomic property prediction across chemical domains, achieving state-of-the-art performance on 34 out of 40 downstream tasks.
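The core pattern behind joint multi-domain supervised pre-training can be sketched compactly: one shared backbone, one head per domain, batches interleaved across domains. The snippet below shows only that general pattern; the domain names, model sizes, and round-robin schedule are illustrative assumptions, not JMP's exact recipe.

```python
# A minimal sketch of joint multi-domain supervised pre-training: a shared
# backbone with per-domain heads, where each step trains on one domain's batch.
import torch
import torch.nn as nn

domains = ["small_molecules", "catalysts", "materials"]  # illustrative names
backbone = nn.Sequential(nn.Linear(16, 64), nn.SiLU(), nn.Linear(64, 64))
heads = nn.ModuleDict({d: nn.Linear(64, 1) for d in domains})
optimizer = torch.optim.AdamW(
    list(backbone.parameters()) + list(heads.parameters()), lr=1e-4)

def toy_batch(domain):
    """Stand-in for a per-domain dataloader."""
    return torch.randn(32, 16), torch.randn(32, 1)

for step in range(100):
    domain = domains[step % len(domains)]   # round-robin over domains
    x, y = toy_batch(domain)
    loss = nn.functional.mse_loss(heads[domain](backbone(x)), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```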
This talk presents the seminal Transformer paper by Vaswani et al. (2017) and discusses its impact on natural language processing. The Transformer revolutionized the field by introducing self-attention, which models long-range dependencies in sequences while enabling parallel, scalable training.
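As a concrete illustration of the mechanism the talk centers on, here is a minimal single-head scaled dot-product self-attention in numpy. It is an illustrative sketch with random toy weights, not code from the talk.

```python
# Minimal numpy sketch of scaled dot-product self-attention (single head,
# no masking), following Vaswani et al. (2017).
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k) projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted sum of values

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                          # 5 tokens, d_model = 8
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)           # (5, 4)
```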
This talk presents our work on a transformer-based encoder-decoder architecture for abstractive legal text summarization. The model combines PEGASUS's pre-training objective (Zhang et al., 2020) with Longformer's dilated sliding-window attention (Beltagy et al., 2020) so that it can handle extremely long input sequences when generating summaries of legal documents, and it achieves state-of-the-art summarization performance on the BIGPATENT dataset.
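The long-input part of that combination comes down to restricting which positions may attend to which. Below is a small numpy sketch of a Longformer-style (dilated) sliding-window attention mask; the window and dilation values are arbitrary illustrative choices.

```python
# Sketch of the mask behind Longformer-style (dilated) sliding-window
# attention: each token attends only to neighbors within a window, optionally
# skipping positions by a dilation factor, so cost grows linearly with length.
import numpy as np

def sliding_window_mask(seq_len, window=2, dilation=1):
    """Boolean mask: True where query i may attend to key j."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    in_window = np.abs(i - j) <= window * dilation
    on_stride = (np.abs(i - j) % dilation) == 0
    return in_window & on_stride

print(sliding_window_mask(6, window=1, dilation=2).astype(int))
# Each row i is True at j = i and j = i +/- 2: local context with gaps.
```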
This talk introduces the Smart Quantization (SmaQ) technique for DNN training. SmaQ is a novel quantization scheme that exploits the observed, approximately normal distribution of values in DNNs to quantize weights, gradients, feature maps, gradient maps, and optimizer state. SmaQ reduces memory usage during training by up to 6.7× with no loss in accuracy.
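To make the idea concrete, here is a hedged sketch of distribution-aware quantization in the spirit described above: store only the tensor's mean and standard deviation plus a few bits per value measured in standard-deviation units. The scaling choices (4-bit codes covering roughly ±4σ) are my own illustrative assumptions, not the paper's format.

```python
# Hedged sketch of distribution-aware quantization (not SmaQ's exact format):
# values cluster around the mean, so keep (mean, std) and a small signed code
# per value expressed in fractional std steps from the mean.
import numpy as np

def smaq_like_quantize(x, bits=4):
    mu, sigma = x.mean(), x.std() + 1e-12
    levels = 2 ** (bits - 1)                     # signed quantization levels
    # levels / 4 steps per std means the codes span roughly +/- 4 sigma.
    q = np.clip(np.round((x - mu) / sigma * (levels / 4)), -levels, levels - 1)
    return q.astype(np.int8), mu, sigma

def smaq_like_dequantize(q, mu, sigma, bits=4):
    levels = 2 ** (bits - 1)
    return mu + q.astype(np.float64) / (levels / 4) * sigma

x = np.random.default_rng(0).normal(loc=0.1, scale=0.05, size=1000)
q, mu, sigma = smaq_like_quantize(x)
x_hat = smaq_like_dequantize(q, mu, sigma)
print(np.abs(x - x_hat).max())  # small: most mass lies within +/- 4 sigma
```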
This talk introduces Joint Multi-Domain Pre-training (JMP), a robust supervised pre-training approach that trains simultaneously on data from multiple chemical domains. JMP demonstrates state-of-the-art results on key small-molecule, large-molecule, and materials datasets and offers insights into how pre-training strategies influence fine-tuning.
This talk explores the potential of pre-training methods to accelerate discovery in chemistry by learning general-purpose representations from large, diverse datasets. Building on the speaker's previous work on Joint Multi-Domain Pre-training (JMP), which achieved state-of-the-art performance on a wide range of atomistic prediction tasks, it examines key challenges and opportunities: covering a vast chemical space with limited data, developing pre-training objectives that leverage abundant simulation data, and scaling models to billions of parameters.