About me
Contact me
This is a page not in the main menu
Background and Volunteering
Published:
This post will show up by default. To disable scheduling of future posts, edit config.yml and set future: false.
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Published:
Monkeypox Detection Using a Variety of Augmentation Techniques.
Published:
Road Object Detection: Used and compared YOLOv5 and Faster R-CNN.
Published in ShapeMI MICCAI 2023: Workshop on Shape in Medical Imaging, 2023
Statistical shape models (SSMs) are well established as an excellent tool for identifying variations in the morphology of anatomy across an underlying population. Shape models use a consistent shape representation across all samples in a given cohort, which helps to compare shapes and identify variations that can detect pathologies and help in formulating treatment plans. In medical imaging, computing these shape representations from CT/MRI scans requires time-intensive preprocessing operations, including but not limited to anatomy segmentation annotations, registration, and texture denoising. Deep learning models have demonstrated exceptional capabilities in learning shape representations directly from volumetric images, giving rise to highly effective and efficient Image-to-SSM networks. Nevertheless, these models are data-hungry, and due to the limited availability of medical data they tend to overfit. Offline data augmentation techniques that use kernel density estimation (KDE)-based methods to generate shape-augmented samples have successfully aided Image-to-SSM networks in achieving accuracy comparable to traditional SSM methods. However, these augmentation methods focus on shape augmentation, whereas deep learning models exhibit image-based texture bias, resulting in sub-optimal models. This paper introduces a novel strategy for on-the-fly data augmentation for the Image-to-SSM framework by leveraging data-dependent noise generation, or texture augmentation. The proposed framework is trained as an adversary to the Image-to-SSM network, generating diverse and challenging noisy samples. Our approach achieves improved accuracy by encouraging the model to focus on the underlying geometry rather than relying solely on pixel values.
Recommended citation: Karanam, Mokshagna Sai Teja, Tushar Kataria, Krithika Iyer, and Shireen Y. Elhabian. "ADASSM: Adversarial Data Augmentation in Statistical Shape Models from Images." In International Workshop on Shape in Medical Imaging, pp. 90-104. Cham: Springer Nature Switzerland, 2023. https://link.springer.com/chapter/10.1007/978-3-031-46914-5_8
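The ADASSM abstract above describes on-the-fly, adversarially generated noise/texture augmentation for an Image-to-SSM network. As a rough, hypothetical illustration only (not the authors' implementation), the PyTorch-style sketch below optimizes a small bounded additive perturbation that maximizes the shape-regression loss and then trains on the perturbed volume; the TinyImageToShape model, tensor sizes, and hyperparameters are all assumptions made for demonstration.

```python
# Illustrative sketch (not the ADASSM code): adversarial additive-noise
# augmentation for an image-to-shape regression network, in PyTorch.
import torch
import torch.nn as nn

class TinyImageToShape(nn.Module):
    """Hypothetical stand-in for an Image-to-SSM regressor: 3D volume -> shape vector."""
    def __init__(self, n_shape_params=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(16, n_shape_params),
        )

    def forward(self, x):
        return self.net(x)

def adversarial_noise(model, image, target, loss_fn, eps=0.05, steps=3, lr=0.01):
    """Optimize a bounded additive perturbation that maximizes the shape-regression loss."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(image + delta), target)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += lr * grad.sign()   # ascend the loss (harder sample)
            delta.clamp_(-eps, eps)     # keep the perturbation in the noise/texture regime
    return delta.detach()

model = TinyImageToShape()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

image = torch.randn(2, 1, 32, 32, 32)   # toy CT-like volumes
target = torch.randn(2, 32)             # toy shape descriptors

delta = adversarial_noise(model, image, target, loss_fn)
opt.zero_grad()
loss = loss_fn(model(image + delta), target)  # train on the adversarially augmented input
loss.backward()
opt.step()
```

Bounding the perturbation keeps the augmentation in the noise/texture regime rather than altering the underlying anatomy, which matches the intuition the abstract emphasizes: push the network away from texture cues and toward the underlying geometry.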
Published in EduHPC23: Workshop on Education for High Performance Computing, 2023
This paper presents an overview of an NSF Research Experiences for Undergraduates (REU) Site on Trust and Reproducibility of Intelligent Computation, delivered by faculty and graduate students in the Kahlert School of Computing at the University of Utah. The chosen themes bring together several concerns for the future of producing computational results that can be trusted: secure, reproducible, based on sound algorithmic foundations, and developed in the context of ethical considerations. The research areas represented by student projects include machine learning, high-performance computing, algorithms and applications, computer security, data science, and human-centered computing. In the first four weeks of the program, the entire student cohort spent their mornings in lessons from experts in these crosscutting topics and used one-of-a-kind research platforms operated by the University of Utah, namely the NSF-funded CloudLab and POWDER facilities; reading assignments, quizzes, and hands-on exercises reinforced the lessons. In the subsequent five weeks, lectures were less frequent as students branched into small groups to develop their research projects. The final week focused on a poster presentation and final report. By describing our experiences, we hope this program can serve as a model for preparing a future workforce to integrate machine learning into trustworthy and reproducible applications.
Recommended citation: Hall, Mary, Ganesh Gopalakrishnan, Eric Eide, Johanna Cohoon, Jeff Phillips, Mu Zhang, Shireen Elhabian et al. "An NSF REU Site Based on Trust and Reproducibility of Intelligent Computation: Experience Report." In Proceedings of the SC23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis, pp. 343-349. 2023. https://dl.acm.org/doi/abs/10.1145/3624062.3624100
Published in WACV 2025: IEEE/CVF Winter Conference on Applications of Computer Vision, 2025
Transformers have emerged as the state-of-the-art architecture in medical image registration, outperforming convolutional neural networks (CNNs) by addressing their limited receptive fields and overcoming gradient instability in deeper models. Despite their success, transformer-based models require substantial resources for training, including data, memory, and computational power, which may restrict their applicability for end users with limited resources. In particular, existing transformer-based 3D image registration architectures face two critical gaps that challenge their efficiency and effectiveness. First, although window-based attention mechanisms reduce the quadratic complexity of full attention by focusing on local regions, they often struggle to effectively integrate both local and global information. Second, the granularity of tokenization, a crucial factor in registration accuracy, presents a performance trade-off: smaller voxel-size tokens enhance detail capture but come with increased computational complexity, higher memory usage, and a greater risk of overfitting. We present EfficientMorph, a transformer-based architecture for unsupervised 3D image registration that balances local and global attention in 3D volumes through a plane-based attention mechanism and employs a Hi-Res tokenization strategy with merging operations, thus capturing finer details without compromising computational efficiency. Notably, EfficientMorph sets a new benchmark for performance on the OASIS dataset with 16-27x fewer parameters.
Recommended citation: Aziz, Abu Zahid Bin, Mokshagna Sai Teja Karanam, Tushar Kataria, and Shireen Y. Elhabian. "EfficientMorph: Parameter-Efficient Transformer-Based Architecture for 3D Image Registration." arXiv preprint arXiv:2403.11026 (2024). https://arxiv.org/abs/2403.11026
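The abstract above attributes EfficientMorph's efficiency to a plane-based attention mechanism over 3D volumes. As a loose, hypothetical illustration (not the published architecture), the PyTorch sketch below computes self-attention within 2D planes of a 3D token grid along each orientation and sums the results; the token dimensions, head count, and the simple summation are assumptions for demonstration, and the paper's Hi-Res tokenization and merging operations are not reproduced here.

```python
# Illustrative sketch (not the EfficientMorph code): "plane-based" attention over a
# 3D token grid, restricting attention to planes orthogonal to each spatial axis.
import torch
import torch.nn as nn

class PlaneAttention3D(nn.Module):
    """Apply self-attention within each 2D plane of a (D, H, W) token grid, one axis at a time."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def _attend_in_planes(self, x, plane_axis):
        # x: (B, D, H, W, C). Move `plane_axis` next to the batch dim so each plane
        # becomes an independent attention "sequence" over its remaining tokens.
        b, c = x.shape[0], x.shape[-1]
        x = x.movedim(plane_axis, 1)                 # (B, A, P, Q, C)
        a, p, q = x.shape[1:4]
        seq = x.reshape(b * a, p * q, c)             # one sequence per plane
        out, _ = self.attn(seq, seq, seq)
        return out.reshape(b, a, p, q, c).movedim(1, plane_axis)

    def forward(self, x):
        # Sum attention computed within planes along all three orientations,
        # mixing in-plane (local) context from axial, coronal, and sagittal views.
        y = sum(self._attend_in_planes(x, axis) for axis in (1, 2, 3))
        return self.norm(x + y)

tokens = torch.randn(1, 8, 8, 8, 64)   # toy (B, D, H, W, C) token grid from a 3D volume
out = PlaneAttention3D()(tokens)
print(out.shape)                        # torch.Size([1, 8, 8, 8, 64])
```

Restricting each attention call to a plane keeps the sequence length at H*W (or D*W, D*H) instead of D*H*W, which is the kind of complexity reduction the abstract contrasts with full 3D attention; summing the three orientations is just one simple way to mix information across planes.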
Published:
I presented the work of Gao, Yunhe, et al., “Enabling data diversity: efficient automatic augmentation via regularized adversarial training,” International Conference on Information Processing in Medical Imaging, Cham: Springer International Publishing, 2021.
Published:
Download here
Please don’t hesitate to get in touch if you have any questions.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.