Fooladgar, Fahimeh; To, Minh Nguyen Nhat; Mousavi, Parvin; Abolmaesumi, Purang
Manifold DivideMix: A semi-supervised contrastive learning framework for severe label noise Proceedings Article
In: pp. 4012-4021, 2024.
@inproceedings{fooladgar2024,
title = {Manifold DivideMix: A semi-supervised contrastive learning framework for severe label noise},
author = {Fahimeh Fooladgar and Minh Nguyen Nhat To and Parvin Mousavi and Purang Abolmaesumi},
year = {2024},
date = {2024-01-01},
pages = {4012-4021},
abstract = {Deep neural networks have proven to be highly effective when large amounts of data with clean labels are available. However, their performance degrades when training data contains noisy labels, leading to poor generalization on the test set. Real-world datasets contain noisy label samples that either have similar visual semantics to other classes (in-distribution) or have no semantic relevance to any class (out-of-distribution) in the dataset. Most state-of-the-art methods leverage ID labeled noisy samples as unlabeled data for semi-supervised learning, but OOD labeled noisy samples cannot be used in this way because they do not belong to any class within the dataset. Hence, in this paper, we propose incorporating the information from all the training data by leveraging the benefits of self-supervised training. Our method aims to extract a meaningful and generalizable embedding space for each sample regardless of its label. Then we employ a simple yet effective K-nearest neighbor method to remove portions of out-of-distribution samples. By discarding these samples, we propose an iterative "Manifold DivideMix" algorithm to find clean and noisy samples and train our model in a semi-supervised way. In addition, we propose "MixEMatch", a new algorithm for the semi-supervised step that involves mixup augmentation at the input and final hidden representations of the model. This extracts better representations by interpolating in both the input and manifold spaces. Extensive experiments on multiple synthetic-noise image benchmarks and real-world web-crawled datasets demonstrate the effectiveness of our proposed framework. Code is …},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
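The abstract above names two concrete steps: K-nearest-neighbor filtering of out-of-distribution samples in a self-supervised embedding space, followed by semi-supervised training with mixup in input and hidden (manifold) space. The snippet below is a minimal, hypothetical sketch of only the KNN filtering step; the embedding size, k, and the fraction of samples kept are illustrative assumptions, not the authors' settings.

# Hypothetical sketch: KNN-based filtering of out-of-distribution samples
# in a self-supervised embedding space (all parameters are illustrative).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_ood_filter(embeddings: np.ndarray, k: int = 10, keep_ratio: float = 0.9):
    """Return indices of samples kept after discarding the most isolated ones."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(embeddings)
    dists, _ = nn.kneighbors(embeddings)          # first column is the sample itself
    mean_dist = dists[:, 1:].mean(axis=1)         # mean distance to the k neighbors
    cutoff = np.quantile(mean_dist, keep_ratio)   # drop the most distant tail
    return np.where(mean_dist <= cutoff)[0]

# Usage with random placeholder embeddings:
emb = np.random.randn(1000, 128)
kept_indices = knn_ood_filter(emb)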
Mannina, Sophia; Addas, Shamel; Abolmaesumi, Purang; Mousavi, Parvin; Maghsoodi, Nooshin; Nassar, Sarah; Maslove, David
Digital Nudges in Healthcare Contexts: An Information Systems Perspective Journal Article
In: 2024.
@article{mannina2024b,
title = {Digital Nudges in Healthcare Contexts: An Information Systems Perspective},
author = {Sophia Mannina and Shamel Addas and Purang Abolmaesumi and Parvin Mousavi and Nooshin Maghsoodi and Sarah Nassar and David Maslove},
year = {2024},
date = {2024-01-01},
abstract = {Digital transformation has presented healthcare providers with new tools, roles, and challenges related to patient care. Although digital technologies like electronic health records can offer valuable information concerning patients' needs, the growing volume of data that healthcare providers receive through these tools can contribute to information overload and alert fatigue. Nudging is a behavioural economics technique that can be applied to guide healthcare providers toward optimal care decisions while limiting information overload. To better understand the application of this technique, we perform a systematic literature review that explores digital nudges oriented toward healthcare providers from an information systems perspective. This review identifies positive and negative outcomes of digital nudges and presents design principles that can guide development of nudges directed toward healthcare providers. Opportunities are discussed to further assess digital nudges through the information systems lens.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Harmanani, Mohamed; Wilson, Paul FR; Fooladgar, Fahimeh; Jamzad, Amoon; Gilany, Mahdi; To, Minh Nguyen Nhat; Wodlinger, Brian; Abolmaesumi, Purang; Mousavi, Parvin
Benchmarking image transformers for prostate cancer detection from ultrasound data Proceedings Article
In: pp. 245-251, SPIE, 2024.
@inproceedings{harmanani2024b,
title = {Benchmarking image transformers for prostate cancer detection from ultrasound data},
author = {Mohamed Harmanani and Paul FR Wilson and Fahimeh Fooladgar and Amoon Jamzad and Mahdi Gilany and Minh Nguyen Nhat To and Brian Wodlinger and Purang Abolmaesumi and Parvin Mousavi},
year = {2024},
date = {2024-01-01},
volume = {12928},
pages = {245-251},
publisher = {SPIE},
abstract = {PURPOSE
Deep learning methods for classifying prostate cancer (PCa) in ultrasound images typically employ convolutional neural networks (CNN) to detect cancer in small regions of interest (ROI) along a needle trace region. However, this approach suffers from weak labelling, since the ground-truth histopathology labels do not describe the properties of individual ROIs. Recently, multi-scale approaches have sought to mitigate this issue by combining the context awareness of transformers with a convolutional feature extractor to detect cancer from multiple ROIs using multiple-instance learning (MIL). In this work, we present a detailed study of several image transformer architectures for both ROI-scale and multi-scale classification, and a comparison of the performance of CNNs and transformers for ultrasound-based prostate cancer classification. We also design a novel multi-objective learning strategy that …},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
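As a rough illustration of the multi-scale setup the abstract describes (a convolutional feature extractor per ROI whose tokens are contextualized by a transformer and pooled into one core-level prediction under multiple-instance learning), here is a hypothetical PyTorch sketch. The layer sizes, mean pooling, and toy input shapes are assumptions for illustration only; they are not the benchmarked architectures.

# Hypothetical sketch of a multi-scale MIL pipeline: a small CNN embeds each ROI,
# a transformer encoder contextualizes the ROI tokens, and a pooled token yields
# one bag-level (core-level) prediction.
import torch
import torch.nn as nn

class RoiCnn(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))

    def forward(self, x):                      # x: (B*N, 1, H, W)
        return self.net(x)

class MilTransformer(nn.Module):
    def __init__(self, dim=128, heads=4, layers=2):
        super().__init__()
        self.embed = RoiCnn(dim)
        enc = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, layers)
        self.head = nn.Linear(dim, 1)

    def forward(self, rois):                   # rois: (B, N, 1, H, W)
        b, n = rois.shape[:2]
        tokens = self.embed(rois.flatten(0, 1)).view(b, n, -1)
        ctx = self.encoder(tokens)             # contextualize ROIs within a core
        return self.head(ctx.mean(dim=1))      # bag-level logit

logits = MilTransformer()(torch.randn(2, 8, 1, 32, 32))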
To, Minh Nguyen Nhat; Fooladgar, Fahimeh; Wilson, Paul; Harmanani, Mohamed; Gilany, Mahdi; Sojoudi, Samira; Jamzad, Amoon; Chang, Silvia; Black, Peter; Mousavi, Parvin; Abolmaesumi, Purang
LensePro: Label noise-tolerant prototype-based network for improving cancer detection in prostate ultrasound with limited annotations Journal Article
In: International Journal of Computer Assisted Radiology and Surgery, pp. 1-8, 2024.
@article{to2024b,
title = {LensePro: Label noise-tolerant prototype-based network for improving cancer detection in prostate ultrasound with limited annotations},
author = {Minh Nguyen Nhat To and Fahimeh Fooladgar and Paul Wilson and Mohamed Harmanani and Mahdi Gilany and Samira Sojoudi and Amoon Jamzad and Silvia Chang and Peter Black and Parvin Mousavi and Purang Abolmaesumi},
year = {2024},
date = {2024-01-01},
journal = {International Journal of Computer Assisted Radiology and Surgery},
pages = {1-8},
publisher = {Springer International Publishing},
abstract = {Purpose
The standard of care for prostate cancer (PCa) diagnosis is the histopathological analysis of tissue samples obtained via transrectal ultrasound (TRUS) guided biopsy. Models built with deep neural networks (DNNs) hold the potential for direct PCa detection from TRUS, which allows targeted biopsy and subsequently enhances outcomes. Yet, there are ongoing challenges with training robust models, stemming from issues such as noisy labels, out-of-distribution (OOD) data, and limited labeled data.
Methods
This study presents LensePro, a unified method that not only excels in label efficiency but also demonstrates robustness against label noise and OOD data. LensePro comprises two key stages: first, self-supervised learning to extract high-quality feature representations from abundant unlabeled TRUS data and, second, label noise-tolerant prototype-based learning to classify the extracted features …},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
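LensePro's second stage is described as label noise-tolerant prototype-based learning over self-supervised features. The sketch below shows a generic nearest-prototype classifier of that flavor, assuming per-class mean embeddings and cosine similarity; it is an illustration, not the paper's exact formulation.

# Hypothetical sketch of prototype-based classification: each class is summarized
# by a prototype (mean embedding) and new samples go to the nearest prototype.
import numpy as np

def fit_prototypes(features, labels):
    classes = np.unique(labels)
    protos = np.stack([features[labels == c].mean(axis=0) for c in classes])
    return classes, protos / np.linalg.norm(protos, axis=1, keepdims=True)

def predict(features, classes, protos):
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = feats @ protos.T                   # cosine similarity to each prototype
    return classes[sims.argmax(axis=1)]

X = np.random.randn(200, 64); y = np.random.randint(0, 2, 200)   # placeholder features/labels
classes, protos = fit_prototypes(X, y)
preds = predict(np.random.randn(10, 64), classes, protos)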
Connolly, Laura; Fooladgar, Fahimeh; Jamzad, Amoon; Kaufmann, Martin; Syeda, Ayesha; Ren, Kevin; Abolmaesumi, Purang; Rudan, John F; McKay, Doug; Fichtinger, Gabor; Mousavi, Parvin
ImSpect: Image-driven self-supervised learning for surgical margin evaluation with mass spectrometry Journal Article
In: International Journal of Computer Assisted Radiology and Surgery, pp. 1-8, 2024.
@article{connolly2024b,
title = {ImSpect: Image-driven self-supervised learning for surgical margin evaluation with mass spectrometry},
author = {Laura Connolly and Fahimeh Fooladgar and Amoon Jamzad and Martin Kaufmann and Ayesha Syeda and Kevin Ren and Purang Abolmaesumi and John F Rudan and Doug McKay and Gabor Fichtinger and Parvin Mousavi},
year = {2024},
date = {2024-01-01},
journal = {International Journal of Computer Assisted Radiology and Surgery},
pages = {1-8},
publisher = {Springer International Publishing},
abstract = {Purpose
Real-time assessment of surgical margins is critical for favorable outcomes in cancer patients. The iKnife is a mass spectrometry device that has demonstrated potential for margin detection in cancer surgery. Previous studies have shown that using deep learning on iKnife data can facilitate real-time tissue characterization. However, none of the existing literature on the iKnife facilitate the use of publicly available, state-of-the-art pretrained networks or datasets that have been used in computer vision and other domains.
Methods
In a new framework we call ImSpect, we convert 1D iKnife data, captured during basal cell carcinoma (BCC) surgery, into 2D images in order to capitalize on state-of-the-art image classification networks. We also use self-supervision to leverage large amounts of unlabeled, intraoperative data to accommodate the data requirements of these networks.
Results
Through extensive ablation …},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
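ImSpect converts 1D iKnife spectra into 2D images so that standard pretrained image classifiers can be reused. The exact mapping is not specified in the excerpt, so the sketch below only illustrates the general idea with an assumed pad-and-reshape scheme and an assumed 64x64 output size.

# Hypothetical sketch of a 1D-to-2D conversion: pad a spectrum to a square length
# and reshape it into an image for use with image classification networks.
import numpy as np

def spectrum_to_image(spectrum: np.ndarray, side: int = 64) -> np.ndarray:
    x = np.zeros(side * side, dtype=np.float32)
    n = min(spectrum.size, x.size)
    x[:n] = spectrum[:n]
    x = (x - x.min()) / (x.max() - x.min() + 1e-8)   # scale intensities to [0, 1]
    return x.reshape(side, side)

img = spectrum_to_image(np.random.rand(3000))        # -> (64, 64) pseudo-image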
Yeung, Chris; Ungi, Tamas; Hu, Zoe; Jamzad, Amoon; Kaufmann, Martin; Walker, Ross; Merchant, Shaila; Engel, Cecil Jay; Jabs, Doris; Rudan, John; Mousavi, Parvin; Fichtinger, Gabor
From quantitative metrics to clinical success: assessing the utility of deep learning for tumor segmentation in breast surgery Journal Article
In: International Journal of Computer Assisted Radiology and Surgery, pp. 1-9, 2024.
@article{yeung2024b,
title = {From quantitative metrics to clinical success: assessing the utility of deep learning for tumor segmentation in breast surgery},
author = {Chris Yeung and Tamas Ungi and Zoe Hu and Amoon Jamzad and Martin Kaufmann and Ross Walker and Shaila Merchant and Cecil Jay Engel and Doris Jabs and John Rudan and Parvin Mousavi and Gabor Fichtinger},
year = {2024},
date = {2024-01-01},
journal = {International Journal of Computer Assisted Radiology and Surgery},
pages = {1-9},
publisher = {Springer International Publishing},
abstract = {Purpose
Preventing positive margins is essential for ensuring favorable patient outcomes following breast-conserving surgery (BCS). Deep learning has the potential to enable this by automatically contouring the tumor and guiding resection in real time. However, evaluation of such models with respect to pathology outcomes is necessary for their successful translation into clinical practice.
Methods
Sixteen deep learning models based on established architectures in the literature are trained on 7318 ultrasound images from 33 patients. Models are ranked by an expert based on their contours generated from images in our test set. Generated contours from each model are also analyzed using recorded cautery trajectories of five navigated BCS cases to predict margin status. Predicted margins are compared with pathology reports.
Results
The best-performing model using both quantitative evaluation and our visual …},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
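The evaluation above couples predicted tumor contours with recorded cautery trajectories to predict margin status. A minimal, hypothetical version of that check is sketched below: a margin is flagged positive if any cautery point falls inside the predicted tumor mask. The 2D grid, indexing convention, and decision rule are simplifying assumptions, not the study's navigated pipeline.

# Hypothetical sketch: flag a predicted positive margin if any cautery point
# lands inside the predicted tumor region.
import numpy as np

def predict_margin_status(tumor_mask: np.ndarray, cautery_points: np.ndarray) -> bool:
    """tumor_mask: (H, W) boolean prediction; cautery_points: (N, 2) row/col indices."""
    pts = cautery_points.astype(int)
    rows, cols = pts[:, 0], pts[:, 1]
    inside = (rows >= 0) & (rows < tumor_mask.shape[0]) & \
             (cols >= 0) & (cols < tumor_mask.shape[1])
    return bool(tumor_mask[rows[inside], cols[inside]].any())

mask = np.zeros((256, 256), dtype=bool); mask[100:150, 100:150] = True
pts = np.array([[120, 130], [10, 10]])
print(predict_margin_status(mask, pts))      # True -> predicted positive margin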
Kaufmann, Martin; Jamzad, Amoon; Ungi, Tamas; Rodgers, Jessica R; Koster, Teaghan; Yeung, Chris; Ehrlich, Josh; Santilli, Alice; Asselin, Mark; Janssen, Natasja; McMullen, Julie; Solberg, Kathryn; Cheesman, Joanna; Di Carlo, Alessia; Ren, Kevin Yi Mi; Varma, Sonal; Merchant, Shaila; Engel, Cecil Jay; Walker, G Ross; Gallo, Andrea; Jabs, Doris; Mousavi, Parvin; Fichtinger, Gabor; Rudan, John F
Abstract PO2-23-07: Three-dimensional navigated mass spectrometry for intraoperative margin assessment during breast cancer surgery Journal Article
In: Cancer Research, vol. 84, no. 9_Supplement, pp. PO2-23-07-PO2-23-07, 2024.
@article{kaufmann2024c,
title = {Abstract PO2-23-07: Three-dimensional navigated mass spectrometry for intraoperative margin assessment during breast cancer surgery},
author = {Martin Kaufmann and Amoon Jamzad and Tamas Ungi and Jessica R Rodgers and Teaghan Koster and Chris Yeung and Josh Ehrlich and Alice Santilli and Mark Asselin and Natasja Janssen and Julie McMullen and Kathryn Solberg and Joanna Cheesman and Alessia Di Carlo and Kevin Yi Mi Ren and Sonal Varma and Shaila Merchant and Cecil Jay Engel and G Ross Walker and Andrea Gallo and Doris Jabs and Parvin Mousavi and Gabor Fichtinger and John F Rudan},
year = {2024},
date = {2024-01-01},
journal = {Cancer Research},
volume = {84},
number = {9_Supplement},
pages = {PO2-23-07-PO2-23-07},
publisher = {The American Association for Cancer Research},
abstract = {Positive resection margins occur in approximately 25% of breast cancer (BCa) surgeries, requiring re-operation. Margin status is not routinely available during surgery; thus, technologies that identify residual cancer on the specimen or cavity are needed to provide intraoperative decision support that may reduce positive margin rates. Rapid evaporative ionization mass spectrometry (REIMS) is an emerging technique that chemically profiles the plume generated by tissue cauterization to classify the ablated tissue as either cancerous or non-cancerous, on the basis of detected lipid species. Although REIMS can distinguish cancer and non-cancerous breast tissue by the signals generated, it does not indicate the location of the classified tissue in real-time. Our objective was to combine REIMS with spatio-temporal navigation (navigated REIMS), and to compare performance of navigated REIMS with conventional …},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Kaufmann, Martin; Jamzad, Amoon; Ungi, Tamas; Rodgers, Jessica; Koster, Teaghan; Yeung, Chris; Janssen, Natasja; McMullen, Julie; Solberg, Kathryn; Cheesman, Joanna; Ren, Kevin Yi Mi; Varma, Sonal; Merchant, Shaila; Engel, Cecil Jay; Walker, G Ross; Gallo, Andrea; Jabs, Doris; Mousavi, Parvin; Fichtinger, Gabor; Rudan, John
Three-dimensional navigated mass spectrometry for intraoperative margin assessment during breast cancer surgery Journal Article
In: vol. 31, no. 1, pp. S10-S10, 2024.
@article{kaufmann2024b,
title = {Three-dimensional navigated mass spectrometry for intraoperative margin assessment during breast cancer surgery},
author = {Martin Kaufmann and Amoon Jamzad and Tamas Ungi and Jessica Rodgers and Teaghan Koster and Chris Yeung and Natasja Janssen and Julie McMullen and Kathryn Solberg and Joanna Cheesman and Kevin Yi Mi Ren and Sonal Varma and Shaila Merchant and Cecil Jay Engel and G Ross Walker and Andrea Gallo and Doris Jabs and Parvin Mousavi and Gabor Fichtinger and John Rudan},
year = {2024},
date = {2024-01-01},
volume = {31},
number = {1},
pages = {S10-S10},
publisher = {SPRINGER},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Kim, Andrew; Yeung, Chris; Szabo, Robert; Sunderland, Kyle; Hisey, Rebecca; Morton, David; Kikinis, Ron; Diao, Babacar; Mousavi, Parvin; Ungi, Tamas; Fichtinger, Gabor
Percutaneous nephrostomy needle guidance using real-time 3D anatomical visualization with live ultrasound segmentation Proceedings Article
In: pp. 163-168, SPIE, 2024.
@inproceedings{kim2024,
title = {Percutaneous nephrostomy needle guidance using real-time 3D anatomical visualization with live ultrasound segmentation},
author = {Andrew Kim and Chris Yeung and Robert Szabo and Kyle Sunderland and Rebecca Hisey and David Morton and Ron Kikinis and Babacar Diao and Parvin Mousavi and Tamas Ungi and Gabor Fichtinger},
year = {2024},
date = {2024-01-01},
volume = {12928},
pages = {163-168},
publisher = {SPIE},
abstract = {PURPOSE
Percutaneous nephrostomy is a commonly performed procedure to drain urine to provide relief in patients with hydronephrosis. Conventional percutaneous nephrostomy needle guidance methods can be difficult, expensive, or not portable. We propose an open-source real-time 3D anatomical visualization aid for needle guidance with live ultrasound segmentation and 3D volume reconstruction using free, open-source software.
METHODS
Basic hydronephrotic kidney phantoms were created, and recordings of these models were manually segmented and used to train a deep learning model that makes live segmentation predictions to perform live 3D volume reconstruction of the fluid-filled cavity. Participants performed 5 needle insertions with the visualization aid and 5 insertions with ultrasound needle guidance on a kidney phantom in randomized order, and these were recorded. Recordings of the …},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Akbarifar, Faranak; Dukelow, Sean P; Jin, Albert; Mousavi, Parvin; Scott, Stephen H
Optimizing Stroke Detection Using Evidential Networks and Uncertainty-Based Refinement Journal Article
In: 2024.
@article{akbarifar2024,
title = {Optimizing Stroke Detection Using Evidential Networks and Uncertainty-Based Refinement},
author = {Faranak Akbarifar and Sean P Dukelow and Albert Jin and Parvin Mousavi and Stephen H Scott},
year = {2024},
date = {2024-01-01},
abstract = {Background:
Technologies such as interactive robotics and motion capture systems permit the development of kinematic-based approaches to assess motor impairments in stroke survivors. Here we utilise the Kinarm Exoskeleton robotic system and deep learning techniques to explore differences in motor performance between healthy controls, individuals with stroke and transient ischemic attacks (TIA).
Methods:
Building upon previous research that employed deep learning methods to distinguish between minimally impaired stroke patients and healthy controls using Kinarm data, this study introduces a novel dimension by estimating the confidence or uncertainty of the model's predictions. An evidential network is employed to measure this confidence, which subsequently aids in the refinement of training and testing datasets.
Results:
The application of deep learning techniques in this context proves to be promising …},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
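The study uses an evidential network to attach a confidence to each prediction and then refines the training and testing datasets using that confidence. Below is a minimal sketch in the style of standard evidential deep learning (Dirichlet evidence from raw outputs, uncertainty = K / total evidence); the activation choice and any refinement threshold are assumptions rather than the paper's exact design.

# Hypothetical sketch of evidential uncertainty: network outputs become Dirichlet
# evidence, and per-sample uncertainty is K divided by the total evidence mass.
import torch
import torch.nn.functional as F

def evidential_uncertainty(logits: torch.Tensor):
    """logits: (B, K) raw outputs -> expected class probabilities and uncertainty."""
    evidence = F.softplus(logits)             # non-negative evidence per class
    alpha = evidence + 1.0                    # Dirichlet concentration parameters
    strength = alpha.sum(dim=1, keepdim=True)
    probs = alpha / strength                  # expected class probabilities
    uncertainty = logits.shape[1] / strength  # high when total evidence is low
    return probs, uncertainty.squeeze(1)

probs, u = evidential_uncertainty(torch.randn(4, 2))
# Samples with large u could be excluded when refining the training/test sets.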
Wilson, Paul FR; Harmanani, Mohamed; To, Minh Nguyen Nhat; Gilany, Mahdi; Jamzad, Amoon; Fooladgar, Fahimeh; Wodlinger, Brian; Abolmaesumi, Purang; Mousavi, Parvin
Toward confident prostate cancer detection using ultrasound: a multi-center study Journal Article
In: International Journal of Computer Assisted Radiology and Surgery, pp. 1-9, 2024.
@article{wilson2024,
title = {Toward confident prostate cancer detection using ultrasound: a multi-center study},
author = {Paul FR Wilson and Mohamed Harmanani and Minh Nguyen Nhat To and Mahdi Gilany and Amoon Jamzad and Fahimeh Fooladgar and Brian Wodlinger and Purang Abolmaesumi and Parvin Mousavi},
year = {2024},
date = {2024-01-01},
journal = {International Journal of Computer Assisted Radiology and Surgery},
pages = {1-9},
publisher = {Springer International Publishing},
abstract = {Purpose
Deep learning-based analysis of micro-ultrasound images to detect cancerous lesions is a promising tool for improving prostate cancer (PCa) diagnosis. An ideal model should confidently identify cancer while responding with appropriate uncertainty when presented with out-of-distribution inputs that arise during deployment due to imaging artifacts and the biological heterogeneity of patients and prostatic tissue.
Methods
Using micro-ultrasound data from 693 patients across 5 clinical centers who underwent micro-ultrasound guided prostate biopsy, we train and evaluate convolutional neural network models for PCa detection. To improve robustness to out-of-distribution inputs, we employ and comprehensively benchmark several state-of-the-art uncertainty estimation methods.
Results
PCa detection models achieve performance scores up to average AUROC with a 10-fold cross validation setup. Models with …},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
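The paper benchmarks several uncertainty estimation methods for flagging out-of-distribution micro-ultrasound inputs. As one common example (not necessarily among the exact methods benchmarked), the sketch below computes the predictive entropy of an ensemble's averaged softmax outputs; the ensemble size and data are placeholders.

# Hypothetical sketch of predictive entropy over ensemble members' softmax outputs.
import torch

def predictive_entropy(member_probs: torch.Tensor) -> torch.Tensor:
    """member_probs: (M, B, K) softmax outputs from M ensemble members."""
    mean_p = member_probs.mean(dim=0)                         # (B, K)
    return -(mean_p * mean_p.clamp_min(1e-12).log()).sum(dim=1)

probs = torch.softmax(torch.randn(5, 8, 2), dim=-1)           # 5 members, 8 samples
entropy = predictive_entropy(probs)                            # high entropy -> treat input as uncertain/OOD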
Farahmand, Mohammad
End-to-End Object Tracking with Spatio-Temporal Transformers Masters Thesis
Iran University of Science and Technology, 2023.
@mastersthesis{farahmand2023end,
title = {End-to-End Object Tracking with Spatio-Temporal Transformers},
author = {Mohammad Farahmand},
year = {2023},
date = {2023-06-01},
school = {Iran University of Science and Technology},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
Bayat, Sharareh; Jamzad, Amoon; Zobeiry, Navid; Poursartip, Anoush; Mousavi, Parvin; Abolmaesumi, Purang
Temporal enhanced Ultrasound: A new method for detection of porosity defects in composites Journal Article
In: Composites Part A: Applied Science and Manufacturing, vol. 164, pp. 107259, 2023.
@article{bayat2023,
title = {Temporal enhanced Ultrasound: A new method for detection of porosity defects in composites},
author = {Sharareh Bayat and Amoon Jamzad and Navid Zobeiry and Anoush Poursartip and Parvin Mousavi and Purang Abolmaesumi},
year = {2023},
date = {2023-01-01},
journal = {Composites Part A: Applied Science and Manufacturing},
volume = {164},
pages = {107259},
publisher = {Elsevier},
abstract = {Non-Destructive Evaluation (NDE) methods are commonly employed for identifying porosity, which is one of the most common manufacturing defects observed in composite structures. Among current widely used approaches are conventional ultrasonic methods such as pulse-echo analysis based on loss of signal amplitude. Application of these conventional ultrasonic methods, however, can be challenging in cases where the loss of signal is negligible, such as with porosity. In this paper, we propose Temporal-enhanced Ultrasound (TeUS) as a novel ultrasound-based imaging technique for NDE of composites. TeUS represents the analysis of a sequence of ultrasound images obtained from composites by varying an image acquisition parameter, such as the focal point, over the sequence. We present details on the analytical formulation of TeUS, followed by extensive simulation and experimental results to …},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Nezamabadi, Kasra
XplainScar: Explainable Artificial Intelligence to Identify and Localize Left Ventricular Scar in Hypertrophic Cardiomyopathy from 12-lead Electrocardiogram Journal Article
In: 2023.
@article{nezamabadi2023,
title = {XplainScar: Explainable Artificial Intelligence to Identify and Localize Left Ventricular Scar in Hypertrophic Cardiomyopathy from 12-lead Electrocardiogram},
author = {Kasra Nezamabadi},
year = {2023},
date = {2023-01-01},
abstract = {Myocardial scar in the left ventricle contributes significantly to sudden cardiac death in hypertrophic cardiomyopathy (HCM). Although late gadolinium-contrast enhanced magnetic resonance imaging is commonly used for detecting HCM scar, its high cost, limited availability, and susceptibility to artifacts from implanted devices make it unsuitable for ongoing scar progression and risk stratification monitoring. The 12-lead electrocardiogram (ECG) is a widely accessible alternative, but its utilization in identifying LV scar has been limited by the complexity and heterogeneity of HCM, even for human experts. To address this challenge, we propose XplainScar, an innovative and explainable machine learning framework that identifies LV scar from 12-lead ECGs. XplainScar employs three key strategies: (1) extracting simple yet comprehensive ECG features to enable explainable predictions of scar in different LV regions …},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Greenspan, Hayit; Taylor, Russell; Madabhushi, Anant; Deike-Hofmann, Katerina; Syeda-Mahmood, Tanveer; Radbruch, Alexander; Pinetz, Thomas; Mousavi, Parvin; Duncan, James; Kobler, Erich; Salcudean, Septimiu; Effland, Alexander; Haase, Robert
Faithful Synthesis of Low-Dose Contrast-Enhanced Brain MRI Scans Using Noise-Preserving Conditional GANs Journal Article
In: no. DZNE-2023-01039, 2023.
@article{greenspan2023a,
title = {Faithful Synthesis of Low-Dose Contrast-Enhanced Brain MRI Scans Using Noise-Preserving Conditional GANs},
author = {Hayit Greenspan and Russell Taylor and Anant Madabhushi and Katerina Deike-Hofmann and Tanveer Syeda-Mahmood and Alexander Radbruch and Thomas Pinetz and Parvin Mousavi and James Duncan and Erich Kobler and Septimiu Salcudean and Alexander Effland and Robert Haase},
year = {2023},
date = {2023-01-01},
number = {DZNE-2023-01039},
publisher = {Clinical Neuroimaging},
abstract = {Today, Gadolinium-based contrast agents (GBCA) are indispensable in Magnetic Resonance Imaging (MRI) for diagnosing various diseases. However, GBCAs are expensive and may accumulate in patients with potential side effects, thus dose-reduction is recommended. Still, it is unclear to what extent the GBCA dose can be reduced while preserving the diagnostic value, especially in pathological regions. To address this issue, we collected brain MRI scans at numerous non-standard GBCA dosages and developed a conditional GAN model for synthesizing corresponding images at fractional dose levels. Along with the adversarial loss, we advocate a novel content loss function based on the Wasserstein distance of locally paired patch statistics for the faithful preservation of noise. Our numerical experiments show that conditional GANs are suitable for generating images at different GBCA dose levels and can be …},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
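The content loss above is described as a Wasserstein distance over locally paired patch statistics. The sketch below is a loose, hypothetical rendering of that idea: for each pair of corresponding patches, the empirical 1D Wasserstein-1 distance between pixel intensities reduces to the mean absolute difference of sorted values. Patch size, stride, and the final reduction are assumptions; this is not the paper's exact loss.

# Hypothetical sketch of a patch-wise 1D Wasserstein content term between a
# synthesized image and its reference.
import torch
import torch.nn.functional as F

def patch_wasserstein_loss(fake: torch.Tensor, real: torch.Tensor, patch: int = 8) -> torch.Tensor:
    """fake, real: (B, 1, H, W) images; returns a scalar loss."""
    unfold = lambda x: F.unfold(x, kernel_size=patch, stride=patch)   # (B, patch*patch, L)
    f, r = unfold(fake), unfold(real)
    f_sorted, _ = f.sort(dim=1)               # empirical quantiles per patch
    r_sorted, _ = r.sort(dim=1)
    return (f_sorted - r_sorted).abs().mean()

loss = patch_wasserstein_loss(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64))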
Morton, David; Connolly, Laura; Groves, Leah; Sunderland, Kyle; Ungi, Tamas; Jamzad, Amoon; Kaufmann, Martin; Ren, Kevin; Rudan, John F; Fichtinger, Gabor; Mousavi, Parvin
Development of a Research Testbed for Intraoperative Optical Spectroscopy Tumor Margin Assessment Journal Article
In: Acta Polytechnica Hungarica, vol. 20, no. 8, 2023.
@article{morton2023b,
title = {Development of a Research Testbed for Intraoperative Optical Spectroscopy Tumor Margin Assessment},
author = {David Morton and Laura Connolly and Leah Groves and Kyle Sunderland and Tamas Ungi and Amoon Jamzad and Martin Kaufmann and Kevin Ren and John F Rudan and Gabor Fichtinger and Parvin Mousavi},
year = {2023},
date = {2023-01-01},
journal = {Acta Polytechnica Hungarica},
volume = {20},
number = {8},
abstract = {Surgical intervention is a primary treatment option for early-stage cancers. However, the difficulty of intraoperative tumor margin assessment contributes to a high rate of incomplete tumor resection, necessitating revision surgery. This work aims to develop and evaluate a prototype of a tracked tissue sensing research testbed for navigated tumor margin assessment. Our testbed employs diffuse reflection broadband optical spectroscopy for tissue characterization and electromagnetic tracking for navigation. Spectroscopy data and a trained classifier are used to predict tissue types. Navigation allows these predictions to be superimposed on the scanned tissue, creating a spatial classification map. We evaluate the real-time operation of our testbed using an ex vivo tissue phantom. Furthermore, we use the testbed to interrogate ex vivo human kidney tissue and establish a modeling pipeline to classify cancerous and non-neoplastic tissue. The testbed recorded latencies of 125±11 ms and 167±26 ms for navigation and classification respectively. The testbed achieved a Dice similarity coefficient of 93%, and an accuracy of 94% for the spatial classification. These results demonstrated the capabilities of our testbed for the real-time interrogation of an arbitrary tissue volume. Our modeling pipeline attained a balanced accuracy of 91%±4% on the classification of cancerous and non-neoplastic human kidney tissue. Our tracked tissue sensing research testbed prototype shows potential for facilitating the development and evaluation of intraoperative tumor margin assessment technologies across tissue types. The capacity to assess tumor margin status …},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
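The testbed reports a Dice similarity coefficient of 93% for its spatial classification map. For reference, a minimal implementation of the Dice coefficient for two binary masks is sketched below; the example masks are placeholders.

# Minimal Dice similarity coefficient between a predicted and a reference mask.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """pred, ref: boolean masks of the same shape."""
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom else 1.0

a = np.zeros((100, 100), bool); a[20:60, 20:60] = True
b = np.zeros((100, 100), bool); b[25:65, 25:65] = True
print(round(dice(a, b), 3))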
Chen, Brian; Maslove, David M; Curran, Jeffrey D; Hamilton, Alexander; Laird, Philip R; Mousavi, Parvin; Sibley, Stephanie
A deep learning model for the classification of atrial fibrillation in critically ill patients Journal Article
In: Intensive care medicine experimental, vol. 11, no. 1, pp. 2, 2023.
@article{chen2023,
title = {A deep learning model for the classification of atrial fibrillation in critically ill patients},
author = {Brian Chen and David M Maslove and Jeffrey D Curran and Alexander Hamilton and Philip R Laird and Parvin Mousavi and Stephanie Sibley},
year = {2023},
date = {2023-01-01},
journal = {Intensive care medicine experimental},
volume = {11},
number = {1},
pages = {2},
publisher = {Springer International Publishing},
abstract = {Background
Atrial fibrillation (AF) is the most common cardiac arrhythmia in the intensive care unit and is associated with increased morbidity and mortality. New-onset atrial fibrillation (NOAF) is often initially paroxysmal and fleeting, making it difficult to diagnose, and therefore difficult to understand the true burden of disease. Automated algorithms to detect AF in the ICU have been advocated as a means to better quantify its true burden.
Results
We used a publicly available 12-lead ECG dataset to train a deep learning model for the classification of AF. We then conducted an external independent validation of the model using continuous telemetry data from 984 critically ill patients collected in our institutional database. Performance metrics were stratified by signal quality, classified as either clean or noisy. The deep learning model was able to classify AF with an overall sensitivity of 84%, specificity of 89%, positive …},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
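The reported 84% sensitivity and 89% specificity come from comparing binary AF classifications against reference annotations. A minimal sketch of those two metrics follows; the toy labels are placeholders, and the stratification by signal quality is omitted.

# Sensitivity and specificity from binary predictions and reference labels.
import numpy as np

def sensitivity_specificity(y_true: np.ndarray, y_pred: np.ndarray):
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity(np.array([1, 0, 1, 0]), np.array([1, 0, 0, 0]))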
Kaufmann, Martin; Iaboni, Natasha; Jamzad, Amoon; Hurlbut, David; Ren, Kevin Yi Mi; Rudan, John F; Mousavi, Parvin; Fichtinger, Gabor; Varma, Sonal; Caycedo-Marulanda, Antonio; Nicol, Christopher JB
Metabolically active zones involving fatty acid elongation delineated by DESI-MSI correlate with pathological and prognostic features of colorectal cancer Journal Article
In: Metabolites, vol. 13, no. 4, pp. 508, 2023.
@article{kaufmann2023,
title = {Metabolically active zones involving fatty acid elongation delineated by DESI-MSI correlate with pathological and prognostic features of colorectal cancer},
author = {Martin Kaufmann and Natasha Iaboni and Amoon Jamzad and David Hurlbut and Kevin Yi Mi Ren and John F Rudan and Parvin Mousavi and Gabor Fichtinger and Sonal Varma and Antonio Caycedo-Marulanda and Christopher JB Nicol},
year = {2023},
date = {2023-01-01},
journal = {Metabolites},
volume = {13},
number = {4},
pages = {508},
publisher = {MDPI},
abstract = {Colorectal cancer (CRC) is the second leading cause of cancer deaths. Despite recent advances, five-year survival rates remain largely unchanged. Desorption electrospray ionization mass spectrometry imaging (DESI) is an emerging nondestructive metabolomics-based method that retains the spatial orientation of small-molecule profiles on tissue sections, which may be validated by ‘gold standard’ histopathology. In this study, CRC samples were analyzed by DESI from 10 patients undergoing surgery at Kingston Health Sciences Center. The spatial correlation of the mass spectral profiles was compared with histopathological annotations and prognostic biomarkers. Fresh frozen sections of representative colorectal cross sections and simulated endoscopic biopsy samples containing tumour and non-neoplastic mucosa for each patient were generated and analyzed by DESI in a blinded fashion. Sections were then hematoxylin and eosin (H and E) stained, annotated by two independent pathologists, and analyzed. Using PCA/LDA-based models, DESI profiles of the cross sections and biopsies achieved 97% and 75% accuracies in identifying the presence of adenocarcinoma, using leave-one-patient-out cross validation. Among the m/z ratios exhibiting the greatest differential abundance in adenocarcinoma were a series of eight long-chain or very-long-chain fatty acids, consistent with molecular and targeted metabolomics indicators of de novo lipogenesis in CRC tissue. Sample stratification based on the presence of lymphovascular invasion (LVI), a poor CRC prognostic indicator, revealed the abundance of oxidized phospholipids, suggestive …},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
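The classification analysis above is explicitly PCA/LDA with leave-one-patient-out cross-validation. A minimal sklearn sketch of that evaluation scheme follows; the random spectra, number of components, and patient grouping are placeholders, not the study's data or settings.

# Hypothetical sketch: PCA + LDA pipeline scored with leave-one-patient-out CV.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

X = np.random.rand(100, 500)                 # spectra: samples x m/z bins (placeholder)
y = np.random.randint(0, 2, 100)             # 1 = adenocarcinoma, 0 = normal mucosa
patients = np.repeat(np.arange(10), 10)      # patient ID per sample

model = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, groups=patients, cv=LeaveOneGroupOut())
print(scores.mean())                          # mean per-patient accuracy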
Kitner, Nicole; Rodgers, Jessica R; Ungi, Tamas; Korzeniowski, Martin; Olding, Timothy; Mousavi, Parvin; Fichtinger, Gabor
Multi-catheter modelling in reconstructed 3D transrectal ultrasound images from prostate brachytherapy Proceedings Article
In: pp. 126-135, SPIE, 2023.
@inproceedings{kitner2023,
title = {Multi-catheter modelling in reconstructed 3D transrectal ultrasound images from prostate brachytherapy},
author = {Nicole Kitner and Jessica R Rodgers and Tamas Ungi and Martin Korzeniowski and Timothy Olding and Parvin Mousavi and Gabor Fichtinger},
year = {2023},
date = {2023-01-01},
volume = {12466},
pages = {126-135},
publisher = {SPIE},
abstract = {High-dose-rate brachytherapy is an accepted standard-of-care treatment for prostate cancer. In this procedure, catheters are inserted using three-dimensional (3D) transrectal ultrasound image-guidance. Their positions are manually segmented for treatment planning and delivery. The transverse ultrasound sweep, which is subject to tip and depth error for catheter localization, is a commonly used ultrasound imaging option available for image acquisition. We propose a two-step pipeline that uses a deep-learning network and curve fitting to automatically localize and model catheters in transversely reconstructed 3D ultrasound images. In the first step, a 3D U-Net was trained to automatically segment all catheters in a 3D ultrasound image. Following this step, curve fitting was implemented to detect the shapes of individual catheters using polynomial fitting. Of the 343 catheters (from 20 patients) in the testing data, the …},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
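The second step of the pipeline above fits polynomial curves to the voxels of each segmented catheter. Below is a small sketch of that step under the assumption that each catheter is parameterized along the insertion (depth) axis with a low-order polynomial; the order, axis convention, and synthetic points are illustrative.

# Hypothetical sketch: fit a smooth centerline to one segmented catheter's voxels.
import numpy as np

def fit_catheter(points: np.ndarray, order: int = 2, samples: int = 50) -> np.ndarray:
    """points: (N, 3) voxel coordinates (x, y, z) of one segmented catheter."""
    x, y, z = points.T
    px = np.polyfit(z, x, order)              # x as a polynomial of depth z
    py = np.polyfit(z, y, order)
    zs = np.linspace(z.min(), z.max(), samples)
    return np.column_stack([np.polyval(px, zs), np.polyval(py, zs), zs])

pts = np.column_stack([np.linspace(0, 5, 30) + 0.1 * np.random.randn(30),
                       np.linspace(0, 3, 30), np.linspace(0, 60, 30)])
centerline = fit_catheter(pts)                # (50, 3) smoothed catheter model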
Srikanthan, Dilakshan; Kaufmann, Martin; Jamzad, Amoon; Syeda, Ayesha; Santilli, Alice; Sedghi, Alireza; Fichtinger, Gabor; Purzner, Jamie; Rudan, John; Purzner, Teresa; Mousavi, Parvin
Attention-based multi-instance learning for improved glioblastoma detection using mass spectrometry Proceedings Article
In: pp. 248-253, SPIE, 2023.
@inproceedings{srikanthan2023,
title = {Attention-based multi-instance learning for improved glioblastoma detection using mass spectrometry},
author = {Dilakshan Srikanthan and Martin Kaufmann and Amoon Jamzad and Ayesha Syeda and Alice Santilli and Alireza Sedghi and Gabor Fichtinger and Jamie Purzner and John Rudan and Teresa Purzner and Parvin Mousavi},
year = {2023},
date = {2023-01-01},
volume = {12466},
pages = {248-253},
publisher = {SPIE},
abstract = {Glioblastoma Multiforme (GBM) is the most common and most lethal primary brain tumor in adults with a five-year survival rate of 5%. The current standard of care and survival rate have remained largely unchanged due to the degree of difficulty in surgically removing these tumors, which plays a crucial role in survival, as better surgical resection leads to longer survival times. Thus, novel technologies need to be identified to improve resection accuracy. Our study features a curated database of GBM and normal brain tissue specimens, which we used to train and validate a multi-instance learning model for GBM detection via rapid evaporative ionization mass spectrometry. This method enables real-time tissue typing. The specimens were collected by a surgeon, reviewed by a pathologist, and sampled with an electrocautery device. The dataset comprised 276 normal tissue burns and 321 GBM tissue burns. Our multi …},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
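The model above is an attention-based multi-instance learning classifier over mass spectrometry burns: instances (individual burns or spectra) receive learned attention weights, and their weighted sum is classified at the specimen level. The PyTorch sketch below follows that standard attention-MIL pattern; the feature dimension, hidden size, and toy input are assumptions, not the paper's architecture.

# Hypothetical sketch of attention-based MIL pooling for bag-level classification.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, in_dim=256, hid=64):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.attn = nn.Sequential(nn.Linear(hid, hid), nn.Tanh(), nn.Linear(hid, 1))
        self.head = nn.Linear(hid, 1)

    def forward(self, bag):                      # bag: (N, in_dim) instances in one specimen
        h = self.embed(bag)
        a = torch.softmax(self.attn(h), dim=0)   # attention weight per instance
        z = (a * h).sum(dim=0)                   # weighted bag representation
        return self.head(z), a.squeeze(-1)       # bag-level logit, instance weights

logit, weights = AttentionMIL()(torch.randn(12, 256))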