{"id":2092,"date":"2024-08-21T18:45:14","date_gmt":"2024-08-21T18:45:14","guid":{"rendered":"https:\/\/labs.cs.queensu.ca\/perklab\/?post_type=qsc_member&#038;p=2092"},"modified":"2024-08-25T21:01:04","modified_gmt":"2024-08-25T21:01:04","slug":"colton-barr","status":"publish","type":"qsc_member","link":"https:\/\/labs.cs.queensu.ca\/perklab\/members\/colton-barr\/","title":{"rendered":"Colton Barr"},"content":{"rendered":"<div class=\"wp-block-columns is-layout-flex wp-block-columns-is-layout-flex qsc-member-single-core-info-container\">\n\t<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow qsc-member-single-photo-column\">\n\t\t<img loading=\"lazy\" decoding=\"async\" width=\"201\" height=\"250\" src=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/05\/ColtonBarr_Headshot.jpg\" class=\"qsc-member-single-photo wp-post-image\" alt=\"\" srcset=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/05\/ColtonBarr_Headshot.jpg 964w, https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/05\/ColtonBarr_Headshot-241x300.jpg 241w, https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/05\/ColtonBarr_Headshot-823x1024.jpg 823w, https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/05\/ColtonBarr_Headshot-768x955.jpg 768w\" sizes=\"auto, (max-width: 201px) 100vw, 201px\" \/>\n\t<\/div>\n\t<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow qsc-member-single-info-column\">\n\t\t<div class=\"qsc-member-name\"><h1>Colton Barr<\/h1><\/div>\n\t\t<div class=\"qsc-member-position\">MD\/PhD Student<\/div>\n\t\t<div class=\"qsc-member-department\">School of Computing<\/div>\n\t\t<div class=\"qsc-member-organization\">Queen&#8217;s University<\/div>\n\t\t<div class=\"qsc-member-contact\">\n\t\t\t<div class=\"qsc-member-email\"><a href=\"mailto:c.barr@queensu.ca\">c.barr@queensu.ca<\/a><\/div>\n\t\t\t<div 
class=\"qsc-member-socials\">\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/div>\n<\/div>\n<div class=\"qsc-member-bio\">\n\t\n<h2 class=\"wp-block-heading\">Biography<\/h2>\n\n\n\n<p>Colton Barr is an MD \/ PhD student in the School of Computing at Queen&#8217;s University supervised by Professor Gabor Fichtinger and Professor Parvin Mousavi. He is currently completing his second internship as a visiting researcher at the Golby Lab at Brigham and Women&#8217;s Hospital under the supervision of Dr. Alexandra Golby. His research interests lie at the intersection of surgical navigation, open-source software development and artificial intelligence, with a particular focus on building accessible solutions for deployment in limited-resource healthcare settings.<\/p>\n\n\n\n<p>Colton completed his BCompH at Queen&#8217;s in Biomedical Computing in 2019 and his MSc in Computing in 2022 at the Perk Lab.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Publications<\/h2>\n\n\n<div class=\"teachpress_pub_list\"><form name=\"tppublistform\" method=\"get\"><a name=\"tppubs\" id=\"tppubs\"><\/a><\/form><div class=\"teachpress_publication_list\"><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Farvolden, Coleman;  Hashtrudi-Zaad, Kian;  Connolly, Laura;  Barr, Colton;  Fichtinger, Gabor<\/p><p class=\"tp_pub_title\">An accessible six-axis testbed for image-guided robotics research <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_volume\">vol. 13408, <\/span><span class=\"tp_pub_additional_pages\">pp. 
458-463, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_1169\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1169','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_1169\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1169','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_1169\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{farvolden2025,<br \/>\r\ntitle = {An accessible six-axis testbed for image-guided robotics research},<br \/>\r\nauthor = {Coleman Farvolden and Kian Hashtrudi-Zaad and Laura Connolly and Colton Barr and Gabor Fichtinger},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-01-01},<br \/>\r\nvolume = {13408},<br \/>\r\npages = {458-463},<br \/>\r\npublisher = {SPIE},<br \/>\r\nabstract = {PURPOSE: Cancer can recur after tumor resection surgery if tumor tissue is missed and left behind. We hypothesize that intraoperative robotic imaging could be used to inspect the surgical cavity and localize residual cancer tissue. This technique has the potential to improve the success rate of tumor resection surgery. Towards this, we propose and evaluate a benchtop testbed for robotic manipulation of an optical imaging probe. We use low-cost hardware and open-source software to construct the testbed and describe the implementation so that it can be easily adopted to support similar research. METHODS: We implemented a reusable, open-source module in 3D Slicer for reading position coordinates and motion planning with an inexpensive 6-axis robotic arm in Robot Operating System (ROS). For demonstration, a custom end-effector was used to fix an optical probe to the robot. 
The accuracy of the testbed \u2026},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('1169','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_1169\" style=\"display:none;\"><div class=\"tp_abstract_entry\">PURPOSE: Cancer can recur after tumor resection surgery if tumor tissue is missed and left behind. We hypothesize that intraoperative robotic imaging could be used to inspect the surgical cavity and localize residual cancer tissue. This technique has the potential to improve the success rate of tumor resection surgery. Towards this, we propose and evaluate a benchtop testbed for robotic manipulation of an optical imaging probe. We use low-cost hardware and open-source software to construct the testbed and describe the implementation so that it can be easily adopted to support similar research. METHODS: We implemented a reusable, open-source module in 3D Slicer for reading position coordinates and motion planning with an inexpensive 6-axis robotic arm in Robot Operating System (ROS). For demonstration, a custom end-effector was used to fix an optical probe to the robot. The accuracy of the testbed \u2026<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('1169','tp_abstract')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Hashtrudi-Zaad, Kian;  Farvolden, Coleman;  Connolly, Laura;  Barr, Colton;  Fichtinger, Gabor<\/p><p class=\"tp_pub_title\">Robotic tracking of a resection cavity using a low cost bench-top robotic arm and electromagnetics <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_volume\">vol. 
13408, <\/span><span class=\"tp_pub_additional_pages\">pp. 217-222, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_1170\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1170','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_1170\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1170','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_1170\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{hashtrudi-zaad2025,<br \/>\r\ntitle = {Robotic tracking of a resection cavity using a low cost bench-top robotic arm and electromagnetics},<br \/>\r\nauthor = {Kian Hashtrudi-Zaad and Coleman Farvolden and Laura Connolly and Colton Barr and Gabor Fichtinger},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-01-01},<br \/>\r\nvolume = {13408},<br \/>\r\npages = {217-222},<br \/>\r\npublisher = {SPIE},<br \/>\r\nabstract = {INTRODUCTION <br \/>\r\nRoughly 40% of breast cancer patients are required to undergo corrective surgery after tumour resection via breast-conserving surgery (BCS). Sweeping of the cavity, resulting from the tumour resection, by spectroscopy and ultrasound imaging is emerging as a potential solution for identifying leftover cancer. However, the use of imaging modalities in the cavity is challenging as breast tissue is soft, malleable, and moves frequently. This paper presents and verifies an approach for tracking the relative motion of a resection cavity with a robotic arm. <br \/>\r\nMETHODS <br \/>\r\nWe use electromagnetic tracking and a low cost 6-axis robotic arm to track a simulated resection cavity. We embed an electromagnetic sensor in a 3D printed retractor that is designed to hold the cavity open. 
An open-source module in 3D Slicer is then used to detect cavity motion from the retractor and command the robotic arm to follow the \u2026},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('1170','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_1170\" style=\"display:none;\"><div class=\"tp_abstract_entry\">INTRODUCTION <br \/>\r\nRoughly 40% of breast cancer patients are required to undergo corrective surgery after tumour resection via breast-conserving surgery (BCS). Sweeping of the cavity, resulting from the tumour resection, by spectroscopy and ultrasound imaging is emerging as a potential solution for identifying leftover cancer. However, the use of imaging modalities in the cavity is challenging as breast tissue is soft, malleable, and moves frequently. This paper presents and verifies an approach for tracking the relative motion of a resection cavity with a robotic arm. <br \/>\r\nMETHODS <br \/>\r\nWe use electromagnetic tracking and a low cost 6-axis robotic arm to track a simulated resection cavity. We embed an electromagnetic sensor in a 3D printed retractor that is designed to hold the cavity open. 
An open-source module in 3D Slicer is then used to detect cavity motion from the retractor and command the robotic arm to follow the \u2026<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('1170','tp_abstract')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Elkind, Emese;  Barr, Keiran;  Barr, Colton;  Moga, Kristof;  Garamvolgy, Tivadar;  Haidegger, Tamas;  Ungi, Tamas;  Fichtinger, Gabor<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" href=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/10\/EmeseElkindImNO2024-2.docx\" title=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/10\/EmeseElkindImNO2024-2.docx\" target=\"blank\">Modifying Radix Lenses to Survive Low-Cost Sterilization: An Exploratory Study<\/a> <span class=\"tp_pub_type tp_  conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_publisher\">Imaging Network of Ontario (ImNO) Symposium, <\/span><span class=\"tp_pub_additional_year\">2024<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_1155\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1155','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_1155\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1155','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_1155\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1155','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_1155\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{Elkind2024,<br \/>\r\ntitle = 
{Modifying Radix Lenses to Survive Low-Cost Sterilization: An Exploratory Study},<br \/>\r\nauthor = {Emese Elkind and Keiran Barr and Colton Barr and Kristof Moga and Tivadar Garamvolgy and Tamas Haidegger and Tamas Ungi and Gabor Fichtinger},<br \/>\r\nurl = {https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/10\/EmeseElkindImNO2024-2.docx},<br \/>\r\nyear  = {2024},<br \/>\r\ndate = {2024-03-19},<br \/>\r\nurldate = {2024-03-19},<br \/>\r\npublisher = {Imaging Network of Ontario (ImNO) Symposium},<br \/>\r\nabstract = {INTRODUCTION: A major challenge with deploying infrared camera-tracked surgical navigation solutions, such as NousNav [1], in low-resource settings is the high cost and unavailability of disposable retroreflective infrared markers. Developing an accessible method to reuse and sterilize retroreflective markers could lead to a significant increase in the uptake of this technology. As none of the known infrared markers can endure standard autoclaving and most places do not have access to gas sterilization, attention is focused on cold liquid sterilisation methods commonly used for laparoscopes and other optical tools that cannot be sterilized in a conventional autoclave.<br \/>\r\nMETHODS: We propose to modify the NDI Radix\u2122 Lens [2], a single-use retroreflective spherical marker manufactured by Northern Digital, Waterloo, Canada. Radix lenses are uniquely promising candidates for liquid sterilization given their smooth, spherical surface. This quality also makes them easier to clean perioperatively compared to other retroreflective infrared marker designs. Initial experiments show that liquid sterilization agents degrade the marker\u2019s retroreflective gold coating (Fig. 1). 
Hence, the objective of this project is to develop a method to protect the Radix Lenses with a layer of coating material that prevents the sanitizing agent from degrading the reflective coating, enabling the lens to survive multiple sanitation cycles while retaining sufficient tracking accuracy. We employed two cold liquid sterilisation agents: household bleach, a common ingredient of liquid sterilisation solutions, and Sekusept\u2122 Aktiv (Ecolab, Saint Paul, MN, USA), which is widely known for sterilizing laparoscopy instruments. Store-bought nail polish and Zink-Alu Spray were used to coat the lenses. Data were obtained by recording five tests, each with five rounds of sterilization, each tested with six trials, for a total of 150 recordings. The five tests were as follows: 1) Radix lens coated with nail polish and bleached, 2) uncoated and bleached, 3) coated with nail polish and sanitised, 4) uncoated and sanitised, and 5) coated with Zink-Alu Spray and sanitised. To assess the impact of the sterilization on the lens\u2019s fiducial localization error, two metal marker frames equipped with four sockets designed for the Radix lenses were used. The reference marker frame was secured to a flat table while the other marker frame moved along a fixed path on the table. The position and orientation of the marker clusters were streamed into 3D Slicer using the Public Library for Ultrasound Toolkit (PLUS). A plane was then fit to the recorded marker poses in 3D Slicer using Iterative Closest Point and the marker registration error was computed. Distance from the camera, angle of view, and distance from the edges of the field of view were held constant.<br \/>\r\nRESULTS: With each round of sterilization, the error of the coated lenses was lower than that of the unprotected lenses, and the error showed a slightly increasing trend (Fig. 2). 
All lenses remained trackable and visible despite the significant removal of reflective coating, although they appeared fainter in the tracking software.<br \/>\r\nWhen the reflective coating was fully rubbed off the lenses, the tracking software could still localize the markers; however, the lenses appeared much fainter in the tracking software. We observed that the reflective coating rubs off the lens in routine handling, and recoating with Zink-Alu Spray can partially restore marker visibility. A protective nail polish coating prevented the reflective coating from rubbing off altogether.<br \/>\r\nCONCLUSIONS: This exploratory study represents a promising step toward achieving low-cost sterilization of retroreflective infrared markers. Studies with the NousNav system need to be undertaken to determine whether the degradation in tracking accuracy caused by marker sterilization is tolerable. Before using coated Radix lenses on human subjects, it must be verified that the protective coating (common nail polish in our study) is fully biocompatible and remains undamaged by the cold sterilization agent (Sekusept\u2122 Aktiv in our study).<br \/>\r\nREFERENCES: [1] NousNav: A low-cost neuronavigation system for deployment in lower-resource settings, International Journal of Computer Assisted Radiology and Surgery, 2022 Sep;17(9):1745-1750. 
[2] NDI Radix\u2122 Lens (https:\/\/www.ndigital.com\/optical-measurement-technology\/radix-lens\/)  },<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('1155','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_1155\" style=\"display:none;\"><div class=\"tp_abstract_entry\">INTRODUCTION: A major challenge with deploying infrared camera-tracked surgical navigation solutions, such as NousNav [1], in low-resource settings is the high cost and unavailability of disposable retroreflective infrared markers. Developing an accessible method to reuse and sterilize retroreflective markers could lead to a significant increase in the uptake of this technology. As none of the known infrared markers can endure standard autoclaving and most places do not have access to gas sterilization, attention is focused on cold liquid sterilisation methods commonly used for laparoscopes and other optical tools that cannot be sterilized in a conventional autoclave.<br \/>\r\nMETHODS: We propose to modify the NDI Radix\u2122 Lens [2], a single-use retroreflective spherical marker manufactured by Northern Digital, Waterloo, Canada. Radix lenses are uniquely promising candidates for liquid sterilization given their smooth, spherical surface. This quality also makes them easier to clean perioperatively compared to other retroreflective infrared marker designs. Initial experiments show that liquid sterilization agents degrade the marker\u2019s retroreflective gold coating (Fig. 1). Hence, the objective of this project is to develop a method to protect the Radix Lenses with a layer of coating material that prevents the sanitizing agent from degrading the reflective coating, enabling the lens to survive multiple sanitation cycles while retaining sufficient tracking accuracy. 
We employed two cold liquid sterilisation agents: household bleach, a common ingredient of liquid sterilisation solutions, and Sekusept\u2122 Aktiv (Ecolab, Saint Paul, MN, USA), which is widely known for sterilizing laparoscopy instruments. Store-bought nail polish and Zink-Alu Spray were used to coat the lenses. Data were obtained by recording five tests, each with five rounds of sterilization, each tested with six trials, for a total of 150 recordings. The five tests were as follows: 1) Radix lens coated with nail polish and bleached, 2) uncoated and bleached, 3) coated with nail polish and sanitised, 4) uncoated and sanitised, and 5) coated with Zink-Alu Spray and sanitised. To assess the impact of the sterilization on the lens\u2019s fiducial localization error, two metal marker frames equipped with four sockets designed for the Radix lenses were used. The reference marker frame was secured to a flat table while the other marker frame moved along a fixed path on the table. The position and orientation of the marker clusters were streamed into 3D Slicer using the Public Library for Ultrasound Toolkit (PLUS). A plane was then fit to the recorded marker poses in 3D Slicer using Iterative Closest Point and the marker registration error was computed. Distance from the camera, angle of view, and distance from the edges of the field of view were held constant.<br \/>\r\nRESULTS: With each round of sterilization, the error of the coated lenses was lower than that of the unprotected lenses, and the error showed a slightly increasing trend (Fig. 2). All lenses remained trackable and visible despite the significant removal of reflective coating, although they appeared fainter in the tracking software.<br \/>\r\nWhen the reflective coating was fully rubbed off the lenses, the tracking software could still localize the markers; however, the lenses appeared much fainter in the tracking software. 
We observed that the reflective coating rubs off the lens in routine handling, and recoating with Zink-Alu Spray can partially restore marker visibility. A protective nail polish coating prevented the reflective coating from rubbing off altogether.<br \/>\r\nCONCLUSIONS: This exploratory study represents a promising step toward achieving low-cost sterilization of retroreflective infrared markers. Studies with the NousNav system need to be undertaken to determine whether the degradation in tracking accuracy caused by marker sterilization is tolerable. Before using coated Radix lenses on human subjects, it must be verified that the protective coating (common nail polish in our study) is fully biocompatible and remains undamaged by the cold sterilization agent (Sekusept\u2122 Aktiv in our study).<br \/>\r\nREFERENCES: [1] NousNav: A low-cost neuronavigation system for deployment in lower-resource settings, International Journal of Computer Assisted Radiology and Surgery, 2022 Sep;17(9):1745-1750. 
[2] NDI Radix\u2122 Lens (https:\/\/www.ndigital.com\/optical-measurement-technology\/radix-lens\/)  <\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('1155','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_1155\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-file-word\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/10\/EmeseElkindImNO2024-2.docx\" title=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/10\/EmeseElkin[...]\" target=\"_blank\">https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/10\/EmeseElkin[...]<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('1155','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Barr, Colton;  Groves, Leah;  Ungi, Tamas;  Siemens, D Robert;  Diao, Babacar;  Kikinis, Ron;  Mousavi, Parvin;  Fichtinger, Gabor<\/p><p class=\"tp_pub_title\">Extracting 3D Prostate Geometry from 2D Optically-Tracked Transrectal Ultrasound Images <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_pages\">pp. 
32-37, <\/span><span class=\"tp_pub_additional_year\">2024<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_1149\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1149','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_1149\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1149','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_1149\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{barr2024,<br \/>\r\ntitle = {Extracting 3D Prostate Geometry from 2D Optically-Tracked Transrectal Ultrasound Images},<br \/>\r\nauthor = {Colton Barr and Leah Groves and Tamas Ungi and D Robert Siemens and Babacar Diao and Ron Kikinis and Parvin Mousavi and Gabor Fichtinger},<br \/>\r\nyear  = {2024},<br \/>\r\ndate = {2024-01-01},<br \/>\r\npages = {32-37},<br \/>\r\npublisher = {IEEE},<br \/>\r\nabstract = {The technical challenges of traditional transrectal ultrasound-guided prostate biopsy, combined with the limited availability of more advanced prostate imaging techniques, have exacerbated existing differences in prostate cancer outcomes between high-resource and low-resource healthcare settings. The objective of this paper is to improve the tools available to clinicians in low-resource settings by working towards an inexpensive ultrasound-guided prostate biopsy navigation system. The principal contributions detailed here are the design, implementation, and testing of a system capable of generating a 3D model of the prostate from spatially-tracked 2D ultrasound images. The system uses open-source software, low-cost materials, and deep learning to segment and localize cross-sections of the prostate in order to produce a patient-specific 3D prostate model. 
A user study was performed to evaluate the \u2026},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('1149','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_1149\" style=\"display:none;\"><div class=\"tp_abstract_entry\">The technical challenges of traditional transrectal ultrasound-guided prostate biopsy, combined with the limited availability of more advanced prostate imaging techniques, have exacerbated existing differences in prostate cancer outcomes between high-resource and low-resource healthcare settings. The objective of this paper is to improve the tools available to clinicians in low-resource settings by working towards an inexpensive ultrasound-guided prostate biopsy navigation system. The principal contributions detailed here are the design, implementation, and testing of a system capable of generating a 3D model of the prostate from spatially-tracked 2D ultrasound images. The system uses open-source software, low-cost materials, and deep learning to segment and localize cross-sections of the prostate in order to produce a patient-specific 3D prostate model. 
A user study was performed to evaluate the \u2026<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('1149','tp_abstract')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Orosz, G\u00e1bor;  Szab\u00f3, R\u00f3bert Zsolt;  Ungi, Tam\u00e1s;  Barr, Colton;  Yeung, Chris;  Fichtinger, G\u00e1bor;  G\u00e1l, J\u00e1nos;  Haidegger, Tam\u00e1s<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" href=\"https:\/\/acta.uni-obuda.hu\/Orosz_Szabo_Ungi_Barr_Yeung_Fichtinger_Gal_Haidegger_137.pdf\" title=\"https:\/\/acta.uni-obuda.hu\/Orosz_Szabo_Ungi_Barr_Yeung_Fichtinger_Gal_Haidegger_137.pdf\" target=\"blank\">Lung Ultrasound Imaging and Image Processing with Artificial Intelligence Methods for Bedside Diagnostic Examinations<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Acta Polytechnica Hungarica, <\/span><span class=\"tp_pub_additional_volume\">vol. 20, <\/span><span class=\"tp_pub_additional_issue\">iss. 
8, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_915\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('915','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_915\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('915','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_915\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('915','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_915\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{fichtinger2023d,<br \/>\r\ntitle = {Lung Ultrasound Imaging and Image Processing with Artificial Intelligence Methods for Bedside Diagnostic Examinations},<br \/>\r\nauthor = {G\u00e1bor Orosz and R\u00f3bert Zsolt Szab\u00f3 and Tam\u00e1s Ungi and Colton Barr and Chris Yeung and G\u00e1bor Fichtinger and J\u00e1nos G\u00e1l and Tam\u00e1s Haidegger},<br \/>\r\nurl = {https:\/\/acta.uni-obuda.hu\/Orosz_Szabo_Ungi_Barr_Yeung_Fichtinger_Gal_Haidegger_137.pdf},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-01-01},<br \/>\r\njournal = {Acta Polytechnica Hungarica},<br \/>\r\nvolume = {20},<br \/>\r\nissue = {8},<br \/>\r\nabstract = {Artificial Intelligence-assisted radiology has been shown to offer significant benefits in clinical care. Physicians often face challenges in identifying the underlying causes of acute respiratory failure. One method employed by experts is the utilization of bedside lung ultrasound, although it has a significant learning curve. In our study, we explore the potential of a Machine Learning-based automated decision-support system to assist inexperienced practitioners in interpreting lung ultrasound scans. 
This system incorporates medical ultrasound, advanced data processing techniques, and a neural network implementation to achieve its objective. The article provides a comprehensive overview of the steps involved in data preparation and the implementation of the neural network. The accuracy and error rate of the most effective model are presented, accompanied by illustrative examples of their predictions. Furthermore, the paper concludes with an evaluation of the results, identification of limitations, and recommendations for future enhancements.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('915','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_915\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Artificial Intelligence-assisted radiology has been shown to offer significant benefits in clinical care. Physicians often face challenges in identifying the underlying causes of acute respiratory failure. One method employed by experts is the utilization of bedside lung ultrasound, although it has a significant learning curve. In our study, we explore the potential of a Machine Learning-based automated decision-support system to assist inexperienced practitioners in interpreting lung ultrasound scans. This system incorporates medical ultrasound, advanced data processing techniques, and a neural network implementation to achieve its objective. The article provides a comprehensive overview of the steps involved in data preparation and the implementation of the neural network. The accuracy and error rate of the most effective model are presented, accompanied by illustrative examples of their predictions. 
Furthermore, the paper concludes with an evaluation of the results, identification of limitations, and recommendations for future enhancements.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('915','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_915\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-file-pdf\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/acta.uni-obuda.hu\/Orosz_Szabo_Ungi_Barr_Yeung_Fichtinger_Gal_Haidegger_137.pdf\" title=\"https:\/\/acta.uni-obuda.hu\/Orosz_Szabo_Ungi_Barr_Yeung_Fichtinger_Gal_Haidegger_1[...]\" target=\"_blank\">https:\/\/acta.uni-obuda.hu\/Orosz_Szabo_Ungi_Barr_Yeung_Fichtinger_Gal_Haidegger_1[...]<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('915','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Szab\u00f3, R\u00f3bert Zsolt;  Orosz, G\u00e1bor;  Ungi, Tam\u00e1s;  Barr, Colton;  Yeung, Chris;  Incze, Roland;  Fichtinger, Gabor;  G\u00e1l, J\u00e1nos;  Haidegger, Tam\u00e1s<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" href=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/10158672\/\" title=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/10158672\/\" target=\"blank\">Automation of lung ultrasound imaging and image processing for bedside diagnostic examinations<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_pages\">pp. 
000779-000784, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_940\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('940','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_940\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('940','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_940\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('940','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_940\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{fichtinger2023h,<br \/>\r\ntitle = {Automation of lung ultrasound imaging and image processing for bedside diagnostic examinations},<br \/>\r\nauthor = {R\u00f3bert Zsolt Szab\u00f3 and G\u00e1bor Orosz and Tam\u00e1s Ungi and Colton Barr and Chris Yeung and Roland Incze and Gabor Fichtinger and J\u00e1nos G\u00e1l and Tam\u00e1s Haidegger},<br \/>\r\nurl = {https:\/\/ieeexplore.ieee.org\/abstract\/document\/10158672\/},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-01-01},<br \/>\r\npages = {000779-000784},<br \/>\r\npublisher = {IEEE},<br \/>\r\nabstract = {The causes of acute respiratory failure can be difficult to identify for physicians. Experts can differentiate these causes using bedside lung ultrasound, but lung ultrasound has a considerable learning curve. We investigate if an automated decision-support system could help novices interpret lung ultrasound scans. The system utilizes medical ultrasound, data processing, and a neural network implementation to achieve this goal. The article details the steps taken in the data preparation, and the implementation of the neural network. 
The best model\u2019s accuracy and error rate are presented, along with examples of its predictions. The paper concludes with an evaluation of the results, identification of limitations, and suggestions for future improvements.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('940','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_940\" style=\"display:none;\"><div class=\"tp_abstract_entry\">The causes of acute respiratory failure can be difficult to identify for physicians. Experts can differentiate these causes using bedside lung ultrasound, but lung ultrasound has a considerable learning curve. We investigate if an automated decision-support system could help novices interpret lung ultrasound scans. The system utilizes medical ultrasound, data processing, and a neural network implementation to achieve this goal. The article details the steps taken in the data preparation, and the implementation of the neural network. The best model\u2019s accuracy and error rate are presented, along with examples of its predictions. 
The paper concludes with an evaluation of the results, identification of limitations, and suggestions for future improvements.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('940','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_940\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/10158672\/\" title=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/10158672\/\" target=\"_blank\">https:\/\/ieeexplore.ieee.org\/abstract\/document\/10158672\/<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('940','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Barr, Colton;  Hisey, Rebecca;  Ungi, Tamas;  Fichtinger, Gabor<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" href=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/CBarr2021a.pdf\" title=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/CBarr2021a.pdf\" target=\"blank\">Ultrasound Probe Pose Classification for Task Recognition in Central Venous Catheterization<\/a> <span class=\"tp_pub_type tp_  conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">43rd Conference of the IEEE Engineering in Medicine and Biology Society, <\/span><span class=\"tp_pub_additional_year\">2021<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_44\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('44','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_44\" class=\"tp_show\" 
onclick=\"teachpress_pub_showhide('44','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_44\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('44','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_44\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{CBarr2021b,<br \/>\r\ntitle = {Ultrasound Probe Pose Classification for Task Recognition in Central Venous Catheterization},<br \/>\r\nauthor = {Colton Barr and Rebecca Hisey and Tamas Ungi and Gabor Fichtinger},<br \/>\r\nurl = {https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/CBarr2021a.pdf},<br \/>\r\nyear  = {2021},<br \/>\r\ndate = {2021-10-01},<br \/>\r\nurldate = {2021-10-01},<br \/>\r\nbooktitle = {43rd Conference of the IEEE Engineering in Medicine and Biology Society},<br \/>\r\nabstract = {Central Line Tutor is a system that facilitates real-time feedback during training for central venous catheterization. One limitation of Central Line Tutor is its reliance on expensive, cumbersome electromagnetic tracking to facilitate various training aids, including ultrasound task identification and segmentation of neck vasculature. The purpose of this study is to validate deep learning methods for vessel segmentation and ultrasound pose classification in order to mitigate the system\u2019s reliance on electromagnetic tracking. A large dataset of segmented and classified ultrasound images was generated from participant data captured using Central Line Tutor. A U-Net architecture was used to perform vessel segmentation, while a shallow Convolutional Neural Network (CNN) architecture was designed to classify the pose of the ultrasound probe. A second classifier architecture was also tested that used the U-Net output as the CNN input. 
The mean testing set Intersect over Union score for U-Net cross-validation was 0.746 \u00b1 0.052. The mean test set classification accuracy for the CNN was 92.0% \u00b1 3.0, while the U-Net + CNN achieved 92.7% \u00b1 2.1%. This study highlights the potential for deep learning on ultrasound images to replace the current electromagnetic tracking-based methods for vessel segmentation and ultrasound pose classification, and represents an important step towards removing the electromagnetic tracker altogether. Removing the need for an external tracking system would significantly reduce the cost of Central Line Tutor and make it far more accessible to the medical trainees that would benefit from it most.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('44','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_44\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Central Line Tutor is a system that facilitates real-time feedback during training for central venous catheterization. One limitation of Central Line Tutor is its reliance on expensive, cumbersome electromagnetic tracking to facilitate various training aids, including ultrasound task identification and segmentation of neck vasculature. The purpose of this study is to validate deep learning methods for vessel segmentation and ultrasound pose classification in order to mitigate the system\u2019s reliance on electromagnetic tracking. A large dataset of segmented and classified ultrasound images was generated from participant data captured using Central Line Tutor. A U-Net architecture was used to perform vessel segmentation, while a shallow Convolutional Neural Network (CNN) architecture was designed to classify the pose of the ultrasound probe. 
A second classifier architecture was also tested that used the U-Net output as the CNN input. The mean testing set Intersect over Union score for U-Net cross-validation was 0.746 \u00b1 0.052. The mean test set classification accuracy for the CNN was 92.0% \u00b1 3.0, while the U-Net + CNN achieved 92.7% \u00b1 2.1%. This study highlights the potential for deep learning on ultrasound images to replace the current electromagnetic tracking-based methods for vessel segmentation and ultrasound pose classification, and represents an important step towards removing the electromagnetic tracker altogether. Removing the need for an external tracking system would significantly reduce the cost of Central Line Tutor and make it far more accessible to the medical trainees that would benefit from it most.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('44','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_44\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-file-pdf\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/CBarr2021a.pdf\" title=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/CBarr2021a[...]\" target=\"_blank\">https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/CBarr2021a[...]<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('44','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Barr, Colton;  Hisey, Rebecca;  Ungi, Tamas;  Fichtinger, Gabor<\/p><p class=\"tp_pub_title\">Ultrasound Probe Pose Classification for Task Recognition in Central Venous Catheterization <span class=\"tp_pub_type tp_  conference\">Conference<\/span> <\/p><p 
class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Imaging Network of Ontario Symposium, <\/span><span class=\"tp_pub_additional_year\">2021<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_43\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('43','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_43\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{CBarr2021a,<br \/>\r\ntitle = {Ultrasound Probe Pose Classification for Task Recognition in Central Venous Catheterization},<br \/>\r\nauthor = {Colton Barr and Rebecca Hisey and Tamas Ungi and Gabor Fichtinger},<br \/>\r\nyear  = {2021},<br \/>\r\ndate = {2021-02-01},<br \/>\r\nurldate = {2021-02-01},<br \/>\r\nbooktitle = {Imaging Network of Ontario Symposium},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('43','tp_bibtex')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Barr, Colton;  Hisey, Rebecca;  Ungi, Tamas;  Fichtinger, Gabor<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" href=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/9630033\/\" title=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/9630033\/\" target=\"blank\">Ultrasound probe pose classification for task recognition in central venous catheterization<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_pages\">pp. 
5023-5026, <\/span><span class=\"tp_pub_additional_year\">2021<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_944\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('944','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_944\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('944','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_944\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('944','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_944\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{fichtinger2021l,<br \/>\r\ntitle = {Ultrasound probe pose classification for task recognition in central venous catheterization},<br \/>\r\nauthor = {Colton Barr and Rebecca Hisey and Tamas Ungi and Gabor Fichtinger},<br \/>\r\nurl = {https:\/\/ieeexplore.ieee.org\/abstract\/document\/9630033\/},<br \/>\r\nyear  = {2021},<br \/>\r\ndate = {2021-01-01},<br \/>\r\npages = {5023-5026},<br \/>\r\npublisher = {IEEE},<br \/>\r\nabstract = {Central Line Tutor is a system that facilitates real-time feedback during training for central venous catheterization. One limitation of Central Line Tutor is its reliance on expensive, cumbersome electromagnetic tracking to facilitate various training aids, including ultrasound task identification and segmentation of neck vasculature. The purpose of this study is to validate deep learning methods for vessel segmentation and ultrasound pose classification in order to mitigate the system\u2019s reliance on electromagnetic tracking. A large dataset of segmented and classified ultrasound images was generated from participant data captured using Central Line Tutor. 
A U-Net architecture was used to perform vessel segmentation, while a shallow Convolutional Neural Network (CNN) architecture was designed to classify the pose of the ultrasound probe. A second classifier architecture was also tested that used the U-Net output as \u2026},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('944','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_944\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Central Line Tutor is a system that facilitates real-time feedback during training for central venous catheterization. One limitation of Central Line Tutor is its reliance on expensive, cumbersome electromagnetic tracking to facilitate various training aids, including ultrasound task identification and segmentation of neck vasculature. The purpose of this study is to validate deep learning methods for vessel segmentation and ultrasound pose classification in order to mitigate the system\u2019s reliance on electromagnetic tracking. A large dataset of segmented and classified ultrasound images was generated from participant data captured using Central Line Tutor. A U-Net architecture was used to perform vessel segmentation, while a shallow Convolutional Neural Network (CNN) architecture was designed to classify the pose of the ultrasound probe. 
A second classifier architecture was also tested that used the U-Net output as \u2026<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('944','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_944\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/9630033\/\" title=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/9630033\/\" target=\"_blank\">https:\/\/ieeexplore.ieee.org\/abstract\/document\/9630033\/<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('944','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Barr, Colton;  Lasso, Andras;  Asselin, Mark;  Pieper, Steve;  Robertson, Faith C.;  Gormley, William B.;  Fichtinger, Gabor<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" href=\"https:\/\/dx.doi.org\/10.1117\/12.2549723\" title=\"Towards portable image guidance and automatic patient registration using an RGB-D camera and video projector\" target=\"blank\">Towards portable image guidance and automatic patient registration using an RGB-D camera and video projector<\/a> <span class=\"tp_pub_type tp_  conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Medical Imaging 2020: Image-Guided Procedures, Robotic Interventions and Modeling, <\/span><span class=\"tp_pub_additional_volume\">vol. 
11315, <\/span><span class=\"tp_pub_additional_publisher\">SPIE, <\/span><span class=\"tp_pub_additional_address\">Houston, Texas, United States, <\/span><span class=\"tp_pub_additional_year\">2020<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a id=\"tp_links_sh_57\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('57','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_57\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('57','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_57\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{BarrC2020,<br \/>\r\ntitle = {Towards portable image guidance and automatic patient registration using an RGB-D camera and video projector},<br \/>\r\nauthor = {Colton Barr and Andras Lasso and Mark Asselin and Steve Pieper and Faith C. Robertson and William B. 
Gormley and Gabor Fichtinger},<br \/>\r\nurl = {https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/Barr2020.pdf},<br \/>\r\ndoi = {10.1117\/12.2549723},<br \/>\r\nyear  = {2020},<br \/>\r\ndate = {2020-01-01},<br \/>\r\nurldate = {2020-01-01},<br \/>\r\nbooktitle = {Medical Imaging 2020: Image-Guided Procedures, Robotic Interventions and Modeling},<br \/>\r\nvolume = {11315},<br \/>\r\npublisher = {SPIE},<br \/>\r\naddress = {Houston, Texas, United States},<br \/>\r\norganization = {SPIE},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('57','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_57\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-file-pdf\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/Barr2020.pdf\" title=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/Barr2020.p[...]\" target=\"_blank\">https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/Barr2020.p[...]<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1117\/12.2549723\" title=\"Follow DOI:10.1117\/12.2549723\" target=\"_blank\">doi:10.1117\/12.2549723<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('57','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_unpublished\"><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Barr, Colton<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" href=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/06\/SlicerChat_2024.pdf\" 
title=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/06\/SlicerChat_2024.pdf\" target=\"blank\">SlicerChat: Building a Local Chatbot for 3D Slicer<\/a> <span class=\"tp_pub_type tp_  unpublished\">Unpublished<\/span> <span class=\"tp_pub_label_status forthcoming\">Forthcoming<\/span><\/p><p class=\"tp_pub_additional\">Forthcoming.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a id=\"tp_links_sh_1140\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1140','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_1140\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1140','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_1140\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@unpublished{nokey,<br \/>\r\ntitle = {SlicerChat: Building a Local Chatbot for 3D Slicer},<br \/>\r\nauthor = {Colton Barr},<br \/>\r\nurl = {https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/06\/SlicerChat_2024.pdf},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {forthcoming},<br \/>\r\ntppubtype = {unpublished}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('1140','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_1140\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-file-pdf\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/06\/SlicerChat_2024.pdf\" title=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/06\/SlicerChat[...]\" target=\"_blank\">https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/06\/SlicerChat[...]<\/a><\/li><\/ul><\/div><p 
class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('1140','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><\/div><\/div>\n\n<\/div>\n","protected":false},"featured_media":2066,"template":"","meta":{"_acf_changed":false,"_uag_custom_page_level_css":"","site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","ast-disable-related-posts":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"class_list":["post-2092","qsc_member","type-qsc_member","status-publish","has-post-thumbnail","hentry"],"acf":[],"spectra_custom_meta":{"_edit_lock":["1736278113:2"],"_thumbnail_id":["2066"],"_uag_custom_page_level_css":[""],"theme-transparent-header-meta":[""],"adv-header-id-meta":[""],"stick-header-meta":[""],"footnotes":[""],"_edit_last":["11"],"field_qsc_member_acf_email":["c.barr@queensu.ca"],"_field_qsc_member_acf_email":["qsc_member_acf_email"],"qsc_member_acf_position":["MD\/PhD Student"],"_qsc_member_acf_position":["field_qsc_member_acf_position"],"qsc_member_acf_department":["a:1:{i:0;s:19:\"School of Computing\";}"],"_qsc_member_acf_department":["field_qsc_member_acf_department"],"field_qsc_member_acf_organization":["Queen's 
University"],"_field_qsc_member_acf_organization":["qsc_member_acf_organization"],"field_qsc_member_acf_linkedin":[""],"_field_qsc_member_acf_linkedin":["qsc_member_acf_linkedin"],"field_qsc_member_acf_gscholar":[""],"_field_qsc_member_acf_gscholar":["qsc_member_acf_gscholar"],"field_qsc_member_acf_github":[""],"_field_qsc_member_acf_github":["qsc_member_acf_github"],"field_qsc_member_acf_researchgate":[""],"_field_qsc_member_acf_researchgate":["qsc_member_acf_researchgate"],"field_qsc_member_acf_web":[""],"_field_qsc_member_acf_web":["qsc_member_acf_web"],"field_qsc_member_acf_program_status":["Current"],"_field_qsc_member_acf_program_status":["qsc_member_acf_program_status"],"field_qsc_member_acf_start_year":[""],"_field_qsc_member_acf_start_year":["qsc_member_acf_start_year"],"field_qsc_member_acf_end_year":[""],"_field_qsc_member_acf_end_year":["qsc_member_acf_end_year"],"_uag_css_file_name":["uag-css-2092.css"],"_uag_page_assets":["a:9:{s:3:\"css\";s:263:\".uag-blocks-common-selector{z-index:var(--z-index-desktop) !important}@media (max-width: 976px){.uag-blocks-common-selector{z-index:var(--z-index-tablet) !important}}@media (max-width: 767px){.uag-blocks-common-selector{z-index:var(--z-index-mobile) 
!important}}\n\";s:2:\"js\";s:0:\"\";s:18:\"current_block_list\";a:9:{i:0;s:12:\"core\/heading\";i:1;s:14:\"core\/paragraph\";i:2;s:14:\"core\/shortcode\";i:3;s:11:\"core\/search\";i:4;s:10:\"core\/group\";i:5;s:17:\"core\/latest-posts\";i:6;s:20:\"core\/latest-comments\";i:7;s:13:\"core\/archives\";i:8;s:15:\"core\/categories\";}s:8:\"uag_flag\";b:0;s:11:\"uag_version\";s:10:\"1771033544\";s:6:\"gfonts\";a:0:{}s:10:\"gfonts_url\";s:0:\"\";s:12:\"gfonts_files\";a:0:{}s:14:\"uag_faq_layout\";b:0;}"]},"uagb_featured_image_src":{"full":["https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/05\/ColtonBarr_Headshot.jpg",964,1199,false],"thumbnail":["https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/05\/ColtonBarr_Headshot-150x150.jpg",150,150,true],"medium":["https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/05\/ColtonBarr_Headshot-241x300.jpg",241,300,true],"medium_large":["https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/05\/ColtonBarr_Headshot-768x955.jpg",768,955,true],"large":["https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/05\/ColtonBarr_Headshot-823x1024.jpg",823,1024,true],"1536x1536":["https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/05\/ColtonBarr_Headshot.jpg",964,1199,false],"2048x2048":["https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/05\/ColtonBarr_Headshot.jpg",964,1199,false]},"uagb_author_info":{"display_name":"Khyle Sewpersaud","author_link":"https:\/\/labs.cs.queensu.ca\/perklab\/author\/"},"uagb_comment_info":0,"uagb_excerpt":"Colton Barr MD\/PhD Student School of Computing Queen&#8217;s University c.barr@queensu.ca Biography Colton Barr is an MD \/ PhD student in the School of Computing at Queen&#8217;s University supervised by Professor Gabor Fichtinger and Professor Parvin Mousavi. 
He is currently completing his second internship as a visiting researcher at the Golby Lab at Brigham and Women&#8217;s&hellip;","_links":{"self":[{"href":"https:\/\/labs.cs.queensu.ca\/perklab\/wp-json\/wp\/v2\/qsc_member\/2092","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/labs.cs.queensu.ca\/perklab\/wp-json\/wp\/v2\/qsc_member"}],"about":[{"href":"https:\/\/labs.cs.queensu.ca\/perklab\/wp-json\/wp\/v2\/types\/qsc_member"}],"version-history":[{"count":3,"href":"https:\/\/labs.cs.queensu.ca\/perklab\/wp-json\/wp\/v2\/qsc_member\/2092\/revisions"}],"predecessor-version":[{"id":2130,"href":"https:\/\/labs.cs.queensu.ca\/perklab\/wp-json\/wp\/v2\/qsc_member\/2092\/revisions\/2130"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/labs.cs.queensu.ca\/perklab\/wp-json\/wp\/v2\/media\/2066"}],"wp:attachment":[{"href":"https:\/\/labs.cs.queensu.ca\/perklab\/wp-json\/wp\/v2\/media?parent=2092"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}