{"id":2474,"date":"2024-05-03T20:21:12","date_gmt":"2024-05-03T20:21:12","guid":{"rendered":"https:\/\/labs.cs.queensu.ca\/perklab\/members\/victoria-wu\/"},"modified":"2024-05-03T20:21:12","modified_gmt":"2024-05-03T20:21:12","slug":"victoria-wu","status":"publish","type":"qsc_member","link":"https:\/\/labs.cs.queensu.ca\/perklab\/members\/victoria-wu\/","title":{"rendered":"Victoria Wu"},"content":{"rendered":"<div class=\"wp-block-columns is-layout-flex wp-block-columns-is-layout-flex qsc-member-single-core-info-container\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow qsc-member-single-photo-column\">\n\t\t<img decoding=\"async\" src=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/plugins\/qsc-members\/\/images\/missing-image-placeholder.png\" class=\"qsc-member-single-photo\"\/>\n\t<\/div>\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow qsc-member-single-info-column\">\n<div class=\"qsc-member-name\">\n<h1>Victoria Wu<\/h1>\n<\/div>\n<div class=\"qsc-member-position\">Undergraduate Student<\/div>\n<div class=\"qsc-member-department\">School of Computing<\/div>\n<div class=\"qsc-member-organization\">Queen&#8217;s University<\/div>\n<div class=\"qsc-member-date-range\">Member from <em>2018<\/em> to <em>present<\/em><\/div>\n<div class=\"qsc-member-contact\">\n<div class=\"qsc-member-socials\">\n\t\t\t<\/div>\n<\/p><\/div>\n<\/p><\/div>\n<\/div>\n<div class=\"qsc-member-bio\">\n<section id=\"block-views-user-display-block-4\" class=\"block block-views even block-count-3 block-region-content-aside block-user-display-block-4\">\n<div class=\"block-inner clearfix\">\n<div class=\"block-content content\">\n<div class=\"view view-user-display view-id-user_display view-display-id-block_4 view-dom-id-52578ef88cb46ce752a6effe0600ad20\">\n<div class=\"view-content\">\n<div class=\"views-row views-row-1 views-row-odd views-row-first views-row-last Contact Information\">\n<div class=\"views-field 
views-field-field-short-biography\">\n<div class=\"field-content\">\n<p>Victoria is a Perk Lab alumna; she was an undergraduate student in the Cognitive Science (COGS) Honours program at the Queen&#8217;s School of Computing. She currently works at Microsoft.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/section>\n<section id=\"block-views-user-display-block-1\" class=\"block block-views odd block-count-4 block-region-content-aside block-user-display-block-1\">\n<div class=\"block-inner clearfix\"><\/div>\n<\/section>\n<div class=\"teachpress_pub_list\"><form name=\"tppublistform\" method=\"get\"><a name=\"tppubs\" id=\"tppubs\"><\/a><\/form><div class=\"teachpress_publication_list\"><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Ungi, Tamas;  Greer, Hastings;  Sunderland, Kyle R.;  Wu, Victoria;  Baum, Zachary M C;  Schlenger, Christopher;  Oetgen, Matthew;  Cleary, Kevin;  Aylward, Stephen;  Fichtinger, Gabor<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" href=\"https:\/\/dx.doi.org\/10.1109\/TBME.2020.2980540\" title=\"Automatic spine ultrasound segmentation for scoliosis visualization and measurement\" target=\"blank\">Automatic spine ultrasound segmentation for scoliosis visualization and measurement<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">IEEE Transactions on Biomedical Engineering, <\/span><span class=\"tp_pub_additional_volume\">vol. 67, <\/span><span class=\"tp_pub_additional_number\">no. 11, <\/span><span class=\"tp_pub_additional_pages\">pp. 
3234 - 3241, <\/span><span class=\"tp_pub_additional_year\">2020<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_48\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('48','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_48\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('48','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_48\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('48','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_48\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Ungi2020,<br \/>\r\ntitle = {Automatic spine ultrasound segmentation for scoliosis visualization and measurement},<br \/>\r\nauthor = {Tamas Ungi and Hastings Greer and Kyle R. Sunderland and Victoria Wu and Zachary M C Baum and Christopher Schlenger and Matthew Oetgen and Kevin Cleary and Stephen Aylward and Gabor Fichtinger},<br \/>\r\nurl = {https:\/\/ieeexplore.ieee.org\/document\/9034149<br \/>\r\nhttps:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/Ungi2020.pdf},<br \/>\r\ndoi = {10.1109\/TBME.2020.2980540},<br \/>\r\nyear  = {2020},<br \/>\r\ndate = {2020-03-01},<br \/>\r\nurldate = {2020-03-01},<br \/>\r\njournal = {IEEE Transactions on Biomedical Engineering},<br \/>\r\nvolume = {67},<br \/>\r\nnumber = {11},<br \/>\r\npages = {3234 - 3241},<br \/>\r\nabstract = {&lt;p&gt;\\emph{Objective:} Integrate tracked ultrasound and AI methods to provide a safer and more accessible alternative to X-ray for scoliosis measurement. We propose automatic ultrasound segmentation for 3-dimensional spine visualization and scoliosis measurement to address difficulties in using ultrasound for spine imaging. 
\\emph{Methods:} We trained a convolutional neural network for spine segmentation on ultrasound scans using data from eight healthy adult volunteers. We tested the trained network on eight pediatric patients. We evaluated image segmentation and 3-dimensional volume reconstruction for scoliosis measurement. \\emph{Results:} As expected, fuzzy segmentation metrics reduced when trained networks were translated from healthy volunteers to patients. Recall decreased from 0.72 to 0.64 (8.2% decrease), and precision from 0.31 to 0.27 (3.7% decrease). However, after finding optimal thresholds for prediction maps, binary segmentation metrics performed better on patient data. Recall decreased from 0.98 to 0.97 (1.6% decrease), and precision from 0.10 to 0.06 (4.5% decrease). Segmentation prediction maps were reconstructed to 3-dimensional volumes and scoliosis was measured in all patients. Measurement in these reconstructions took less than 1 minute and had a maximum error of 2.2\u00b0 compared to X-ray. \\emph{Conclusion:} automatic spine segmentation makes scoliosis measurement both efficient and accurate in tracked ultrasound scans. \\emph{Significance:} Automatic segmentation may overcome the limitations of tracked ultrasound that so far prevented its use as an alternative of X-ray in scoliosis measurement.&lt;\/p&gt;},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('48','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_48\" style=\"display:none;\"><div class=\"tp_abstract_entry\">&lt;p&gt;<em>Objective:<\/em> Integrate tracked ultrasound and AI methods to provide a safer and more accessible alternative to X-ray for scoliosis measurement. 
We propose automatic ultrasound segmentation for 3-dimensional spine visualization and scoliosis measurement to address difficulties in using ultrasound for spine imaging. <em>Methods:<\/em> We trained a convolutional neural network for spine segmentation on ultrasound scans using data from eight healthy adult volunteers. We tested the trained network on eight pediatric patients. We evaluated image segmentation and 3-dimensional volume reconstruction for scoliosis measurement. <em>Results:<\/em> As expected, fuzzy segmentation metrics reduced when trained networks were translated from healthy volunteers to patients. Recall decreased from 0.72 to 0.64 (8.2% decrease), and precision from 0.31 to 0.27 (3.7% decrease). However, after finding optimal thresholds for prediction maps, binary segmentation metrics performed better on patient data. Recall decreased from 0.98 to 0.97 (1.6% decrease), and precision from 0.10 to 0.06 (4.5% decrease). Segmentation prediction maps were reconstructed to 3-dimensional volumes and scoliosis was measured in all patients. Measurement in these reconstructions took less than 1 minute and had a maximum error of 2.2\u00b0 compared to X-ray. <em>Conclusion:<\/em> automatic spine segmentation makes scoliosis measurement both efficient and accurate in tracked ultrasound scans. 
<em>Significance:<\/em> Automatic segmentation may overcome the limitations of tracked ultrasound that so far prevented its use as an alternative of X-ray in scoliosis measurement.&lt;\/p&gt;<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('48','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_48\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/ieeexplore.ieee.org\/document\/9034149\" title=\"https:\/\/ieeexplore.ieee.org\/document\/9034149\" target=\"_blank\">https:\/\/ieeexplore.ieee.org\/document\/9034149<\/a><\/li><li><i class=\"fas fa-file-pdf\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/Ungi2020.pdf\" title=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/Ungi2020.p[...]\" target=\"_blank\">https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/Ungi2020.p[...]<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1109\/TBME.2020.2980540\" title=\"Follow DOI:10.1109\/TBME.2020.2980540\" target=\"_blank\">doi:10.1109\/TBME.2020.2980540<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('48','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Wu, Victoria;  Ungi, Tamas;  Sunderland, Kyle R.;  Pigeau, Grace;  Schonewille, Abigael;  Fichtinger, Gabor<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" href=\"https:\/\/dx.doi.org\/10.1117\/12.2549584\" title=\"Automatic segmentation of spinal ultrasound landmarks with U-net using multiple consecutive images for input\" target=\"blank\">Automatic segmentation of spinal ultrasound landmarks with U-net 
using multiple consecutive images for input<\/a> <span class=\"tp_pub_type tp_  conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">SPIE Medical Imaging, <\/span><span class=\"tp_pub_additional_year\">2020<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a id=\"tp_links_sh_47\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('47','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_47\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('47','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_47\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{Wu2020a,<br \/>\r\ntitle = {Automatic segmentation of spinal ultrasound landmarks with U-net using multiple consecutive images for input},<br \/>\r\nauthor = {Victoria Wu and Tamas Ungi and Kyle R. 
Sunderland and Grace Pigeau and Abigael Schonewille and Gabor Fichtinger},<br \/>\r\nurl = {https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/CWu2020a-manuscript.pdf},<br \/>\r\ndoi = {10.1117\/12.2549584},<br \/>\r\nyear  = {2020},<br \/>\r\ndate = {2020-01-01},<br \/>\r\nurldate = {2020-01-01},<br \/>\r\nbooktitle = {SPIE Medical Imaging},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('47','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_47\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-file-pdf\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/CWu2020a-manuscript.pdf\" title=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/CWu2020a-m[...]\" target=\"_blank\">https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/CWu2020a-m[...]<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1117\/12.2549584\" title=\"Follow DOI:10.1117\/12.2549584\" target=\"_blank\">doi:10.1117\/12.2549584<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('47','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Wu, Victoria;  Ungi, Tamas;  Sunderland, Kyle R.;  Pigeau, Grace;  Schonewille, Abigael;  Fichtinger, Gabor<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" href=\"https:\/\/www.imno.ca\/sites\/default\/files\/ImNO2020Proceedings.pdf\" title=\"https:\/\/www.imno.ca\/sites\/default\/files\/ImNO2020Proceedings.pdf\" target=\"blank\">Using multiple frame U-net for 
automated segmentation of spinal ultrasound images<\/a> <span class=\"tp_pub_type tp_  conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">18th Annual Imaging Network Ontario (ImNO) Symposium, <\/span><span class=\"tp_pub_additional_year\">2020<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a id=\"tp_links_sh_59\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('59','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_59\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('59','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_59\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{Wu2020b,<br \/>\r\ntitle = {Using multiple frame U-net for automated segmentation of spinal ultrasound images},<br \/>\r\nauthor = {Victoria Wu and Tamas Ungi and Kyle R. 
Sunderland and Grace Pigeau and Abigael Schonewille and Gabor Fichtinger},<br \/>\r\nurl = {https:\/\/www.imno.ca\/sites\/default\/files\/ImNO2020Proceedings.pdf<br \/>\r\nhttps:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/Wu2020b.pdf},<br \/>\r\nyear  = {2020},<br \/>\r\ndate = {2020-01-01},<br \/>\r\nurldate = {2020-01-01},<br \/>\r\nbooktitle = {18th Annual Imaging Network Ontario (ImNO) Symposium},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('59','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_59\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-file-pdf\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/www.imno.ca\/sites\/default\/files\/ImNO2020Proceedings.pdf\" title=\"https:\/\/www.imno.ca\/sites\/default\/files\/ImNO2020Proceedings.pdf\" target=\"_blank\">https:\/\/www.imno.ca\/sites\/default\/files\/ImNO2020Proceedings.pdf<\/a><\/li><li><i class=\"fas fa-file-pdf\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/Wu2020b.pdf\" title=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/Wu2020b.pd[...]\" target=\"_blank\">https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/Wu2020b.pd[...]<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('59','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Pigeau, Grace;  Elbatarny, Lydia;  Wu, Victoria;  Schonewille, Abigael;  Fichtinger, Gabor;  Ungi, Tamas<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" 
href=\"https:\/\/www.spiedigitallibrary.org\/conference-proceedings-of-spie\/11315\/1131508\/Ultrasound-image-simulation-with-generative-adversarial-network\/10.1117\/12.2549592.short\" title=\"https:\/\/www.spiedigitallibrary.org\/conference-proceedings-of-spie\/11315\/1131508\/Ultrasound-image-simulation-with-generative-adversarial-network\/10.1117\/12.2549592.short\" target=\"blank\">Ultrasound image simulation with generative adversarial network<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_volume\">vol. 11315, <\/span><span class=\"tp_pub_additional_pages\">pp. 54-60, <\/span><span class=\"tp_pub_additional_year\">2020<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_857\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('857','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_857\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('857','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_857\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('857','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_857\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{fichtinger2020g,<br \/>\r\ntitle = {Ultrasound image simulation with generative adversarial network},<br \/>\r\nauthor = {Grace Pigeau and Lydia Elbatarny and Victoria Wu and Abigael Schonewille and Gabor Fichtinger and Tamas Ungi},<br \/>\r\nurl = {https:\/\/www.spiedigitallibrary.org\/conference-proceedings-of-spie\/11315\/1131508\/Ultrasound-image-simulation-with-generative-adversarial-network\/10.1117\/12.2549592.short},<br 
\/>\r\nyear  = {2020},<br \/>\r\ndate = {2020-01-01},<br \/>\r\nvolume = {11315},<br \/>\r\npages = {54-60},<br \/>\r\npublisher = {SPIE},<br \/>\r\nabstract = {PURPOSE <br \/>\r\nIt is difficult to simulate realistic ultrasound images due to the complexity of acoustic artifacts that contribute to a real ultrasound image. We propose to evaluate the realism of ultrasound images simulated using a generative adversarial network. <br \/>\r\nMETHODS <br \/>\r\nTo achieve our goal, kidney ultrasounds were collected, and relevant anatomy was segmented to create anatomical label-maps using 3D Slicer. Adversarial networks were trained to generate ultrasound images from these labelmaps. Finally, a two-part survey of 4 participants with sonography experience was conducted to assess the realism of the generated images. The first part of the survey consisted of 50 kidney ultrasound images; half of which were real while the other half were simulated. Participants were asked to label each of the 50 ultrasound images as either real or simulated. In the second part of the survey, the participants were presented \u2026},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('857','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_857\" style=\"display:none;\"><div class=\"tp_abstract_entry\">PURPOSE <br \/>\r\nIt is difficult to simulate realistic ultrasound images due to the complexity of acoustic artifacts that contribute to a real ultrasound image. We propose to evaluate the realism of ultrasound images simulated using a generative adversarial network. <br \/>\r\nMETHODS <br \/>\r\nTo achieve our goal, kidney ultrasounds were collected, and relevant anatomy was segmented to create anatomical label-maps using 3D Slicer. 
Adversarial networks were trained to generate ultrasound images from these labelmaps. Finally, a two-part survey of 4 participants with sonography experience was conducted to assess the realism of the generated images. The first part of the survey consisted of 50 kidney ultrasound images; half of which were real while the other half were simulated. Participants were asked to label each of the 50 ultrasound images as either real or simulated. In the second part of the survey, the participants were presented \u2026<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('857','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_857\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/www.spiedigitallibrary.org\/conference-proceedings-of-spie\/11315\/1131508\/Ultrasound-image-simulation-with-generative-adversarial-network\/10.1117\/12.2549592.short\" title=\"https:\/\/www.spiedigitallibrary.org\/conference-proceedings-of-spie\/11315\/1131508\/[...]\" target=\"_blank\">https:\/\/www.spiedigitallibrary.org\/conference-proceedings-of-spie\/11315\/1131508\/[...]<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('857','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Gerolami, Justin;  Wu, Victoria;  Fauerbach, P Nasute;  Jabs, Doris;  Engel, Cecil Jay;  Rudan, J;  Merchant, Shaila;  Walker, Ross;  Anas, Emran Mohammad Abu;  Abolmaesumi, Purang;  Fichtinger, Gabor;  Ungi, Tamas;  Mousavi, Parvin<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" href=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/9176505\/\" title=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/9176505\/\" target=\"blank\">An End-to-End Solution for Automatic 
Contouring of Tumor Region in Intraoperative Images of Breast Lumpectomy<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_pages\">pp. 2003-2006, <\/span><span class=\"tp_pub_additional_year\">2020<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_1025\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1025','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_1025\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1025','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_1025\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1025','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_1025\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{fichtinger2020w,<br \/>\r\ntitle = {An End-to-End Solution for Automatic Contouring of Tumor Region in Intraoperative Images of Breast Lumpectomy},<br \/>\r\nauthor = {Justin Gerolami and Victoria Wu and P Nasute Fauerbach and Doris Jabs and Cecil Jay Engel and J Rudan and Shaila Merchant and Ross Walker and Emran Mohammad Abu Anas and Purang Abolmaesumi and Gabor Fichtinger and Tamas Ungi and Parvin Mousavi},<br \/>\r\nurl = {https:\/\/ieeexplore.ieee.org\/abstract\/document\/9176505\/},<br \/>\r\nyear  = {2020},<br \/>\r\ndate = {2020-01-01},<br \/>\r\npages = {2003-2006},<br \/>\r\npublisher = {IEEE},<br \/>\r\nabstract = {Breast-conserving surgery, also known as lumpectomy, is an early stage breast cancer treatment that aims to spare as much healthy breast tissue as possible. 
A risk associated with lumpectomy is the presence of cancer positive margins post operation. Surgical navigation has been shown to reduce cancer positive margins but requires manual segmentation of the tumor intraoperatively. In this paper, we propose an end-to-end solution for automatic contouring of breast tumor from intraoperative ultrasound images using two convolutional neural network architectures, the U-Net and residual U-Net. The networks are trained on annotated intraoperative breast ultrasound images and evaluated on the quality of predicted segmentations. This work brings us one step closer to providing surgeons with an automated surgical navigation system that helps reduce cancer-positive margins during lumpectomy.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('1025','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_1025\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Breast-conserving surgery, also known as lumpectomy, is an early stage breast cancer treatment that aims to spare as much healthy breast tissue as possible. A risk associated with lumpectomy is the presence of cancer positive margins post operation. Surgical navigation has been shown to reduce cancer positive margins but requires manual segmentation of the tumor intraoperatively. In this paper, we propose an end-to-end solution for automatic contouring of breast tumor from intraoperative ultrasound images using two convolutional neural network architectures, the U-Net and residual U-Net. The networks are trained on annotated intraoperative breast ultrasound images and evaluated on the quality of predicted segmentations. 
This work brings us one step closer to providing surgeons with an automated surgical navigation system that helps reduce cancer-positive margins during lumpectomy.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('1025','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_1025\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/9176505\/\" title=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/9176505\/\" target=\"_blank\">https:\/\/ieeexplore.ieee.org\/abstract\/document\/9176505\/<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('1025','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Wu, Victoria;  Asselin, Mark;  Ungi, Tamas;  Fichtinger, Gabor<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" href=\"https:\/\/dx.doi.org\/https:\/\/doi.org\/10.1007\/s11548-019-01969-3\" title=\"Detection of Spinal Ultrasound Landmarks Using Convolutional Neural Networks\" target=\"blank\">Detection of Spinal Ultrasound Landmarks Using Convolutional Neural Networks<\/a> <span class=\"tp_pub_type tp_  conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">33rd International Congress &amp; Exhibition on Computer Assisted Radiology and Surgery (CARS), <\/span><span class=\"tp_pub_additional_volume\">vol. 
14, <\/span><span class=\"tp_pub_additional_publisher\">Int J CARS, <\/span><span class=\"tp_pub_additional_address\">Rennes, France, <\/span><span class=\"tp_pub_additional_year\">2019<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a id=\"tp_links_sh_68\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('68','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_68\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('68','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_68\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{Wu2019b,<br \/>\r\ntitle = {Detection of Spinal Ultrasound Landmarks Using Convolutional Neural Networks},<br \/>\r\nauthor = {Victoria Wu and Mark Asselin and Tamas Ungi and Gabor Fichtinger},<br \/>\r\nurl = {https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/Wu2019b.pdf},<br \/>\r\ndoi = {https:\/\/doi.org\/10.1007\/s11548-019-01969-3},<br \/>\r\nyear  = {2019},<br \/>\r\ndate = {2019-01-01},<br \/>\r\nurldate = {2019-01-01},<br \/>\r\nbooktitle = {33rd International Congress & Exhibition on Computer Assisted Radiology and Surgery (CARS)},<br \/>\r\nvolume = {14},<br \/>\r\npublisher = {Int J CARS},<br \/>\r\naddress = {Rennes, France},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('68','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_68\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-file-pdf\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/Wu2019b.pdf\" 
title=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/Wu2019b.pd[...]\" target=\"_blank\">https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/Wu2019b.pd[...]<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/https:\/\/doi.org\/10.1007\/s11548-019-01969-3\" title=\"Follow DOI:https:\/\/doi.org\/10.1007\/s11548-019-01969-3\" target=\"_blank\">doi:https:\/\/doi.org\/10.1007\/s11548-019-01969-3<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('68','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Wu, Victoria;  Ungi, Tamas;  Fichtinger, Gabor<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" href=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/Wu2019a.pdf\" title=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/Wu2019a.pdf\" target=\"blank\">Using Deep Learning for Transverse Process Detection in Spinal Ultrasounds<\/a> <span class=\"tp_pub_type tp_  conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">17th Annual Imaging Network Ontario Symposium (ImNO), <\/span><span class=\"tp_pub_additional_publisher\">Imaging Network Ontario (ImNO), <\/span><span class=\"tp_pub_additional_address\">London, Ontario, <\/span><span class=\"tp_pub_additional_year\">2019<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a id=\"tp_links_sh_83\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('83','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_83\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('83','tp_bibtex')\" title=\"Show BibTeX entry\" 
style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_83\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{Wu2019a,<br \/>\r\ntitle = {Using Deep Learning for Transverse Process Detection in Spinal Ultrasounds},<br \/>\r\nauthor = {Victoria Wu and Tamas Ungi and Gabor Fichtinger},<br \/>\r\nurl = {https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/Wu2019a.pdf},<br \/>\r\nyear  = {2019},<br \/>\r\ndate = {2019-01-01},<br \/>\r\nurldate = {2019-01-01},<br \/>\r\nbooktitle = {17th Annual Imaging Network Ontario Symposium (ImNO)},<br \/>\r\npublisher = {Imaging Network Ontario (ImNO)},<br \/>\r\naddress = {London, Ontario},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('83','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_83\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-file-pdf\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/Wu2019a.pdf\" title=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/Wu2019a.pd[...]\" target=\"_blank\">https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/02\/Wu2019a.pd[...]<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" 
onclick=\"teachpress_pub_showhide('83','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><\/div><\/div><\/div>\n","protected":false},"featured_media":0,"template":"","meta":{"_acf_changed":false,"_uag_custom_page_level_css":"","site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","ast-disable-related-posts":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"class_list":["post-2474","qsc_member","type-qsc_member","status-publish","hentry"],"acf":[],"spectra_custom_meta":{"field_qsc_member_acf_email":[""],"_field_qsc_member_acf_email":["qsc_member_acf_email"],"qsc_member_acf_position":["Undergraduate Student"],"_qsc_member_acf_position":["field_qsc_member_acf_position"],"qsc_member_acf_department":["a:1:{i:0;s:19:\"School of Computing\";}"],"_qsc_member_acf_department":["field_qsc_member_acf_department"],"field_qsc_member_acf_organization":["Queen's 
University"],"_field_qsc_member_acf_organization":["qsc_member_acf_organization"],"field_qsc_member_acf_linkedin":[""],"_field_qsc_member_acf_linkedin":["qsc_member_acf_linkedin"],"field_qsc_member_acf_gscholar":[""],"_field_qsc_member_acf_gscholar":["qsc_member_acf_gscholar"],"field_qsc_member_acf_github":[""],"_field_qsc_member_acf_github":["qsc_member_acf_github"],"field_qsc_member_acf_researchgate":[""],"_field_qsc_member_acf_researchgate":["qsc_member_acf_researchgate"],"field_qsc_member_acf_web":[""],"_field_qsc_member_acf_web":["qsc_member_acf_web"],"field_qsc_member_acf_program_status":["Past"],"_field_qsc_member_acf_program_status":["qsc_member_acf_program_status"],"field_qsc_member_acf_start_year":["2018"],"_field_qsc_member_acf_start_year":["qsc_member_acf_start_year"],"field_qsc_member_acf_end_year":[""],"_field_qsc_member_acf_end_year":["qsc_member_acf_end_year"],"_uag_css_file_name":["uag-css-2474.css"],"_uag_page_assets":["a:9:{s:3:\"css\";s:263:\".uag-blocks-common-selector{z-index:var(--z-index-desktop) !important}@media (max-width: 976px){.uag-blocks-common-selector{z-index:var(--z-index-tablet) !important}}@media (max-width: 767px){.uag-blocks-common-selector{z-index:var(--z-index-mobile) !important}}\n\";s:2:\"js\";s:0:\"\";s:18:\"current_block_list\";a:7:{i:0;s:11:\"core\/search\";i:1;s:10:\"core\/group\";i:2;s:12:\"core\/heading\";i:3;s:17:\"core\/latest-posts\";i:4;s:20:\"core\/latest-comments\";i:5;s:13:\"core\/archives\";i:6;s:15:\"core\/categories\";}s:8:\"uag_flag\";b:0;s:11:\"uag_version\";s:10:\"1771033544\";s:6:\"gfonts\";a:0:{}s:10:\"gfonts_url\";s:0:\"\";s:12:\"gfonts_files\";a:0:{}s:14:\"uag_faq_layout\";b:0;}"]},"uagb_featured_image_src":{"full":false,"thumbnail":false,"medium":false,"medium_large":false,"large":false,"1536x1536":false,"2048x2048":false},"uagb_author_info":{"display_name":"Doug Martin","author_link":"https:\/\/labs.cs.queensu.ca\/perklab\/author\/"},"uagb_comment_info":0,"uagb_excerpt":"Victoria Wu Undergraduate 
Student School of Computing Queen&#8217;s University Member from 2018 to present Victoria is a Perk Lab alumna who was an undergraduate student in the Cognitive Science (COGS) Honours program at the Queen&#8217;s School of Computing. She currently works at Microsoft. Ungi, Tamas; Greer, Hastings; Sunderland, Kyle R.; Wu, Victoria; Baum, Zachary M&hellip;","_links":{"self":[{"href":"https:\/\/labs.cs.queensu.ca\/perklab\/wp-json\/wp\/v2\/qsc_member\/2474","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/labs.cs.queensu.ca\/perklab\/wp-json\/wp\/v2\/qsc_member"}],"about":[{"href":"https:\/\/labs.cs.queensu.ca\/perklab\/wp-json\/wp\/v2\/types\/qsc_member"}],"version-history":[{"count":0,"href":"https:\/\/labs.cs.queensu.ca\/perklab\/wp-json\/wp\/v2\/qsc_member\/2474\/revisions"}],"wp:attachment":[{"href":"https:\/\/labs.cs.queensu.ca\/perklab\/wp-json\/wp\/v2\/media?parent=2474"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}