{"id":2124,"date":"2024-08-25T20:52:37","date_gmt":"2024-08-25T20:52:37","guid":{"rendered":"https:\/\/labs.cs.queensu.ca\/perklab\/?post_type=qsc_member&#038;p=2124"},"modified":"2024-08-25T20:52:38","modified_gmt":"2024-08-25T20:52:38","slug":"jianmin-yang","status":"publish","type":"qsc_member","link":"https:\/\/labs.cs.queensu.ca\/perklab\/members\/jianmin-yang\/","title":{"rendered":"Jianmin Yang"},"content":{"rendered":"<div class=\"wp-block-columns is-layout-flex wp-block-columns-is-layout-flex qsc-member-single-core-info-container\">\n\t<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow qsc-member-single-photo-column\">\n\t\t<img loading=\"lazy\" decoding=\"async\" width=\"216\" height=\"250\" src=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/01\/Selfie.jpg\" class=\"qsc-member-single-photo wp-post-image\" alt=\"\" srcset=\"https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/01\/Selfie.jpg 414w, https:\/\/labs.cs.queensu.ca\/perklab\/wp-content\/uploads\/sites\/3\/2024\/01\/Selfie-259x300.jpg 259w\" sizes=\"auto, (max-width: 216px) 100vw, 216px\" \/>\n\t<\/div>\n\t<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow qsc-member-single-info-column\">\n\t\t<div class=\"qsc-member-name\"><h1>Jianmin Yang<\/h1><\/div>\n\t\t<div class=\"qsc-member-position\">MSc Student<\/div>\n\t\t<div class=\"qsc-member-department\">School of Computing<\/div>\n\t\t<div class=\"qsc-member-organization\">Queen&#8217;s University<\/div>\n\t\t<div class=\"qsc-member-contact\">\n\t\t\t<div class=\"qsc-member-email\"><a href=\"mailto:22bd23@queensu.ca\">22bd23@queensu.ca<\/a><\/div>\n\t\t\t<div class=\"qsc-member-socials\">\n\t\t\t<a href=\"https:\/\/www.linkedin.com\/in\/jianming-yang-190a83242\/\" title=\"LinkedIn\"><i class=\"fa-brands fa-linkedin\"><\/i><\/a>\n\t\t\t<a href=\"https:\/\/github.com\/JianmingY\" title=\"GitHub\"><i class=\"fa-brands fa-github\"><\/i><\/a>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/div>\n<\/div>\n<div class=\"qsc-member-bio\">\n\t<div class=\"teachpress_pub_list\"><form name=\"tppublistform\" method=\"get\"><a name=\"tppubs\" id=\"tppubs\"><\/a><\/form><div class=\"teachpress_publication_list\"><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Yang, Jianming;  Hisey, Rebecca;  Bierbrier, Joshua;  Law, Christine;  Fichtinger, Gabor;  Holden, Matthew<\/p><p class=\"tp_pub_title\">Frame Selection Methods to Streamline Surgical Video Annotation for Tool Detection Tasks <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_pages\">pp. 
Journal Article, IEEE, pp. 892-898, 2024.

Abstract: Given the growing volume of surgical data and the increasing demand for annotation, there is a pressing need to streamline the annotation process for surgical videos. Annotation tools for object detection tasks have evolved considerably, reducing the time and effort annotation requires, and many initial frame selection approaches for Artificial Intelligence (AI) assisted annotation further reduce human effort. However, these methods have rarely been implemented and reported for surgical datasets, especially cataract surgery datasets. The choice of which initial frames to annotate, made before any tools or algorithms are applied, determines annotation efficiency. In this paper, we therefore prioritize developing a method for selecting initial frames to facilitate the subsequent automated annotation process. We propose a customized initial frame selection method based on feature clustering and compare it to commonly used temporal selection methods. In each method, initial frames from cataract surgery videos are selected to train a surgical tool detection model; the model then assists annotation by predicting bounding boxes for the objects in the remaining frames. The methods are evaluated by how many edits users must make when annotating the initial frames, how many edits users are expected to make to correct all predictions, and the total annotation cost of each method. Results indicate that, on average, the proposed cluster-based approach requires the fewest total edits and the lowest total annotation cost compared to conventional methods. These findings highlight a promising direction for a complete application featuring streamlined AI-assisted annotation for surgical tool detection tasks.

BibTeX:

@article{yang2024,
  title     = {Frame Selection Methods to Streamline Surgical Video Annotation for Tool Detection Tasks},
  author    = {Jianming Yang and Rebecca Hisey and Joshua Bierbrier and Christine Law and Gabor Fichtinger and Matthew Holden},
  year      = {2024},
  date      = {2024-01-01},
  pages     = {892-898},
  publisher = {IEEE},
  pubstate  = {published},
  tppubtype = {article}
}
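The cluster-based selection described in the abstract can be illustrated with a short sketch. This is a minimal illustration under assumptions of my own, not the paper's implementation: it assumes each frame has already been reduced to a pooled feature vector from some pretrained backbone, runs k-means over those vectors, and keeps the frame nearest each cluster centre as a representative. The function `select_initial_frames` and its parameters are hypothetical names introduced here for illustration.

```python
# Minimal sketch of cluster-based initial frame selection (assumed details,
# not taken from the paper): one representative frame per k-means cluster.
import numpy as np
from sklearn.cluster import KMeans

def select_initial_frames(features: np.ndarray, n_frames: int, seed: int = 0) -> list:
    """Pick one representative frame index per feature cluster.

    features: (num_frames, feature_dim) array, e.g. pooled per-frame CNN
              embeddings of a surgery video.
    n_frames: number of initial frames to hand to the annotator.
    """
    kmeans = KMeans(n_clusters=n_frames, random_state=seed, n_init=10).fit(features)
    selected = []
    for center in kmeans.cluster_centers_:
        # The frame whose embedding lies closest to the cluster centre is
        # taken as that cluster's representative.
        distances = np.linalg.norm(features - center, axis=1)
        selected.append(int(distances.argmin()))
    return sorted(set(selected))

# Example: 5000 frames with 512-d embeddings, choose 20 frames to annotate first.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(5000, 512)).astype(np.float32)
print(select_initial_frames(embeddings, n_frames=20))
```

In the AI-assisted workflow the abstract outlines, the selected frames would be annotated manually and used to train the tool detection model, whose predicted bounding boxes on the remaining frames would then only need correction edits rather than annotation from scratch.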