{"id":75,"date":"2022-10-12T20:04:22","date_gmt":"2022-10-12T20:04:22","guid":{"rendered":"https:\/\/etlab.cs.queensu.ca\/?page_id=75"},"modified":"2025-12-10T10:44:17","modified_gmt":"2025-12-10T15:44:17","slug":"events","status":"publish","type":"page","link":"https:\/\/labs.cs.queensu.ca\/etlab\/events\/","title":{"rendered":"Events"},"content":{"rendered":"\n<h2 class=\"wp-block-heading has-large-font-size\">social, ethical and legal issues in computing lecture series<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\" style=\"font-size:25px\"><strong>Upcoming Events<\/strong><\/h3>\n\n\n\n<h5 class=\"wp-block-heading\">Machines and Measurements: the Lens of Bias vs the Lens of Oppression<\/h5>\n\n\n\n<p>January 16, 2026<br>Shen-Yi Liao &#8211; University of Puget Sound<\/p>\n\n\n\n<p>The case study for this talk juxtaposes two medical devices, pulse oximeters and spirometers. In isolation, the stories of these two medical devices appear very different. Pulse oximeters have a bias in the physics of their measurement, and spirometers have a bias in interpretation. Pulse oximeters have their bias due to the absence of race correction, and spirometers have theirs due to the presence of race correction. Yet, despite these differences, the devices are also remarkably\u2014yet so obviously\u2014similar in one important respect: they work better for white patients than Black patients.<br>I argue that the juxtaposition of these two devices shows the need for centering oppression rather than bias. From the lens of bias, it can seem coincidental that there are multitudes of medical devices like pulse oximeters and spirometers that work less well for nonwhite people, women and nonbinary people, disabled people, and other socially subordinated peoples\u2014after all, they involve very different biases. 
However, from the lens of oppression, it is no accident that there are structural similarities across different dimensions of social inequality and across different contexts, and interlocking relationships between different dimensions of social inequality.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" style=\"font-size:25px\"><strong>Past Events<\/strong><\/h2>\n\n\n\n<h5 class=\"wp-block-heading\"><strong>Data of the dead: Griefbots and the digital afterlife<\/strong><\/h5>\n\n\n\n<p>October 29, 2025<br><em>Karina Vold<\/em> &#8211; University of Toronto<\/p>\n\n\n\n<p>Generative AI systems can now be easily fine-tuned to try to recreate or mimic deceased individuals, using the data they leave behind. These digital recreations are sometimes being created as \u201cgriefbots\u201d &#8211; to enable grieving individuals to continue to talk to their dead loved ones. This talk examines the ethical and legal challenges such technologies present, with a particular focus on Canadian contexts of health, privacy, and property law. I will consider questions about digital ownership, consent, and mental health. 
I argue that developing normative and legal frameworks for better management of the data of the dead is both urgent and necessary.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\"><strong>Copyright law<\/strong><\/h5>\n\n\n\n<p>October 8, 2025<br><em>Alicia Cappello<\/em> &#8211; Queen&#8217;s University<\/p>\n\n\n\n<h5 class=\"wp-block-heading\"><strong>Generative AI in Education<\/strong><\/h5>\n\n\n\n<p>September 24, 2025<br><em>Christian Muise<\/em> &#8211; Queen&#8217;s University<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left has-medium-font-size\"><strong>The Moral Responsibility of AI Scientists<\/strong><\/h4>\n\n\n\n<p class=\"has-text-align-left\">November 26, 2024<br><em>Catherine Stinson<\/em> &#8211; Queen&#8217;s University<\/p>\n\n\n\n<p style=\"line-height:1.5\">Are scientists the people best positioned to make decisions about the ethical impacts of their research and whether that research should be allowed to proceed? Or should regulatory bodies, governments or the public decide where the line should be drawn? Percy Bridgman famously argued that scientists have a special status that confers on them freedom from considering consequences beyond scientific ones. Oppenheimer famously disagreed.<\/p>\n\n\n\n<p>With this background in mind, we look at a contemporary case study in the development of AI. Geoffrey Hinton has publicly announced that he is now scared of the potential impacts of deep learning, a technology he helped build. Should technologies like this be granted the same special status that science has, according to Bridgman? Are the developers of such technologies in a privileged position to evaluate the potential impacts of their work? Has the tech industry been living up to its responsibility to self-regulate? 
We argue that the answer to each of these questions is &#8220;no&#8221; and conclude that external regulation is warranted, even on Bridgman&#8217;s account.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left has-medium-font-size\"><strong>Copyright &amp; AI: Legalities and Legal Issues<\/strong><\/h4>\n\n\n\n<p class=\"has-text-align-left\">November 19, 2024<br><em>Meaghan Shannon<\/em> &#8211; Queen&#8217;s University<\/p>\n\n\n\n<p style=\"line-height:1.5\">This lecture will provide an overview of what copyright is and how it works, focusing on the interplay between legislation and case law as well as the relationship between copyright law and contract law. Canadian copyright law is intended to balance the rights conferred upon authors of works with the exceptions that are available to users of works. The Canadian fair dealing exception and the American fair use defense will be explored during a deep dive into the relevant case law. Once an understanding of copyright is established, the current legal landscape will be considered and applied to artificial intelligence so that the legalities (legal obligations) and legal issues can be recognized and perhaps reconciled. <\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\"><strong>Who the Computer Sees: Race, Gender and AI<\/strong><\/h4>\n\n\n\n<p>Mar 27, 2024<br><em>Carla Fehr<\/em> &#8211; University of Waterloo, Wolfe Chair in Scientific and Technological Literacy<\/p>\n\n\n\n<p>Facial recognition systems can do a lot more than open your smartphone. They can sort faces into many categories, including emotional state, age, race, and sex. Most Americans are, without their consent, included in government face recognition databases. This talk develops a case study in which scholar, activist, and public figure Joy Buolamwini diagnoses a now-famous failure of facial recognition systems to \u2018see\u2019 and accurately classify Black women\u2019s faces. 
This case illustrates many important issues in the ethics and politics of AI. In this lecture I highlight how this problem is an EDI problem and caution against \u2018easy\u2019 solutions that can both backfire and lead to the exploitation of diverse researchers.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\"><strong>Artificial Intelligence, Artifice, and Art<\/strong><\/h4>\n\n\n\n<p>Mar 20, 2024<br><em>Ted Chiang<\/em> &#8211; Science Fiction writer (<a href=\"https:\/\/en.wikipedia.org\/wiki\/Ted_Chiang\">Bio<\/a>)<\/p>\n\n\n\n<p style=\"line-height:1.5\">Does artificial intelligence deserve to be called intelligence? What are the uses of synthetic text and imagery, and what would it take for those to be artistic mediums?<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\"><strong>Disability, Social Media and AI: Implications For the Computing Sciences<\/strong><\/h4>\n\n\n\n<p>Mar 13, 2024<br><em>Johnathan Flowers<\/em> &#8211; California State University, Northridge<\/p>\n\n\n\n<p style=\"line-height:1.5\">This talk is divided into two parts: the first part talks through some of the social implications of technology and disability, with an emphasis on AI and emerging technologies as they intersect with ableism and ableist discourse in society, covering some of the material in my chapter on AI and disability in the Bloomsbury Guide to Philosophy of Disability. The second part will engage with the computing sciences and disability, specifically the cultural environment of the computing sciences and the ways it relies on a \u201cculture of smartness\u201d which maintains ableism within the field. 
<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\"><strong>The Art of Digital Capitalism<\/strong><\/h4>\n\n\n\n<p>Mar 6, 2024<br><em>Tung-Hui Hu<\/em> &#8211; University of Michigan, English &amp; Digital Studies<\/p>\n\n\n\n<p>While the ultimate compliment to an AI model is that it can write poetry or create art, this talk looks to actual writers and artists who have worked alongside digital technology. Moving from the 1970s, when a group of artists decided to build a decentralized network, to the present moment, when artists and writers are training their own AI models, this talk is structured around these footnotes to the history of computation. These artistic works aren\u2019t just decorative or speculative, though; instead, they have the potential to teach us how to live with (and perhaps turn away from) the devastating consequences of digital capitalism.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\"><strong>Social Media, Polarization and Conflict<\/strong><\/h4>\n\n\n\n<p>Feb 28, 2024<br><em>Jonathan Stray<\/em> &#8211; UC Berkeley Center for Human-compatible AI<\/p>\n\n\n\n<p style=\"line-height:1.5\">It&#8217;s now commonplace to say that ranking algorithms used by major social media and news platforms are tearing us apart, but what does this mean, what is the evidence, and what could we do differently? I&#8217;ll begin with some frameworks for thinking about conflict and polarization, to more clearly define what the goals of &#8220;better&#8221; algorithms might&nbsp;be. Then we&#8217;ll look at theories of how social media ranking algorithms can affect conflict, and data which might clarify what is actually happening. We&#8217;ll conclude by asking the question: if our algorithms are bad, what would better algorithms look like?<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\"><strong>Pragmatism vs. 
Principle and How We Get Both Wrong: Inclusive Design Gifts<\/strong><\/h4>\n\n\n\n<p>Feb 14, 2024<br><em>Jess Mitchell<\/em> &#8211; Ontario College of Art and Design<\/p>\n\n\n\n<p style=\"line-height:1.5\">From decision-making, thinking, and the creation of everyday things, what are we missing? And what hides in the gaps? An inclusive design perspective gives us an opportunity to approach just about everything differently. Let\u2019s have a chat about approaching things differently.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\"><strong>Virtual Reality as Artistic and Reflexive Media<\/strong><\/h4>\n\n\n\n<p>Feb 7, 2024<br><em>Sojung Bahng<\/em> &#8211; Queen&#8217;s University, Department of Film &amp; Media and DAN School of Drama and Music<\/p>\n\n\n\n<p style=\"line-height:1.5\">This talk will introduce the utilization of computational media in artistic and cinematic practices. The primary focus will be on the role of virtual reality as a reflexive material device for storytelling, serving as a lens to reflect our perception and consciousness within socio-cultural contexts.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\"><strong>Algorithmic Bias and Fairness: Exploring Historical Context, Methodological Shortcomings and Future Challenges<\/strong><\/h4>\n\n\n\n<p>Jan 31, 2024<br><em>Rina Khan<\/em> &#8211; Queen&#8217;s University, School of Computing<\/p>\n\n\n\n<p style=\"line-height:1.5\">AI has seen incredible strides in the past decade and is now ubiquitous in various applications we interact with every day. AI applications have also demonstrated the ability to perpetuate harmful biases and stereotypes, and even to commit actual harm. This can be observed in facial recognition, law enforcement, hiring screening, automated grading, and natural language processing, among others. In this talk, I will examine the lessons that can be learned from the history of computing and AI in relation to algorithmic fairness. 
I will explore the methodological factors that lead to algorithmic bias and harm and the human factors that are intrinsically interwoven. I will finally discuss proposed mitigation strategies, and the challenges that lie ahead towards creating fairer models.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\"><strong>Designing for Coexistence: Adaptability, Equity, and Ethical Pluralism in Sociotechnical Systems<\/strong><\/h4>\n\n\n\n<p>Jan 17, 2024<br><em>Mohammad Rashidujjaman Rifat<\/em> &#8211; University of Toronto, Computer Science<\/p>\n\n\n\n<p style=\"line-height:1.5\">Many technologies today are built on Western scientific principles and empirical data. This approach often overlooks or even discriminates against people whose values are deeply rooted in traditional beliefs and ethics. In his talk, putting faith at the center of analysis, Rifat will examine how the prevailing ethical perspectives in technology development tend to favor some communities while neglecting or marginalizing others worldwide. Rifat will share insights from his diverse research, which spans areas like sustainability, development, privacy, and the prevention of online harms, to highlight how this marginalization occurs. He will then explain his approach to addressing these challenges by integrating theories from postsecular, postcolonial, and decolonial studies with advanced computing techniques, ranging from deep learning to virtual reality. 
His goal is to develop technologies that are more inclusive, equitable, and plural, especially for those whose ethics and traditions have been overlooked.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\"><strong>Epistemic Corruption and Interested Knowledge<\/strong><\/h4>\n\n\n\n<p>Nov 23, 2023<br><em>Sergio Sismondo<\/em> &#8211; Queen&#8217;s University, Philosophy<\/p>\n\n\n\n<p style=\"line-height:1.5\">When a system that produces and distributes knowledge importantly loses integrity, ceasing to provide the kinds of trusted knowledge expected of it, we can label this \u2018epistemic corruption\u2019. It turns out that such systems are often more fragile than they appear, and they can lose their integrity as a result of internal or external pressures. It also turns out that important actors will often disagree about what constitutes epistemic corruption or which practices are cases \u2013 and hence it is important to look at accusations and defences with a measure of neutrality. I will present a small handful of examples of epistemic corruption, in an attempt to understand some of the stakes.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\"><strong>Towards Equitable Language Technologies<\/strong><\/h4>\n\n\n\n<p>Nov 16, 2023<br><em>Su Lin Blodgett<\/em> &#8211; Microsoft Research Montreal<\/p>\n\n\n\n<p style=\"line-height:1.5\">Language technologies are now ubiquitous. Yet the benefits of these technologies do not accrue evenly to all people, and they can be harmful; they can reproduce stereotypes, prevent speakers of \u201cnon-standard\u201d language varieties from participating fully in public discourse, and reinscribe historical patterns of linguistic discrimination. In this talk, I will take a tour through the rapidly emerging body of research examining bias and harm in language technologies. 
I will offer some perspective on the many challenges of this work, ranging from how we anticipate and measure language-related harms to how we grapple with the complexities of where and how language technologies are encountered. I will conclude by discussing some future directions towards more equitable technologies.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\"><strong>An Approach Towards Accessibility and Inclusive Design&nbsp;<\/strong><\/h4>\n\n\n\n<p>Nov 9, 2023<br><em>Matt Jacobs and Eric Kellenberger<\/em> &#8211; Queen&#8217;s University and San Jos\u00e9 State University<\/p>\n\n\n\n<p style=\"line-height:1.5\">It is easy to mistake accessibility and inclusive design for auxiliary efforts, where accommodations are simply appended to an existing product. In truth, the most extreme examples of need often provide valuable insight into features that benefit everyone. This concept is the driving force behind &#8216;Universal Design.&#8217; The goal of the present talk is to provide frameworks that broaden our ability to examine accessibility and inclusion, equipping the audience with tools for more iterative, person-driven design approaches. The emphasis will be more on how to interpret and approach the problem, rather than the exact solution itself.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\"><strong>Terraforming Bits &amp; Carbonivorous Clouds: On the Metabolic Rift of Computation<\/strong><\/h4>\n\n\n\n<p>Nov 2, 2023<br><em>Steven Gonzalez Monserrate <\/em>&#8211; Goethe University<\/p>\n\n\n\n<p style=\"line-height:1.5\">In the nineteenth century, Karl Marx formulated the concept of \u201cmetabolic rift\u201d to describe capitalism\u2019s unsustainable expansion as chemical fertilizers depleted soil nutrients and smog from factories choked the skies of an industrializing Europe. Today, much of what society describes as the \u201cCloud\u201d resides in data centers not so unlike Marx\u2019s factories. 
They are the invisible engines of digital capitalism; their pooled, remote computational power and storage capacity are the informatic backbone of everything from social media to payroll to ChatGPT. Like capitalism, computation is a metabolic process. Drawing on six years of ethnographic research in data centers located in the United States, Puerto Rico, Iceland, and Singapore, this lecture surveys the global and local environmental impacts of cloud computing including carbon emissions, water footprint, electronic waste output, land use, and noise pollution. Inspired by science and technology studies and speculative fiction, alternative data ecologies are presented as a corrective to digital capitalism\u2019s environmental excess.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\"><strong>A Critical Look at Canada\u2019s Proposed Artificial Intelligence and Data Act<\/strong><\/h4>\n\n\n\n<p>Oct 26, 2023<br><em>Teresa Scassa <\/em>&#8211; University of Ottawa<\/p>\n\n\n\n<p style=\"line-height:1.5\">How do we regulate a technology that crosses all sectors and industries, and that presents considerable risks alongside its promises? Canada\u2019s response to this question, the proposed Artificial Intelligence and Data Act (AIDA), is currently before the INDU committee of Parliament. If passed, AIDA will provide for ex ante regulation of commercial AI in Canada. This presentation offers a critical look at AIDA, placing it within the broader context of other governance work in Canada and abroad.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left has-medium-font-size\"><strong>The Artificial Sublime<\/strong><\/h4>\n\n\n\n<p class=\"has-text-align-left\">Oct 5, 2023<br><em>Regina Rini <\/em>&#8211; York University<\/p>\n\n\n\n<p class=\"has-text-align-left\" style=\"line-height:1.5\">AI tools like Dall-E, Midjourney, and even ChatGPT can produce objects that look like artwork. But is it really art? 
Here I will argue that AI is surprisingly well-suited to a particular type of artistic value: the sublime. Sublimity, according to Kant, is the experience of encountering something so vast that the human mind cannot comprehend it. Kant thought that this could be found only in nature, not in art made by humans. But, I will argue, he was wrong about that last part \u2013 and it turns out that AI can produce sublime experiences too.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left has-medium-font-size\"><strong>The Future of Work in Canada &#8211; A Public Policy Perspective<\/strong><\/h4>\n\n\n\n<p class=\"has-text-align-left\">Sep 28, 2023<br><em>Sunil Johal <\/em>&#8211; University of Toronto <\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\"><strong>Industry Presence and Influence in AI<\/strong><\/h4>\n\n\n\n<p>Sep 7, 2023<br><em>Will Aitken <\/em>&#8211; Queen&#8217;s University<\/p>\n\n\n\n<p style=\"line-height:1.5\">The advent of transformers, higher computational budgets, and big data has engendered remarkable progress in Natural Language Processing (NLP). Impressive performance of industry pre-trained models has garnered public attention in recent years and made news headlines. That these are industry models is noteworthy. Rarely, if ever, are academic institutes producing exciting new NLP models. Using these models is critical for competing on NLP benchmarks and correspondingly to stay relevant in NLP research. We surveyed 100 papers published at EMNLP 2022 to determine whether this phenomenon constitutes a reliance on industry for NLP publications. We find that there is indeed a substantial reliance. Citations of industry artifacts and contributions across categories are at least three times greater than industry publication rates per year. Quantifying this reliance does not settle how we ought to interpret the results. 
We discuss two possible perspectives: 1) Is collaboration with industry still collaboration in the absence of an alternative? Or 2) has free NLP inquiry been captured by the motivations and research direction of private corporations?<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\">Topic: <strong>Copyright and Fair Use<\/strong><\/h4>\n\n\n\n<p>Mar 31, 2023<br><em>John Watkinson <\/em>&#8211; Larva Labs <\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\"><strong>Ethical Issues in the Mass Collection of Human Rights Documentation<\/strong><\/h4>\n\n\n\n<p>Mar 24, 2023<br><em>Yvonne Ng <\/em>&#8211; WITNESS<\/p>\n\n\n\n<p style=\"line-height:1.5\">Investigators, researchers, and archivists around the world are using computing tools and services to collect and preserve large quantities of human rights documentary evidence, often without considering all the potential unintended consequences and harms. We will discuss some of the ethical issues that arise, and ways that some have found to approach this work responsibly.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\"><strong>Light-touch ethics: Responsible AI\u2019s role in government<\/strong><\/h4>\n\n\n\n<p>Mar 17, 2023<br><em>Ana Brandusescu<\/em> &#8211; McGill University<\/p>\n\n\n\n<p style=\"line-height:1.5\">A part of the artificial intelligence (AI) ethics movement, responsible AI has become a dominant strategy in governing AI, rooted in corporate social responsibility. One such example is the algorithmic impact assessment (AIA). Created by governments and professional associations, a typical AIA produces a points-based reward system for impact and risk assessment levels for an AI system. 
This talk will address power and influence in responsible AI and the broader implications for the governance of AI and its ethics.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\"><strong>Why privacy doesn\u2019t matter and understanding what\u2019s really at stake does<\/strong>.<\/h4>\n\n\n\n<p>Mar 3, 2023<br><em>LLana James<\/em> &#8211; University of Toronto<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\"><strong>Artificial Intelligence: Navigating the Intersection of Ethics, Law, and Policy<\/strong><\/h4>\n\n\n\n<p>Feb 17, 2023<br><em>Kassandra McAdams-Roy<\/em><\/p>\n\n\n\n<p style=\"line-height:1.5\">This lecture will examine the legal, ethical and policy considerations surrounding the development and use of Artificial Intelligence (AI). The widespread adoption of AI has created new challenges for society, including issues related to data privacy, algorithmic bias, accountability, human safety and more. The lecture will explore some of the current and emerging laws, regulations and other normative frameworks governing AI. It will also discuss the ethical considerations surrounding the use of AI, the broader policy implications, and will consider how best to balance the benefits of this technology with the need to protect individual rights and interests.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\"><strong>What is creativity, and what does it have to do with labour and computers?<\/strong><\/h4>\n\n\n\n<p>Feb 10, 2023<br><em>Darren Abramson<\/em> &#8211; Dalhousie University<\/p>\n\n\n\n<p style=\"line-height:1.5\">What does it mean to create something? What do we deserve for our labour? I briefly consider the concept of creativity and argue for a particular view with examples from machine learning. 
Then I consider the value of labour in programming, and contrast a view from the turn of the millennium with perspectives from recent events.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\"><strong>Inclusive Design, Accessibility and the Outlier Challenge<\/strong><\/h4>\n\n\n\n<p>Feb 3, 2023<br><em>Jutta Treviranus<\/em> &#8211; OCADU<\/p>\n\n\n\n<p style=\"line-height:1.5\">What is inclusive design? How is it situated with respect to other forms of design and accessibility? What approaches does it offer for complexity, uncertainty, disparity, and wicked decisions?<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\"><strong>Anticheat, Antitox, and the Bottom Line<\/strong><\/h4>\n\n\n\n<p>Jan 27, 2023<br><em>Kyle Boerstler<\/em> &#8211; Activision<\/p>\n\n\n\n<p style=\"line-height:1.5\">It seems like a no-brainer that keeping cheaters out of games would be a good idea for companies (and it&#8217;s why jobs like mine exist). However, tensions appear when the population of cheaters overlaps with the population of spenders in games. This problem becomes even worse for Antitox, because the perceived harm is lower, often goes unreported, and does not have an obvious solution. For these reasons, investment in antitox is often below investment in anticheat, which means the solutions are often more heavy-handed, and less likely to be implemented because it is even more common for toxic players and spenders to overlap. 
In this talk, I will cover these issues from my standpoint as a data scientist, addressing the tensions and discussing the relative effects on our player populations.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\"><strong>The Problem with Automated Content Moderation<\/strong><\/h4>\n\n\n\n<p>Jan 20, 2023<br><em>Zeerek Talak<\/em> &#8211; Simon Fraser University<\/p>\n\n\n\n<p style=\"line-height:1.5\">Online content moderation using machine learning is a task that is necessary, yet it has failed in its mission to protect marginalized communities that are disproportionately at risk of harms. Claims have been made that the issue is the availability of resources, i.e. datasets and adequately advanced machine learning models. I argue in this talk that the fundamental reason is that the power dynamics which govern our social structures have not been adequately subverted to afford the protection of marginalized communities. Through a critical reading of machine learning, I show how the task of protecting marginalized communities is at odds with machine learning without an associated restructuring of the power dynamics that govern the technology. <\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\"><strong>Portrait of the Artist as a Young Algorithm<\/strong><\/h4>\n\n\n\n<p>Nov 21, 2022<br><em>Sofie Vlaad<\/em> &#8211; Queen&#8217;s University<\/p>\n\n\n\n<p>Sofie\u2019s research is firmly rooted in both feminist philosophy and transgender studies. These twin schools of thought inform her work in ways that are both explicit and implicit. Her current project brings together ethics of artificial intelligence, philosophy of creativity, and digital poetics to explore a series of related questions: Might we consider poetry constructed with the assistance of machine learning to be a product of creativity? If so, how is this form of creativity shaped by algorithmic bias? 
Does computer generated poetry have aesthetic value?<\/p>\n\n\n\n<p>Currently Sofie is working on an article that posits trans poetics as a way of doing trans philosophy, a co-authored piece exploring how we might epistemically ground diversity projects in AI, and a collaborative arts project exploring queer\/mad\/trans\/femme futures.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left has-medium-font-size\"><strong>What Software Eats: The Banal Violences of Efficiency and How to Bite Back<\/strong><\/h4>\n\n\n\n<p>Nov 14, 2022<br><em>Bianca Wylie<\/em> &#8211; Digital Public, Tech Reset Canada<\/p>\n\n\n\n<p>Bianca is a writer with a dual background in technology and public engagement. She is a partner at Digital Public and a co-founder of Tech Reset Canada. She worked for several years in the tech sector in operations, infrastructure, corporate training, and product management. Then, as a professional facilitator, she spent several years co-designing, delivering and supporting public consultation processes for various governments and government agencies. She founded the Open Data Institute Toronto in 2014 and co-founded Civic Tech Toronto in 2015.<\/p>\n\n\n\n<p>Bianca\u2019s writing has been published in a range of publications including: Boston Review, VICE, The Globe and Mail, and Toronto Life. She also posts on Medium. 
She is currently a member of the advisory boards for the Electronic Privacy Information Centre (EPIC), The Computational Democracy Project and the Minderoo Tech &amp; Policy Lab and is a senior fellow at the Centre for International Governance Innovation.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left has-medium-font-size\"><strong>Darwin&#8217;s Animoji: Histories of Racialization in Facial Analyses Past and Present<\/strong><\/h4>\n\n\n\n<p class=\"has-text-align-left\">Oct 31, 2022<br><em>Luke Stark<\/em> &#8211; University of Western Ontario<\/p>\n\n\n\n<p>Luke Stark is an Assistant Professor in the Faculty of Information and Media Studies at the University of Western Ontario. His work interrogates the historical, social, and ethical impacts of computing and artificial intelligence technologies, particularly those mediating social and emotional expression. His scholarship highlights the asymmetries of power, access and justice that are emerging as these systems are deployed in the world, and the social and political challenges that technologists, policymakers, and the wider public face as a result.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\"><strong>Computing and Global Development: A Critical Perspective<\/strong><\/h4>\n\n\n\n<p>Oct 17, 2022<br><em>Ishtiaque Ahmed<\/em> &#8211; University of Toronto<\/p>\n\n\n\n<p>His research interest is situated at the intersection of computer science and the critical social sciences. His work is often motivated by social justice and sustainability issues, and he puts them in the academic contexts of Human-Computer Interaction (HCI) and Information and Communication Technology and Development (ICTD). He operates through a wide range of technical and methodological apparatuses from ethnography to design, and from NLP to tangible user interface. 
&nbsp;<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\"><strong>Understanding Conflicts of Interest in Ethics of AI Research<\/strong><\/h4>\n\n\n\n<p>Oct 3, 2022<br><em>Mohamed Abdalla<\/em> &#8211; University of Toronto<\/p>\n\n\n\n<p>As more governmental bodies look to regulate the application of AI, it is important that the incentives of those consulted be clearly understood and taken into account. This talk will explore the role of industry funding on AI research and the incentives such funding creates. To do this, we will: i) discuss how conflicts of interest are treated in other fields of academia, ii) quantify financial relationships between researchers and industry, and iii) discuss how young professionals and future researchers should approach the issue of corporate funding.&nbsp;<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-medium-font-size\"><strong>The Automation of Everything<\/strong><\/h4>\n\n\n\n<p>Sep 19, 2022 <br><em>David Murakami Wood<\/em> &#8211; University of Ottawa<\/p>\n\n\n\n<p>Beginning with factory work and the introduction of the production line, this presentation examines how automation within capitalism has progressed from the workplace through to the liminal spaces between work and not-work towards the full automation of the social. 
It draws on work from fields as diverse as Political Economy, Surveillance Studies, Science and Technology Studies, Geography and Environmental Studies to trace the implications of automation for work and life in the era of platform capitalism.<\/p>\n\n\n\n<h4 class=\"wp-block-heading has-text-align-left has-medium-font-size\"><strong>Refusing AI Contact: Autism, Algorithms and the Dangers of &#8216;Technopsyence&#8217;<\/strong><\/h4>\n\n\n\n<p class=\"has-text-align-left\">Sep 12, 2022<br><em>Os Keyes<\/em> &#8211; University of Washington<\/p>\n\n\n\n<p>Their work brings together the sociology and philosophy of technoscience to examine the interplay of gender, disability, technology and power. Current projects focus on the framing and co-relations between autistic people and artificial intelligence, and the ways trans people are the subject of, and subject to, scientific research.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>social, ethical and legal issues in computing lecture series Upcoming Events Machines and Measurements: the Lens of Bias vs the 
[&hellip;]<\/p>\n","protected":false},"author":70,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_acf_changed":false,"_uag_custom_page_level_css":"","site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","ast-disable-related-posts":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"class_list":["post-75","page","type-page","status-publish","hentry"],"acf":[],"spectra_custom_meta":{"_edit_last":["2"],"footnotes":[""],"_edit_lock":["1765381465:70"],"_uag_css_file_name":["uag-css-75.css"],"_uag_page_assets":["a:9:{s:3:\"css\";s:263:\".uag-blocks-common-selector{z-index:var(--z-index-desktop) !important}@media (max-width: 976px){.uag-blocks-common-selector{z-index:var(--z-index-tablet) !important}}@media (max-width: 767px){.uag-blocks-common-selector{z-index:var(--z-index-mobile) 
!important}}\n\";s:2:\"js\";s:0:\"\";s:18:\"current_block_list\";a:8:{i:0;s:12:\"core\/heading\";i:1;s:14:\"core\/paragraph\";i:2;s:11:\"core\/search\";i:3;s:10:\"core\/group\";i:4;s:17:\"core\/latest-posts\";i:5;s:20:\"core\/latest-comments\";i:6;s:13:\"core\/archives\";i:7;s:15:\"core\/categories\";}s:8:\"uag_flag\";b:0;s:11:\"uag_version\";s:10:\"1774292936\";s:6:\"gfonts\";a:0:{}s:10:\"gfonts_url\";s:0:\"\";s:12:\"gfonts_files\";a:0:{}s:14:\"uag_faq_layout\";b:0;}"]},"uagb_featured_image_src":{"full":false,"thumbnail":false,"medium":false,"medium_large":false,"large":false,"1536x1536":false,"2048x2048":false},"uagb_author_info":{"display_name":"cs257","author_link":"https:\/\/labs.cs.queensu.ca\/etlab\/author\/cs257\/"},"uagb_comment_info":0,"uagb_excerpt":"social, ethical and legal issues in computing lecture series Upcoming Events Machines and Measurements: the Lens of Bias vs the [&hellip;]","_links":{"self":[{"href":"https:\/\/labs.cs.queensu.ca\/etlab\/wp-json\/wp\/v2\/pages\/75","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/labs.cs.queensu.ca\/etlab\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/labs.cs.queensu.ca\/etlab\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/labs.cs.queensu.ca\/etlab\/wp-json\/wp\/v2\/users\/70"}],"replies":[{"embeddable":true,"href":"https:\/\/labs.cs.queensu.ca\/etlab\/wp-json\/wp\/v2\/comments?post=75"}],"version-history":[{"count":13,"href":"https:\/\/labs.cs.queensu.ca\/etlab\/wp-json\/wp\/v2\/pages\/75\/revisions"}],"predecessor-version":[{"id":464,"href":"https:\/\/labs.cs.queensu.ca\/etlab\/wp-json\/wp\/v2\/pages\/75\/revisions\/464"}],"wp:attachment":[{"href":"https:\/\/labs.cs.queensu.ca\/etlab\/wp-json\/wp\/v2\/media?parent=75"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}