Commissioner advocates for firm AI regulations on misinformation, bias, deep fakes, and more
Ontario's Information and Privacy Commissioner, Patricia Kosseim, is urging swift action to address the growing risks that accompany the rapid advancement of artificial intelligence (AI).
Her concerns are multi-faceted, ranging from the spread of misinformation and the potential to deceive Canadians to the entrenchment of bias and discrimination.
In a Global News article, Kosseim singled out AI chatbots like ChatGPT for their ability to generate detailed text from simple prompts. She noted that the outputs of such systems are not akin to organized, curated reference materials; rather, they are unpredictable and often lack transparent sourcing and creation processes.
Her remarks coincide with Data Privacy Week in Canada and come amid significant strides in AI technology and its ubiquitous discussion across industries. Since OpenAI introduced ChatGPT in November 2022, corporate interest in AI deployment has surged, alongside regulatory deliberations over how to protect the public without stifling innovation.
Kosseim's concerns are particularly evident in the context of deep fakes, which she identifies as a major area of misuse. Deep fakes, which can alter videos, audio clips, or photos to convincingly depict someone doing or saying something they have not, are ripe tools for conspiracy theorists and purveyors of disinformation.
She cites real-world impacts from such fakery, including synthetic mimicry of executives' voices, a fabricated image of an explosion at the U.S. Pentagon that went viral, and a false video of filmmaker Michael Moore endorsing Donald Trump.
Emphasizing the urgency of addressing these risks, Kosseim describes the current data and AI landscape as the fundamental paradigm shift of this generation, stressing that the technology is not confined to corporate boardrooms or labs but affects everyone.
Legislators share this sentiment, as evidenced by the federal government's bill tabled in June, aimed at regulating AI. Although the bill's implementation is expected no sooner than 2025, the government is promoting a voluntary code of conduct for tech companies in the interim. This code encourages screening datasets for biases and assessing AI for potential adverse impacts.
In Ontario, an AI framework has been established to guide the public sector's use of AI, featuring risk-based rules. Kosseim contributed her insights during its development. The framework emphasizes transparency in AI deployment, risk prevention, and avenues to challenge AI-driven decisions.
However, Kosseim advocates for even more robust measures. Since May, she and the Ontario Human Rights Commissioner have been urging the province to develop and implement comprehensive, granular, and binding guardrails for public-sector AI use. She believes enforceable rules are key to incentivizing organizations to prioritize these issues from the outset rather than as an afterthought.
Kosseim remains hopeful that her calls for action will be acknowledged, saying, "I think they're going to have to."
In response to her push, a spokesperson for the Ministry of Public and Business Service Delivery, Nicholas Rodrigues, stated that Ontario has a working group of experts advising on the province's AI approach, ensuring responsible, transparent, and accountable use of AI by the government.