Keyword analysis shows ethical harms are roughly as important to Canadians as economic growth.
Canadians are just as concerned about AI harms as they are excited about its economic benefits, according to a BetaKit analysis of public consultation feedback compiled by the federal government.
In February, the federal government released the results of the 30-day public consultation it had held last year to inform its upcoming AI strategy. The government’s high-level summary of the consultation was informed by more than 64,600 responses to questions from over 11,300 participants and generated with the help of AI. It suggested that Canadians’ priorities varied widely, from commercializing the technology to addressing its safety risks.
But the relative weight of the issues raised by Canadians was left unclear; the report did not give hard numbers on how many Canadians shared each sentiment, or delve into detail about the AI prompts that were used in its creation. BetaKit ran an independent data analysis to quantitatively assess the prevalence of different themes presented within that data. The numbers suggest a pronounced tension within the Canadian public about AI’s economic potential and the risks the technology could bring.
BetaKit’s keyword-based analysis found that language around AI’s economic benefits was mentioned in 35.6 percent of entries, while mentions of AI harms were nearly as common, at 34.6 percent. More specifically, the top four themes we analyzed—ranked by prevalence—were economic growth, ethical harms, environmental harms, and productivity.
Breaking down public feedback
The federal consultation asked the public to weigh in on how the government should safely adopt AI, scale “Canadian AI champions,” attract investment, create sovereign AI infrastructure, and build public trust. Specific engagement was sought from “founders, researchers, workers, creators, students, public servants and community voices.” The majority of respondents (83 percent) were individuals, while the rest participated on behalf of an organization.
Innovation, Science, and Economic Development Canada (ISED) said it analyzed the responses with digital tools and an in-house pipeline of large language models (LLMs) overseen by humans to identify and summarize common themes. To get a better sense of how common these themes were, BetaKit ran a keyword-based code analysis of the submissions to break down how often each topic was mentioned.
BetaKit asked AI and Digital Innovation Minister Evan Solomon’s office at ISED for comment on its findings, and for more detail on how the government’s internal, AI-driven classification was weighted.
A spokesperson for the minister’s office pointed to the public summary, as well as the full data file, which were published online. They added that the government continues to strengthen engagement with underrepresented groups and regions to ensure all voices are heard in shaping Canada’s AI strategy. The spokesperson did not comment on how the government weighted responses to the consultation in the report or summary.
On its website, ISED states that it developed a “scalable, AI-enabled workflow,” called a classification pipeline, that used several LLMs to clean survey responses and categorize them into a structured set of themes and subthemes. Responses were first processed with the Canadian enterprise survey tool SimpleSurvey, then run through the classification pipeline, where LLMs read the submissions and identified common themes.
ISED also used manual human review at several stages to ensure that “intents were meaningful and sensible and that the solution had at least a 90 percent success rate in categorizing responses into specific intents.” However, ISED did not clarify which prompts or workflows were used in that pipeline, or how it determined classification success.
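ISED has not published its prompts or workflows, but the broad shape of such a pipeline—classify each response into a fixed set of themes, then spot-check the model’s labels against human reviewers to confirm a success rate of at least 90 percent—can be sketched in a few lines. This is an illustrative sketch only: the `classify_with_llm` stub below stands in for a real model call, and the keyword heuristic inside it is a placeholder, not ISED’s method.

```python
THEMES = ["economic growth", "productivity", "ethical harms", "environmental harms"]

def classify_with_llm(response: str) -> str:
    """Stand-in for a real LLM call that maps one response to one theme.
    A crude keyword heuristic is used here purely for illustration."""
    text = response.lower()
    if "privacy" in text or "ethic" in text:
        return "ethical harms"
    if "energy" in text or "environment" in text:
        return "environmental harms"
    if "productiv" in text or "efficien" in text:
        return "productivity"
    return "economic growth"

def success_rate(sample: list[str], human_labels: list[str]) -> float:
    """Fraction of sampled responses where the model agrees with a human reviewer."""
    hits = sum(classify_with_llm(r) == label for r, label in zip(sample, human_labels))
    return hits / len(sample)

# Spot-check a small human-labelled sample against the classifier,
# mirroring the kind of 90-percent acceptance threshold ISED described.
sample = ["Privacy risks worry me.", "Data centres use too much energy."]
labels = ["ethical harms", "environmental harms"]
assert success_rate(sample, labels) >= 0.9
```

In a real pipeline the human-review step would sample classified responses at several stages, and failures below the threshold would trigger prompt or category revisions before re-running the batch.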
This method sped up the process of sifting through more than 64,000 responses to the various questions in the consultation, ISED said. But the results are presented only as sweeping takeaways. For example, the report states that respondents “strongly emphasized the need for Canada to attract, retain and develop top AI talent” as the first item listed under “online consultations.” But it wasn’t clear how many respondents held this belief. Similarly, no numbers were attached to concerns under the second heading in the same section about “premature deployment and overhyped technologies like generative AI,” or “environmental harm, privacy risks and job displacement.”
RELATED: We read every submission from Canada’s AI task force: here’s what they said
Because ISED stated that “stakeholders were divided between optimism for AI’s potential and skepticism about its risks,” BetaKit looked to assess exactly how that divide broke down by tracking how often certain themes were mentioned. We identified commonly used terms across the responses and built final keyword lists for each theme: economic growth, productivity, ethical harms, and environmental harms.
We then created an extensive list of related keywords—for example, terms like “ethics,” “ethical,” or “privacy” to indicate potential negative sentiment or concern about harm, or terms like “efficiency” and “automation” to indicate concerns regarding productivity. (We’ve provided more detail on our methodology at the bottom of this article).
While mentions of economic growth appeared in 35 percent of entries (more than 89,000 times), mentions of ethical harms showed up in 32 percent of entries (more than 65,000 times), revealing a narrow gap between the two top themes.
Language around ethics (“ethics,” “ethical,” “ethically”) was the most commonly mentioned word family—outpacing other top word families like “sector,” “industry,” and “investment.” The environmental harms of AI, which often refer to the energy and land consumption of data centres, were secondary to ethical harms, but still present in 27 percent of responses.
RELATED: Canada’s new AI strategy is off to a bad start
Meanwhile, submissions discussing “productivity” showed up in just 21 percent of entries, indicating that while it is a priority for Canadians, it may be a less prominent concern than suggested by ISED’s summary report. In the government’s report, “AI adoption across industry and governments” was listed as the second major theme from online consultations, adding that “respondents stressed that successful AI adoption means moving beyond pilots and prototypes to real-world applications that improve productivity and public services.”
The top 10 word families mentioned, in order of frequency, were: ethics, sector, industry, investment, risk, environment, talent, funding, training, and education.
AI strategy in the works
The national public consultations were released alongside recommendations from the federal government’s AI task force, composed of 26 AI experts from industry and academia.
Together, the public feedback and task force reports are meant to shape the government’s renewal of its AI strategy, which has yet to be unveiled. Solomon’s office originally intended to table the AI strategy by the end of 2025, but later pushed the timeline for its release to 2026. Solomon told The Logic earlier this month that the strategy was “ready to go” and coming “very soon.”
Carole Piovesan, a lawyer and co-founder of INQ Law whose practice focuses on data governance and AI risk management, said in an interview with BetaKit that this kind of public consultation is meant to “get a pulse of some of the priorities in Canada.” Piovesan, who participated in 2018 consultations to develop Canada’s digital charter, a part of the now-dead Bill C-27, said she wasn’t surprised at the level of detail in the feds’ February report, as they’re meant to provide “a flavour of what the discussion was and a flavour of the policy direction.”
She also wasn’t surprised to see that Canadians have duelling priorities of balancing boosting economic growth and preventing ethical harms. “Usually in these types of consultations, we will see a little bit more on harm prevention or risk mitigation…because what you’re trying to do is shape the policy direction in a way that is very alive to the different risks now.”
The federal consultation process was criticized by human rights groups, including an open letter that found fault with the short timeline to provide feedback, as well as a perceived industry slant and lack of diversity within appointees to the AI task force. The open letter spawned its own competing public consultation, whose responses can be found here.
Solomon has encouraged AI adoption while also pledging “light, tight, right” regulations and privacy safeguards, such as rules around deepfakes and children’s sensitive data. He said at an Ottawa QueerTech event last week that the government’s approach to regulation will be “airtight” when it comes to bias, racism, and hate, and that the government is looking at algorithmic transparency to help determine if certain AI systems contain built-in bias against marginalized groups.
The AI minister has also summoned leaders from prominent US tech companies to discuss AI safety. Solomon met with OpenAI to push for more transparency and safety measures after it emerged that the mass shooting perpetrator in Tumbler Ridge, BC, had their ChatGPT account flagged by the AI company. Solomon has also met with representatives from Anthropic to discuss the company’s Claude Mythos AI model, which the company has said has the potential to expose vulnerabilities within secure systems. In a statement, Solomon called Anthropic’s decision to limit the model’s release a “responsible path” and lauded its attention to safeguards.
The feds will have to weigh the policy implications of encouraging aggressive AI adoption and AI literacy, Piovesan said, all within the “Canadian brand of responsible AI.”
“You have to do that not just in the controls you put in place…but also in how you bridge the trust deficit that people are facing,” Piovesan said.
Methodology
In order to conduct this analysis, we began with the four broad themes identified in the federal government’s summary of the consultation responses: economic growth, productivity, ethical harms, and environmental harms.
To help build keyword lists for each theme, we used the AI-assisted corpus analysis tool SketchEngine to identify commonly used terms across the 11,300+ submissions and 64,000+ responses to questions.
Those results were then reviewed and refined by members of our team, and used to create the final extensive lists of related keywords for each theme to capture multiple word forms—for example, terms like “ethics,” “ethical,” or “privacy” to indicate potential negative sentiment or concern about harm, or terms like “efficiency” and “automation” to indicate concerns regarding productivity.
We then used a simple Python script, built on the language’s regular expressions module (`re`), to scan through all of the submissions and count how often those keywords appeared.
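The keyword count described above can be sketched as follows. The keyword lists here are abbreviated and illustrative, not the full lists used in the analysis, and the counting logic is a minimal reconstruction of the approach rather than the actual script.

```python
import re

# Illustrative, abbreviated keyword lists; the real theme lists were far longer.
THEMES = {
    "economic growth": ["economy", "economic", "growth", "investment"],
    "productivity": ["productivity", "efficiency", "automation"],
    "ethical harms": ["ethics", "ethical", "ethically", "privacy", "bias"],
    "environmental harms": ["environment", "environmental", "energy"],
}

def count_theme_mentions(entries: list[str], themes: dict = THEMES) -> dict:
    """Count, per theme, how many entries mention at least one of its keywords."""
    counts = {theme: 0 for theme in themes}
    for entry in entries:
        text = entry.lower()
        for theme, keywords in themes.items():
            # A leading \b word boundary matches whole-word prefixes, so
            # "ethical" also catches "ethically" (capturing word families).
            if any(re.search(r"\b" + re.escape(kw), text) for kw in keywords):
                counts[theme] += 1
    return counts

entries = [
    "AI could drive economic growth and investment.",
    "I worry about privacy and bias in these systems.",
]
print(count_theme_mentions(entries))
```

Counting each entry once per theme, rather than tallying every keyword hit, is what yields the “percentage of entries mentioning a theme” figures; a separate raw-hit tally would produce the total mention counts.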
With files from Sarah Rieger.
Feature image courtesy ALL IN.
