When more than 50 technology companies, universities, and startups from around the world came together to form the AI Alliance last December, most of the world still hadn't grasped the rapid advances being made in artificial intelligence.
The industry group aimed to analyze concerns and find practical ways to move AI forward as regulators focus on the technology and questions swirl about whether its use could encourage prejudice and discrimination, cost people jobs, or even mean the end of the human race.
About seven months later, the organization, led by IBM and Meta Platforms Inc., has formed working groups with about 100 members to address everything from AI skills to safety.
The Canadian Press asked members what measures Canada should prioritize as AI evolves.
The greater the risk, the greater the reward
Abhishek Gupta, founder of the Montreal Institute for AI Ethics, sees Canada as the “birthplace of AI.”
Some of the technology's pioneers, such as Yoshua Bengio and Geoffrey Hinton, have conducted much of their research in Canada, and the country was a hotbed of AI research long before it became a hot topic.
But Gupta worries about the country's ability to turn AI into profit.
“Unfortunately, where we've started to lose our edge is in commercialization,” he said.
Part of that is because Canadian talent is seeking higher salaries in the U.S. and other countries, where Gupta has heard of engineers earning just under $1 million a year. U.S. venture capital firms, which have deeper pockets and often take a bolder approach, can outspend their Canadian counterparts, putting local companies at a further disadvantage, Gupta said.
The pattern continues at exit, when investors sell part or all of their stake in a company: many Canadian founders end up selling the business to a buyer outside Canada, given what acquirers in other countries are willing to pay.
As an example of AI talent leaving the country, Gupta pointed to Element AI, a Montreal-based company that developed AI solutions for large organizations and was sold to California-based ServiceNow in 2020.
“It's unfortunate that the company didn't remain a Canadian company because, of course, what we want is to translate our research findings into commercial success,” he said.
Jeremy Barnes, former chief technology officer at Element AI and now vice president of AI at ServiceNow, similarly lamented Canada's inability to capitalize on the advantages it once had.
To turn the situation around, he believes the country needs to stop being so conservative and focus more on ways to “share the profits” of startups, rather than on venture capital firms protecting themselves from losses.
“To win the jackpot, you have to put chips into the game,” he said.
Barnes said Canada needs to look beyond the “highly visible companies” and focus its support on lesser-known startups with big potential.
Proper guardrails
When the Alliance was founded, countries had already begun to develop AI regulations.
U.S. President Joe Biden has issued an executive order requiring AI developers to share safety testing results and other information with the government, and the European Union has implemented strict compliance requirements.
Manav Gupta, vice president and chief technology officer at IBM Canada, praised the U.S. government's swift response and the EU policy because it's a layered approach that recognizes that an AI system tied to, say, a weapon, poses very different risks than one involved in a task like processing welfare claims.
He believes the two policies are “paving the way” for other countries and serving as a benchmark for what AI regulation should look like around the world.
Canada proposed AI-focused legislation in 2022, but it won't take effect until at least 2025, leaving the country to rely in the meantime on a voluntary code of conduct signed by IBM and dozens of other companies.
Gupta said any policy adopted by the country should have a “clearly defined framework” with a graduated approach to risks.
“The riskier the technology, the higher the risk rating and therefore the more strict the regulation and the more transparency,” he said.
ServiceNow's Barnes also said Canada needs to be careful not to stray too far from the global regulatory direction.
“If we do it differently, it creates friction and makes it harder for Canadian companies to compete with the rest of the world. So to some extent, Canada can't go it alone.”
Focus on open source AI
As AI advancements become more frequent, Kevin Chan, director of global policy campaign strategy at Meta, which owns Facebook and Instagram, advocates for the tech industry to embrace the open source model.
The open source model means that the code underlying an AI system is free for anyone to use, modify, and build upon, expanding access to AI, enhancing development and research, and bringing transparency to the technology.
“That's really how innovation happens,” Chan said of the open source philosophy.
“We want to ensure that people have the space to choose to use an open model, allowing us to innovate more quickly and democratize this technology to more people.”
The open source model has drawbacks: people can use it to do harm, and a vulnerability in widely shared code can let hackers attack many systems at once. Still, Chan sees opportunity in the approach.
“The open model is perfect for a country like Canada that doesn't have the resources to build its own frontier model,” he said.
This report by The Canadian Press was first published June 21, 2024.