Nowadays you can’t swing a dead cat without hitting some kind of wondrous “artificial intelligence” solution for all of your social, business, and family problems. Just like every new technology that has come before, it’s touted as a game-changer. Indeed, we’re probably just scratching the surface of the various uses for AI. There’s a race to integrate the technology in every aspect of our digital lives – embedding it in everything from word processing to search engines. The options are dizzying, the implications aren’t clear, and there’s no real roadmap yet for responsible AI use. The legalities of AI use are ambiguous at best.
This article is aimed at small businesses in Ontario that use commercially available AI tools – not at companies that develop or deploy the tools themselves. There are three main sections:
- AI best practices for small businesses in Ontario;
- The state of AI law in Canada; and
- How existing laws apply to the use of AI tools by businesses.
Best Practices
I’ll start with a quick and dirty list of best practices if you choose to use AI in your business. I’ll explain why they’re a good idea and where these best practices come from in the rest of the article, but if you read nothing else here, read this section. After, feel free to skip ahead as you like. I’ll never know if you do.
This is by no means a comprehensive list, nor will every point on this list apply to your business, but hopefully you find it to be a useful starting point for things you can do to minimize the legal risks of using AI tools in your business:
- Do a thorough risk assessment, including the following questions:
- What AI tools do you want to use (or do your contractors want to use)?
- Will AI be used to make decisions about high-impact areas like health care, hiring, education, housing, and access to important services?
- What kind of data will be input?
- Whose data is it?
- Who are the other stakeholders?
- Where is the data from, where is it transmitted through, where is it processed, and where is it stored (including backups)?
- Where will the AI outputs be used?
- What safeguards are in place for privacy and security?
- Is there a risk of bias or discrimination?
- What laws/policies/professional standards apply?
- Do any guidelines/industry best practices apply?
- What promises have you made in contracts about privacy & data security?
- What does/doesn’t your insurance cover?
- Develop AI literacy in your business
- Board members & executives must be competent and AI-literate in order to supervise its use effectively
- Read and understand the privacy, IP, security considerations before using AI tools
- Train your employees, staff, and service providers on the issues, key risks, company policy, and client requirements
- Use AI providers proven to be trustworthy in your industry – most of the time, paid is better than free, and higher subscription tiers offer better security
- Address AI use in your contracts – whether you’re signing new ones, or updating long-standing contracts:
- Disclose your AI use transparently and specifically, and get consent from people whose data or IP you’re going to input
- Be clear on who is responsible to get consent from the data subjects, if there’s any third-party data being used
- Establish clear limitations on use (___ are the AI things that we’re consenting to, everything else requires further written consent from the AI decision-makers)
- Ask service providers about their use of AI, and have express limits baked in to contracts based on your risk assessments
- Consider AI disclaimers related to intellectual property in AI-generated outputs
- Address AI specifically in NDAs, or pre-contract documents to explore business opportunities like LOI/MOUs
- Partnerships/JVAs – align on AI policies, and spell it out in the contract
- Anonymize or redact inputs
- Never input personal information or health information unless expressly authorized, you have consent from the data subjects, and you’re certain it meets privacy and data security requirements
- Generalize prompts – replace “draft an email to customer Jimmy Smith at Jimmy’s Contracting” with “Draft an email campaign for a general contractor customer”
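The “anonymize and generalize” practice above can be sketched in a few lines of Python. This is a hypothetical helper, not a complete redaction tool – the patterns, placeholder tags, and `redact` function are all my own illustration, and real personal information takes more than a couple of regexes to catch reliably:

```python
import re

# Illustrative patterns only -- real redaction needs review for your data.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(prompt: str, known_names: list[str]) -> str:
    """Strip common identifiers from a prompt before sending it to an AI tool."""
    for tag, pattern in PATTERNS.items():
        prompt = pattern.sub(tag, prompt)
    # Replace names you already know about (clients, staff) with a generic tag.
    for name in known_names:
        prompt = prompt.replace(name, "[NAME]")
    return prompt

msg = "Draft an email to Jimmy Smith (jimmy@jimscontracting.ca, 416-555-0199) about his invoice."
print(redact(msg, ["Jimmy Smith"]))
# -> Draft an email to [NAME] ([EMAIL], [PHONE]) about his invoice.
```

Even a rough pass like this keeps the useful instruction (“draft an email about an invoice”) while holding back who the email is about.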
- Be transparent with people using or affected by your use of AI
- Ensure end-users are aware when they’re interacting directly with AI – (chatbots, email tracking, recording/transcription, etc)
- If there is significant or unusual AI use/risk – bring it to their attention – don’t bury it deep in a contract
- Identify when (and what) AI has been used in creating content or making decisions
- Update your privacy policy to address AI use outside of a contract relationship, such as processing information from website tracking, cookies, contact forms, etc
- Develop an AI policy for the company which applies to employees & contractors
- Identify approved AI tools
- Identify who can use AI, and under what circumstances
- Establish protocols for inputting data and using AI-generated outputs
- Establish guidelines on ethical AI usage, data security, and intellectual property
- Establish guidelines on review of outputs
- Identify who’s responsible to oversee and monitor AI use
- Oversee the use of AI in the business
- Monitor use to ensure your policies are actually followed, contracts complied with, etc.
- Human review of outputs before putting them to use
- Use diverse data sets (or diverse AI tools), and bias detection tools when there’s a risk of AI bias impacting your work product
- Copyright protection – monitor the use of your content
- Insurance – ensure you have appropriate insurance coverage in place which will actually respond if any of your AI risks become liabilities
What’s the Problem?
There are two main causes of ambiguity about AI:
- The Wild Wild West Problem. There is no direct regulation on the use of AI by Canadian businesses, which gives the impression of it being a free-for-all; and
- The Black Box Problem. Most AI technology is a black box into which we put our data, something happens, and we get an output. Most of us don’t know much about what’s used, where it goes, what it’s used for, who has access, or anything else that happens in between.
Remember those two points – you’ll see them a lot in this article.
So what sorts of things should small businesses think about before, during, and after using AI? There are two sides of the coin to think about:
- Your legal responsibilities to others when handling their data; and
- Protecting your own data from misuse by others.
You should think about both before you integrate AI tools into your business operations.
A few caveats here. I’m definitely not an early adopter of technology – I’m skeptical of the hype cycle of emerging technologies before the downstream consequences become clear. I’m old enough to remember the promise that computers in the office would free up so much of our time… I approach new technology with caution. I’m not an expert on AI, or really any technology (unless you count a splitting maul as technology, in which case I’m a solid 7/10).
While I can’t predict exactly how things will play out, the legal considerations for AI are really no different than for any other technology. The basic principles of our legal system won’t change. It’s been developed over a thousand years, and has absorbed countless new technologies in its time. AI isn’t likely to be any different.
AI Laws in Canada
This is the source of the Wild Wild West Problem: there isn’t an AI law in Canada, so there is no one-stop shop to understand the legal considerations for AI in business.
There was an attempt to pass a Canadian AI law, similar to what they did in the EU, but it was rushed, highly flawed, and died in committee. There will be a law eventually, but right now, it’s a patchwork – more like a badly crocheted blanket full of holes – of various laws, policies, and guidelines, which makes it very difficult for small businesses to understand how they’re affected, and what they should do to ensure their butts are covered.
The EU has passed an AI law, and various international organizations like the OECD have published standards for responsible AI use, and all of them centre around a similar set of principles. If and when Canada does pass a law, it’ll probably involve things like:
- Human oversight of design & implementation
- Can be overridden for repair/decommissioning
- Transparency standards about use
- Fairness & equity in outcomes
- Respect human rights & democratic values
- Safety – proactively identify and mitigate harms
- Accountability – documentation & enforcement
- Traceable – datasets, processes, decisions
- Validity – does what it says
- Robust – stable and resilient
Be Reasonable
The starting point for responsible AI use is to be reasonable. You as a business owner must take reasonable steps to minimize the risk of loss to people who might reasonably be affected by your AI use. You don’t have to be perfect, but you do have to make the effort to figure out:
- Who could be affected by your actions – like customers (and their customers), system users, supplier/service providers, employees, partners, shareholders, and creditors
- What they could be affected by
- How they could be affected, and
- How you can minimize the risks that could reasonably occur.
If you don’t, and something happens, then you’re probably negligent, and could be sued, and even charged criminally as a result. You can’t just put your head in the sand and hope for the best.
What’s reasonable is super subjective – but generally, the more that’s at stake (higher risk, more affected people), the higher the standard you’ll be held to. There are several things we can use to help figure out what’s reasonable for your business:
- How existing laws apply to your specific uses of AI (discussed more below)
- Government policies or guidance:
- Government of Canada guide on the responsible use of AI in government
- Ontario Government principles on responsible AI use in government
- Health Canada guiding principles for the use of AI in health
- Industry standards and guidance:
- Guidance published by regulators such as the College of Physicians and Surgeons of Ontario, and the Law Society of Ontario
- Guidance and briefing papers published by industry associations
- ISO certification requirements
- Foreign requirements
Who’s Liable?
The most important thing to know is that if you choose to use AI tools, you’re responsible for how you use them. You can also be responsible for the impacts of the use of your work product in the real world. In AI liability, as in all liability – if you’re responsible for 1% of the damage, you can be on the hook to pay for 100% of it. So tread carefully.
As for who else may be liable, that’s a Wild Wild West Problem, to which there is currently no good answer. The answers won’t start to become clear until years from now, when cases filter their way through the courts.
The most likely outcome is that the same principles that apply to other virtual technologies will apply to AI too. Designers, developers, and owners of the AI tools themselves will certainly have some sort of liability if there are errors or flaws in the system itself which lead to the outcome. There will probably be successful class actions against some of the worst offenders, but it’s unlikely that the average small business owner or customer would be successful in suing designers or deployers individually.
It’s safe to assume that unless something particularly awful happens to you or your customers which is caused by the AI tools you use, you’re probably the one who’s going to wear it.
Existing Laws
If you’re still with me, you’re probably starting to think that responsible AI use is really just a best guess right now. And you’re mostly right. There are, however, a few areas of law where the requirements are clear, or at least clear enough to do better than a best guess.
Privacy
Existing privacy laws – and the promises you’ve made about confidentiality, personal information, and data security – apply most clearly and directly to processing data with AI.
Privacy laws across Canada deal with:
- Personal information (age, name, ID numbers, income, ethnic origin, blood type, opinions, evaluations, comments, social status, disciplinary actions, employee files, credit records, loan records, medical records, the existence of a dispute between a consumer and a merchant, and intentions – for example, to acquire goods or services, or change jobs);
- Personal Health Information;
- Employee information (Alberta and BC only so far, as of the time I posted this)
- Industry-specific laws, primarily covering banking, credit unions, and consumer credit reporting.
Personal information, by law, belongs to the person it’s about. If you’re going to use AI to process it, you must first secure the informed consent of whoever’s data it is, to the same standards as any other use of that information.
People also have a right to consent to the use of their image for commercial purposes. Similar to celebrity personality rights (discussed below), misusing someone’s image – whether feeding it into AI or using it as an output of AI – could give them a right to sue.
Contracts you’ve signed with customers, suppliers, partners/joint-venturers, and so forth may have specific privacy standards and data residency requirements (ie, no transmission or storage of data outside of Canada).
If your business handles information which has a protected or confidential government designation, express consent from the responsible government agency will be needed before you can use AI to process it.
International privacy laws may also apply, depending on where the data is coming from, whose data it is and where they live, and where the results are being distributed. Are there adequate safeguards in place where the data is being transmitted through, stored, processed, and where the output is being used? It’s possible that multiple laws apply, and you’ll have to meet the highest standard.
So far, EU privacy and data protection laws are seen as the gold standard, and they publish a list of other places that meet their standards, which makes a useful starting point when dealing internationally. https://commission.europa.eu/law/law-topic/data-protection/international-dimension-data-protection/adequacy-decisions_en
Data security
It’s important to think about both the security of the AI tools that you use, and the vulnerability of your data to unauthorized use and access by AI tools.
Most data security laws – unless specific to an industry – relate to personal information, covered above. Most of your other security obligations will come from contracts you’ve signed, and the requirement to take reasonable care, discussed above.
Taking reasonable steps for data security in AI use means using reasonable technical, physical and administrative measures to protect personal information against loss or theft, unauthorized access, disclosure, copying, use, modification or destruction. The more data you store, or the more sensitive that data is, the higher standard you’ll have to meet to be “reasonable”. Depending on the nature of your business and the data you’re processing, reasonable steps can include:
- Data Encryption: To render data unreadable to unauthorized entities.
- Access Controls: To limit access to sensitive information to necessary personnel.
- Firewall Protections: To block unauthorized attempts to infiltrate network systems.
- Regular Security Audits: To identify and address system vulnerabilities.
- Penetration Testing: To test your security and your responses.
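To make “technical measures” a little more concrete, here’s one small, hedged sketch in Python: replacing a direct identifier (like an email address) with a keyed pseudonym before data leaves your systems, using the standard library’s `hmac` module. The key and function names are my own illustration, not a prescribed method – and this is one narrow measure, not a security program:

```python
import hashlib
import hmac

# Illustrative only: in practice the key lives in a secrets manager,
# never hardcoded in source.
SECRET_KEY = b"example-key-do-not-hardcode"

def pseudonymize(identifier: str) -> str:
    """Return a stable, non-reversible token for a direct identifier.

    Keyed hashing (HMAC-SHA256) means records can still be linked by
    token, but the original value can't be recovered without the key.
    """
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

token = pseudonymize("jimmy@jimscontracting.ca")
# Same input + same key -> same token, so your records still line up.
assert token == pseudonymize("jimmy@jimscontracting.ca")
```

The point isn’t this particular technique – it’s that “reasonable steps” usually translate into specific, checkable controls like this one, layered together.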
Whatever standard you have to meet, you’ll need to ensure that your AI tools meet those same standards – otherwise you may be breaking the promises you’ve made in your contracts, or not meeting the standards you’re required to by law, or both.
This is a Black Box Problem. Most of the time, especially with “off the shelf” AI tools, you’ll have no information about, nor any promises regarding, their data security standards. So, as a starting point, assume that nothing you input into an AI tool is secure or private, and that it may be exploited – just like any other digital data. Everything can be hacked. AI tools can be vulnerable to “data leakage” and data theft, especially given how most are rushed to market before detailed testing.
The only way to minimize risk is to limit what you collect and process to the bare bones of what you actually need for the job at hand.
Very generally speaking, companies that are experienced in collecting, processing, and using data, particularly those who service large companies and government, will be safer than one-off tools or new entrants. Paid services or subscriptions will generally be safer than free ones. Among paid services, there are often higher tiers which offer greater security. None of this is guaranteed though.
Intellectual Property
Intellectual property, or “IP”, includes copyright on creative work (drawings, video, writing, code, music), trademarks such as brand names or logos, patents (including patents pending and patentable ideas), and industrial designs. You want to avoid violating others’ IP, and to protect your own.
Avoiding violating others’ IP
Copyright infringement. This is a hot button issue these days. You’ve probably heard about artists and authors suing AI developers for training their systems on the authors’ works without consent. It’ll be years before we have clear answers on what, if anything, is owed to the authors as a result.
There are two key risks for you as a user of AI tools:
- Violating IP rights if you’re feeding other people’s IP into AI tools in the course of your work without their consent.
- That the AI output you receive and use may contain other people’s IP, so that using it unknowingly breaches their IP rights – another Black Box Problem – which can have real consequences, especially when your work product is publicly available.
Here again, you should make sure that you have the informed consent of the person whose IP it is before you feed it to AI. If you’re using AI in the creative sphere, consider disclaimers in your contracts in case AI outputs you use violate third party IP.
Personality/publicity rights. In Canada, public figures have the right to control the commercial use of their identity – name, images, voice – in endorsements. If you use AI outputs that look, sound, or quack like a celebrity in the course of promoting a product or service, you may be violating their personality rights.
Protecting your own IP
Copyright. In Canada, only work produced with human skill and judgment gets copyright protection. There is at least one human/AI co-authored work that has been granted copyright registration, though, based on “substantial human control” over authorship. The jury is still out on how copyright laws will change to account for AI. If you use AI tools in creative work (coding, design, video, even research), you may not actually end up owning the copyright in it – or you may end up with an AI co-author who also has rights in it, and owe it a share of the profits.
Patents. Canadian patent law holds – as of the date of posting this article – that an inventor must be human, and contribute substantially to the creation of the idea. It also leans towards owners of AI systems not being treated like employees (where the employer owns IP they create in the course of their work). What happens if AI is used in the generation of patentable ideas is still an open question though. The law is far from settled here, so, similar risks apply as with copyright – if you use AI tools in developing patentable ideas, you may not be the owner in full of the patent rights, or your idea may not be patentable at all.
Bias & discrimination
Human rights laws in Canada make it illegal to treat people unequally in providing goods, services, or facilities based on factors like race, ancestry, place of origin, colour, ethnic origin, citizenship, creed, sex, sexual orientation, gender identity, gender expression, age, marital status, family status or disability.
AI tools carry a risk of bias and discriminatory output resulting from the data they were trained on. If there was bias in the materials they learned from, then their outputs may also be biased. The risk of bias is particularly high when using AI tools to make decisions on hiring, education, housing, and access to important services. It’s a Black Box Problem if the AI tool itself isn’t transparent about the data used to train it, and how it makes decisions.
Businesses must be accountable, transparent, and fair in their deployment of AI systems. This can include:
- Conducting impact assessments to understand the potential bias of their AI tools.
- Taking reasonable steps to minimize bias & discriminatory outcomes
- Being transparent in how they use AI to make decisions, particularly when these decisions impact rights or access to important services or facilities.
As always, it falls on you, as the user of the tool, to put in the legwork to ensure that the tools you’re using – especially in hiring, education, housing, and access to important services – are free from bias.
Summary
I must confess, I thought this would be a much shorter, simpler article than it turned out to be. My initial impressions were “privacy & data security, contracts, badaboom-badabing” and I’d be done. Obviously that’s not the case – but I figured that I’d put myself through the wringer of pulling together all the different threads of the badly crocheted blanket so that you don’t have to. It was weirdly fun though, and I’m indebted to my friend Shannon Lee Simmons at the New School of Finance for putting the bug in my ear about it in the first place.
Hopefully it’s a useful starting point for you.
As always, this article is for your information only, and is not legal advice. If you’d like some help sussing out the specifics of how you can minimize your risk in using AI tools in your business, I happen to know a guy.
Mike Hook
Intrepid Lawyer
https://intrepidlaw.ca