ETIP #307

Why your company needs an AI usage policy

Artificial intelligence (AI) is the hot new topic, and we’ve written quite a bit about it – from our top tips for AI creativity to ideas for engaging customers to the best ways to harness an AI chatbot.

But as AI continues to offer new tools and capabilities, it’s important to explore responsibly. Even though Starmark has built its more than 40-year reputation on experimenting with what’s next, we always use that exploration to define good rules of the road for ourselves and our clients.

That’s why your company needs to define an AI usage policy. Clear guardrails and recommendations create an environment where your employees can test, experiment and learn without fear of getting it wrong or creating risks for the business. 

And lest you think it’s OK to leave things to chance, here are several risk factors to consider if you let AI use run rampant:

Legal concerns

One of the reasons AI is so controversial is its potential for copyright infringement. Many popular LLMs (large language models) and AI image generators were trained on a wide array of copyrighted works. Authors, artists and musicians have publicly opposed this practice, and several lawsuits from book publishers, newspapers and record labels allege that AI-generated works infringe on the rights of copyright owners.

Look at it this way: being able to tell Midjourney to imagine a work in the style of Jean-Michel Basquiat, or asking ChatGPT to deliver an article in the style of Stephen King, simply wouldn’t be possible unless the models that power these tools had been trained on those creators’ works.

Even if the prompt you enter doesn’t explicitly tell the AI tool to copy the style of a copyrighted work, the output most likely still draws on infringing elements, since major players in the AI space won’t definitively state that their models avoid copyrighted material.

This can expose you to legal liability, because using copyrighted material without permission in any commercial context is not OK.

But that doesn’t mean use of generative AI (Gen AI) is confined to non-commercial passion projects. Adobe Firefly, for example, mitigates this risk with an image generation tool trained only on public domain images and stock that Adobe owns the rights to.

Bias

Depending on the data an AI tool has been trained on, it can reinforce systemic bias. AI synthesizes existing content rather than truly thinking for itself. So in many cases, entering the word “doctor” into an image generation prompt will return mostly images of white men with stethoscopes, even if that doesn’t fit the needs of your specific campaign or brand. More specific prompting can help alleviate this issue, but only if the humans who are part of the creative and approval processes think to manage it.

In addition to demographic bias, language bias is also an issue. The most popular LLMs are trained primarily on English or Chinese material. So while some tools do offer translation, using one to translate your sales materials from English into another language can introduce inaccuracies, and that’s before accounting for dialect differences between regions or countries. Suffice it to say, generative AI is not a viable substitute for a professional translator in public-facing work.

As you craft your AI experimentation policies, be sure to include sections on mitigating biases — while outlining a clear framework where employees can try the tech for internal purposes versus commercial or customer-facing uses.

Privacy and protecting internal data

If there’s one piece of information here that should tell you how important it is to have an AI usage policy in place, let it be this: an effective AI usage policy specifies which tools your employees are allowed to use, and with what types of information.

Most public LLMs, such as the freely accessible version of ChatGPT, can use any information a user submits as training data.

Yes, that’s how it works. Seriously. 

Translation: anything your employees put into that query or prompt bar becomes fodder for a language model you have no control over and no visibility into. And if that data is not meant to be public, it can be a major problem.

The terms and conditions of different AI tools lay out rights and permissions based on use. The typical solution to this glaring security hole is to use a paid version of the service, which promises not to use your input as training data. Heck, you can even create custom AI tools trained specifically on proprietary assets, which generate results more suited to your needs than general models.

And, sure, using your company’s paid version or custom-trained model of ChatGPT avoids this unfortunate problem. But how can you expect an intern on their first day to know they need to use your company’s approved version of ChatGPT instead of the ChatGPT they already have bookmarked in their browser?

This is why written AI usage guidelines are so essential. They give you the ability to dispel these types of misunderstandings before your very private data becomes public data.

The hallucination problem

Once upon a time, mankind used the hallucinations of ascetics, sages and prophets to guide the course of our destiny. And hallucinations today are still valuable. They can show truth in a way that’s not possible within the realm of conventional reality. But what’s useful on an individual level doesn’t translate to the corporate world. Just like you wouldn’t want your brand-responsible art director working while on DMT, you don’t want an AI model generating hallucinations that are then sent out to your clients or customers. 

Hallucinations are the byproduct of the way LLMs make inferences based on available information. Sometimes they’re funny. Sometimes they’re trippy. Sometimes they’re deeply upsetting. In all cases, they’re not the type of work product that’s ready for the rest of the world to see. If AI-generated work is sent directly to a stakeholder — or released publicly — without additional review and editing, it can have a disastrous effect on your brand reputation.

Just like the work product of a new team member, AI work product should be thoroughly vetted. The only difference is, unlike a human employee, an AI model might never be able to function without this level of oversight. 

As you’re experimenting, make sure your oversight process is clearly outlined in your AI usage guidelines. Not only will it catch things like gibberish text in the background of public-facing images created with Midjourney, it will also make sure your content is actually effective at persuading humans.

What an AI usage policy should include

If you’re not yet convinced you need an AI usage policy at your company, there’s simply no helping you. But for the rest of us, here are some vetted recommendations for a standard AI usage policy that you can use as a starting point.

  • Specify when and how data should be used in an LLM or other AI tool. With a subscription plan, it’s possible to create an API path for a tool like ChatGPT that protects your company’s data from being used to train the publicly available large language model (see the sketch after this list). You can also create an entirely custom-trained in-house model using your own data set as the exclusive training data. These two methods protect your data security in a way that entering queries into public tools does not. Make sure to avoid using confidential information, such as company salaries, client research or anything subject to an NDA, in any tool that does not have a private data path.
  • Ensure sufficient human authorship, with examples to guide future work. Human authorship is the standard for copyrightable work. In instances where you’re using AI-assisted work for commercial purposes, it’s best to err on the side of caution and ensure any final draft includes no AI assistance, even if an early outline or draft was created with the help of AI.
  • Give direction on which tools can be used, and where. The output quality and data training methods of some tools (in particular tools you pay for and have vetted) are probably acceptable to use in commercial applications. On the other hand, some tools, like the free version of ChatGPT, use the data entered to train their models and shouldn’t be used with proprietary company data. Always make sure you specify tools for different uses and make it crystal clear which ones are for internal use only.
  • Be open to feedback and suggestions from employees who want to pilot a new tool for work, so it can later be added to the list of approved tools. As the market changes, it’s a great way to consistently test and vet new tools that can improve your workflow.
  • Create a review process to make sure that AI-generated content is accurate and fair. This can avoid distributing potentially harmful hallucinations or biased imagery that may not properly reflect your company’s values.
  • Commit to disclosure when AI is used. Some governmental bodies, such as the EU, have already passed legislation requiring a disclaimer whenever AI-generated content is used in an advertising setting. Wherever you are based, honesty is the best policy.
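
To make the first recommendation above concrete, here’s a minimal sketch of what an approved “API path” could look like. It assumes the OpenAI Python SDK with an organization-managed API key set in the environment; the function name, model name and prompt are illustrative placeholders, and your own approved tools and data-handling rules may differ.

    # Minimal sketch: route prompts through a company-managed API key
    # instead of the free, public chat interface. Assumes the OpenAI
    # Python SDK is installed and OPENAI_API_KEY is provisioned by IT.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask_approved_model(prompt: str) -> str:
        """Send a prompt through the company-approved API path."""
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; use whichever model your policy approves
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    print(ask_approved_model("Draft three subject lines for an internal newsletter."))

A fully custom, in-house model trained only on assets you own goes a step further, but the principle is the same: keep proprietary data off tools you don’t control.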

If you’d like to discuss starting or expanding your company’s use of AI, get in touch.