AI governance checklist: Answering those inevitable questions from management

That executive is walking down the hall towards your office. Phone in hand. ChatGPT open. You know what they are about to ask: “When do I get access to our AI?” “What are we doing to integrate this?” They proceed to show you how they used it to fix a leaky tap, asked for its opinion on a document, and cleaned up the spelling and grammar in their daughter’s essay. The real question, however, is: how will you respond?

Artificial Intelligence is a big question and, whilst there are thousands of companies promising to inject AI into their software, the truth is that your adoption of AI will likely be very controlled. Why? Because the uncontrolled adoption of certain platforms and tools raises extensive AI governance, privacy, security and cost concerns.

Before answering ANY questions concerning AI development and systems for your business, make sure you have these 5 key aspects covered.

1. Understand what AI can and can’t currently do

The most common use cases for Generative AI are content creation, design and prototyping, data visualisation, translation and summarisation, R&D, automation, and efficiency. In practice, you’ll most often see organisations leaning on content generation (from legal frameworks to blog ideas), the summarisation of documents, the visualisation of data and, within the IT world, code generation.

AI is a fantastic tool for augmenting, but not entirely replacing, a professional individual. It can dramatically increase an individual’s efficiency, get them started in content creation, summarise content, explain difficult ideas and write code, but it’s ultimately limited in what it can do.

AI provides a starting point and expects you to finish it off. It is an expert in comparison to a layperson, but not in comparison to an expert in a field. The best way to learn where AI models are strong and weak is through using them and reading others’ experiences.

Robust AI governance makes responsible AI development easier 

2. Understand how AI technologies are being used within your industry

Look to your industry to understand what other organisations are doing with AI, and learn from their missteps and successes by standing on the shoulders of others. This is an emerging field, which means a great deal of experimentation, and sharing of results, is happening right now.

The industry is in an excitable phase, where new use cases, techniques and capabilities seem to arrive almost daily. In such an environment, some research will go a long way.

You also need to know your industry’s plans for AI regulation as much as its plans for using the technology. Before speaking to your organisation’s executives, learn about the existing laws and governance practices in your industry. This way, you can ensure that your organisation’s AI aspirations comply with the regulations and ethical standards which apply to your organisation, industry, and/or jurisdiction.

3. Start small

Choose a modest goal, remembering that the experience and results with your first project may influence the organisation’s confidence regarding further investments in AI.

For instance, your first AI project may take longer because it involves your staff getting to grips with new technologies, and there may be prerequisite infrastructure, processes and so on to establish before you can begin. Therefore, if your organisation is eager to implement AI, you should begin by targeting a more modest, specific use case or goal, and focus on delivering incremental value.

In addition to delivering an outcome, your team will be able to lay a path for further innovation by developing their abilities, inspiring others, and encouraging further investment.

In essence, the more modest an outcome you’re driving towards, the more likely you are to hit it. The same is also true of your AI risk management framework. Start with the core security issues and build out from there.

4. Develop a plan (specifically, a data governance plan)

Like any tool, we need to ensure we can govern its usage and the data within it. The more powerful the tool, the larger its capacity for disruption within your business. Be it private GPT models, Copilot or SaaS AI offerings, you need to put effective AI governance structures in place that stipulate how your organisation will use each tool.

Define exactly how the data will be controlled, and what measures will be put in place to maintain the privacy, integrity and utility of the data itself. This stage is a good opportunity to address ethical concerns and mitigate AI-related risks.

You may want to remind AI-eager executives about the importance of data security. When implementing an AI tool that has the potential to access ALL the data in your environment, the last thing you’d want is for sensitive information (e.g. executive pay packages) to be at anyone’s fingertips.

And whilst you may feel the security controls you’ve put around your data are sufficient, permissions simply aren’t enough in an ever-changing environment. It may be advisable, therefore, to incorporate additional data privacy and security measures that modern cloud platforms provide.

5. Remediate and Iterate

As new AI services and features become available, practices evolve and new use cases are identified, creating further opportunities for value creation by developing new AI solutions or building on existing ones. For example, you may discover a compelling reason to chain multiple AI operations together to achieve better-quality outputs, or to realise additional efficiencies.

Once you’ve established your governance plan, gained control and visibility over the data within your environment and put the necessary AI integrations in place, you’re in a position to continue iterating, making small, consistent improvements over time.

However, some iterations may entail introducing additional data types into your solution. In this case, it’s essential that the iteration’s impact on, or requirements concerning, data governance are considered and appropriately addressed. For example, a new iteration may introduce a data source that must comply with a particular regulatory framework.

Talking to management about governing AI

When it comes to Artificial Intelligence governance, each industry is still finding its feet. When certain executives drop by with excitement, you may need to take on the role of the realist.

To ensure AI is an effective tool for your business, and that adoption doesn’t stall due to a complex and protracted initial foray, start by ensuring executives understand its inherent limitations as much as its myriad possibilities. Give priority to some of the more modest, though still valuable, use cases before taking on the more expansive ones.

By progressively adopting new use cases for AI systems and tying that adoption to value generation, you can measure the difference in how much time it saves on an hourly, daily, weekly or even yearly basis.

Taking a commercial view of these products builds the case for further investment in each toolset. Hence, focus on the modest adoption of AI because it is an emerging technology and use cases are still very much being explored and established.

The IT industry is prone to craze and infatuation with exotic and emerging technologies. You want to take a measured approach to the adoption of these tools to ensure that innovation is coupled with intelligent allocation of resources and protection of your data assets.

For more information on AI best-practice for your organisation, reach out to the team today.
