Now is the time to help Government navigate the risks of AI

I know all too well how hard and slow it has been to turn government focus towards a National AI Strategy (or AI in general) through my past work as the Executive Director of the AI Forum. 

The fact is, it’s now too late.

The ship has truly sailed and no amount of strategy will turn it around. 

One of the few ways to prevent NZ from becoming a mass consumer of AI produced by countries with different values and ethics from our own is for the private and public sectors to come together and build NZ-made AI.

We know the government has many barriers to overcome in progressing its adoption of AI and data-driven innovation. Until regulation, legislation and policy are in place, it will continue to be hard and slow for it to innovate safely.

I’m driven to work in environmental and social spaces because I want our work to have a positive impact.

I find this situation heartbreaking, as there is so much potential for data-driven innovation and AI to transform public services, healthcare, and environmental and social outcomes.

But there is a solution – and it is for the private sector to support the public sector.
We need to actively understand and, more importantly, provide what will help them adopt faster and innovate with us.

Changing the relationship between the public and private sectors

For New Zealand to succeed in embracing the benefits technology can offer, particularly those in AI, we have to shift from the typical lens of the public sector as the customer and the private sector as the supplier. 

Rather, we need to work as partners – not sectors – with each side taking responsibility for making it easier to collaborate and innovate together.

As part of our responsibility to our public sector partnerships, we have embarked on establishing our own Ethical AI and Data Innovation Framework and governance.

This means that when we develop AI for our public sector partners, we’re able to provide the assurance and evidence that what we develop is trustworthy, low risk, safe and explainable. 

This also enables us to be more curious in our own AI endeavours because we have the guardrails and processes in place to support innovation rather than hinder it. 

There is a common perception that policy, regulation and legislation will act as a stranglehold on innovation, but in fact the opposite is often true.


The benefits of frameworks and governance for AI

Knowing your boundaries, ethical principles and safe operating zones is, in fact, the perfect incubator for innovation.

AI acceleration brings accelerated calls and needs for governance, as evidenced by the recent creation of the Interim Centre for Data Ethics and Innovation.

Yet there is a serious lag in the practical work needed to enable such governance, and without it, strategic leadership cannot deliver the long-term gains that technology brings.

We read and hear, over and over again, about the risks that biased, black-box algorithms pose to humankind, communities and organisations.

However, many organisations delay implementing governance frameworks, and this is a big mistake.

Establishing governance from the outset to provide oversight and robust process is a must for building public trust.

We know that defining values and ethical principles is often the basis of trust, but governance is the best way to build and strengthen that trust.

This is why we’re taking steps to ensure we govern from the outset rather than as an afterthought.