Artificial intelligence (AI) encompasses a broad and diverse set of technologies within the financial services industry. From chatbots and robo-advisors to automated mortgage underwriting and biometric identity verification, there is no doubt that AI is transforming how individuals, industries, and organisations operate [1]. Notably, though, the rapid adoption of AI technologies by New Zealand financial organisations is occurring in an unregulated environment.
As competition and financial pressures drive organisations to improve efficiency, many are finding that AI can provide multiple potential benefits for internal processes and oversight, including:
However, just as AI can benefit organisations, it carries significant risks that must be considered carefully. Multiple industry papers [2,3] have raised concerns associated with the use of AI. These include:
Currently, there are no AI-specific laws in New Zealand [3]. AI is covered under the Privacy Act 2020 with respect to the collection, storage, access and use of personal information. In May 2023, the New Zealand Office of the Privacy Commissioner issued guidance to help organisations build an initial framework prior to AI implementation, and "expects organisations to conduct due diligence and privacy analysis before using generative AI" [4].
Accordingly, the New Zealand Institute of Directors encourages organisations to put AI on their board agendas and foster management discussions regarding strategic alignment, ethical considerations, data governance and privacy, human-AI collaboration, and risk management. In addition, organisations should consider having responsive components in place, such as development standards, systems governance controls and oversight, and security and resiliency [5].
In summary, despite AI's appeal and wide-ranging benefits in the financial services sector, the absence of targeted regulation in New Zealand means entities and their boards must be proactive. Even if businesses are not yet actively seeking to incorporate AI technologies into their daily operations, AI may still infiltrate via third parties, creating vulnerabilities and risks that may prove costly.
At Mint, we take seriously the need to follow best practice in protecting confidential information relating to our business and our clients. We also understand the importance of proactively managing systems and technology risks, including prioritising investment in robust cybersecurity protocols to support business longevity and customer trust. As part of our approach to managing system risks from third-party providers who hold client information, we conduct a thorough due diligence process to assess potential risks and the mitigation strategies associated with their system operations.
Disclaimer: Gina Delgado is the Compliance Manager at Mint Asset Management Limited. The above article is intended to provide information and does not purport to give investment advice.
Mint Asset Management is the issuer of the Mint Asset Management Funds. Download a copy of the product disclosure statement here.