Earlier this year, AI experts and tech executives, including Elon Musk, signed an open letter calling for a pause in the development of AI systems like ChatGPT, citing ‘profound risks to society and to humanity’ and arguing that ‘powerful AI systems should be developed only once we are confident that their effects will be positive, and risks will be manageable.’
In part two of this two-part episode, Tom and Matt discuss the concerns, risks and uncertainties associated with generative AI, [05:35] which are causing some organisations, sectors and even countries to ban tools like ChatGPT.
[02:51] China is one such country, and is opting instead to develop its own LLMs in an effort to combat AI’s cultural biases and ensure its outputs reflect China’s worldview and culture. [08:31] Meanwhile, the education sector has concerns about plagiarism. Matt explains why providers must strike a balance between harnessing the educational benefits of generative AI and protecting individual thought.
AI’s ‘black box’ problem continues to stoke wide-reaching distrust and uncertainty. [11:29] Matt tells Tom why we need more robust discussions about the appropriate use of AI and how to verify its sources, and argues that ‘guard rails’ should be put in place to limit its application to certain use cases.
[13:15] Matt makes the case for greater regulation as a guard rail and explains why New Zealand should take advantage of international developments in this space. [14:24] In the EU, for example, legislation is in the works that would reverse the onus of proof onto the developers of AI where an alleged harm arises from its use.
[15:56] He who has the best AI wins... Large firms with deep pockets are leading the way in the development of AI systems, raising serious questions about the democratisation of AI: who gets access to this ground-breaking technology, who misses out, and, most importantly, what should we do about it?
For an easy-to-use glossary of the terms discussed in this episode and the one before, click here.
Please contact Tom Maasland or a member of our Technology team if you need legal advice and guidance on any of the topics discussed in this episode.
Please get in touch to receive an episode transcript, and don’t forget to rate, review or follow the Tech Suite wherever you get your podcasts. You can also sign up to receive technology updates via your inbox here.
About our guest
Matt Ensor is passionate about the intersection of technology and society.
Matt is the Business Lead, Transport at BECA, where his focus is the use of AI to accelerate business. He is also Chair of the Large Language Models working group at AI Forum NZ, where he leads the collaborative development of a White Paper on LLMs for Aotearoa New Zealand.
In 2020, Matt launched Frankly AI, a user-led conversational AI tool that connects organisations with staff, stakeholders and indigenous communities.
Additional resources
[00:37] Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI | Lex Fridman Podcast
[10:40] New York lawyer cites false case law provided by ChatGPT
[14:24] Liability Rules for Artificial Intelligence
[15:10 and 16:32] Pause Giant AI Experiments: An Open Letter