Accountability, Liability and Trust
The recent Post Office scandal in the UK, which led to the wrongful conviction of numerous sub-postmasters, starkly highlights the pitfalls of over-reliance on electronic systems and the critical need for robust safeguards against system failure. While only indirectly connected to existing legislation, the scandal underscores the broader implications for current and future legislation governing electronic communications and reliance on data.
The core issue in this scandal was the malfunction of the Horizon IT system, deployed by the Post Office. Its significant flaws resulted in financial discrepancies that were mistakenly attributed to employee wrongdoing. This situation brings to light the challenges and risks associated with advanced IT systems in legal and business contexts. It illustrates the necessity for ongoing legislative adaptation to keep pace with technological advancements, ensuring that laws comprehensively address the reliability and security of such systems.
The incident raises pressing questions about accountability and legal responsibility in the age of increasingly complex AI and electronic systems. It emphasises the need for future legislation to provide clear guidelines on liability when AI systems contribute to incorrect or harmful outcomes. Organisations must develop and adhere to stringent policies, informed by both existing and emerging legal frameworks, to govern the implementation and maintenance of their IT systems and the emerging use of Generative AI in both back-office and customer-facing situations.
In essence, while current laws like the Electronic Communications Act 2000 lay the groundwork for integrating digital technology in various sectors, the Post Office scandal serves as a reminder that legislative bodies and organisations alike must evolve and adapt their practices. This includes implementing comprehensive IT strategies, conducting regular system audits, and proactively addressing vulnerabilities, thereby ensuring the reliability and trustworthiness of IT systems in a rapidly advancing digital world.
Current Lawsuits
The reliability and integrity of AI systems are not the only concerns. The New York Times is just one organisation currently engaged in legal action against OpenAI and Microsoft. Its lawsuit alleges that millions of its articles were used to train and develop OpenAI’s chatbot, and that outputs from the LLM mimic the Times’s style and recite its content without acknowledgement.
In a similar action against both Meta and OpenAI, Sarah Silverman, a prominent American comedian, alleges that the LLMs were trained on illegally acquired data sets containing her work.
These and other lawsuits address fundamental questions about training AI models on copyrighted material: whether such training constitutes fair use, and who is liable for copyright infringement when an LLM produces infringing content. These cases are hugely significant and will likely set precedents for how AI is developed and used in the future.
Emerging Regulation
Whilst the building blocks for generative AI have been around for several years, even decades, it’s only in the last 18 months that its potential use in mainstream applications has exploded. Given the likely rapid rise in business applications, it’s crucial to put new regulations in place to oversee its implementation and governance. Around the world, countries are taking varied approaches to this regulatory challenge:
In California, the California Privacy Protection Agency (CPPA) has proposed regulations under the California Consumer Privacy Act (CCPA) to govern automated decision-making technology, granting consumers rights to opt out of, and receive information about, the use of such technology.
The EU is finalising the AI Act, which classifies AI systems by risk level and imposes special requirements on high-risk applications. The accompanying proposed AI Liability Directive would adapt civil liability rules to AI, establishing rules for claims for damage caused by AI systems.
The UK government has released a white paper proposing a regulatory framework that empowers existing sectoral regulators to regulate AI, focusing on principles like safety, transparency, fairness, accountability, and redress.
The Cyberspace Administration of China (CAC) has issued regulations for generative artificial intelligence services, which have already taken effect and introduce significant obligations for providers of generative AI services, including monitoring and controlling content so that it reflects core socialist values, and ensuring that the data used to train AI models does not discriminate.
Multinational organisations will have to consider all of these varying regulations as they operate and exploit AI in different markets. Keeping track of them will become increasingly important and will require substantial investment of time and resources.
Customer Operations Use Cases
The true value of generative AI is not inherent in the technology itself but in its application. The significance of identifying clear, relevant use cases for generative AI cannot be overstated. It is through these practical applications that generative AI will transition from technology fad to long-term trend addressing real-world business challenges.
Specialised areas like fraud detection offer several examples of where the deployment of AI models creates opportunities. Generative AI excels at recognising patterns and detecting anomalies, identifying irregular behaviours in data that may signal fraudulent activity. These capabilities often surpass traditional systems, especially in detecting nuanced or complex fraud indicators.
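To make this concrete, here is a minimal sketch of what such an anomaly-detection component might look like, using scikit-learn’s IsolationForest as a stand-in for a production fraud model; the claims data and feature names are hypothetical.

import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical claims features: amount, policy age, and prior claim count.
claims = pd.DataFrame({
    "claim_amount": [1200, 950, 30500, 1100, 875, 29900],
    "days_since_policy_start": [400, 650, 12, 380, 720, 9],
    "prior_claims": [1, 0, 4, 2, 0, 5],
})

# Fit an isolation forest; 'contamination' is the assumed share of anomalies.
model = IsolationForest(contamination=0.3, random_state=42)
claims["flag"] = model.fit_predict(claims)  # -1 = anomalous, 1 = normal

# Flagged claims feed an investigator's queue, not an automatic decision.
print(claims[claims["flag"] == -1])

The point of the sketch is the workflow rather than the model: anomalous records are surfaced for human review, which is how such systems avoid repeating the automation failures described above.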
Generative AI can also be used to automatically scrutinise and confirm the validity of documents involved in claims processing. Through natural language processing (NLP), it can examine claim texts for inconsistencies or irregularities that could suggest fraudulent activity. Existing fraud detection systems can be substantially improved by integrating generative AI: the integration not only enhances their precision but also contributes deeper, more refined insights into the fraud detection process.
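As an illustration of the NLP screening step, the sketch below runs a claim narrative through an off-the-shelf zero-shot classifier from the Hugging Face transformers library, standing in for the fine-tuned or generative models an insurer would actually deploy; the claim text and screening labels are hypothetical.

from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

# Hypothetical claim narrative containing a date inconsistency.
claim_text = ("The laptop was stolen from my parked car on 3 June. "
              "I reported the theft to the police on 28 May.")

result = classifier(claim_text,
                    candidate_labels=["internally consistent account",
                                      "contradictory or implausible account"])

# Route doubtful narratives to a human assessor rather than rejecting them.
if result["labels"][0] == "contradictory or implausible account":
    print(f"Flag for review (score {result['scores'][0]:.2f})")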
There are examples across all sectors, from healthcare and finance to insurance and education. It’s evident that the success and relevance of generative AI will be measured not by the sophistication of its algorithms but by the significance and scale of the problems it solves.
Policy and Governance
There is an inherent danger that the data used to train these models will perpetuate and even amplify biases. Organisations must mitigate underlying biases by ensuring that diverse and representative data sets are used to train the models, and that any bias in the algorithms is detected and corrected. Detecting emerging biases in the system will be a constant process, and the effort involved should not be underestimated.
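By way of illustration, the sketch below shows one simple check such a monitoring process might include, computing per-group approval rates and a disparate-impact ratio over a model’s outputs; the data, group labels, and threshold are hypothetical, and a real bias-monitoring programme would use a broader set of fairness metrics.

import pandas as pd

# Hypothetical model decisions broken down by demographic group.
outcomes = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

# Approval rate per group.
rates = outcomes.groupby("group")["approved"].mean()

# Disparate-impact ratio: lowest group rate divided by highest.
ratio = rates.min() / rates.max()
if ratio < 0.8:  # the 'four-fifths rule', a common screening heuristic
    print(f"Potential bias: disparate-impact ratio {ratio:.2f}")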
Generative AI will mean a significant change to the operating model. Identifying use cases and delivering capability using generative AI will require new policies and governance covering data privacy, emerging regulations, bias and transparency, intellectual property rights, and the accountability that comes with embracing this transformative technology.
One thing is certain, at least in the UK: public trust in computer systems has been damaged, and this will have a knock-on effect on trust in AI and generative AI. Organisations will need to invest in building and maintaining public trust by demonstrating the reliability, safety, and beneficial uses of their AI systems.
How to get started
Organisations should start with small, targeted Generative AI projects, allowing for a manageable evaluation of their impact and scalability. The integration of Generative AI will require adjustments to data strategies and may lead to the creation of new governance roles, such as AI Ethics Officer and Data Protection Officer, to oversee compliance and ethical considerations, while the responsibilities of existing governance roles will also expand.
This approach enables businesses to explore the benefits of Generative AI cautiously, ensuring that its deployment aligns with business goals and regulatory requirements while mitigating risks.
At Davies – Consulting Division we help businesses implement GenAI models into their workstreams as effectively as possible. We align each implementation with recommended regulatory guidelines, ensuring companies build a strong foundation for GenAI as corporate practices and expectations continue to evolve.
Click here for information on our AI assessment.