
Generative AI in Business: with great power, comes great responsibility


Generative AI (GenAI) is a booming area of interest for increasing simplicity and reducing friction within customer service. However, as with anything new, the industry is moving fast and breaking things.  

Regulation is on the horizon: the European Union plans to regulate the market via its AI Act, which is likely to come into force over the next couple of years. Cutting corners with GenAI may look lucrative in the short term, but it is important to align with best practice so that business models are built to withstand long-term regulatory change.

UK organisations looking to implement GenAI should consider aligning closely with the proposed EU AI Act. Its transparency requirements are likely to become the global standard for regulation, so organisations should follow them as closely as possible to ensure that any AI they use, and any AI models they own, improve business processes without breaching ethical boundaries.

When integrating GenAI into customer operations, any content it generates must be disclosed to customers.

Disclosure is becoming an industry norm and is the right thing to do to ensure customers do not feel misled. For now, it is up to each organisation to regulate its own AI practices and to understand how to get the most out of them. AI can be a valuable capability for organisations that use it as a 'digital shoulder' to support customer-facing teams, but across the wider business it does not yet have the operational capability or the regulatory backing to drive autonomous workflows. It can, however, be a useful catalyst for low-risk, volume-heavy tasks, helping to free up workforce capacity.
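
As an illustration only, the short sketch below shows one way a customer-service platform could attach such a disclosure to AI-drafted replies. It is a minimal sketch under our own assumptions: the names (prepare_reply, AI_DISCLOSURE) and the wording of the notice are hypothetical and do not refer to any specific product.

```python
# Hypothetical sketch: labelling AI-drafted replies before they reach a customer.
# The names and the exact wording of the notice are illustrative assumptions.

AI_DISCLOSURE = (
    "This response was drafted with the help of generative AI "
    "and reviewed by our customer service team."
)

def prepare_reply(draft_reply: str, reviewed_by_agent: bool) -> str:
    """Attach the AI disclosure and only release replies a human has reviewed."""
    if not reviewed_by_agent:
        raise ValueError("AI-drafted replies must be reviewed before sending.")
    return f"{draft_reply}\n\n{AI_DISCLOSURE}"
```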

Summaries of copyrighted data used in training the GenAI model must be published.

With large datasets and autonomy comes a significant risk of copyrighted material being misappropriated. Publishing a summary of any copyrighted data used to train the GenAI model mitigates the risk of legal consequences or accusations of plagiarism when implementing AI practices.
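
To make that auditable in practice, one option is to keep a structured, publishable record of the copyrighted sources used in training. The sketch below is purely illustrative; the record fields and file name are assumptions, not a prescribed format.

```python
# Hypothetical sketch: a publishable summary of copyrighted material used in
# training. The record structure and file name are illustrative assumptions.

from dataclasses import dataclass, asdict
import json

@dataclass
class TrainingSourceRecord:
    title: str          # work or dataset name
    rights_holder: str  # copyright owner, where known
    licence: str        # licence or permission under which the material was used
    usage: str          # how the material was used in training

records = [
    TrainingSourceRecord(
        title="Example industry handbook",
        rights_holder="Example Publisher Ltd",
        licence="Commercial licence, 2024",
        usage="Fine-tuning corpus for policy-wording questions",
    ),
]

# Export the summary so it can be published alongside the model documentation.
with open("training_data_summary.json", "w") as f:
    json.dump([asdict(r) for r in records], f, indent=2)
```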

GenAI models cannot generate illegal content.

It is important when designing and launching a GenAI model that it poses no risk of generating illegal content. This is vital for staying on the right side of upcoming regulation and for creating a sustainable AI model that can be deployed globally.

Following these transparency requirements will build a strong foundation for a long-term, owned AI model. Many businesses may instead choose to adopt a pre-existing, publicly available or open-source AI model, such as the services behind ChatGPT or Google Gemini. Models of this nature draw on a broad-reaching collection of technologies, developers, models, and resources, which brings risks of its own, so a business-specific generative AI model may be the wiser decision.

Open-source AI models can be more easily misused.

Because the software is readily available and general-purpose, its uses can extend to generating misinformation, creating deepfakes, and automating high volumes of spam. This raises the risk of implementation across an organisation, as such practices might creep into its workflows and result in internal and external misinformation that could harm the business.

Bias in public data becomes a company issue if you let open-source AI in.

Because these models rely on vast volumes of public data to produce answers, there is a high chance of biased or discriminatory results. Managing this requires a far heavier level of due diligence, and even then it does not eliminate the possibility of something falling through the cracks.

Open-source AI is not directly accountable.

It is difficult to initiate a tribunal process or access the necessary support networks if something goes wrong when using open-source AI. The large institutions behind these models make it clear that generative AI is used at the user's own risk, so it can become a liability if it cannot serve its purpose or fails to produce adequate results.

There are greater data security risks attributed to open-source AI models.

When training the model with sensitive data, there is a greater risk of this information being exposed during the development process, which can be detrimental to a business, especially in a high-risk sector.  

While there are significant risks attached to open-source AI models, an owned generative AI model still offers many benefits in a business setting.

An owned AI model can dissect intricate documents and extract valuable insights from them, making it easier for companies to remain compliant and to pull vital details from large bodies of text. It can also amend long-form documents and reflect any changes throughout, so that ever-evolving regulatory documentation can simply be updated and recirculated at will, freeing up workforce capacity elsewhere.
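
As a rough illustration of the document-review use case, the sketch below shows how an owned model might be prompted to pull obligations out of a long compliance document. The generate function is a stand-in for whatever inference endpoint the owned model exposes; it, the prompt wording, and extract_obligations are assumptions made for this example.

```python
# Hypothetical sketch: prompting an owned GenAI model to extract key obligations
# from a long compliance document. `generate` is a placeholder, not a real API.

def generate(prompt: str) -> str:
    """Placeholder for the owned model's inference call."""
    raise NotImplementedError("Wire this to the in-house model's API.")

EXTRACTION_PROMPT = (
    "You are reviewing an internal compliance document.\n"
    "List each obligation it places on the business, citing the section it comes from.\n\n"
    "Document:\n{document}"
)

def extract_obligations(document_text: str) -> str:
    # A human reviewer should still check the output before it is relied upon.
    return generate(EXTRACTION_PROMPT.format(document=document_text))
```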

AI can provide vital aid in Anti-Money Laundering (AML) and Know Your Customer (KYC) checks, speeding up onboarding journeys for new customers and clients and reducing the risk of human error. It is also useful for automating quality assurance: it can monitor a high volume of calls at once, and the results can then be used for training and for identifying pain points in the customer journey.
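
For the quality-assurance point, a first pass does not need to be sophisticated: transcripts produced by an AI transcription step can be screened for required phrases before a trainer reviews the flagged calls. The sketch below is illustrative only; the phrase list and transcript format are assumptions.

```python
# Hypothetical sketch: flagging call transcripts that miss required phrases
# so they can be reviewed for training and customer-journey pain points.

REQUIRED_PHRASES = [
    "calls may be recorded",
    "is there anything else i can help you with",
]

def flag_for_review(transcripts: dict[str, str]) -> list[str]:
    """Return the IDs of calls where any required phrase is missing."""
    flagged = []
    for call_id, text in transcripts.items():
        lowered = text.lower()
        if any(phrase not in lowered for phrase in REQUIRED_PHRASES):
            flagged.append(call_id)
    return flagged

# Example with an illustrative transcript; 'call-001' is flagged because the
# closing question is missing.
print(flag_for_review({"call-001": "Hello, calls may be recorded for training."}))
```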

GenAI is likely to become a vital tool in business practices over the coming years. While there are many risks and plenty of unknowns, sticking to regulatory guidelines as they become available and training the AI with high levels of due diligence can unlock new avenues for business development, speeding up arduous tasks and increasing quality control and security within an organisation.  

At Davies Consulting Division, we help businesses implement GenAI models into their workstreams as effectively as possible. We align that implementation with the recommended regulatory guidelines to ensure companies build a strong foundation for GenAI in business as corporate practices, and the expectations placed on them, continue to evolve.

 Click here for information on our AI assessment. 

Meet the expert

David Ilett

Director

Customer Experience

I am an experienced and accomplished transformation leader specialising in digital and customer experience.
