The UK’s National Cyber Security Centre (NCSC) outlined the risks of integrating AI-powered large language models (LLMs) for businesses, warning developers had still not fully got to grips with weaknesses and vulnerabilities of the systems.

In a blog, the security agency acknowledged LLMs have been attracting global interest and curiosity since the release of OpenAI’s ChatGPT in 2022, leading to organisations in all sectors investigating use of the technology within their services.

However, as a rapidly developing field, NCSC experts found models are constantly being updated in an uncertain market. This could mean a start-up offering a service today might not exist in two-years.

Organisations building services using LLMs, therefore need to account for the fact models might change behind the API being used, resulting in a key part of the integration ceasing to exist at some point.

The agency further noted LLMs could carry certain risks when plugged into an organisation’s business processes: it noted researchers found LLMs “inherently cannot distinguish between an instruction and data provided to help complete the instruction”.

Providing an example, NCSC claimed an AI-powered chatbot used by a bank could be tricked into sending money to an attacker or make unauthorised transactions if the prompts entered are structured in the right way.

“Organisations building services that use LLMs need to be careful, in the same way they would be if they were using a product or code library that was in beta,” added NCSC.