
Study warns cost-cutting use of generative AI could increase cyber-attack risks


Newly published research from a leading computer scientist warns that the use of generative AI to design, train or perform steps within a machine learning system could increase serious risks.

Michael Lones, professor at Heriot-Watt University’s School of Mathematical and Computer Sciences, has argued in a new paper that generative AI could expose organisations and the public to unintended harm.

These harms include cyber-attacks, data breaches and bias against underrepresented groups, and they persist despite the technology's potential cost and efficiency benefits.


Professor Lones’ study has been published in the international journal Patterns and explores how generative AI is increasingly being used to design, build and operate machine learning systems across a wide range of sectors.

Professor Lones said: “Machine learning developers need to be aware of the risks of using Gen AI in machine learning and find a sensible balance between improvements in capability and the risks that might come with that.

“Given the current limitations of generative AI, I’d say this is a clear example of just because you can do something doesn’t mean you should.”

Machine learning systems are algorithms that learn to recognise patterns in data, which they can then use to make predictions and decisions regarding new data.

Machine learning has been around for decades, and most people encounter it in their daily lives in the form of spam filters, product recommendations on e-commerce websites, and social media newsfeeds. But it’s also used in high stakes situations, such as assigning patients to drug trials and processing insurance claims.
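The pattern-learning idea described above can be sketched in a few lines of plain Python. This is a deliberately toy, hypothetical spam filter (the example messages and word-counting approach are illustrative only; real spam filters use far more sophisticated statistical models), but it shows the core loop: learn patterns from labelled data, then apply them to new, unseen data.

```python
from collections import Counter

def train(messages):
    """Learn word frequencies from labelled (text, label) examples."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in messages:
        counts[label].update(text.lower().split())
    return counts

def predict(counts, text):
    """Label new text by which class its words appeared in more often."""
    words = text.lower().split()
    spam_score = sum(counts["spam"][w] for w in words)
    ham_score = sum(counts["ham"][w] for w in words)
    return "spam" if spam_score > ham_score else "ham"

# Hypothetical training data: a handful of labelled messages.
examples = [
    ("win a free prize now", "spam"),
    ("claim your free reward", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch at noon tomorrow", "ham"),
]

model = train(examples)
print(predict(model, "free prize inside"))  # words seen in spam examples
```

The key point for the article's argument is that a system like this is transparent: one can inspect exactly which word counts drove a decision. LLM-based components, by contrast, do not offer this kind of straightforward audit trail.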

In the last two years or so, there has been a push to incorporate generative AI (mainly in the form of large language models, or LLMs) into machine learning systems, but doing so carries risks and limitations that developers and the general public should be aware of.

Professor Lones adds: “If you have Gen AI working in a number of different ways within your machine learning workflows or system, then they can interact in unpredictable and hard-to-understand ways.

“My advice at the moment is to avoid adding too much complexity in terms of how we use Gen AI in machine learning, particularly if you're in a sector that has high stakes that impact people's lives and livelihoods.”

Professor Lones’ work explores four ways in which generative AI is currently being applied in machine learning: as a component within a machine learning pipeline, to design and code machine learning pipelines, to synthesise training data, and to analyse machine learning outputs.

All of these applications carry risks, and these risks are compounded if LLMs are used for multiple tasks within a machine learning system, or if LLMs are “agentic”, meaning they can autonomously use external tools to solve problems.

One of the biggest risks is simply that LLMs sometimes make mistakes, reach bad decisions, and fabricate or “hallucinate” information.

These errors aren’t necessarily predictable and may be difficult to evaluate because LLMs operate in a non-transparent way, which presents an additional issue for legal compliance.

Professor Lones added: “In areas like medicine or finance, there are laws about being able to show that the machine learning system is reliable, and that you can explain how it reaches decisions.

“As soon as you start using LLMs, that gets really hard, because they're so opaque. It's important for people in the general public to be aware of the limitations of GenAI systems.

“Companies will deploy these systems to do things like cut costs, and this may improve the experience that end users get, but it may also have negative consequences, such as bias and unfairness.”

Contact

Lewis Robertson

Media Relations Officer