New rules warn of AI data poisoning, attacks and theft

Australian businesses are being warned by the nation's leading cybersecurity organisation about threats to privacy and property, and attacks on their operations, arising from the use of artificial intelligence technology.

The Australian Signals Directorate released the AI guidelines on Wednesday in collaboration with foreign security agencies, including the US Federal Bureau of Investigation, the UK's National Cyber Security Centre and Israel's National Cyber Directorate.

The 15-page report notes that AI "presents both opportunities and threats" to Australian businesses and outlines five concerns about the technology that could put businesses at risk.

The guidelines arrive one week after the federal government released its Safe and Responsible AI interim report that outlined mandatory and voluntary regulations planned for using the technology.

The ASD's Engaging with Artificial Intelligence report, which was designed for small, medium and large organisations as well as government agencies, detailed a series of AI risks.

They included "data poisoning", or manipulating training data to produce incorrect results; "input manipulation attacks", in which hidden commands are used to gain greater access to an AI model than permitted; and generative AI "hallucinations", in which the technology delivered incorrect information.

The report gave the example of a case in which a New York lawyer created a legal brief using ChatGPT but found six cases in the documents had been "hallucinated" by the program.

"To take advantage of the benefits of AI securely, all stakeholders involved with these systems … should take some time to understand what threats apply to them and how those threats can be mitigated," the report said.

The guidelines recommended businesses using AI hire qualified staff, conduct regular "health checks", maintain data backups and question how its use will affect privacy obligations.

Australian Institute for Machine Learning director Simon Lucey welcomed the guidelines, saying the risks were real but, if they could be overcome, the technology could unlock significant economic benefits.

Professor Lucey said data poisoning and hallucinations could prove to be significant threats, and anyone using the technology should take care to choose a transparent AI model.

"One of the challenges that the technology has at the moment is that it has so much potential but it's such an alien technology in the sense that previous technologies have given us a sense of how they operate, how they work," he said.

"When AI makes a mistake, it's often very difficult to trace back to find why that happened."

University of the Sunshine Coast computer science lecturer Erica Mealy called the guidelines a "great first step" in helping businesses to understand generative AI technology, particularly as it was being adopted faster than expected.

"There's definitely security risks involved in AI for businesses in terms of trademarks and intellectual property," Dr Mealy said.

"We need to develop a global understanding of what it is good for and what it isn't good for and we need to keep an eye on data ownership and privacy."

Jennifer Dudley-Nicholson
(Australian Associated Press)
