Guardrails and guidelines as AI embeds itself deeper


Business leaders, scientists and politicians know the spread of artificial intelligence is unstoppable and are trying to erect guardrails before it transgresses community expectations.

The impact of AI at work and school is uncertain and trust is lagging, even as the technologies bring advances in automation and breakthroughs in medicine, fraud detection and road and air safety.

Experts released guidelines at an industry event in Canberra on Tuesday as digital technologies become more deeply embedded in everyday life.

To build trust, the guidance, written by consultancy KPMG and the Australian Information Industry Association, calls for strong governance around how AI is developed and used.

Education and health care are key users along with financial services, agriculture, mining and logistics.

But opinion is divided on whether tools such as ChatGPT are job-killers or economic growth accelerators.

The user guide and checklist, launched by Science Minister Ed Husic alongside industry leaders, come amid fears AI is helping would-be cyber criminals and fuelling the spread of misinformation.

Chatbots that can draft computer code are ripe for criminal abuse, according to law enforcement agencies.

To reduce fear and win trust, the new industry guide urges governments to promote the benefits, such as the success of AI in helping people fill in tax forms, and to educate consumers about technology safeguards.

With no specific laws in place, organisations are being asked to self-regulate and be more open about what tools they’re using.

A recent study by KPMG and the University of Queensland found just over one-third (35 per cent) of people believe there are enough safeguards, laws or regulations in place to make AI use safe, while 40 per cent of respondents trust the use of AI at work.

The survey also found Australians want an independent regulator to monitor AI use and need more information on data privacy.

Marion Rae
(Australian Associated Press)
