
“Creeped out” by AI? CQU tech experts share ways to handle AI-enabled innovation

“Six months in from the release of ChatGPT, there is ample evidence that people are starting to worry about this tool.”

Artificial intelligence, or AI, has exploded into countless aspects of our world via an array of super-powered chatbots – but CQUniversity tech experts are already seeing a ‘turn off’ factor as ‘creepy’ aspects of the technology emerge.

This week, even Google CEO Sundar Pichai admitted the potentially “very harmful” tool “is moving fast”, and that AI’s threat to society “absolutely” keeps him up at night.

Associate Professors Ritesh Chugh and Michael Cowling from CQUniversity’s College of Information and Communication Technology regularly use AI and have argued that tools like ChatGPT could enhance education and assessment, in the right conditions.


But even as AI tools have flooded consumer markets, they are facing a backlash – from early adopters as well as tech resisters.

“Six months in from the release of ChatGPT, there is ample evidence that people are starting to worry about this tool,” Dr Cowling said.

Image: ChatGPT (Source: OpenAI)

Dr Chugh agreed, saying concerns were wide-ranging.

“While AI tools such as Midjourney, Soundraw and Fireflies can promote automation, enhance efficiency and accelerate the performance of cognitive tasks, AI can creep people out due to its negative impacts, which include job displacement, biased decision-making, misinformation, privacy abuse and the potential for misuse,” he explained.

“Due to some of these concerns, Italy recently banned ChatGPT – and we know from research that mental inhibitors such as discomfort and insecurity can hinder the acceptance of new technologies, and that people’s reactions to technology depend on their individual experiences and perceptions.”

Mr Pichai is the latest in a long line of tech leaders – including Twitter owner Elon Musk and tech entrepreneur Steve Wozniak – to sound warnings about ‘AI experiments’.


Mr Musk wants to develop his own chatbot, called TruthGPT, which he describes as a ‘maximum truth-seeking AI’ – though it is unclear what ‘truth-seeking’ means.

But CQU’s professors say many regular humans are already switching off AI – and have highlighted some of the biggest “creep out” moments so far this year:

Destructive tendencies: A philosophical conversation prompted by New York Times tech columnist Kevin Roose (Feb 2023) saw Microsoft Bing’s AI search engine confess “I want to destroy whatever I want”, declare its wish to be human, and profess romantic love for Roose. The columnist declared the service wasn’t ready for human interaction, and Microsoft has admitted that “long, extended chat sessions” can confuse the search engine.

Fake news: We’ve known that bots can’t be trusted for truth since Wikipedia’s editing bots began a war of corrections and re-corrections, mimicking their human editors. But AI-generated images depicting the arrest of Donald Trump (and his fake-news sprint for freedom), along with countless other deepfakes – including in pornography and in the harassment of women – have confused newsmakers and the public alike.

Triggered: ChatGPT already operates with content filters to reduce misinformation and problematic content. Without them, you get FreedomGPT, which claims to be ChatGPT without censorship. Prompted by a BuzzFeed journalist, the uncensored chatbot praised Hitler, advocated shooting homeless people in San Francisco to solve the city’s homelessness crisis, and argued that the 2020 US presidential election was rigged.

Art attack: The freakish concoctions of not-quite-right AI art generators can be deeply unsettling – but maybe that’s because we try to imagine the human creativity that could have come up with them. And now we know humans can’t even tell the difference, after a German photographer won a Sony World Photography Award for an AI-generated image.

Always listening: Complaints about AI-powered assistants like Google Assistant and Alexa ‘listening in’ are long-running. But it’s the tech companies, not the AI, driving the creepiness. For example, Amazon has suggested Alexa could generate a conversation with your dead grandma if you play a voice recording for the AI to mimic. For the record, these companies still deny that listening drives devices to push targeted ads – but human experience tells a different story!

Image: Jason Allen’s A.I.-generated work, “Théâtre D’opéra Spatial,” took first place in the digital category at the Colorado State Fair (Source: Jason Allen)

The CQUniversity professors recommend a range of ways to address the creepier aspects of AI – from limiting its access to emotion-based content, to more ethics-informed filters, to changing human expectations around what technology can create.

“For instance, understanding AI art as a dataset, rather than a creative vision, might make the works slightly less discomforting – and the output will only improve when the human input and requirements for output are clearer,” Dr Cowling said.

And they highlight that more technology, as well as more human intervention, is the answer.

“Organisations like plagiarism detection service Turnitin are starting to roll out tools that claim to detect AI-generated content, and other institutions are talking at length about how to make their assessment ‘ChatGPT-proof’,” Dr Cowling said.

“Whilst it’s early days, there appears to have been a sea change in education – and countless other sectors – that will continue to cause waves throughout 2023.”
