What is this new software and should we be concerned about its use?
The software was created by Microsoft-backed company OpenAI, which describes ChatGPT as an artificial intelligence model "trained to follow an instruction and provide a detailed response".
Professor Timothy Miller is an artificial intelligence expert from the University of Melbourne. He explains how the program uses large language models to create text-based responses.
"All that large language models really do at a high level is they sort of have relationships about what words occur close to each other. In other words, how likely it is that a sentence would be produced from the data that it's trained on. And in the terms of Chat GPT, the data it's trained on is basically the internet."
Professor Jeannie Paterson specialises in privacy law at the University of Melbourne and is also co-director of the Centre for AI and Digital Ethics.
She says the lack of information about how data is collected and stored is concerning not only because of potential breaches of personal information, but also because of how that data can be used to influence human beliefs and choices.
"This is a technology that has the capacity to analyse the data that is collected and then use that information back to us, and that can be used in the commercial interests of the person that is providing the technology, or the political interest of the person who is providing the technology. So that sort of concern about prompting extreme political views, and nudging people to political commercial decisions I think is really problematic."
Professor Paterson says privacy law needs to be designed to keep up with changing technology.
Professor Margaret Bearman is from the Centre for Research and Assessment in Digital Learning at Deakin University.
She says she wasn't involved in the university's decision around Turnitin, but doesn't believe there is a huge concern about AI replacing students' work at scale.
Professor Bearman says it will instead prompt universities to come up with more creative assessments that also work to minimise cheating.
"Gen A-I is going to be out there in the workplace, and I spoke to someone on the tram and they said, 'Oh my boss is writing a policy using ChatGPT'. So if people in workplaces are using ChatGPT and people are using it in their daily lives, our students need to be able to navigate that world. And I think it's about time our assessments started to match this world they're going to be entering into. And we've had some disconnect from that for some time I'd say."
Professor Bearman maintains that generative AI produces standard responses rather than critical, evaluative ones.