15 ways AI could ruin scholarly communication - and what we can do about it

Presenter Information

Deborah FITCHETT, Lincoln University


Start Date

1-11-2023 9:45 AM

Publication Date

2023-11-01

End Date

1-11-2023 10:15 AM

Description

Despite the dreams of science-fiction fans worldwide, the thing being marketed as "artificial intelligence" is no more than high-powered predictive text. What it gets right is thanks to its input data created by billions of humans, and to an invisible and underpaid workforce of content moderators. What it gets wrong threatens privacy; exacerbates sexism, racism and other inequities; and may even be environmentally damaging. There are situations well enough defined that machine models can be useful, but scholarly communication is by its nature full of new and unique information, relying on precisely reported data, that algorithms based on probabilities can't deal with. So as a community we need to come up with ways to prevent machine-generated fake papers from poisoning the well of science - and we need to be healthily sceptical of vendors selling us machine-based solutions to problems that can still only be addressed by human intelligence.

The video is not available for download
