The limitations of AI-generated text

by ITECHNEWS
December 16, 2021
in Data Science, Leading Stories

Artificial intelligence has reached a point where it can compose text that sounds so human that it dupes most people into thinking it was written by another person. These AI programs—based on what are called autoregressive models—are being successfully used to create and deliberately spread everything from fake political news to AI-written blog posts that seem authentic to the average person and are published under a human-sounding byline.

However, though autoregressive models can successfully fool most humans, their capabilities are always going to be limited, according to research by Chu-Cheng Lin, a Ph.D. candidate in the Whiting School of Engineering’s Department of Computer Science.

“Our work reveals that some desired qualities of intelligence—for example, the ability to form consistent arguments without errors—will never emerge with any reasonably sized, reasonably fast autoregressive model,” said Lin, a member of the Center for Language and Speech Processing.

Lin’s research showed that autoregressive models follow a strictly linear process that cannot support reasoning, because they are designed to predict the next word very quickly from the words that came before. This is a problem because the models are not built to backtrack, edit, or revise their work the way humans do when writing.

“[Human] professionals in all fields do this. The final product may display spotless work, but it is also likely that the work was not done in a single pass, without editing here and there,” Lin said. “But when we train these [AI] models by having them mimic human writing, the models do not observe the multiple rewritings that happened before the final version.”

Lin’s team also showed that current autoregressive models have another weakness: They do not give the computer enough time to “think” ahead about what it should say after the next word, so there is no guarantee that what it says will not be nonsense.
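To make this concrete, the following is a minimal, hypothetical sketch of greedy autoregressive generation in Python: the program picks the most probable next word given the words so far, commits to it, and never looks further ahead or revisits earlier choices. The toy probability table and the function names are illustrative assumptions, not anything taken from Lin’s work.

# Minimal sketch of greedy autoregressive text generation.
# TOY_MODEL stands in for a trained language model: it maps a prefix
# (a tuple of words) to a probability for each candidate next word.
from typing import Dict, Tuple

TOY_MODEL: Dict[Tuple[str, ...], Dict[str, float]] = {
    (): {"the": 0.6, "a": 0.4},
    ("the",): {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
    ("the", "cat"): {"sat": 0.7, "<end>": 0.3},
    ("the", "cat", "sat"): {"<end>": 1.0},
}

def next_word_distribution(prefix: Tuple[str, ...]) -> Dict[str, float]:
    """Return P(next word | prefix) from the toy model."""
    return TOY_MODEL.get(prefix, {"<end>": 1.0})

def generate(max_len: int = 10) -> str:
    """Greedy left-to-right generation: choose the locally best next word,
    append it, and never edit or backtrack over earlier words."""
    prefix: Tuple[str, ...] = ()
    for _ in range(max_len):
        dist = next_word_distribution(prefix)
        word = max(dist, key=dist.get)   # commit to the single best next word
        if word == "<end>":
            break
        prefix = prefix + (word,)        # earlier words are now frozen
    return " ".join(prefix)

print(generate())  # -> "the cat sat"

The loop only ever looks one word ahead and never revises what it has already emitted, which is exactly the property Lin’s argument turns on.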

“Autoregressive models have proven themselves very useful in certain scenarios, but they are not appropriate computational models for reasoning. I also find it interesting that our results suggest certain elements of intelligence do not emerge if all we do is try to get machines to mimic how humans speak,” he said.

The result is that the more text an autoregressive model produces, the more obvious its mistakes become, putting the text at risk of being flagged by other, even less advanced computer programs that need far fewer resources to tell what was written by an autoregressive model from what was written by a human.
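As a rough, hypothetical illustration of how even a lightweight program might flag such text, the sketch below scores a passage by how often its word trigrams repeat, since degenerate repetition is one of the obvious mistakes long machine-generated text can accumulate. Both the heuristic and the 0.8 threshold are assumptions invented for this example, not a method described in the article.

# Hypothetical, lightweight heuristic: flag text whose word trigrams
# repeat unusually often, a common symptom of degenerate machine output.
# The 0.8 threshold is an arbitrary assumption for illustration.
from collections import Counter

def trigram_diversity(text: str) -> float:
    """Ratio of distinct word trigrams to total trigrams (1.0 = no repeats)."""
    words = text.lower().split()
    trigrams = list(zip(words, words[1:], words[2:]))
    if not trigrams:
        return 1.0
    return len(set(trigrams)) / len(trigrams)

def looks_machine_generated(text: str, threshold: float = 0.8) -> bool:
    """Flag text whose trigram diversity falls below the threshold."""
    return trigram_diversity(text) < threshold

repetitive = "the model said the model said the model said it was fine"
varied = "the model produced a short, fluent summary of the article"
print(looks_machine_generated(repetitive))  # True
print(looks_machine_generated(varied))      # False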

Because computer programs can tell what was written by an autoregressive model from what was written by a human, Lin believes that the positives of having AI that can reason far outweigh the negatives, even though one of those negatives could be the spread of misinformation. He says that a task called “text summarization” offers an example of how AI capable of reasoning would be useful.

“These tasks have a computer read a long article, or a table that contains numbers and texts, and then the computer can explain what’s going on in a few sentences. For example, summarizing a news article, or a restaurant’s ratings on Yelp, using a few sentences,” Lin said. “Models that are capable of reasoning can generate texts that are more on the spot, and more factually accurate, too.”
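For readers who want to see the task in code, here is a sketch of abstractive summarization using the Hugging Face transformers summarization pipeline. This is an ordinary autoregressive summarizer rather than the reasoning-capable models Lin envisions, and the example text and length settings are arbitrary choices made for illustration.

# Sketch of abstractive text summarization with an off-the-shelf
# autoregressive model via the Hugging Face transformers library.
# Requires: pip install transformers torch
from transformers import pipeline

summarizer = pipeline("summarization")  # loads a default summarization model

article = (
    "Artificial intelligence has reached a point where it can compose text "
    "that sounds human. Researchers argue that purely autoregressive models, "
    "which predict one word at a time and never backtrack, will always be "
    "limited in their ability to reason."
)

# max_length and min_length bound the length (in tokens) of the summary.
summary = summarizer(article, max_length=40, min_length=10)
print(summary[0]["summary_text"])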

Lin has been working on this research, which is part of his thesis, for several years with his adviser, Professor Jason Eisner. He hopes to use these findings to help design a neural network architecture for his thesis research, called “Neural Regular Expressions” (NREs), to help AI more effectively understand the meaning of words.

“Among many things, NREs can be used to build a dialog system where machines can deduce unobserved things, such as intent, from conversation with humans, using a rule set predefined by humans. These unobserved things can subsequently be used to shape the machine’s response,” Lin said.
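To give a flavour of the rule-set idea, here is a hypothetical, purely rule-based sketch in which human-written regular expressions map an utterance to an unobserved intent, and that intent then shapes the reply. Actual Neural Regular Expressions combine such rules with neural networks; this toy deliberately leaves the neural part out, and all of the rules and responses are invented for illustration.

# Hypothetical sketch: deduce an unobserved "intent" from a user utterance
# using a human-defined rule set of regular expressions, then shape the
# machine's response from that intent. No neural component is included.
import re

INTENT_RULES = [
    ("book_table", re.compile(r"\b(book|reserve)\b.*\btable\b", re.I)),
    ("get_hours",  re.compile(r"\b(open|close|hours)\b", re.I)),
    ("get_rating", re.compile(r"\b(rating|reviews?|good)\b", re.I)),
]

RESPONSES = {
    "book_table": "Sure, for how many people and at what time?",
    "get_hours":  "We are open from 11am to 10pm every day.",
    "get_rating": "Recent reviews rate us 4.5 out of 5 on average.",
    None:         "Sorry, I did not catch that. Could you rephrase?",
}

def deduce_intent(utterance: str):
    """Return the first intent whose rule matches the utterance, else None."""
    for intent, pattern in INTENT_RULES:
        if pattern.search(utterance):
            return intent
    return None

def respond(utterance: str) -> str:
    """Shape the machine's reply from the deduced intent."""
    return RESPONSES[deduce_intent(utterance)]

print(respond("Can I reserve a table for two tonight?"))  # book_table
print(respond("What time do you close on Sundays?"))      # get_hours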

by Wick Eisenberg, Johns Hopkins University
