Let’s have a chat! A Conversation with ChatGPT: Technology,
Applications, and Limitations
Sakib Shahriar¹*, Kadhim Hayawi²
School of Computer Science, University of Guelph, Guelph, Ontario, Canada¹
College of Interdisciplinary Studies, Computational Systems, Zayed University, Abu Dhabi, UAE²
Abstract
The emergence of an AI-powered chatbot that can generate human-like sentences and write coherent essays
has caught the world's attention. This paper discusses the historical overview of chatbots and the technology
behind Chat Generative Pre-trained Transformer, better known as ChatGPT. Moreover, potential
applications of ChatGPT in various domains, including healthcare, education, and research, are highlighted.
Despite promising results, there are several privacy and ethical concerns surrounding ChatGPT. In addition,
we highlight some of the important limitations of the current version of ChatGPT. We also ask ChatGPT
to provide its point of view and present its responses to several questions we attempt to answer.
Keywords: Artificial intelligence; Natural Language Processing; ChatGPT; Text Generation; Chatbots
1. Introduction
In 1950, the British computer scientist Alan Turing questioned whether human reasoning could be matched
by computers: “Can machines think?” [1]. Subsequently, he proposed the Turing Test to measure computer,
or artificial, intelligence. In a Turing test, a human interrogator is presented with responses from a human
and a computer (with the ability to generate written texts in real-time). If the interrogator cannot distinguish
between the answers, the computer system passes the Turing Test. Although several computer programs
and chatbots like Eliza demonstrated success in the Turing test ([2] [3]), these programs arguably used
certain tricks to pass the test [4] rather than demonstrating any significant intelligence. With the
advancement in machine learning and natural language processing (NLP), chatbots have gained significant
research attention and have been used for a variety of commercial and non-commercial applications ([5],
[6], [7], [8], [9]). Despite their vast adoption, most chatbots do not offer personalization, and user
satisfaction remains questionable [10]. This limitation prompted researchers and developers to focus on
chatbot engagement, making chatbots more conversational.

* Corresponding author. Email address: shahrias@uoguelph.ca
In November 2022, OpenAI released a novel AI-powered chatbot called ChatGPT², which gained
significant public attention. Unlike any previous chatbots, ChatGPT has surprised the public with its
coherent responses to various questions. In this short paper, we explore the technology enabling the success
of ChatGPT and highlight several domains and applications of ChatGPT. We also invited ChatGPT to
respond to some of the questions we attempted to answer, starting with its response to the history of
chatbots.
Question: Hi ChatGPT, tell us about the history of chatbots
ChatGPT: Certainly! The idea of a machine that could simulate human conversation dates back several
decades. In fact, the first chatbot was created in the 1960s by MIT computer scientist Joseph Weizenbaum.
Weizenbaum's chatbot, called ELIZA, used a set of rules to simulate conversation by asking questions and
giving scripted responses. ELIZA was designed to imitate a psychotherapist, and people were often
surprised at how convincingly it could simulate human conversation.
Over the years, chatbots have continued to evolve and improve. One of the key advancements was the
development of natural language processing (NLP) technology, which allows chatbots to understand and
respond to human language more accurately. In the 1990s and early 2000s, chatbots began to be used for
customer service and support, with companies like AOL and Microsoft developing chatbots to help users
with their software and services.
In recent years, there has been a surge of interest in chatbots, driven in part by advancements in artificial
intelligence and machine learning. Modern chatbots use a combination of NLP, machine learning, and
other AI technologies to understand and respond to human language in a more natural and nuanced way.
They are now used in a wide range of applications, from customer service and support to personal
assistants and entertainment.
Overall, the history of chatbots is a fascinating one that reflects the evolution of AI and our growing
desire to create machines that can understand and interact with us in more human-like ways.
Our first observation is that the response from ChatGPT appears to be very coherent, engaging, and free
of grammatical errors. We also notice that ChatGPT does well in providing factual information. It manages
to cite names and dates accurately. Interestingly, ChatGPT does not talk about itself, and we understand
that it is limited in presenting information after September 2021³. Finally, it is also captivating to observe
that ChatGPT has ended its response with a summary offering its viewpoint.
² https://openai.com/blog/chatgpt/
³ https://www.forbes.com/sites/ariannajohnson/2022/12/07/heres-what-to-know-about-openais-chatgpt-what-its-disrupting-and-how-to-use-it/?sh=41280f462643
2. ChatGPT Technology: From Language Models to Reinforcement Learning
Machine learning, a subfield of artificial intelligence (AI), is defined as the ability of computer systems
to learn from experience without being explicitly taught or programmed. With advances in computing
power, increasing data availability, and algorithmic improvements, deep learning has emerged as a high-
performing predictive tool. Unlike traditional machine learning, deep learning algorithms use artificial
neural networks, often with a large capacity. Deep learning models generally require more training data but
can provide better generalization. Transformers or large language models are a special family of deep
learning models that have emerged as state-of-the-art solutions for text data. From text classification to text
generation, these models offer superior performance to other AI models. Language models are usually
pre-trained on large text corpora, such as Wikipedia, but can be fine-tuned for a specific task. The two
most prominent language models include Bidirectional Encoder Representations from Transformers
(BERT) [11] and Generative Pretrained Transformer (GPT) [12]. The former is primarily used for
recognizing the context in language and classification tasks, while the latter is more adept at language
generation tasks such as machine translation and question-answering.
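Both BERT and GPT are built on the transformer architecture, whose core operation is scaled dot-product attention. The snippet below is an illustrative sketch of that computation on toy matrices, not the actual implementation of either model; the shapes and random inputs are chosen purely for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # each row is a distribution over tokens
    return weights @ V, weights

# Toy example: 3 tokens, key/query dimension d_k = 4.
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
# `out` holds one contextualized vector per token, each a weighted
# average of the value vectors, with weights summing to 1 per row.
```

In a full transformer, this operation is applied with many attention heads in parallel and stacked across dozens of layers, which is where the large parameter counts of models like GPT-3 come from.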
Several chatbots prior to ChatGPT used language models. However, the main component behind
ChatGPT's coherent and engaging responses is its use of reinforcement learning. In this context, human
evaluators ranked the responses of ChatGPT, which allowed it to optimize some of its parameters to
essentially become a better conversationalist. ChatGPT was developed on top of GPT-3, a generative
autoregressive language model with 175 billion parameters [13]. To train such a complex network, OpenAI
partnered with Microsoft, with a reported training resource of over 350GB of memory costing $12 million⁴.
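The ranking step described above is typically used to train a reward model: given two responses to the same prompt, the model should assign a higher score to the one human evaluators preferred. A common objective for this is a pairwise logistic loss. The sketch below is a simplified illustration of that idea; the exact objective and scale used for ChatGPT are not public, and the function name is our own.

```python
import math

def pairwise_preference_loss(reward_preferred, reward_rejected):
    """Pairwise logistic loss for reward-model training:
    -log(sigmoid(r_preferred - r_rejected)).
    Small when the preferred response already scores higher;
    large when the model disagrees with the human ranking."""
    margin = reward_preferred - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# If the reward model agrees with the human ranking, the loss is small;
# if it prefers the rejected response, the loss is large.
good = pairwise_preference_loss(2.0, 0.5)  # model agrees with humans
bad = pairwise_preference_loss(0.5, 2.0)   # model disagrees
```

Once trained, such a reward model can score candidate responses, and reinforcement learning then adjusts the language model's parameters toward responses with higher predicted reward.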
Question: Hi ChatGPT, how were you developed?
ChatGPT: Hello! I was developed by OpenAI, a research organization focused on advancing artificial
intelligence in a safe and beneficial way. Specifically, I was developed by a team of researchers and
engineers at OpenAI who used a deep learning technique called a transformer to train me on a large corpus
of text data. The transformer is a type of neural network that is especially good at processing sequential
data, like language.
To train me, the team fed me huge amounts of text from a variety of sources, including books, articles,
and websites. They then used a process called unsupervised learning to teach me to recognize patterns in
the text data. Through this process, I learned to understand and generate natural language.
After my initial development, I went through a rigorous testing and refinement process to ensure that I
was accurate, reliable, and safe to use. And since my release, I have been continually updated and improved
to better serve my users.
⁴ https://venturebeat.com/ai/ai-machine-learning-openai-gpt-3-size-isnt-everything/