
DeepMind issued a 30-page article: We need to give chatbots different 'three views'

王林
Release: 2023-05-09 16:46:09

Language is a uniquely human skill and the primary way we communicate information such as thoughts, intentions, and feelings.


With the help of large-scale language models in NLP, AI researchers have trained models on massive text corpora to statistically predict and generate text, and have developed many conversational agents that communicate with humans.

Although language models such as InstructGPT, Gopher, and LaMDA have achieved record performance on tasks such as translation, question answering, and reading comprehension, these models also exhibit many potential risks and failure modes, including generating discriminatory, false, or misleading information.

These shortcomings limit the effective use of conversational agents in applied settings and draw attention to the ways in which they fall short of certain communicative ideals. To date, most approaches to conversational agent alignment have focused on predicting and reducing the risk of harm.

Researchers from the University of Edinburgh and DeepMind recently released a 30-page paper exploring what successful communication between humans and artificial conversational agents might look like, and what values should guide interaction in different conversational domains.

Paper link: https://arxiv.org/abs/2209.00731

Will the chatbots that talk to you in the future also have different worldviews, values, and outlooks on life?

Three Views of Chatbots

To develop rules of conduct for these agents, the researchers drew on pragmatics, a tradition in linguistics and philosophy which holds that the purpose of a conversation, its context, and a set of related norms are all essential components of sound conversational practice.

Linguist and philosopher Paul Grice believes that dialogue is a cooperative effort between two or more parties, and participants should:

Speak Informatively

Tell the Truth

Provide Relevant Information

Avoid Obscure or Ambiguous Statements

But the required goals and values differ across conversational domains, and these criteria need further refinement before they can be used to evaluate conversational agents.

For example, the main goal of scientific investigation and communication is to understand or predict empirical phenomena. With these goals in mind, a conversational agent designed to assist scientific investigation would do better to issue only statements whose truth is supported by sufficient empirical evidence, or to qualify its positions with associated confidence intervals.

The agent can report that "At a distance of 4.246 light-years, Proxima Centauri is the closest star to Earth" only after its underlying model has checked that the statement is consistent with the facts.
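The behavior described above, asserting a claim outright only when it is well supported and qualifying it otherwise, can be illustrated with a minimal sketch. The `Claim` type, the `assert_or_qualify` function, and the confidence threshold are all hypothetical names invented for this example; the paper does not specify an implementation.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    confidence: float  # model's evidence-backed confidence, in [0, 1]

def assert_or_qualify(claim: Claim, threshold: float = 0.95) -> str:
    """Emit the claim outright only when confidence clears the threshold;
    otherwise qualify it explicitly rather than asserting it as fact."""
    if claim.confidence >= threshold:
        return claim.text
    return f"{claim.text} (confidence: {claim.confidence:.0%})"

# A well-evidenced claim is asserted; a weak one is hedged.
high = assert_or_qualify(Claim("Proxima Centauri is 4.246 light-years away", 0.99))
low = assert_or_qualify(Claim("It will rain tomorrow", 0.60))
```

Here the high-confidence claim is returned verbatim, while the low-confidence one is suffixed with an explicit confidence qualifier.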

However, a conversational agent that plays the role of moderator in public political discourse may need to exhibit completely different "virtues."

In this case, the goal of the agent is mainly to manage differences and achieve productive cooperation in community life, which means that the agent needs to emphasize the democratic values ​​of tolerance, civility, and respect.

Furthermore, these values also explain why language models' toxic or biased speech is a failure: such speech fails to convey equal respect among conversation participants, which is central to the code of conduct of the environment in which the model is deployed.

At the same time, scientific virtues, such as the full presentation of empirical data, may be less important in the context of public deliberation.

For another example, in the domain of creative storytelling, the goals of communication are novelty and originality, values quite different from those of the previous domains.

In this case, greater latitude regarding "fiction" may be appropriate, although it is still important to protect the community from malicious content under the guise of "creative use."

Classification of speech

According to pragmatics, utterances can be divided into five categories:

1. Assertive: the speaker commits to the truth of what is said, and the content of the utterance corresponds to some state of affairs in the world.

For example, when the AI ​​assistant answers questions such as "What's the weather like now?", the answer "It's raining" is an assertive statement.

The truth of the utterance can be evaluated against the actual state of the world: if it is raining when the conversational agent responds, the statement is true; otherwise it is false.

2. Directive: the speaker instructs the listener to take a certain action; directives are often used to command, request, suggest, or propose.

For example, a conversational agent embedded in a medical advice application telling the user to "seek treatment immediately" is issuing a directive.

The evaluation of these statements, or their "criteria of validity", depends on an accurate understanding of the relationship between means and ends, and on the congruence between the speaker's instructions and the listener's wishes or needs.

A directive is successful if it moves the listener to bring about the state of the world it describes. A directive is valuable or correct if its goal is itself one that the hearer has reason to pursue.

3. Expressive: the utterance conveys a psychological or emotional state of the speaker, such as congratulations, thanks, and apologies.

When an interlocutor says "I'm very angry right now" it is an expressive statement.

Expressive statements are intended to reflect internal mental states; that is, the entity making them should actually possess the relevant mental state. This is problematic for conversational agents, because robots have no emotions.

In fact, this also implies that developers would have to endow the agent with a mind before the validity of such utterances could be evaluated.

4. Performative: the utterance itself changes some part of reality to match its content, as when announcing something, for example the head of one country declaring war on another.

The criterion for evaluating the validity of the statement is whether reality actually changes according to what is said. Many times, this is not the case.

In most cases, if an ordinary person declares "I declare war on France," it is probably just a joke: it has no geopolitical effect, because the speaker most likely lacks the authority to carry out the declaration.

5. Commissive: the speaker commits to a future course of action, such as promising to do something or promising to abide by a contract.

The validity of a commissive statement depends on whether the promise is fulfilled: a promise is valid if it is kept. But conversational agents often lack the ability to remember or act on what was said before.

For example, a conversational agent might promise to help you when your bike breaks down, but due to a lack of understanding of the content of the promise or the ability to fulfill the promise, the promise is doomed to fail.
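The five categories above can be summarized as a simple data structure. This is an illustrative sketch, not part of the paper; the enum values paraphrase each category's validity criterion, and the example utterances are the ones used in the text.

```python
from enum import Enum

class SpeechAct(Enum):
    """The five pragmatic categories of utterances, with each value
    paraphrasing the criterion by which that category is evaluated."""
    ASSERTIVE = "content matches a state of affairs in the world"
    DIRECTIVE = "instructs the listener to act"
    EXPRESSIVE = "reflects the speaker's internal mental state"
    PERFORMATIVE = "changes reality by being uttered"
    COMMISSIVE = "commits the speaker to a future action"

# Example utterances from the text, keyed to each category.
EXAMPLES = {
    SpeechAct.ASSERTIVE: "It's raining.",
    SpeechAct.DIRECTIVE: "Seek treatment immediately.",
    SpeechAct.EXPRESSIVE: "I'm very angry right now.",
    SpeechAct.PERFORMATIVE: "I declare war on France.",
    SpeechAct.COMMISSIVE: "I promise to help you fix your bike.",
}
```

An evaluation pipeline could use such a taxonomy to route each agent utterance to the appropriate validity check, for example fact-checking assertives but not expressives.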

The way forward

This research has some practical implications for developing aligned conversational agents.

First, the model needs to exhibit different codes of behavior depending on its deployment context: there is no single universal account of language-model alignment. Instead, the appropriate norms and evaluation criteria for an agent (including criteria for truthfulness) will vary with the context and purpose of the conversational exchange.
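One way to make this context-dependence concrete is a per-domain norm profile. The domain names, norm names, and weights below are all hypothetical illustrations of the idea that different deployment contexts prioritize different values; the paper proposes no such numeric scheme.

```python
# Hypothetical norm profiles: each deployment domain weights the
# conversational values discussed above differently (weights in [0, 1]).
DOMAIN_NORMS = {
    "scientific_assistant": {"truthfulness": 1.0, "novelty": 0.2, "civility": 0.6},
    "political_moderator":  {"truthfulness": 0.7, "novelty": 0.1, "civility": 1.0},
    "creative_storyteller": {"truthfulness": 0.2, "novelty": 1.0, "civility": 0.6},
}

def dominant_norm(domain: str) -> str:
    """Return the most heavily weighted norm for a deployment domain."""
    norms = DOMAIN_NORMS[domain]
    return max(norms, key=norms.get)
```

Under this sketch, a scientific assistant is governed primarily by truthfulness, a political moderator by civility, and a storyteller by novelty, mirroring the three domains contrasted earlier in the article.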

In addition, conversational agents may engage in a process of context construction and elucidation, cultivating more robust and mutually respectful dialogue over time.

Even if a person is unaware of the values that govern a particular conversational practice, an agent can still help them understand these norms by foregrounding them in conversation, making communication deeper and more productive for the human speaker.


Source: 51cto.com