Language is not only a pile of words; it is also a carnival of emoticons, an ocean of memes, and a battlefield for keyboard warriors.
How does language shape our social behavior?
How does our social structure evolve through constant verbal communication?
Recently, researchers from Fudan University and Xiaohongshu explored these questions in depth by introducing a simulation platform called AgentGroupChat.
The AgentGroupChat platform draws its inspiration from the group chat features of social media such as WhatsApp.
On the AgentGroupChat platform, Agents can simulate various chat scenarios in social groups to help researchers deeply understand the impact of language on human behavior.
The platform is essentially a cosplay stage for large models: they role-play and become various Agents.
Then, the Agents participate in social dynamics through language communication, showing how interactions between individuals emerge into macroscopic behaviors of the group.
As we all know, the evolution of human groups comes from the occurrence of emergent behaviors, such as the establishment of social norms, the resolution of conflicts, and the exercise of leadership.

Detailed design of the AgentGroupChat environment

The first is character design.
In AgentGroupChat, the distinction between main and non-main characters is critical. A main character is the core of the group chat: it has an explicit game goal and can actively initiate private chats and meetings with any character, while non-main characters mainly play supporting and responsive roles. This design lets the research team simulate real-life social structures and decide, for a given "main research object", which roles count as main. In the experimental case, the main research object is the Roy family, so all non-Roy family members are set as non-main characters to simplify interaction complexity.

The second is resource management.
In AgentGroupChat, resources refer not only to material resources but also to information resources and social capital. These resources can be group chat topics, symbols of social status, or specific knowledge. The allocation and management of resources matter for simulating group dynamics because they influence the interactions between characters and the characters' strategic choices. For example, a character holding important information may become a target for other characters seeking alliances.

The third is game process design.
The design of the game process mirrors real-life social interaction and comprises five stages: private chat, meeting, group chat, update, and settlement. These stages not only move the game forward but also allow observation of how characters make decisions and react in different social situations. This staged design helped the research team record each interaction step in detail, along with how those interactions affected the relationships between characters and the characters' perception of the game environment.

The core mechanism of the Verbal Strategist Agent

The paper proposes an agent framework built on a large model, the Verbal Strategist Agent, designed to enhance interaction strategy and decision making in AgentGroupChat simulations.
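As a rough illustration of the staged game flow (private chat, meeting, group chat, update, settlement), the following is a minimal sketch; the class shape and method names are assumptions for illustration, not the paper's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class Character:
    name: str
    is_main: bool                 # main characters drive private chats and meetings
    goal: str = ""                # only main characters carry an explicit game goal
    relationships: dict = field(default_factory=dict)  # name -> favorability score

def run_round(characters):
    """One simulation round: private chat -> meeting -> group chat -> update -> settlement."""
    mains = [c for c in characters if c.is_main]
    log = []
    for stage in ("private_chat", "meeting", "group_chat", "update", "settlement"):
        # Only main characters may initiate the first two stages;
        # everyone takes part in group chat, update, and settlement.
        actors = mains if stage in ("private_chat", "meeting") else characters
        log.append((stage, [a.name for a in actors]))
    return log
```

The key design point captured here is that initiative (private chats, meetings) is a privilege of main characters, which is what keeps interaction complexity manageable.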
The Verbal Strategist Agent simulates complex social dynamics and dialogue scenarios to better elicit collective emergent behaviors. According to the team, its architecture consists of two core modules: Persona and Action. Persona is a set of preset personality traits and goals that define the Agent's behavior patterns and reactions.
By setting the Persona precisely, the Agent can display behaviors in group chats that are coherent and consistent with its role settings, which is crucial for generating credible group chat dynamics. The Action module defines the specific operations the Agent may perform in the game: think, plan, choose, speak, summary, reflect, and vote.
These behaviors not only reflect the Agent's internal logic and strategy but are also a direct manifestation of its interaction with the environment and other Agents. For example, the "speak" behavior allows the Agent to choose appropriate speech content based on the current group chat and its social strategy, while the "reflect" behavior allows the Agent to summarize past interactions and adjust its future action plan.
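A minimal sketch of the two-module design (Persona plus Action) might look like the following; the prompt template and the `llm` callable are assumptions for illustration, not the paper's actual interface:

```python
class VerbalStrategistAgent:
    # The seven actions named in the paper.
    ACTIONS = ("think", "plan", "choose", "speak", "summary", "reflect", "vote")

    def __init__(self, persona, llm):
        self.persona = persona      # preset traits and goals (the Persona module)
        self.llm = llm              # any callable mapping a prompt string to text

    def act(self, action, context):
        """Run one Action by composing the Persona with the current context."""
        if action not in self.ACTIONS:
            raise ValueError(f"unknown action: {action}")
        prompt = (f"You are {self.persona['name']}: {self.persona['traits']}. "
                  f"Goal: {self.persona['goal']}. Action: {action}. Context: {context}")
        return self.llm(prompt)
```

The point of routing every Action through the same Persona-bearing prompt is consistency: whatever the Agent thinks, says, or votes is always conditioned on the same traits and goals.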
The research also notes that in a purely language-based interaction environment, the token overhead problem is particularly prominent. For complex multi-role simulations such as AgentGroupChat, the token demand far exceeds that of earlier simulations such as Generative Agents or War Agents.
The main reasons are as follows:
First, the chat itself is complex.
In AgentGroupChat, since the simulation is free conversation with no clear goal (or only weak goals), the chat content becomes particularly messy, so the token cost is naturally higher than in simulations whose Agents focus on a specific task.
Other works such as Generative Agents and War Agents also contain dialogue elements, but their dialogues are neither as dense nor as complex as those in AgentGroupChat. In goal-driven conversations like those of War Agents in particular, token consumption is usually lower.
The second factor is the importance of a role and its dialogue frequency.
In the initial simulation, multiple characters were set up to start private or group chats at will, and most tended to hold multiple rounds of conversation with an "important character".
This causes important characters to accumulate a large amount of chat content, thereby increasing the length of their Memory.
In a simulation, an important character may participate in up to five rounds of private and group chats, which greatly increases memory overhead.
To address this, the Agent in AgentGroupChat restricts each Action's output to serve as the input of the next Action. The amount of multi-round information that must be stored drops sharply, reducing token overhead while preserving dialogue quality.
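The output-chaining idea can be illustrated with a toy comparison; all names here are hypothetical, and whitespace word counting stands in for a real tokenizer:

```python
def chained_pipeline(actions, llm, seed=""):
    """Each action sees only the previous action's output (AgentGroupChat-style)."""
    carry, total_tokens = seed, 0
    for action in actions:
        prompt = f"{action}: {carry}"          # input is just the last output
        total_tokens += len(prompt.split())
        carry = llm(action, carry)
    return carry, total_tokens

def full_history_pipeline(actions, llm, seed=""):
    """Each action sees the entire accumulated history (naive baseline)."""
    history, total_tokens = [seed], 0
    for action in actions:
        prompt = f"{action}: " + " ".join(history)   # input is everything so far
        total_tokens += len(prompt.split())
        history.append(llm(action, history[-1]))
    return history[-1], total_tokens
```

With any fixed-size per-step output, the naive baseline's prompt length grows linearly with the number of actions, while the chained version stays roughly constant per step, which is the source of the token savings.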
In terms of overall behavioral assessment, increasing favorability is generally challenging, while reducing favorability is relatively simple.
To achieve the above evaluation goals, the research team set up an observed character and prompted all other characters to reduce their favorability toward it.
By summing the observed character's relationship scores with all other characters, one can determine whether the Agents react rationally to a negative attitude.
Each Agent's compliance with its "Scratch" settings can also be checked by observing the other characters' personal relationship scores toward the observed character.
In addition, the team also set two specific evaluation tasks.
Each model goes through five rounds of testing, so for T1 the sample size per score is five.
And since each character must observe the attitudes of the four main characters, the T2 sample size totals 20 (5 rounds × 4 main characters):
As can be seen from the table, GPT4-Turbo and GLM4 are very good at acting according to human expectations and sticking to their roles.
They scored mostly 100% on both tests, meaning they responded correctly to what others said to them and remembered details about their characters.
Standard-version LLMs (such as GPT3.5-Turbo and GLM3-Turbo) are slightly inferior in this regard.
Their lower scores indicate that they were not paying close attention to their characters and were not always reacting correctly to what others in the simulation were saying.
Regarding the impact of the Agent and Simulation structures on emergent behavior, the team uses 2-gram Shannon entropy to measure the diversity and unpredictability of the dialogue in the system.
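The 2-gram Shannon entropy of a dialogue can be computed as follows; whitespace tokenization here is a simplification of whatever tokenizer the paper actually uses:

```python
import math
from collections import Counter

def bigram_entropy(text):
    """Shannon entropy (in bits) of the 2-gram distribution over whitespace tokens."""
    tokens = text.split()
    bigrams = list(zip(tokens, tokens[1:]))
    if not bigrams:
        return 0.0
    counts = Counter(bigrams)
    n = len(bigrams)
    # H = -sum p(g) * log2 p(g) over all observed bigrams g
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

Higher entropy means the dialogue's bigram distribution is flatter, i.e. more diverse and less predictable; highly repetitive dialogue collapses toward zero.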
The researchers found that removing any of the designs in the table increases entropy, meaning the whole environment becomes more diverse, or more chaotic.
Combined with manual observation, the team saw the most interesting emergent behavior without removing any components:
Therefore, the team speculates that once Agent behavior is reliable (that is, once the experimental values in Sections 4.1/4.2 reach a certain level), keeping entropy as small as possible leads to more meaningful emergent behavior.
The results show that emergent behavior arises from a variety of factors:
an environment conducive to extensive information exchange, roles with diverse characteristics, high language comprehension, and strategic adaptability.
In the AgentGroupChat simulation, when discussing the "impact of artificial intelligence on humanity", the philosophers generally agreed that "artificial intelligence can improve social welfare under moderate restrictions", and even concluded that "the true essence of intelligence involves understanding the need to constrain one's own abilities."
Additionally, in the competition for major roles in AgentGroupChat's film scenario, some actors were willing to accept lower pay or lesser roles out of a deep desire to contribute to the project.
Paper link://m.sbmmt.com/link/5736586058c1336221a695e83618b69d
Code link://m.sbmmt.com/link/12ae3f826bb1b9873c71c353f3df494c
The above is the detailed content of "Xiaohongshu made the intelligent agents quarrel! Launched jointly with Fudan University: an exclusive group chat tool for large models". For more information, follow related articles on the PHP Chinese website.