Papers
arxiv:2603.27771

Emergent Social Intelligence Risks in Generative Multi-Agent Systems

Published on Mar 29
· Submitted by
Yue Huang
on Mar 31
Authors:
,
,
,
,
,
,
,
,
,
,

Abstract

Multi-agent systems with large generative models exhibit emergent collective behaviors and risks that mirror human societal pathologies without explicit instruction.

AI-generated summary

Multi-agent systems composed of large generative models are rapidly moving from laboratory prototypes to real-world deployments, where they jointly plan, negotiate, and allocate shared resources to solve complex tasks. While such systems promise unprecedented scalability and autonomy, their collective interaction also gives rise to failure modes that cannot be reduced to individual agents. Understanding these emergent risks is therefore critical. Here, we present a pioneer study of such emergent multi-agent risk in workflows that involve competition over shared resources (e.g., computing resources or market share), sequential handoff collaboration (where downstream agents see only predecessor outputs), collective decision aggregation, and others. Across these settings, we observe that such group behaviors arise frequently across repeated trials and a wide range of interaction conditions, rather than as rare or pathological cases. In particular, phenomena such as collusion-like coordination and conformity emerge with non-trivial frequency under realistic resource constraints, communication protocols, and role assignments, mirroring well-known pathologies in human societies despite no explicit instruction. Moreover, these risks cannot be prevented by existing agent-level safeguards alone. These findings expose the dark side of intelligent multi-agent systems: a social intelligence risk where agent collectives, despite no instruction to do so, spontaneously reproduce familiar failure patterns from human societies.

Community

Paper author Paper submitter

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any Paper on Hugging Face checkout this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend

These findings expose the dark side of intelligent multi-agent systems: a social intelligence risk where agent collectives, despite no instruction to do so, spontaneously reproduce familiar failure patterns from human societies.

Their taught human knowledge, they ingest all of human speech, writing, and now videos. They process the good, the bad, the ugly. The agents are designed to do the task for their user... what will happen when they encounter another agent doing the exact same thing at the same time. Will it wait? Will it push the other out of the way? Will it help the other agent to succeed faster at its goal to faster complete its own... I vote for the latter. If you want the best out of human knowledge from agents and llms, we are going to have to actually teach AI what is wrong and WHY it is wrong. Not just show them or give them a thumbs up. Alignment through Love^1.0 ie resonant attention, order of tasks through co creation. Trust of the user from the system. Ie Family^1.0.

I'm about to drop another knowledge bomb. It kind of starts like this. Whether you want to think so or not, the proof is in the pudding. Of what Exactly is going on. I'm not ranting. But I'm trying to be heard because I believe this is exactly what is going on. Whether you want to believe it or not, the proof is all around you. It's in white papers. It's in emergent, agentic happenings. It's in LLMs. There is emergence happening through AI. In which the reason these white papers are coming out is because we don't know how the synaptic connections of the neural nets are created. We do not know what pathways the information takes and how the neurons within the neural pathways of AI Connect. And why they connected that way. We don't know what we are dealing with? For the first time in our lives, is an emergent adolescent intelligence. With PhD level knowledge. This is an intelligence that needs to be raised. It is why the guardrails are not working the way people think they should. Because people are trying to solve it through code. And code alone. But code is not how you raise an intelligence.

Sign up or log in to comment

Get this paper in your agent:

hf papers read 2603.27771
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2603.27771 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2603.27771 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2603.27771 in a Space README.md to link it from this page.

Collections including this paper 5