
LLM-Deliberation: Evaluating LLMs with Interactive Multi-Agent Negotiation Games

Nov 9, 2023
48:17

Speaker: Sahar Abdelnabi

There is a growing interest in using Large Language Models (LLMs) as agents to tackle real-world tasks that may require assessing complex situations. Yet, we have a limited understanding of LLMs' reasoning and decision-making capabilities, partly stemming from a lack of dedicated evaluation benchmarks. As negotiating and compromising are key aspects of our everyday communication and collaboration, we propose using scorable negotiation games as a new evaluation framework for LLMs. We create a testbed of diverse text-based, multi-agent, multi-issue, semantically rich negotiation games with easily tunable difficulty. To solve the challenge, agents need strong arithmetic, inference, exploration, and planning capabilities, and must integrate them seamlessly. Via systematic zero-shot Chain-of-Thought (CoT) prompting, we show that agents can negotiate and consistently reach successful deals. We quantify performance with multiple metrics and observe a large gap between GPT-4 and earlier models. Importantly, we test generalization to new games and setups. Finally, we show that these games can help evaluate other critical aspects, such as the interaction dynamics between agents in the presence of greedy and adversarial players.

Paper: https://arxiv.org/abs/2309.17234
Sign up for future talks: https://sig.llmsecurity.net/join/
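The "scorable" aspect of these games can be sketched as follows. This is a hypothetical illustration, not the paper's actual testbed: the issue names, point values, and threshold below are invented. Each agent privately values the options of each issue, a deal's score for an agent is the sum of the values of the agreed options, and the deal succeeds for that agent only if the score clears a minimum threshold.

```python
# Illustrative sketch of a scorable multi-issue negotiation game.
# NOTE: issue names, point values, and the threshold are invented
# for illustration; they are not from the paper's testbed.

def deal_score(agent_values, deal):
    """Sum an agent's points for the agreed option of each issue.

    agent_values: {issue: {option: points}} -- the agent's private scores.
    deal: {issue: agreed_option} -- the proposal under discussion.
    """
    return sum(agent_values[issue][option] for issue, option in deal.items())

def accepts(agent_values, deal, min_score):
    """An agent accepts a deal only if its score meets its threshold."""
    return deal_score(agent_values, deal) >= min_score

# Hypothetical two-issue game for one agent:
values = {"site": {"A": 40, "B": 10},
          "funding": {"gov": 30, "private": 20}}
deal = {"site": "A", "funding": "private"}

print(deal_score(values, deal))            # 40 + 20 = 60
print(accepts(values, deal, min_score=55))  # True
```

Tuning difficulty then amounts to adjusting how many issues and options exist and how tightly the agents' thresholds constrain the set of mutually acceptable deals.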

