Can LLM personalities improve honeypot effectiveness?
Inducing Personality in LLM-Based Honeypot Agents: Measuring the Effect on Human-Like Agenda Generation
This paper introduces SANDMAN, a framework for creating "Deceptive Agents": AI-powered decoys that mimic human behavior in digital environments to mislead attackers. The agents use Large Language Models (LLMs) to generate realistic schedules and activities from assigned personality traits derived from the Five-Factor Model (OCEAN). Key findings relevant to LLM-based multi-agent systems: distinct personalities can be induced in LLMs and significantly influence task planning; LLMs can generate dynamic, believable simulations of human behavior; and memory together with a decision-making engine are important components of agent architectures for producing plausible activity sequences. The authors also identify multi-agent communication and dynamic adaptation to environment and context as directions for further work.
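To make the personality-conditioning idea concrete, here is a minimal sketch of how OCEAN trait scores might be rendered into a planning prompt for an LLM agent. The function name, score scale, and prompt wording are all illustrative assumptions, not SANDMAN's actual implementation.

```python
# Hypothetical sketch: turning Five-Factor (OCEAN) trait scores into a
# persona-conditioned planning prompt. All names and wording here are
# illustrative; the paper's real prompt design may differ.

OCEAN_TRAITS = ("openness", "conscientiousness", "extraversion",
                "agreeableness", "neuroticism")

def build_persona_prompt(traits):
    """Render trait scores (assumed 0.0-1.0) into a system prompt asking
    the LLM to plan a plausible daily schedule in character.
    Unspecified traits default to a moderate 0.5."""
    lines = []
    for trait in OCEAN_TRAITS:
        score = traits.get(trait, 0.5)
        level = "high" if score >= 0.66 else "low" if score <= 0.33 else "moderate"
        lines.append(f"- {trait}: {level} ({score:.2f})")
    trait_block = "\n".join(lines)
    return (
        "You are simulating an office worker with this personality profile "
        "(Five-Factor Model):\n"
        f"{trait_block}\n"
        "Generate an hour-by-hour work-day schedule consistent with these traits."
    )

if __name__ == "__main__":
    print(build_persona_prompt({"conscientiousness": 0.9, "extraversion": 0.2}))
```

In a full agent loop, the returned prompt would be sent to an LLM, and the resulting schedule fed to the decision-making engine that drives the decoy's observable activity.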