How can I make LLMs better tutors?
Towards the Pedagogical Steering of Large Language Models for Tutoring: A Case Study with Modeling Productive Failure
October 8, 2024
https://arxiv.org/pdf/2410.03781

This research paper explores how to make Large Language Models (LLMs) better tutors by steering their conversational strategies.
Key points for LLM-based multi-agent systems:
- LLMs lack inherent pedagogical skills: They are tuned to give immediately satisfying answers rather than to promote learning.
- The paper introduces StratL: An algorithm that guides LLMs to follow effective teaching strategies over multiple conversation turns.
- StratL uses "tutoring intents": Specific goals like giving hints or encouraging deeper thinking, which are then translated into prompts for the LLM.
- Focus on "Productive Failure": A teaching method where students explore solutions before being taught, which runs against LLMs' natural inclination to provide answers directly.
- Promising results, but limitations: StratL successfully steered LLMs toward the desired teaching style, but more work is needed on intent selection and on the social and scalability aspects of deployment.
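The intent-to-prompt idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the intent names, the toy selection policy, and the prompt template are all hypothetical stand-ins for StratL's actual taxonomy and algorithm.

```python
# Hypothetical sketch of pedagogically steered tutoring in the spirit of StratL.
# All names (TUTORING_INTENTS, select_intent, build_prompt) are invented for
# illustration; the paper's real intent set and selection logic differ.

TUTORING_INTENTS = {
    "give_hint": "Offer a small hint, but do not reveal the full solution.",
    "probe_thinking": "Ask the student to explain their current reasoning.",
    "encourage_exploration": (
        "Encourage the student to try another approach before seeing any answer."
    ),
}

def select_intent(turn: int, student_is_stuck: bool) -> str:
    """Toy policy echoing Productive Failure: early turns push exploration,
    and hints appear only later, once the student is genuinely stuck."""
    if turn < 2:
        return "encourage_exploration"
    if student_is_stuck:
        return "give_hint"
    return "probe_thinking"

def build_prompt(intent: str, student_message: str) -> str:
    """Translate the chosen tutoring intent into an instruction for the LLM."""
    return (
        f"You are a tutor. Pedagogical goal: {TUTORING_INTENTS[intent]}\n"
        f"Student said: {student_message}\n"
        "Respond according to the goal above; do not give the final answer directly."
    )

# Example: turn 0, student not stuck -> exploration intent, not a direct answer.
prompt = build_prompt(
    select_intent(turn=0, student_is_stuck=False),
    "I don't know where to start.",
)
print(prompt)
```

The key design point the paper argues for is exactly this separation: a strategy layer picks *what the tutor should try to achieve this turn*, and only then is that goal rendered into a prompt, rather than letting the LLM default to answering.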