Model Collapse Ends AI Hype
AI "thinking" and "reasoning" are illusions—here's what recent research says is really going on. By watching this talk, you'll become immune to most of the AI hype coming out of Silicon Valley. More AI Videos from George Montañez: https://www.youtube.com/playlist?list=PLd3oVxpqxB4PJPjWjeo4SZBmEfy5yj-a8 Abstract: Do Large Language Models (LLMs) think and reason? Are they perpetual information machines, producing endless coherent and correct text from finite training data? We explore how LLMs work and whether they produce rational thought and endless information. We show how theoretical considerations and experimental results from philosophy, statistics, information theory, and machine learning argue against the thesis that LLMs are rational, information-generating entities. Speaker: George D. Montañez, PhD Book a talk: Message me on LinkedIn. linkedin.com/in/georgemontanez Referenced papers: ------------------------------- Claude Shannon, "A Mathematical Theory of Communication" - https://people.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf Duc et al., "Mathematics with large language models as provers and verifiers" - https://arxiv.org/abs/2510.12829 Zhang et al., "On the Paradox of Learning to Reason from Data" - https://arxiv.org/abs/2205.11502 Mirzadeh et al., "GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models" - https://arxiv.org/abs/2410.05229 Pournemat et al., “Reasoning Under Uncertainty: Exploring Probabilistic Reasoning Capabilities of LLMs” - https://arxiv.org/abs/2205.11502 Palod et al., "Performative Thinking? The Brittle Correlation Between CoT Length and Problem Complexity" - https://arxiv.org/abs/2509.07339 Turpin et al., "Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting" - https://arxiv.org/abs/2305.04388 Chen et al., "Reasoning Models Don't Always Say What They Think" - https://arxiv.org/abs/2505.05410 Kambhampati et al., "Stop Anthropomorphizing Intermediate Tokens as Reasoning/Thinking Traces!" - https://arxiv.org/abs/2504.09762 Shumailov et al., "The Curse of Recursion: Training on Generated Data Makes Models Forget" - https://arxiv.org/abs/2305.17493 Keisha et al., "Knowledge Collapse in LLMs: When Fluency Survives but Facts Fail under Recursive Synthetic Training" - https://arxiv.org/abs/2509.04796 Gertsgrasser et al., "Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data" - https://arxiv.org/abs/2404.01413 Montañez, "The Famine of Forte: Few Search Problems Greatly Favor Your Algorithm" - https://arxiv.org/abs/2404.01413 Montañez et al., "The Futility of Bias-Free Learning and Search" - https://arxiv.org/abs/1907.06010 Demsbki, "The Law of Conservation of Information: Search Processes Only Redistribute Existing Information" - https://bio-complexity.org/ojs/index.php/main/article/viewArticle/BIO-C.2025.2 == Subscribe! It's free! https://www.youtube.com/@TheTheosTheory?sub_confirmation=1 If you like our videos, please consider subscribing and clicking like! Doing so recommends the video to others! #AI #aihype #llm #chatgpt #siliconvalley #conservationofinformation #modelcollapse #georgemontanez