*We're social animals, and people in general don't want to stray too far from the "Overton window" of their friends and family*
Interesting, but not a word about Chinese offerings. Is Google The New Leader of the AI Race?
*a moneyless system without balance becomes: ... Hunger Games with better software*
*When we focus on the old system, we feed it*
I used 'ai' to produce this summary about an LLM task with 'zero errors'. Sorry, not sorry. Paper: arxiv.org/pdf/2511.09030

The paper "Solving a Million-Step LLM Task with Zero Errors" by Elliot Meyerson et al. (arXiv:2511.09030) presents a framework called MAKER for enabling large language models (LLMs) to execute very long sequences of reasoning steps with zero errors. It addresses the fundamental challenge that LLMs have an inherent error rate that makes completing millions of dependent steps without failure nearly impossible when done naively.

Key elements of the approach include:

- Massively decomposing tasks into the smallest possible subtasks to minimize errors.
- Employing error correction and "red-flagging" of invalid outputs to discard potentially erroneous reasoning steps.
- Using a voting scheme called "first-to-ahead-by-k" to ensure the correctness of each step through multiple sampled outputs.
- Applying this strategy specifically to the Towers of Hanoi problem with 20 disks, which requires over one million steps, and successfully completing the task with zero errors.

The results demonstrate that scaling LLM-based systems to extremely long tasks is feasible by combining extreme decomposition and error correction, which contrasts with relying solely on continual LLM improvements. MAKER also suggests future research directions for automating decomposition and handling various types of steps and error correlations.

In summary, this work marks a breakthrough in achieving error-free long-horizon sequential reasoning with LLMs by architecting an ensemble-based, massively decomposed process, making it viable for safety-critical or large-scale AI applications [1][2].

Citations:
[1] Solving a Million-Step LLM Task with Zero Errors
[2] Solving a Million-Step LLM Task with Zero Errors
[3] computational costs increase with ensemble size and error ...
[4] Solving a Million-Step LLM Task with Zero Errors
[5] Cognizant Introduces MAKER: Achieving Million-Step, Zero ... https://www.reddit.com/r/mlscaling/comments/1owcnsn/cognizant_introduces_maker_achieving_millionstep/
[6] New paper on breaking down AI tasks into tiny steps for ...
[7] Future plans to integrate MAKER/MDAP abstractions?
[8] MAKER Achieves Million-Step, Zero-Error LLM Reasoning
[9] PyTorch: An Imperative Style, High-Performance Deep Learning Library | Semantic Scholar https://www.semanticscholar.org/paper/PyTorch:-An-Imperative-Style,-High-Performance-Deep-Paszke-Gross/3c8a456509e6c0805354bd40a35e3f2dbf8069b1
[10] Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training | Semantic Scholar https://www.semanticscholar.org/paper/Sleeper-Agents:-Training-Deceptive-LLMs-that-Safety-Hubinger-Denison/9363e8e1fe2be2a13b4d6f5fc61bbaed14ab9a23
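To make the "red-flagging" and "first-to-ahead-by-k" voting described above concrete, here is a minimal Python sketch. It is not the paper's code: `sample_model`, `red_flagged`, and the error rates are made-up stand-ins for a real LLM call on one micro-decomposed subtask; only the discard-then-vote mechanism follows what the summary describes.

```python
import random
from collections import Counter

# Toy stand-in for one micro-decomposed subtask: a "model" that returns the
# correct answer most of the time, a wrong answer sometimes, and occasionally
# a malformed output that a cheap validity check can red-flag and discard.
def sample_model(correct="move disk 1: A -> C", p_wrong=0.08, p_invalid=0.04):
    r = random.random()
    if r < p_invalid:
        return ""                      # malformed: caught by red-flagging
    if r < p_invalid + p_wrong:
        return "move disk 1: A -> B"   # plausible-looking but wrong
    return correct

def red_flagged(output):
    """Cheap validity check; anything flagged never gets a vote."""
    return not output or "->" not in output

def first_to_ahead_by_k(sample_fn, k=3, max_samples=200):
    """Resample one subtask until some answer leads all rivals by k votes."""
    votes = Counter()
    for _ in range(max_samples):
        out = sample_fn()
        if red_flagged(out):
            continue                   # discard instead of letting it vote
        votes[out] += 1
        (leader, lead), *rest = votes.most_common(2) + [(None, 0)]
        if lead - rest[0][1] >= k:
            return leader
    raise RuntimeError("no answer pulled ahead by k; escalate this step")

if __name__ == "__main__":
    random.seed(0)
    # A long chain of tiny steps, each settled independently by voting.
    answers = [first_to_ahead_by_k(sample_model, k=3) for _ in range(10_000)]
    print(len(answers), "steps completed,", len(set(answers)), "distinct answer(s)")
```

The value k=3 here is arbitrary; the point is only that per-step error shrinks rapidly as k grows, which is what lets extreme decomposition plus voting stretch to million-step chains.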
*The debt-pacifier. It's given to the masses to keep them quiet, to keep them sucking, to give them the illusion of nourishment and comfort, while it actually stunts their growth and keeps them in a state of perpetual, infantile dependence*