chain-of-thought
10 mentions across all digests
Chain-of-thought is a prompting and reasoning technique in which language models generate intermediate reasoning steps before producing a final answer. Active research directions include pruning redundant reflection steps and applying the technique to knowledge-editing tasks.
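The core mechanic can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical `call_model` function standing in for any LLM API; the canned completion shows the expected shape of a chain-of-thought response (intermediate steps, then a marked final answer).

```python
# Minimal chain-of-thought prompting sketch. `call_model` is a stub;
# a real implementation would call an actual LLM API here.

COT_PROMPT = (
    "Q: A farmer has 3 pens with 4 sheep each. He sells 5 sheep. "
    "How many sheep remain?\n"
    "A: Let's think step by step."
)

def call_model(prompt: str) -> str:
    # Placeholder completion illustrating the CoT output format:
    # intermediate steps first, then a clearly marked final answer.
    return (
        "Step 1: 3 pens x 4 sheep = 12 sheep.\n"
        "Step 2: 12 - 5 = 7 sheep.\n"
        "Final answer: 7"
    )

def extract_final_answer(completion: str) -> str:
    # The answer follows the intermediate reasoning steps.
    for line in completion.splitlines():
        if line.lower().startswith("final answer:"):
            return line.split(":", 1)[1].strip()
    return completion.strip()

completion = call_model(COT_PROMPT)
print(extract_final_answer(completion))  # 7
```

The "Let's think step by step" suffix and the final-answer parsing are the standard zero-shot CoT pattern; production systems vary the prompt and the answer-extraction logic.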
LLM Reasoning Is Latent, Not the Chain of Thought
Reasoning in large language models occurs internally as latent computation rather than in visible chain-of-thought outputs, challenging conventional assumptions about model interpretability.
Learning to Edit Knowledge via Instruction-based Chain-of-Thought Prompting
Chain-of-thought reasoning enables language models to edit their own factual knowledge through structured prompting, avoiding expensive retraining cycles.
Graph-Based Chain-of-Thought Pruning for Reducing Redundant Reflections in Reasoning LLMs
A graph-based pruning method removes redundant reflection steps from chain-of-thought traces, improving inference efficiency while preserving answer quality.
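The general idea behind graph-based pruning can be sketched as follows. This is an illustration only, not the paper's algorithm (which this digest does not specify): treat reasoning steps as graph nodes, connect near-duplicate steps with edges, and keep one representative per connected component. Token-set Jaccard similarity stands in for whatever similarity measure the actual method uses.

```python
# Illustrative graph-based pruning of redundant reasoning steps.
# Nodes = steps; edges = high token overlap; keep the earliest step
# in each connected component (union-find), preserving order.

def jaccard(a: str, b: str) -> float:
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def prune_redundant_steps(steps, threshold=0.6):
    n = len(steps)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Add an edge (union) between any two sufficiently similar steps.
    for i in range(n):
        for j in range(i + 1, n):
            if jaccard(steps[i], steps[j]) >= threshold:
                parent[find(j)] = find(i)

    # Emit one representative per component, in original order.
    seen, kept = set(), []
    for i, step in enumerate(steps):
        root = find(i)
        if root not in seen:
            seen.add(root)
            kept.append(step)
    return kept

trace = [
    "Step 1: 3 * 4 = 12 sheep in total.",
    "Step 2: 12 - 5 = 7 sheep remain.",
    "Reflection: 12 - 5 = 7 sheep remain.",  # redundant reflection
]
print(prune_redundant_steps(trace))  # keeps Step 1 and Step 2 only
```

The pruned trace is shorter to decode while the distinct reasoning content survives, which is the efficiency/quality trade the summary describes.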
Inclusion-of-Thoughts: Mitigating Preference Instability via Purifying the Decision Space
Inclusion-of-Thoughts improves LLM accuracy on reasoning tasks by pre-filtering implausible multiple-choice options, yielding substantial gains in arithmetic and commonsense reasoning with minimal computational overhead.
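The pre-filtering step can be sketched generically. This is not the paper's exact algorithm (the digest gives only the high-level idea): score each candidate option for plausibility, drop low scorers, and run the expensive reasoning step only over the purified decision space. The `plausibility` function here is a hard-coded stand-in for an LLM likelihood or yes/no judgment.

```python
# Generic option pre-filtering sketch: shrink the decision space
# before reasoning by discarding implausible multiple-choice options.

def plausibility(question: str, option: str) -> float:
    # Placeholder scores; a real system would query the LLM here
    # (e.g., via token likelihoods or a plausibility prompt).
    scores = {"9": 0.05, "7": 0.80, "12": 0.10, "banana": 0.01}
    return scores.get(option, 0.0)

def purify_options(question: str, options, keep: int = 2):
    # Rank options by plausibility and keep only the top `keep`.
    ranked = sorted(options, key=lambda o: plausibility(question, o),
                    reverse=True)
    return ranked[:keep]

question = "A farmer has 12 sheep and sells 5. How many remain?"
options = ["9", "7", "12", "banana"]
print(purify_options(question, options))  # ['7', '12']
```

With only the surviving options passed to the reasoning stage, the extra scoring pass is cheap relative to a full chain-of-thought over every option, which matches the "minimal computational overhead" claim.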
Shorter, but Still Trustworthy? An Empirical Study of Chain-of-Thought Compression
An empirical study finds that chain-of-thought reasoning can be compressed without sacrificing answer quality, yielding significant inference-efficiency gains for language models.