SwiReasoning: Switch-Thinking in Latent and Explicit for Pareto-Superior Reasoning LLMs

Georgia Tech, Microsoft

Figure 1: Pass@1 accuracy under unlimited token budgets. On mathematics and STEM reasoning benchmarks, SwiReasoning yields improvements of up to +2.8% and +2.0%, respectively.

Figure 2: Token efficiency (accuracy per token compared to standard CoT) under limited token budgets. Across reasoning LLM families and sizes, SwiReasoning brings average efficiency improvements of up to +79%.
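A natural way to read this metric (our interpretation of "accuracy per token compared to standard CoT"; the paper's exact definition may differ) is the ratio of accuracy-per-token values:

$$
\text{TokenEff}(m) = \frac{\text{Acc}_m / \text{Tokens}_m}{\text{Acc}_{\text{CoT}} / \text{Tokens}_{\text{CoT}}}
$$

Under this reading, a +79% average improvement corresponds to a ratio of about 1.79.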

👀 TL;DR

SwiReasoning is a training-free method that makes reasoning LLMs Pareto-superior by dynamically switching between explicit and latent thinking, with a switch-count control to suppress overthinking.

🎥 SwiReasoning Demo

Video 1: The same question solved by the same reasoning LLM, with and without SwiReasoning (6 s vs. 1 min).

🌟 Abstract

Recent work shows that, beyond discrete reasoning through explicit chain-of-thought steps, which are limited by the boundaries of natural language, large language models (LLMs) can also reason continuously in latent space, allowing richer information per step and thereby improving token efficiency. Despite this promise, latent reasoning still faces two challenges, especially in training-free settings: 1) purely latent reasoning broadens the search distribution by maintaining multiple implicit paths, which diffuses probability mass, introduces noise, and impedes convergence to a single high-confidence solution, thereby hurting accuracy; and 2) overthinking persists even without explicit text, wasting tokens and degrading efficiency. To address these issues, we introduce SwiReasoning, a training-free framework for LLM reasoning that features two key innovations: 1) SwiReasoning dynamically switches between explicit and latent reasoning, guided by block-wise confidence estimated from entropy trends in next-token distributions, to balance exploration and exploitation and promote timely convergence. 2) By limiting the maximum number of thinking-block switches, SwiReasoning curbs overthinking and improves token efficiency across varying problem difficulties. On widely used mathematics and STEM benchmarks, SwiReasoning consistently improves average accuracy by 1.5%–2.8% across reasoning LLMs of different model families and scales. Furthermore, under constrained budgets, SwiReasoning improves average token efficiency by 56%–79%, with larger gains as budgets tighten.

🔍 SwiReasoning Pipeline

Figure 3: SwiReasoning framework. (a) Dynamic mode switching alternates between explicit and latent thinking based on block-wise confidence estimated from entropy trends. (b) A switch count control mechanism limits the maximum number of thinking-block transitions, suppressing overthinking before the final answer.
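To make the two mechanisms above concrete, here is a minimal, illustrative Python sketch of confidence-gated mode switching with a switch-count cap. It is not the released implementation: the block size, the toy `logits_fn` interface, and the use of sampling as a stand-in for latent (soft-embedding) steps are all assumptions for illustration.

```python
import math
import random
from typing import Callable, List


def entropy(probs: List[float]) -> float:
    """Shannon entropy of a next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)


def swi_style_decode(
    logits_fn: Callable[[List[int]], List[float]],  # token ids -> next-token probs (hypothetical interface)
    max_tokens: int = 256,
    block_size: int = 16,      # tokens per thinking block (illustrative)
    max_switches: int = 4,     # cap on thinking-block switches to curb overthinking
    eos_id: int = 0,
) -> List[int]:
    tokens: List[int] = []
    mode = "latent"            # assume decoding starts by exploring in latent space
    switches = 0
    prev_entropy = None

    while len(tokens) < max_tokens:
        block_entropies = []
        for _ in range(block_size):
            probs = logits_fn(tokens)
            block_entropies.append(entropy(probs))
            if mode == "explicit":
                # Exploit: commit to the single most likely token.
                next_id = max(range(len(probs)), key=probs.__getitem__)
            else:
                # Explore: the real method feeds a probability-weighted (soft)
                # embedding back into the model; sampling is a crude stand-in.
                next_id = random.choices(range(len(probs)), weights=probs)[0]
            tokens.append(next_id)
            if next_id == eos_id:
                return tokens

        # Block-wise confidence from the entropy trend: falling entropy means the
        # model is converging (switch to explicit, exploitative decoding); rising
        # entropy means it is still uncertain (keep exploring in latent space).
        cur_entropy = sum(block_entropies) / len(block_entropies)
        if prev_entropy is not None and switches < max_switches:
            target = "explicit" if cur_entropy < prev_entropy else "latent"
            if target != mode:
                mode, switches = target, switches + 1
        prev_entropy = cur_entropy
        if switches >= max_switches:
            mode = "explicit"  # switch budget spent: finish the answer explicitly
    return tokens


# Toy usage with a uniform "model" (purely for demonstration):
# print(swi_style_decode(lambda ids: [0.25, 0.25, 0.25, 0.25], max_tokens=8))
```

The design point mirrors the caption: the entropy trend acts as the block-wise confidence signal, and the switch-count cap forces the model to stop alternating and commit to an explicit answer once the budget is spent.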

📊 Pass@1 accuracy

Table 1: Comparison of SwiReasoning with CoT (sampling), CoT (greedy decoding), and Soft Thinking on mathematics and STEM benchmarks. SwiReasoning improves accuracy by +2.17% on average.

📈 Token efficiency

Figure 4: Token efficiency comparisons. SwiReasoning achieves the highest token efficiency throughout all token budgets in 13 out of 15 evaluations, with an efficiency improvement of +84% over CoT on average.

📈 Pass@k accuracy

Figure 5: Pass@k accuracy evaluation with Qwen3-8B on the AIME 2024 and 2025 benchmarks. On average, SwiReasoning reaches its maximum reasoning accuracy 50% earlier than CoT.
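For context, pass@k is presumably computed with the standard unbiased estimator (an assumption; the summary above does not spell it out): with n samples per problem and c of them correct,

$$
\text{pass@}k = \mathbb{E}_{\text{problems}}\!\left[\,1 - \binom{n-c}{k}\Big/\binom{n}{k}\,\right].
$$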

✨ BibTeX

@misc{shi2025swireasoningswitchthinkinglatentexplicit,
      title={SwiReasoning: Switch-Thinking in Latent and Explicit for Pareto-Superior Reasoning LLMs}, 
      author={Dachuan Shi and Abedelkadir Asi and Keying Li and Xiangchi Yuan and Leyan Pan and Wenke Lee and Wen Xiao},
      year={2025},
      eprint={2510.05069},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2510.05069}, 
}