AI News

LLMs Can Now Reason Beyond Language: Researchers Introduce Soft Thinking to Replace Discrete Tokens with Continuous Concept Embeddings - MarkTechPost

May 28, 2025

## LLMs Evolve: Reasoning Beyond Language Barriers

Researchers are pushing the boundaries of Large Language Models (LLMs), enabling them to reason in ways that transcend the limitations of language. The core innovation involves shifting from *discrete tokens* to *continuous concept embeddings*, a process dubbed "soft thinking."

> This advancement marks a significant step towards aligning LLM reasoning with human cognition, which relies on abstract, non-verbal concepts.

### The Problem with Token-Based Reasoning

Current LLMs typically generate text one token at a time, constrained by a predefined vocabulary. This token-by-token approach restricts:

* The model's expressive power.
* The range of reasoning pathways it can explore, particularly in complex or ambiguous situations.
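The token-by-token constraint can be made concrete with a toy decoding step. This is a minimal sketch, not any model's actual code: the vocabulary, embedding values, and logits are invented for illustration, and greedy argmax stands in for whatever sampling strategy a real decoder uses.

```python
# Toy vocabulary and embedding table (hypothetical values for illustration).
VOCAB = ["the", "cat", "sat", "mat"]
# One 2-d embedding row per vocabulary entry.
EMBED = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]

def discrete_step(logits):
    """Token-by-token step: collapse the whole distribution to a single
    discrete token (greedy argmax here; sampling is the usual alternative)
    and feed only that token's embedding back. Every other candidate
    pathway is discarded, no matter how close its score was."""
    token_id = max(range(len(logits)), key=lambda i: logits[i])
    return token_id, EMBED[token_id]

logits = [0.1, 2.0, 1.9, 0.2]   # "cat" barely beats "sat"
token_id, emb = discrete_step(logits)
print(VOCAB[token_id])          # prints "cat"; the near-tied "sat" path is lost
```

The hard commit at each step is exactly what the article flags: once "cat" is chosen, the almost-equally-likely "sat" pathway cannot influence anything downstream.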

Standard Chain-of-Thought (CoT) methods, for example, are limited by this token-centric approach, forcing a single reasoning path at each step.

### Soft Thinking: A New Paradigm

"Soft thinking" aims to overcome these limitations by:

* Replacing discrete tokens with continuous concept embeddings.
* Allowing for more flexible and nuanced reasoning.

This approach allows LLMs to move beyond the rigid constraints of language and engage in more human-like reasoning processes. The implications for AI are significant, promising more robust and adaptable models capable of tackling complex problems.
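One plausible reading of a "continuous concept embedding" is a probability-weighted mixture of all token embeddings, fed back in place of a single token's embedding. The sketch below illustrates that reading with the same invented toy embeddings; it is an assumption about the mechanism, not the paper's actual implementation.

```python
import math

# Toy embedding table (hypothetical values), one row per vocabulary entry:
# "the", "cat", "sat", "mat".
EMBED = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def soft_thinking_step(logits):
    """Soft-thinking step (assumed mechanism): instead of committing to one
    token, feed back a probability-weighted mixture of ALL token embeddings,
    a continuous "concept embedding" that keeps every candidate reasoning
    pathway alive in proportion to its probability."""
    probs = softmax(logits)
    dim = len(EMBED[0])
    return [sum(p * e[d] for p, e in zip(probs, EMBED)) for d in range(dim)]

logits = [0.1, 2.0, 1.9, 0.2]      # "cat" and "sat" nearly tied
concept = soft_thinking_step(logits)
# concept blends the "cat" and "sat" embeddings almost equally,
# rather than discarding one of them as discrete decoding would
```

Because the mixture is differentiable and lives in embedding space rather than in the vocabulary, ambiguous steps no longer force a premature commitment to a single path.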