Abstract

As of August 2025, Large Language Models (LLMs) have achieved extraordinary proficiency in natural language processing, multimodal data integration, autonomous agentic behavior, and complex reasoning, yet they remain far from Artificial General Intelligence (AGI). This whitepaper proposes that a critical missing component is a subconscious layer: a latent, implicit processing mechanism inspired by the human subconscious, which popular estimates credit with roughly 95% of cognitive activity, spanning intuitive processing, memory consolidation, pattern recognition, and situational awareness.

1. Introduction

The pursuit of Artificial General Intelligence (AGI), defined as computational systems capable of matching or surpassing human intelligence across arbitrary and diverse tasks, has reached a critical juncture in 2025. Aggregated expert forecasts place a 50% likelihood on AGI arriving by 2060, while optimistic projections suggest competent AGI could emerge as early as 2030.

This whitepaper argues that the absence of a subconscious layer — a mechanism for implicit, background processing akin to the human subconscious — represents a fundamental barrier to achieving AGI. In human cognition, the subconscious is commonly estimated to govern roughly 95% of decisions, enabling intuitive responses, habit formation, memory consolidation, pattern recognition, and situational awareness.

2. Defining AGI

Artificial General Intelligence is characterized by the following core capabilities:

  • Adaptability — learning novel tasks without retraining
  • Deep Understanding — causality, abstraction, and context beyond surface-level patterns
  • Autonomy — self-directed goal-setting
  • Generalization — proficiency across diverse domains
  • Ethical Reasoning — value-aligned decisions in ambiguous scenarios

3. Current LLM Capabilities

3.1 Advanced Reasoning

Modern LLMs employ sophisticated reasoning strategies to tackle complex problems:

  • Chain-of-Thought (CoT) — step-by-step reasoning reported to reach roughly 80% accuracy on some mathematical benchmarks (see the prompting sketch after this list)
  • Tree-of-Thought (ToT) — exploring branching solution paths for multi-step problems
  • Meta-Prompting — iterative prompt optimization for improved output quality
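
To make the style of these strategies concrete, here is a minimal chain-of-thought prompting sketch in Python. The worked example and the question are invented for illustration, and the resulting string can be fed to any completion endpoint.

```python
# Minimal chain-of-thought prompting sketch. Only the prompt structure matters;
# the returned string can be sent to any LLM completion API.

def build_cot_prompt(question: str) -> str:
    # One worked example showing explicit intermediate steps, followed by the
    # target question and a cue that elicits step-by-step reasoning.
    example = (
        "Q: A train travels 60 km in 1.5 hours. What is its average speed?\n"
        "A: Distance is 60 km and time is 1.5 hours. "
        "Speed = distance / time = 60 / 1.5 = 40 km/h. The answer is 40 km/h.\n\n"
    )
    return example + f"Q: {question}\nA: Let's think step by step."

prompt = build_cot_prompt("If 3 pens cost $4.50, how much do 7 pens cost?")
print(prompt)
```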

3.2 Agentic Systems

Autonomous agents built on LLMs now handle coding, planning, and tool use. However, they lack intrinsic motivation and rely entirely on predefined objectives.
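
The skeletal loop below makes that dependence explicit; `call_llm` and `TOOLS` are hypothetical stand-ins for a real model API and tool registry, and the key observation is that `objective` is always supplied from outside the loop.

```python
# Skeletal LLM agent loop. `call_llm` and `TOOLS` are hypothetical stand-ins;
# note that the objective is injected from outside -- the agent executes goals
# but never originates them.

TOOLS = {
    "search": lambda query: f"(stub) results for {query!r}",
    "calculator": lambda expr: str(eval(expr)),  # demo only; never eval untrusted input
}

def call_llm(prompt: str) -> str:
    # Placeholder for a real inference call. A real model would return an
    # action string such as "search: latest MoE routing papers" or "final: ...".
    return "final: done"

def run_agent(objective: str, max_steps: int = 5) -> str:
    context = f"Objective: {objective}\n"
    for _ in range(max_steps):
        action = call_llm(context)
        if action.startswith("final:"):
            return action.removeprefix("final:").strip()
        tool_name, _, arg = action.partition(":")
        result = TOOLS.get(tool_name.strip(), lambda _a: "unknown tool")(arg.strip())
        context += f"Action: {action}\nObservation: {result}\n"
    return "step budget exhausted"

print(run_agent("summarize this week's arXiv postings on MoE routing"))
```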

4. LLM Limitations

Despite remarkable progress, current LLMs suffer from fundamental limitations that prevent the leap to general intelligence:

  • Hallucinations — generating plausible but factually incorrect outputs
  • Static Learning — retraining required to incorporate new knowledge
  • Lack of Persistent Memory — stateless sessions with no continuity across interactions
  • No True Innovation — recombination of training data rather than genuine creative leaps
  • No Grounded World Knowledge — absence of embodied understanding of physical reality

5. The Human Subconscious

The human subconscious is commonly estimated to process roughly 95% of decisions implicitly, operating through several interconnected mechanisms often associated with the limbic system:

  • Automatic Processing — habits, emotional responses, and reflexes executed without conscious deliberation
  • Memory Consolidation — transferring and strengthening memories during rest and sleep
  • Pattern Recognition — detecting regularities and anomalies below the threshold of awareness
  • Situational Awareness — maintaining a continuous model of the environment and one's place within it
  • Creativity and Intuition — generating novel associations and gut-level judgments from accumulated experience

6. Proposal: Subconscious Layer for LLMs

6.1 Conceptual Framework

We propose a latent space module operating in parallel to the conscious output generation pathway. This subconscious layer would maintain persistent internal state, perform background inference, and modulate the model's overt responses through implicit context.
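
As an illustrative interface only (every name and dimension below is hypothetical, not an existing API), such a layer could be sketched as a module that folds each turn's context into a persistent latent state and emits an implicit context vector for the conscious pathway:

```python
# Illustrative interface for a subconscious layer; all names and dimensions
# are hypothetical. The layer keeps persistent latent state, updates it in the
# background, and emits implicit context vectors for the conscious pathway.

import torch
import torch.nn as nn

class SubconsciousLayer(nn.Module):
    def __init__(self, d_latent: int = 256, d_model: int = 1024):
        super().__init__()
        # Persistent internal state, intended to be carried across turns.
        self.register_buffer("latent_state", torch.zeros(1, d_latent))
        self.update_net = nn.GRUCell(d_model, d_latent)  # background inference
        self.to_implicit = nn.Linear(d_latent, d_model)  # implicit context head

    def step(self, context_embedding: torch.Tensor) -> torch.Tensor:
        # Fold the new context into the persistent latent state, then return
        # an implicit prompt vector that modulates overt generation.
        self.latent_state = self.update_net(context_embedding, self.latent_state)
        return self.to_implicit(self.latent_state)

layer = SubconsciousLayer()
implicit = layer.step(torch.randn(1, 1024))  # (1, 1024) vector of implicit context
```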

6.2 Technical Implementation

The subconscious layer combines two complementary architectures (a routing sketch follows the list):

  • Latent Diffusion Model — a Variational Autoencoder (VAE) trained on interaction trajectories to learn compressed representations of contextual patterns and user intent
  • Mixture-of-Experts (MoE) — specialized expert networks for pattern recognition, memory retrieval, goal consistency, and ethical alignment, activated dynamically based on the current processing context
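
As a concrete illustration of the MoE half, the sketch below routes a latent vector to the top-k of four specialist experts whose roles mirror the list above. All dimensions, roles, and the per-sample dispatch loop are assumptions chosen for readability, not a prescribed design.

```python
# Minimal Mixture-of-Experts routing sketch (dimensions and expert roles are
# illustrative). A gating network scores four specialists and only the top-k
# are evaluated, which is what keeps the layer's compute cost low.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SubconsciousMoE(nn.Module):
    EXPERT_ROLES = ("pattern", "memory", "goal", "ethics")

    def __init__(self, d: int = 256, k: int = 2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d, len(self.EXPERT_ROLES))
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))
            for _ in self.EXPERT_ROLES
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, d) latent context, e.g. from the VAE encoder.
        scores = self.gate(z)                       # (batch, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)  # keep only top-k experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(z)
        for b in range(z.size(0)):                  # simple per-sample dispatch
            for slot in range(self.k):
                expert = self.experts[int(idx[b, slot])]
                out[b] += weights[b, slot] * expert(z[b : b + 1]).squeeze(0)
        return out

moe = SubconsciousMoE()
print(moe(torch.randn(2, 256)).shape)  # torch.Size([2, 256])
```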

6.3 Subconscious-Conscious Attention Bridge

A cross-attention mechanism bridges the subconscious and conscious layers. Subconscious outputs take the form of implicit prompt vectors — dense representations that modulate the conscious output without appearing as explicit text. This enables the model to incorporate intuitive context, remembered patterns, and goal-relevant information into its responses seamlessly.
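
A minimal version of the bridge could look like the following sketch, with all shapes and names assumed for illustration: conscious hidden states form the queries, subconscious latent vectors the keys and values, and a residual connection keeps the effect a modulation rather than a replacement.

```python
# Sketch of a subconscious-conscious attention bridge (shapes illustrative).
# Conscious hidden states attend over subconscious latent vectors, injecting
# implicit context that never surfaces as explicit text.

import torch
import torch.nn as nn

class AttentionBridge(nn.Module):
    def __init__(self, d_model: int = 1024, d_latent: int = 256, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(
            d_model, n_heads, kdim=d_latent, vdim=d_latent, batch_first=True
        )
        self.norm = nn.LayerNorm(d_model)

    def forward(self, hidden: torch.Tensor, latents: torch.Tensor) -> torch.Tensor:
        # hidden:  (batch, seq, d_model) conscious decoder states
        # latents: (batch, m, d_latent)  implicit prompt vectors
        bridged, _ = self.attn(query=hidden, key=latents, value=latents)
        return self.norm(hidden + bridged)  # residual: modulate, don't replace

bridge = AttentionBridge()
h = torch.randn(2, 16, 1024)  # conscious states
z = torch.randn(2, 4, 256)    # four implicit prompt vectors
print(bridge(h, z).shape)     # torch.Size([2, 16, 1024])
```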

6.4 Training Framework

The subconscious layer is trained through three complementary objectives (a combined-loss sketch follows the list):

  1. Predictive Self-Supervision — learning to predict future interaction states from current context
  2. Latent State Reinforcement Learning (LSRL) — optimizing latent representations through reward signals tied to task success and coherence
  3. Consistency Regularization — ensuring stable, non-contradictory internal representations across extended interactions
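
One way to combine the three objectives is sketched below; the predictor network, the simplified REINFORCE-style surrogate for LSRL, and the loss weights are all assumptions chosen for brevity.

```python
# Sketch of the three training objectives folded into one loss. The predictor,
# the simplified LSRL surrogate, and the weights are illustrative assumptions.

import torch
import torch.nn.functional as F

def subconscious_loss(z_t, z_next, predictor, reward, lambdas=(1.0, 1.0, 0.1)):
    """z_t, z_next: latent states at steps t and t+1, shape (batch, d).
    predictor: network mapping z_t to a prediction of z_next.
    reward: (batch,) task-success / coherence signal."""
    w_pred, w_rl, w_cons = lambdas

    # 1. Predictive self-supervision: anticipate the next latent state.
    loss_pred = F.mse_loss(predictor(z_t), z_next.detach())

    # 2. Latent-state RL: a REINFORCE-style surrogate that pushes a stand-in
    #    latent log-density toward high-reward trajectories.
    log_prob = -0.5 * (z_next ** 2).sum(dim=-1)
    loss_rl = -(reward.detach() * log_prob).mean()

    # 3. Consistency regularization: discourage abrupt latent jumps.
    loss_cons = F.mse_loss(z_next, z_t.detach())

    return w_pred * loss_pred + w_rl * loss_rl + w_cons * loss_cons

d = 64
predictor = torch.nn.Linear(d, d)
loss = subconscious_loss(torch.randn(8, d), torch.randn(8, d), predictor, torch.rand(8))
print(loss)  # scalar tensor combining all three objectives
```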

7. Implicit Self-Prompting & Situational Awareness

The subconscious layer enables a novel capability: implicit self-prompting. Rather than relying solely on user-provided input, the system generates its own contextual cues, decomposes complex queries into tractable sub-problems, and adapts its internal prompts dynamically based on evolving context.

This mechanism also supports persistent situational awareness — building internal world models and maintaining state across sessions. The system can track goals, remember relevant context from prior interactions, and adjust its behavior based on an evolving understanding of the user and environment.
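
A toy loop can make the idea tangible; `decompose`, `answer`, and the awareness dictionary below are hypothetical placeholders, and the point is only that the sub-prompts originate from the system's own persistent state rather than from additional user input.

```python
# Hypothetical implicit self-prompting loop. `decompose` and `answer` stand in
# for model-internal operations; sub-prompts come from the system's own state.

def decompose(query: str, awareness: dict) -> list[str]:
    # Placeholder: a real system would derive sub-queries from latent state.
    return [f"{query} (step {i + 1}, given goal={awareness['goal']!r})" for i in range(2)]

def answer(sub_prompt: str) -> str:
    return f"(stub answer to: {sub_prompt})"

def respond(query: str, awareness: dict) -> str:
    notes = [answer(sp) for sp in decompose(query, awareness)]  # self-generated cues
    awareness["history"].append(query)  # persistent situational state across turns
    return " ".join(notes)

state = {"goal": "help the user ship the release", "history": []}
print(respond("why is the build failing?", state))
```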

8. Benefits & Challenges

Benefits

  • Enhanced reasoning — deeper, more nuanced responses informed by implicit context
  • Autonomy — self-directed goal management and task decomposition
  • Ethical alignment — dedicated expert modules for value-aligned decision-making
  • Efficiency — MoE architecture activates only relevant experts, reducing computational overhead
  • Innovation — latent space exploration enables novel combinations beyond training data

Challenges Addressed

  • Interpretability — auditable latent states provide transparency into subconscious processing
  • Training Complexity — multi-task learning framework distributes the optimization burden across complementary objectives
  • Scalability — hardware-aware classical scaling strategies ensure practical deployment

9. Quantum Computing Potential

Quantum computing offers compelling advantages for subconscious layer processing:

  • Parallel Exploration — superposition enables simultaneous evaluation of multiple latent states
  • Enhanced Pattern Recognition — quantum interference amplifies relevant patterns while suppressing noise
  • Optimized Memory — quantum annealing accelerates memory retrieval and consolidation in large state spaces

With the field currently in the Noisy Intermediate-Scale Quantum (NISQ) era, a hybrid classical-quantum approach represents the most practical path to near-term integration.
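
As a toy illustration of the parallel-exploration point (assuming Qiskit is installed; nothing here is specific to the subconscious layer), a small register in uniform superposition indexes exponentially many candidate latent states at once:

```python
# Toy superposition demo with Qiskit: n qubits index 2**n candidate latent
# states simultaneously, the intuition behind "parallel exploration".

from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

n = 3  # 3 qubits index 2**3 = 8 candidate latent states
qc = QuantumCircuit(n)
qc.h(range(n))  # Hadamard on every qubit -> uniform superposition

state = Statevector.from_instruction(qc)
print(state.probabilities())  # eight equal probabilities of 0.125
```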

10. Roadmap

  • Short-Term (2025–2027) — prototype classical subconscious layers, validate on reasoning benchmarks
  • Mid-Term (2027–2030) — scale to multimodal inputs, develop quantum prototypes, target 50% human-level performance on general tasks
  • Long-Term (2030–2035) — deploy competent AGI systems, integrate fault-tolerant quantum computing for full subconscious processing

11. Conclusion

The absence of a subconscious layer confines current LLMs to narrow intelligence. By integrating Latent Diffusion, Mixture-of-Experts, and a Subconscious-Conscious Attention Bridge, we can enable implicit self-prompting, persistent memory, situational awareness, and autonomous action — the foundational capabilities required for general intelligence.

Through iterative prototyping and interdisciplinary collaboration spanning neuroscience, cognitive science, and machine learning, we can transform Large Language Models into true general intelligences by 2035.
