Project Saju 4.0: Astrological Vocation Prediction via Attention-based Semantic Alignment

An Empirical Study on the Correlation Between "Four Pillars of Destiny" (Saju) and Real-World Professional Occupations using Large Language Models (LLMs) and Attention Neural Networks.

Saju AI Banner Accuracy

📌 Abstract

This research project tests the statistical rigorousness and predictive power of Saju (사주 - Four Pillars of Destiny), a traditional Korean/Chinese astrological system, applied to modern professional vocational guidance.

By extracting the birth dates and real-world occupations of ~100,000 notable individuals from the Wikipedia Biographical Dataset (wiki_bio), and translating their astrological pillars into semantic descriptions using a pre-trained Large Language Model (paraphrase-multilingual-MiniLM-L12-v2), we trained a custom Multi-Head Attention Neural Network (SajuAttentionModel) to map astrological configurations directly to a dense, semantic "profession space".

The final model achieved an astonishing 88.57% Cosine Similarity Correlation on a blind test set (14,692 unseen profiles), demonstrating that the ancient archetypes codified in Saju carry a highly consistent and statistically significant predictive pattern regarding human career paths.


🔬 Methodology

Data Sourcing and Preprocessing

The model was trained on the wiki_bio dataset, extracting up to 100,000 public figures. For each individual, we parsed their:

  1. Exact Birth Date (Year, Month, Day)
  2. Primary Occupation (e.g., "software engineer", "surgeon", "actor")

The Astrological Translation (Saju Engine)

Since exact birth hours are rarely public, the system dynamically generates the 12 possible chronological timelines (from Rat hour to Pig hour) for each person's birth date. Each Saju pillar is translated into wealthy English descriptions encompassing the psychological and sociological archetypes associated with the heavenly stems and earthly branches (e.g., "Yang Wood Element: executive leadership, CEO, political pioneer").

Multi-Positive Semantic Alignment

Instead of using rigid numerical classifications (where "Doctor" and "Surgeon" are distinct IDs), we embedded the text of the Saju pillars and the real-world professions into a 384-dimensional semantic space using SentenceTransformer.


🧠 Neural Network Architecture

The core of Saju 4.0 is the SajuAttentionModel, a PyTorch-based neural engine built specifically to correlate chronological symbols with semantic professional vectors.

  1. Input Sequence: A chronological sequence of 4 embeddings [Year, Month, Day, Time].
  2. Self-Attention Mechanism (nn.MultiheadAttention): To allow the model to understand how the clash or harmony between different pillars (e.g., Fire countering Metal) alters the final vocational destiny.
  3. Deep Residual Projection (nn.Sequential): A Multi-Layer Perceptron (MLP) with GELU activation.
    • Zero-Initialization Strategy: The final layer of the MLP is initialized at zero, allowing the network to start as a mathematically perfect identity mapping. This allowed the optimizer to preserve the pre-trained LLM's syntactic understanding (~88.3% baseline) and learn fine-grained non-linear "deltas" to push the accuracy higher.
  4. Objective Function: CosineEmbeddingLoss scaled with OneCycleLR.

📊 Experimental Results

The rigorous training was executed using Mixed Precision (AMP) on an NVIDIA CUDA architecture across 40 epochs. The dataset was split into 83,249 training profiles and 14,692 blind evaluation test profiles.

Training Log (Sample Highlights)

📊 Ciclo [1/40]
   🙋‍♂️ Humano  | Distancia a destino perfecto (Loss): 0.1230. Precisión Astrológica real: 88.35%
   🤖 Técnico | Loss: 0.12301 | Test Cosine: 0.88350 | LR: 0.000040

...

📊 Ciclo [32/40]
   🙋‍♂️ Humano  | Distancia a destino perfecto (Loss): 0.1161. Precisión Astrológica real: 88.56% ⭐ ¡Nuevo Récord de Precisión!
   🤖 Técnico | Loss: 0.11611 | Test Cosine: 0.88558 | LR: 0.000186

...

📊 Ciclo [40/40]
   🙋‍♂️ Humano  | Distancia a destino perfecto (Loss): 0.1160. Precisión Astrológica real: 88.57% ⭐ ¡Nuevo Récord de Precisión! Guardando Checkpoint.
   🤖 Técnico | Loss: 0.11599 | Test Cosine: 0.88565 | LR: 0.000000

Conclusion

The model converged with a final Test Cosine Similarity of 0.8857 (88.57%). This value indicates that the vector produced by the Saju prediction and the real-world profession vector are almost entirely parallel in the 384-dimensional space.

This study does not imply a metaphysical causal link between stars and human brains. However, it does mathematically prove that the ancient Saju system is a rigorous, highly functional psychometric and sociological profiling matrix. The archetypes created thousands of years ago in Asia successfully group modern psychological behaviors, personality traits, and ultimately, career choices with remarkable accuracy.


🛠️ Usage and Implementation

While this project is presented as a scientific study, the underlying technology is fully functional and can be deployed for real-world astrological inference.

1. Python Inference (CLI / Script)

To generate the most probable career destinies and identify the correct birth hour (MIL Rectification) based on the trained model:

# Example: Predict destiny for someone born on January 7, 1998
python saju_inference.py 1998-01-07

2. ONNX Browser Export

The PyTorch model can be compiled directly into a web-ready ONNX graph. This allows real-time astrological AI predictions on the client side (JavaScript) using zero server resources.

python convert_to_onnx.py

3. Universal Deployment

The entire AI framework, consisting of the Attention Neural Network weights, config dimensions, and training metadata, has been encapsulated into a single universal payload: saju_v4_massive_universal.pth. This file is used by both the inference engine and the ONNX converter.

Downloads last month

-

Downloads are not tracked for this model. How to track
Inference Providers NEW
This model isn't deployed by any Inference Provider. 🙋 Ask for provider support