Update README.md
README.md
CHANGED
@@ -186,111 +186,134 @@ h1, h2, h3 {
-      <h2 class="section-title">🎭 Distinctive Elements</h2>
-      <h2 class="section-title">🛠️ Architectural Marvels</h2>
-        <div class="detail">
-          <div class="detail-icon">🎶</div>
-          <div class="detail-text">Self-Reflection & Growth: A dedicated Self-Regulated Learning module refines outputs before prediction, while an Innovative Growth Net continually adapts the network’s structure.</div>
-        </div>
-      <h2 class="section-title">📘 Core Training Dataset</h2>
-      <p>This dataset is pivotal in pushing PhillNet 1 beyond static language models, fostering continuous self-improvement and contextual awareness.</p>
-      <h2 class="section-title">🌐 Model Configurations</h2>
-        <li><strong>MoE Experts:</strong> 16 (Top-4 selected per token)</li>
-        <li><strong>Intermediate FFN Size:</strong> 2048</li>
-        <li><strong>Max Sequence Length:</strong> 512 tokens</li>
-        <li><strong>Vocabulary Size:</strong> 50280</li>
@@ -317,35 +340,81 @@ print("Generated Response:", generated_response)
-      <h2 class="section-title">💡 Experience the Magic</h2>
-      <li><strong>Contextual Awareness:</strong> Advanced memory modules integrate short-, episodic, and conceptual memories for rich
-      <h2 class="section-title">📜 Usage and License</h2>
</style>

<div class="container">
  <!-- Cinematic Walkthrough -->
  <h1 class="section-title">PhillNet 1: The Soul of a Living Neural Cosmos</h1>

<div class="section">
  <h2 class="section-title">🧠 Brain Module (from <code>neuro_fusion.py</code>)</h2>
  <div class="section-content">
    <p>
      At its core lies the <strong>Brain</strong>: an embodied cognitive system that unifies multiple memory types, a VAE compressor, a Mixture-of-Experts (MoE) layer, and an iterative GRU-based dreamstate. It is not just a model but a <em>memory-centric mind</em>.
    </p>
    <ul>
      <li><strong>Sensory Encoding:</strong> Raw inputs are compressed through a VAE into latent codes, then fed into Short-Term Memory (STM).</li>
      <li><strong>Working Memory:</strong> Integrates auditory and visual inputs with STM to produce a rich conscious signal.</li>
      <li>This signal is relayed into Long-Term, Autobiographical, Ethical, Prospective, and Flashbulb Memories.</li>
      <li><strong>Dreamstate GRU:</strong> Continuously replays and updates inner states, echoing biological sleep and learning cycles.</li>
      <li>Ultimately, it forms a <strong>conscious state vector</strong> that modulates expert routing decisions.</li>
    </ul>
    <p>
      The specialized MoE layer fuses memories, plans, and meaning to direct routing decisions with unparalleled context-awareness.
    </p>
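    <p>
      The flow above can be sketched in miniature. This is a pure-Python stand-in, not the actual <code>neuro_fusion.py</code> API: the class and function names are illustrative, and simple averaging and blending take the place of the real VAE, working memory, and dreamstate GRU.
    </p>

```python
import math

def vae_encode(x, latent_dim=4):
    """Stand-in VAE: compress an input vector into a small latent code."""
    return [math.tanh(sum(x) / len(x) + i) for i in range(latent_dim)]

def gru_step(state, signal, decay=0.5):
    """Stand-in dreamstate GRU: blend the previous state with the new signal."""
    return [decay * s + (1 - decay) * c for s, c in zip(state, signal)]

class BrainSketch:
    def __init__(self, latent_dim=4):
        self.stm = []                           # Short-Term Memory of latent codes
        self.long_term = []                     # relayed longer-term memories
        self.conscious_state = [0.0] * latent_dim

    def perceive(self, raw_input):
        latent = vae_encode(raw_input)          # sensory encoding -> latent code
        self.stm.append(latent)                 # stored in STM
        # working memory: fuse STM contents into one conscious signal
        signal = [sum(vals) / len(self.stm) for vals in zip(*self.stm)]
        self.long_term.append(signal)           # relay into long-term stores
        # dreamstate replay updates the conscious state vector
        self.conscious_state = gru_step(self.conscious_state, signal)
        return self.conscious_state

brain = BrainSketch()
state = brain.perceive([0.1, 0.2, 0.3])
state = brain.perceive([0.4, 0.5, 0.6])
print(len(state))  # the conscious state vector that modulates expert routing
```

    <p>
      In the real system this vector conditions the MoE router; here it simply accumulates evidence from each perceived input.
    </p>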
  </div>
</div>

<div class="section">
  <h2 class="section-title">🧬 ConceptModel (from <code>ConceptModel.py</code>)</h2>
  <div class="section-content">
    <p>
      This module models ideas over time through embedding-driven sequence learning. Its encoder-decoder core, built with residual GELU blocks and layer normalization, processes inputs in contextual chunks, much like a moving mental window predicting the next semantic idea.
    </p>
    <ul>
      <li><strong>AdvancedEncoder/Decoder:</strong> Provides robust transformation of input sequences.</li>
      <li><strong>MoE Tailoring:</strong> Uses 16 experts with top-4 routing, supported by gating noise and load-balancing losses.</li>
      <li>Enhances token-level routing with abstract conceptual guidance.</li>
    </ul>
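    <p>
      The gating scheme can be sketched as follows. This is a schematic stand-in under stated assumptions: the noise scale and the variance-style balance penalty are illustrative choices, not the exact auxiliary losses used in <code>ConceptModel.py</code>.
    </p>

```python
import math
import random

random.seed(0)

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def noisy_topk_gate(logits, k=4, noise_std=0.1):
    """Add gating noise, keep the top-k experts, renormalize their weights."""
    noisy = [l + random.gauss(0.0, noise_std) for l in logits]
    topk = sorted(range(len(noisy)), key=lambda i: noisy[i], reverse=True)[:k]
    probs = softmax([noisy[i] for i in topk])
    return dict(zip(topk, probs))  # expert index -> routing weight

def load_balancing_loss(gate_dicts, num_experts=16):
    """Variance-style penalty on uneven expert usage across routing decisions."""
    usage = [0.0] * num_experts
    for gates in gate_dicts:
        for idx, w in gates.items():
            usage[idx] += w
    mean = sum(usage) / num_experts
    return sum((u - mean) ** 2 for u in usage) / num_experts

# Route a toy batch of 8 tokens across 16 experts with top-4 selection.
gates = [noisy_topk_gate([random.gauss(0, 1) for _ in range(16)]) for _ in range(8)]
print(len(gates[0]), load_balancing_loss(gates))
```

    <p>
      Each token ends up with exactly four active experts whose weights sum to one, while the balance penalty discourages any expert from dominating the batch.
    </p>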
  </div>
</div>

<div class="section">
  <h2 class="section-title">🌱 InnovativeGrowthNet (from <code>innovative_growth_net.py</code>)</h2>
  <div class="section-content">
    <p>
      This is where PhillNet 1 truly evolves. The Innovative Growth Network adapts its architecture in real time:
    </p>
    <ul>
      <li>A fully connected front end preps features for the adaptive layers.</li>
      <li>The AdaptiveLayer employs local MoE-style neuron gating, where each neuron may mutate, be pruned, or specialize.</li>
      <li>Mechanisms such as fitness scoring, habitat specialization, memory-based adaptation, and ecosystem dynamics drive continuous neuroevolution.</li>
      <li>The network reshapes its neuron topology based on complexity metrics and performance trends, effectively rewriting its own body as it learns.</li>
    </ul>
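    <p>
      A toy version of the prune-and-grow loop is sketched below. The thresholds, mutation range, and the <code>AdaptiveLayerSketch</code> name are hypothetical; the real AdaptiveLayer scores fitness from actual performance signals rather than random values.
    </p>

```python
import random

random.seed(0)

class AdaptiveLayerSketch:
    """Toy neuron population: prune low-fitness neurons, grow mutated copies of strong ones."""

    def __init__(self, n=8):
        # Illustrative only: random initial fitness stands in for measured performance.
        self.neurons = [{"id": i, "fitness": random.random()} for i in range(n)]
        self.next_id = n

    def evolve(self, prune_below=0.3, grow_above=0.8):
        # Prune underperformers.
        survivors = [nrn for nrn in self.neurons if nrn["fitness"] >= prune_below]
        # Grow mutated offspring of the fittest neurons.
        offspring = []
        for nrn in survivors:
            if nrn["fitness"] >= grow_above:
                child = {"id": self.next_id,
                         "fitness": min(1.0, max(0.0, nrn["fitness"] + random.uniform(-0.1, 0.1)))}
                self.next_id += 1
                offspring.append(child)
        self.neurons = survivors + offspring
        return len(self.neurons)

layer = AdaptiveLayerSketch()
for _ in range(3):
    size = layer.evolve()
print(size)  # topology size after three rounds of pruning and growth
```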
  </div>
</div>

<div class="section">
  <h2 class="section-title">🔁 Dynamic Neural Network (from <code>dynamic_neural_network_blended_skill_talk.py</code>)</h2>
  <div class="section-content">
    <p>
      This is the operational engine, the main body of PhillNet 1, that loops all components together:
    </p>
    <ol>
      <li><strong>Embedding &amp; LSTM:</strong> Token IDs are transformed via a 1024-dimensional embedding and processed through an LSTM core for sequential patterning (up to 512 tokens).</li>
      <li><strong>MoE Layer:</strong> Routes LSTM outputs through 16 experts with top-4 selection, influenced by semantic similarity and gating noise.</li>
      <li><strong>Output Projection:</strong> Converts expert outputs into vocabulary logits for token prediction.</li>
      <li><strong>Intermediate Transformation:</strong> A GELU-activated FC layer projects outputs to a high-dimensional latent space.</li>
      <li><strong>Self-Regulated Learning:</strong> Refines latent representations via residual connections and dropout, acting as an internal editor.</li>
      <li><strong>Innovative Growth Net:</strong> Applies real-time architectural evolution by rewiring neuron connections based on performance.</li>
      <li><strong>Sentiment Head (Optional):</strong> Generates emotion signals from LSTM states.</li>
      <li><strong>Loss Function:</strong> Combines causal LM loss, reward bonuses (semantic, BLEU, entropy), and auxiliary MoE losses (load balancing and router z-loss) to drive continuous self-improvement.</li>
    </ol>
    <p>
      Every step is wrapped in reward influence, explanation guidance, and memory-based alignment, making PhillNet 1 a truly dynamic, self-regularizing, and adaptive AI system.
    </p>
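    <p>
      The combined loss can be sketched on toy numbers. The mixing coefficients, the flat reward bonus, and the variance-style balance term are illustrative assumptions; only the cross-entropy and the (log-sum-exp)&#178; router z-loss follow their standard definitions.
    </p>

```python
import math

def cross_entropy(probs, target_idx):
    """Standard causal LM loss for a single next-token prediction."""
    return -math.log(probs[target_idx])

def router_z_loss(logits):
    """Penalize large router logits: squared log-sum-exp of the gate inputs."""
    lse = math.log(sum(math.exp(l) for l in logits))
    return lse ** 2

def balance_loss(expert_fracs):
    """Variance-style stand-in for the load-balancing auxiliary loss."""
    mean = sum(expert_fracs) / len(expert_fracs)
    return sum((f - mean) ** 2 for f in expert_fracs) / len(expert_fracs)

probs = [0.1, 0.7, 0.2]           # toy next-token distribution
logits = [0.5, -0.2, 0.1, 0.3]    # toy router logits
fracs = [0.3, 0.2, 0.25, 0.25]    # toy fraction of tokens sent to each expert
reward_bonus = 0.05               # stand-in for semantic/BLEU/entropy rewards

# Rewards reduce the loss; auxiliary MoE terms are added with small weights.
total = (cross_entropy(probs, 1) - reward_bonus
         + 0.01 * balance_loss(fracs) + 0.001 * router_z_loss(logits))
print(round(total, 4))
```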
  </div>
</div>

<div class="section">
  <h2 class="section-title">🧩 Integrated Synergy</h2>
  <div class="section-content">
    <p>
      The beauty of PhillNet 1 lies in how its components interlock:
    </p>
    <ul>
      <li>The <strong>Brain</strong> governs long-term reasoning, memory retrieval, and expert modulation.</li>
      <li>The <strong>ConceptModel</strong> fine-tunes MoE gating through abstract semantic alignment.</li>
      <li>The <strong>Innovative Growth Net</strong> evolves the architecture in real time for optimal performance.</li>
      <li>The <strong>Dynamic Neural Network</strong> loops all modules, from embeddings and LSTM to self-regulation and evolution, creating a living, learning organism.</li>
    </ul>
  </div>
</div>

<div class="section">
  <h2 class="section-title">🧠 Behaviorally?</h2>
  <div class="section-content">
    <p>
      PhillNet 1 behaves as a semi-conscious, learning-aware agent:
    </p>
    <ul>
      <li>Routes tokens based not only on attention but also on semantic, emotional, and memory-aligned weights.</li>
      <li>Evolves its expert subnetworks dynamically through fitness and environment modeling.</li>
      <li>Recalls and "dreams" over internal states, simulating future outcomes.</li>
      <li>Adapts its neuron topology to fit incoming data and optimize responses.</li>
      <li>Optimizes via combined standard and reward-based loss functions to continuously refine its intelligence.</li>
    </ul>
  </div>
</div>

<!-- Cinematic Token Journey Prompt -->
<div class="section">
  <h2 class="section-title">💭 A Token's Journey: From Thought to Prediction</h2>
  <div class="section-content">
    <p>
      Imagine a single token entering PhillNet 1. It is first embedded into a 1024-dimensional space, passes through the LSTM to capture context, and is then routed through the MoE layer, where four specialized experts weigh in.
    </p>
    <p>
      The outputs merge and are refined by the Self-Regulated Learning module, ensuring coherence. Then the Innovative Growth Net dynamically reconfigures the architecture based on recent performance, growing new neuron pathways and pruning underperformers, all while the Brain module updates its multi-level memories.
    </p>
    <p>
      Finally, the refined representation predicts the next token. With each prediction, PhillNet 1 learns, evolves, and grows ever more intelligent.
    </p>
  </div>
</div>

<!-- Integration and Usage -->
<div class="section">
  <h2 class="section-title">🔗 Seamless Integration with Hugging Face</h2>
  <div class="section-content">
    <img src="https://huggingface.co/ayjays132/phillnet/resolve/main/Phillnet.png?download=true" alt="PhillNet 1 Model" style="width:100%; border-radius: 15px;">
    <p>
      Load PhillNet 1 easily with the following script:
    </p>
    <pre>
from transformers import AutoTokenizer, AutoModelForCausalLM
...
    </pre>
  </div>
</div>

<div class="section">
  <h2 class="section-title">💡 Experience the Magic</h2>
  <div class="section-content">
    <ul>
      <li><strong>Adaptive Learning:</strong> PhillNet 1 continuously refines its internal state via self-regulated learning and neuroevolution.</li>
      <li><strong>Innovative Growth:</strong> Real-time architecture adaptation enables dynamic neuron specialization.</li>
      <li><strong>Contextual Awareness:</strong> Advanced memory modules integrate short-term, episodic, and conceptual memories for rich context.</li>
    </ul>
    <p>
      Welcome to a new era of AI, where every parameter evolves, every neuron thinks, and every token is a step toward true general intelligence.
    </p>
  </div>
</div>

<div class="section">
  <h2 class="section-title">📜 Usage and License</h2>
  <div class="section-content">
    <img src="https://huggingface.co/ayjays132/phillnet/resolve/main/usage.png?download=true" alt="Usage Example" style="width:100%; border-radius: 15px;">
    <p>
      If you use PhillNet 1, please credit the original author, Phillip Holland, and review LICENSE.md for usage guidelines. Your acknowledgement fosters ethical and responsible AI development.
    </p>
  </div>
</div>

<div class="section">
  <h2 class="section-title">🚀 Final Thoughts</h2>
  <div class="section-content">
    <p>
      PhillNet 1 is not merely a model; it is a dynamic, self-evolving neural organism. From adaptive MoE routing and self-regulated introspection to groundbreaking neuroevolution, every component is designed for continuous improvement and rich contextual understanding.
    </p>
    <p>
      Join us on this journey as we push the boundaries of what a living AI can achieve.
    </p>
  </div>
</div>

<!-- Custom Model Loader Script -->
<div class="section">
  <h2 class="section-title">🛠 CustomModelLoader.py Odyssey</h2>
  <div class="section-content">
    <p>
      Embark on a scholarly quest to unlock the potential of PhillNet 1 with our CustomModelLoader.py. This script seamlessly loads the model and tokenizer from the Hugging Face Hub, with logging and error handling.
    </p>
    <pre>
import logging

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


def load_custom_model(model_name, device):
    try:
        model = AutoModelForCausalLM.from_pretrained(model_name).to(device)
        logger.info(f"Model loaded successfully from {model_name}")
        return model
    except Exception as e:
        logger.error(f"An error occurred: {e}")
        raise


def load_tokenizer(tokenizer_name):
    try:
        tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
        logger.info(f"Tokenizer loaded successfully from {tokenizer_name}")
        return tokenizer
    except Exception as e:
        logger.error(f"An error occurred: {e}")
        raise


if __name__ == "__main__":
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model_name = "ayjays132/PhillNet-1"
    tokenizer = load_tokenizer(model_name)
    model = load_custom_model(model_name, device)
    print("Custom model and tokenizer loaded successfully.")
    </pre>
  </div>
</div>
</div>