---
license: cc0-1.0
task_categories:
- text-generation
- question-answering
language:
- en
size_categories:
- 10K<n<100K
tags:
- cmmc
- nist
- cybersecurity
- compliance
- security-controls
- SP-800-171
pretty_name: CMMC Training Dataset - Comprehensive Variant
---
CMMC Training Dataset - Comprehensive Variant
Dataset Description
This is the Comprehensive variant of the CMMC (Cybersecurity Maturity Model Certification) training dataset, containing 11,279 training examples generated from 381 NIST publications relevant to CMMC compliance.
Dataset Characteristics
- Total Examples: 11,279 (9,023 train / 2,256 validation)
- Source Documents: 381 NIST publications
- CMMC Levels Covered: Level 1, Level 2, Level 3
- CMMC Domains: All 17 domains
- Format: JSONL with chat-formatted messages
- Embeddings: 1536-dimensional vectors (OpenAI text-embedding-3-small)
- License: Public Domain (NIST documents are US Government works)
What Makes This "Comprehensive"?
The Comprehensive variant is the most complete of the three dataset variants, drawing on virtually every NIST publication relevant to CMMC compliance:
- 381 source documents from the NIST CSRC library
- 11,279 training examples covering all aspects of CMMC
- Maximum context and coverage for exhaustive knowledge
- Research-grade completeness for academic and enterprise use
Document Categories
Core Foundation (14 documents):
- NIST SP 800-171 R3, 800-172 R3, 800-53 R5
- Assessment procedures and implementation guides
Security Controls & Implementation (~120 documents):
- Detailed guides for all 17 CMMC domains
- Implementation examples and case studies
- Assessment methodologies
Specialized Topics (~247 documents):
- Cloud security (800-series)
- IoT and mobile security
- Supply chain risk management
- Incident response and forensics
- Cryptography and PKI
- Privacy engineering
- Security automation
- Continuous monitoring
- Vulnerability management
- And much more...
CMMC Level Distribution
- All Levels: 10,430 examples (92.5%)
- Level 1 (Foundational): 392 examples (3.5%)
- Level 2 (Advanced): 309 examples (2.7%)
- Level 3 (Expert): 148 examples (1.3%)
CMMC Domain Distribution
All 17 CMMC domains are comprehensively covered:
- Access Control (AC): 2,042 examples
- Awareness and Training (AT): 2,043 examples
- Audit and Accountability (AU): 2,042 examples
- Configuration Management (CM): 1,922 examples
- Identification and Authentication (IA): 2,043 examples
- Incident Response (IR): 2,042 examples
- Maintenance (MA): 2,043 examples
- Media Protection (MP): 2,043 examples
- Personnel Security (PS): 2,043 examples
- Physical Protection (PE): 2,043 examples
- Planning (PL): 2,042 examples
- Risk Assessment (RA): 2,043 examples
- Security Assessment (CA): 2,043 examples
- Supply Chain Risk Management (SR): 1,947 examples
- System and Communications Protection (SC): 2,042 examples
- System and Information Integrity (SI): 2,042 examples
- System and Services Acquisition (SA): 2,042 examples
Note: Domain counts represent the number of examples tagged with each domain. Since examples can be tagged with multiple domains, the sum of domain counts exceeds the total number of examples (11,279).
Average: ~2,030 examples per domain based on the counts above (well balanced)
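To reproduce or audit these per-domain tallies from the JSONL files, a simple count over the metadata is enough. The sketch below is a minimal example; it assumes the cmmc_domain metadata field holds either a single domain name or a list of names for multi-tagged examples (the exact shape may vary by example).

```python
import json
from collections import Counter

# Minimal sketch: tally training examples per CMMC domain from the metadata.
# Assumes 'cmmc_domain' is either a single domain name or a list of names.
domain_counts = Counter()
with open('train.jsonl', 'r') as f:
    for line in f:
        metadata = json.loads(line).get('metadata', {})
        domains = metadata.get('cmmc_domain', [])
        if isinstance(domains, str):
            domains = [domains]
        domain_counts.update(domains)

for domain, count in domain_counts.most_common():
    print(f"{domain}: {count}")
```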
Source Documents (381 total)
The Comprehensive variant includes:
FIPS Publications (~15 documents):
- Cryptographic standards
- Hashing and encryption algorithms
- PKI and digital signatures
SP 800-Series Security Guides (~340 documents):
- Core compliance (171, 172, 53)
- Implementation guides
- Risk management
- Security controls
- Assessment procedures
- Specialized topics
Interagency Reports (IR) (~26 documents):
- Research findings
- Technical analysis
- Emerging threats
- Implementation case studies
This represents virtually the entire NIST CMMC-relevant catalog available as of 2025.
Dataset Structure
JSONL Training Files
Each example follows the chat format:
```json
{
  "messages": [
    {
      "role": "system",
      "content": "You are a cybersecurity expert specializing in CMMC..."
    },
    {
      "role": "user",
      "content": "What are the cryptographic requirements for CMMC Level 3?"
    },
    {
      "role": "assistant",
      "content": "According to NIST SP 800-172 R3 and FIPS 140-3..."
    }
  ],
  "metadata": {
    "source": "NIST SP 800-172 R3",
    "cmmc_level": "3",
    "cmmc_domain": "System and Communications Protection",
    "type": "cmmc_requirement"
  }
}
```
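For fine-tuning, the messages list can be rendered into a single training string with the target model's chat template. The snippet below is a minimal sketch using the Hugging Face transformers API; the model name is purely illustrative, and any model whose chat template accepts system messages will work.

```python
import json
from transformers import AutoTokenizer

# Sketch: serialize one chat-formatted example with a model's chat template.
# "Qwen/Qwen2.5-7B-Instruct" is an illustrative choice, not a requirement.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

with open("train.jsonl", "r") as f:
    example = json.loads(f.readline())

text = tokenizer.apply_chat_template(
    example["messages"],
    tokenize=False,               # return the formatted string, not token ids
    add_generation_prompt=False,  # the assistant answer is already present
)
print(text[:500])
```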
Vector Embeddings
Pre-computed embeddings using OpenAI's text-embedding-3-small model:
- Format: Parquet files with 1536-dimensional vectors
- Files: embeddings_train.parquet, embeddings_valid.parquet
- Size: ~140 MB total (estimated)
- Cost: ~$0.05 (≈2.6M tokens embedded; see Dataset Statistics)
FAISS Indexes
Ready-to-use vector similarity search indexes:
- L2 distance indexes: faiss_train_l2.index, faiss_valid_l2.index
- Cosine similarity indexes: faiss_train_cosine.index, faiss_valid_cosine.index
Q&A Generation Strategies
Examples were generated using 5 complementary strategies:
- Section-based Q&A: Questions from document sections
- Control-based Q&A: NIST control requirements (3.1.1 format)
- CMMC-specific Q&A: Level-focused questions (L1/L2/L3)
- Domain-specific Q&A: Questions per CMMC domain
- Semantic chunking: General content with context preservation
Weighted sampling was applied across document tiers: core documents (5x), balanced documents (3x), supplementary documents (2x); a minimal illustration of this weighting follows below.
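The weighting can be pictured as simple weighted random sampling over document tiers. The snippet below is only an illustration of that idea, not the actual generation pipeline; the document list is a hypothetical subset, while the 5x/3x/2x weights come from the description above.

```python
import random

# Illustration of the 5x/3x/2x tier weighting described above (not the
# actual generation pipeline). The document list is a hypothetical subset.
TIER_WEIGHTS = {"core": 5, "balanced": 3, "supplementary": 2}

documents = [
    {"name": "NIST SP 800-171 R3", "tier": "core"},
    {"name": "NIST SP 800-137",    "tier": "balanced"},
    {"name": "NIST IR 8259",       "tier": "supplementary"},
]

weights = [TIER_WEIGHTS[doc["tier"]] for doc in documents]
for doc in random.choices(documents, weights=weights, k=10):
    print(doc["name"])  # core documents appear most often on average
```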
Use Cases
The Comprehensive dataset is ideal for:
- Enterprise-grade CMMC assistants: Maximum knowledge coverage
- Research and development: Complete NIST CMMC corpus
- Exhaustive RAG systems: Every relevant document included
- Academic studies: Research-grade completeness
- Specialized consulting: Coverage of niche/emerging topics
- Long-term knowledge base: Future-proof comprehensive training
Dataset Statistics
Source Documents: 381
Total Examples: 11,279
Training Examples: 9,023 (80%)
Validation Examples: 2,256 (20%)
Avg Example Length: ~234 tokens
Total Tokens Embedded: 2,639,168
Embedding Cost: $0.05 USD
Domain Coverage: Complete (all 17 domains)
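If you filter the dataset and want to recompute token statistics for your subset, counting with the embedding model's tokenizer is straightforward. The sketch below assumes the cl100k_base encoding used by text-embedding-3-small and counts the concatenated message contents; the exact text that was embedded upstream may differ slightly.

```python
import json
import tiktoken

# Sketch: recompute token statistics. Assumes the cl100k_base encoding
# (used by text-embedding-3-small); the embedded text may differ slightly.
enc = tiktoken.get_encoding("cl100k_base")

total_tokens, n_examples = 0, 0
with open("train.jsonl", "r") as f:
    for line in f:
        example = json.loads(line)
        text = " ".join(m["content"] for m in example["messages"])
        total_tokens += len(enc.encode(text))
        n_examples += 1

print(f"{n_examples} examples, {total_tokens} tokens, "
      f"avg {total_tokens / n_examples:.0f} tokens/example")
```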
Advantages Over Other Variants
vs. Core (14 docs, 1.2K examples):
- 27x more source documents
- 9x more training examples
- Covers specialized/emerging topics not in core
- Better for niche use cases and edge scenarios
vs. Balanced (71 docs, 2.8K examples):
- 5x more source documents
- 4x more training examples
- Deeper coverage of each domain
- Includes research reports and specialized guides
- Better for exhaustive knowledge requirements
Trade-offs:
- Longer training time (4x vs. Balanced)
- Higher computational cost
- May include redundant/overlapping content
- Potential for overfitting on less-critical topics
When to Use Comprehensive
Choose Comprehensive if:
- You need maximum coverage of CMMC topics
- You're building an enterprise-grade knowledge system
- You need coverage of specialized/emerging topics (IoT, cloud, supply chain)
- Training time/cost is not a constraint
- You want research-grade completeness
- You're building a long-term knowledge base
Choose Balanced if:
- You need good coverage but faster training
- You want equal domain representation
- You're resource-constrained
- You need production-ready performance
Choose Core if:
- You only need SP 800-171/172 fundamentals
- You want fastest training possible
- You're focused on core CMMC certification only
Example Topics Unique to Comprehensive
This variant includes specialized content not found in smaller datasets:
- Cloud Security: NIST 800-series cloud guidance
- IoT Security: Embedded systems and IoT frameworks
- Supply Chain: Software supply chain security (SSDF, C-SCRM)
- Privacy Engineering: NIST Privacy Framework integration
- Quantum-Safe Crypto: Post-quantum cryptography guidance
- Security Automation: SOAR and automation frameworks
- Forensics: Digital forensics and incident investigation
- Industrial Control Systems: ICS/SCADA security
- Mobile Security: Mobile device management
- Secure Development: SDLC and DevSecOps
Quick Start
Load JSONL Data
```python
import json

# Load training data
with open('train.jsonl', 'r') as f:
    train_data = [json.loads(line) for line in f]

print(f"Total examples: {len(train_data)}")

# Example: find all Level 3 examples
level3_examples = [
    ex for ex in train_data
    if ex.get('metadata', {}).get('cmmc_level') == '3'
]
print(f"Level 3 examples: {len(level3_examples)}")
```
Load Embeddings
```python
import pandas as pd
import numpy as np

# Load embeddings
df = pd.read_parquet('embeddings_train.parquet')

# Access embeddings as a NumPy array
embeddings = np.vstack(df['embedding'].values)
texts = df['text'].tolist()

print(f"Embeddings shape: {embeddings.shape}")  # (9023, 1536)
```
Use FAISS Index for Semantic Search
```python
import faiss
import numpy as np

# Load FAISS index
index = faiss.read_index('faiss_train_cosine.index')

# Search for similar content (FAISS expects float32 vectors)
query_embedding = ...  # your query vector (1536-dim)
k = 10  # number of results
distances, indices = index.search(
    np.asarray(query_embedding, dtype='float32').reshape(1, -1), k
)

# Get similar texts with scores ('texts' comes from the embeddings snippet above)
for i, (idx, score) in enumerate(zip(indices[0], distances[0])):
    print(f"{i+1}. [Score: {score:.3f}] {texts[idx][:150]}...")
```
Related Datasets
This is part of a family of 3 CMMC datasets:
- Core: 14 docs, 1.2K examples - Essential CMMC foundation
- Balanced: 71 docs, 2.8K examples - Domain-balanced coverage
- Comprehensive (this dataset): 381 docs, 11.3K examples - Complete NIST CMMC library
Citation
If you use this dataset, please cite:
```bibtex
@dataset{cmmc_comprehensive_2025,
  title     = {CMMC Training Dataset - Comprehensive Variant},
  author    = {Troy, Ethan Oliver},
  year      = {2025},
  publisher = {HuggingFace},
  note      = {Derived from 381 NIST Special Publications (Public Domain)}
}
```
License
Public Domain - This dataset is derived from NIST Special Publications, which are works of the US Government and not subject to copyright protection in the United States.
Acknowledgments
This dataset is built from 381 publications by the National Institute of Standards and Technology (NIST), Computer Security Resource Center.
Special thanks to:
- NIST CSRC for comprehensive cybersecurity documentation
- The CMMC-AB for defining the certification framework
- The open-source community for extraction and processing tools
Dataset Version
- Version: 1.0
- Created: 2025
- Source: NIST CSRC Publications (381 documents)
- Processing: Docling + custom CMMC-aware data preparation
- Weighting: Core documents (5x), balanced (3x), supplementary (2x)
Performance Recommendations
For training with this large dataset:
- Recommended batch size: 4-8 (depending on GPU memory)
- Training iterations: 1000-2000 for good convergence
- LoRA rank: 16-32 for capacity
- Expected training time: 8-12 hours (7B model on M4 Max)
- Memory required: 16GB+ for training, 64GB+ recommended
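These recommendations translate directly into a LoRA configuration. The snippet below is a hedged sketch using Hugging Face PEFT as one possible setup; the dataset card does not prescribe a framework (the M4 Max timing suggests an Apple-silicon toolchain such as MLX is also viable), and the target modules are typical for 7B Llama/Mistral-style models rather than required values.

```python
from peft import LoraConfig

# Sketch: LoRA settings matching the recommendations above. Target modules
# are typical for 7B Llama/Mistral-style models; adjust for your base model.
lora_config = LoraConfig(
    r=16,              # LoRA rank (16-32 recommended above)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Pair with a batch size of 4-8 and 1000-2000 training steps, per the
# recommendations above, in whichever trainer you use.
```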
Contact
For questions or issues, please open an issue on the GitHub repository.