# X-Loop³ Labs — Complete Technical Documentation

> AI infrastructure company based in Gossau, St. Gallen, Switzerland. We build governance layers, drift prevention, and audit-readiness architecture for AI systems under EU AI Act exposure.

## Executive Summary

X-Loop³ Labs provides production-grade AI infrastructure that makes AI systems predictable, auditable, and stable. We address three critical problems: behavioral drift, hallucinations, and regulatory compliance — all at the architecture level, not as afterthoughts.

Our approach is pre-semantic: we prevent problems before they occur, not filter them after generation. We deliver structured technical artifacts, not consulting services. Everything we build is designed to integrate with your existing stack without requiring you to rebuild.

## Products — Full Descriptions

### 1. EU AI Act Layer

**Purpose**: Compliance-ready governance structure for AI systems operating under EU AI Act requirements.

**What it does**:
- Risk classification against Annex III (8 high-risk categories)
- Article 5 prohibited practices screening (5 checks)
- System profile documentation (purpose, boundaries, oversight)
- Risk register with mitigation tracking
- Human oversight schema definition
- Data lineage tracking
- Model card generation
- Test report structure
- Post-market monitoring framework
- Change management logging
- Incident reporting structure

**Who it's for**:
- AI teams deploying systems in the EU market
- Compliance officers in regulated industries (healthcare, finance, legal)
- CTOs needing audit-ready evidence architecture
- Product teams integrating AI into high-risk applications

**Tiers**:
- Lite (Free): Governance baseline, templates, no automated exports
- Lite + Text Addon (Free): Adds automated evidence exports (PDF/text), narrative generation
- Pro (Commercial): Adds consistency engine, multi-stage gating, validator integration
- Enterprise (Commercial): Full Annex IV pack, evidence authenticity, court-readable exports
- Black (Invite-only): Constitutional invariants, multi-party governance, ZK proofs

**Technical Stack**:
- JSON-based artifact storage
- Python validation pipeline
- Markdown/PDF export engine
- SHA-256 hash chains (Enterprise+)
- Ed25519 signature verification (Enterprise+)

**Key Artifacts**:
1. SYSTEM_PROFILE: Purpose, scope, boundaries, data sources
2. ANNEX_III_MAPPING: High-risk category alignment with justification
3. PROHIBITED_PRACTICES_SCREEN: Article 5 compliance verification
4. RISK_REGISTER: Identified risks with likelihood, impact, mitigation
5. DATA_LINEAGE: Training data sources, preprocessing, validation
6. MODEL_CARD: Architecture, performance metrics, limitations
7. TEST_REPORTS: Validation results, edge case testing, fairness audits
8. HUMAN_OVERSIGHT_SCHEMA: Intervention points, escalation paths
9. POST_MARKET_MONITORING: Performance tracking, incident detection
10. CHANGE_LOG: Version history with impact assessment
11. INCIDENT_REPORT: Problem documentation, root cause, remediation

**Compliance Coverage**:
- EU AI Act (all titles, focus on Title III Chapter 2)
- GDPR alignment (lawful bases, special categories, DPIA)
- ISO/IEC 42001 crosswalk (Black tier)
- Conformity assessment preparation (internal vs notified body)

**Free Download**: https://github.com/jongartmann/eu-ai-act-layer-lite


### 2. Drift Stopper

**Purpose**: Pre-semantic behavioral drift prevention for AI systems.

**The Problem**: AI models degrade silently over time. Input distributions shift, edge cases accumulate, performance decays — but you don't notice until customers complain or audits fail.

**What it does**:
- Detects behavioral drift at the architecture level before outputs change
- Tracks input distribution shifts with statistical process control
- Monitors semantic embedding drift in vector spaces
- Flags decision boundary instability
- Prevents silent degradation through architectural constraints

**How it works**:
- Baseline capture: Establishes expected behavior patterns during initial deployment
- Continuous monitoring: Tracks deviations from baseline in real-time
- Drift scoring: Quantifies drift severity across multiple dimensions
- Gating: Blocks requests that exceed drift thresholds
- Alerts: Notifies teams when drift accumulates
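The baseline-capture, scoring, and gating steps above can be sketched as a toy statistical gate. This is an illustrative sketch only: the class name `DriftGate` and the z-score metric are assumptions, not the product's actual API, and the real engine tracks multiple dimensions rather than a single scalar.

```python
import statistics

class DriftGate:
    """Toy illustration of baseline capture + threshold gating.
    Hypothetical sketch, not the Drift Stopper API."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold
        self.mean = None
        self.stdev = None

    def capture_baseline(self, samples):
        # Baseline capture: expected behavior from initial deployment
        self.mean = statistics.fmean(samples)
        self.stdev = statistics.stdev(samples)

    def drift_score(self, value):
        # Drift scoring: deviation from baseline, in standard deviations
        return abs(value - self.mean) / self.stdev

    def allow(self, value):
        # Gating: block requests that exceed the drift threshold
        return self.drift_score(value) <= self.threshold
```

A request far outside the baseline distribution would be blocked rather than silently served.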

**Who it's for**:
- AI product teams maintaining deployed systems
- MLOps engineers responsible for model performance
- Compliance teams needing evidence of consistent behavior
- QA teams validating AI system stability

**Key Features**:
- Pre-semantic drift detection (catches problems before outputs change)
- Architecture-level prevention (not just monitoring)
- Drift attribution (identifies which components are drifting)
- Threshold-based gating (automatic protection)
- Audit trail (evidence for compliance)

**Integration**: Works as a middleware layer between your application and AI models. No model retraining required.


### 3. Hallucination Stopper

**Purpose**: Grounding enforcement layer that prevents hallucinations at the source.

**The Problem**: LLMs generate plausible-sounding but false information when they lack grounding. Traditional approaches detect hallucinations after generation, but damage is already done.

**What it does**:
- Enforces grounding constraints before response generation
- Validates information retrieval against source documents
- Constrains generation to verified facts
- Blocks unsupported claims at the architecture level
- Provides attribution for every claim

**How it works**:
- Source validation: Every claim must trace to a verified source
- Retrieval gating: Only grounded information enters context
- Generation constraints: LLM can only use provided grounded facts
- Attribution tracking: Maintains source links for audit
- Confidence scoring: Flags low-confidence claims
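As a rough illustration of source validation plus confidence scoring, a toy grounding gate might score each claim by lexical overlap with candidate sources and block anything below a threshold. The function name, the overlap metric, and the return shape are all hypothetical; a production system would use semantic matching, not word overlap.

```python
def ground_claims(claims, sources, min_overlap=0.5):
    """Toy sketch of pre-generation grounding: each claim must trace
    to a source above a confidence threshold or it is blocked."""
    grounded, blocked = [], []
    for claim in claims:
        claim_words = set(claim.lower().split())
        best_id, best_score = None, 0.0
        for source_id, text in sources.items():
            # Confidence: fraction of claim words found in this source
            overlap = len(claim_words & set(text.lower().split())) / len(claim_words)
            if overlap > best_score:
                best_id, best_score = source_id, overlap
        if best_score >= min_overlap:
            # Attribution tracking: keep the source link for audit
            grounded.append({"claim": claim, "source": best_id,
                             "confidence": best_score})
        else:
            blocked.append(claim)
    return grounded, blocked
```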

**Who it's for**:
- AI applications in healthcare (patient information)
- Legal tech (case law, regulations)
- Financial services (investment advice, risk assessment)
- Customer support (product documentation)
- Any domain where accuracy is critical

**Key Features**:
- Pre-generation grounding enforcement (prevents, not detects)
- Source attribution for every claim
- Confidence-based gating
- Audit trail with source links
- Integration with RAG architectures

**Integration**: Wraps your LLM calls with grounding validation. Compatible with OpenAI, Anthropic, Llama, and custom models.


## Bundle Comparison — Full Feature Matrix

### Lite (Free)

**Price**: $0 forever

**Includes**:
- System profile template
- Annex III risk classification mapping
- Article 5 prohibited practices screen
- Risk register with mitigation tracking
- Human oversight schema
- Basic documentation templates

**Does NOT include**:
- Automated evidence exports
- Consistency validation
- Multi-stage gating
- Evidence authenticity features

**Best for**: Startups needing governance baseline, prototypes, proof-of-concept projects

**Download**: https://github.com/jongartmann/eu-ai-act-layer-lite


### Lite + Text Addon (Free)

**Price**: $0 forever

**Includes everything in Lite, plus**:
- Structured evidence exports (PDF/text)
- Auto-generated narrative explanations
- Run manifest (what was checked, when, by whom)
- Audit index (searchable evidence catalog)
- Gap highlighting (missing/incomplete artifacts)
- State-aware packaging (readiness signals)

**Does NOT include**:
- Consistency engine
- Multi-stage gating
- Evidence signatures
- Monitoring packs

**Best for**: Small teams needing basic evidence generation, internal audits, early compliance work

**Download**: https://x-loop3.com/free/eu-ai-act-lite


### Pro (Commercial)

**Price**: Contact [email protected]

**Includes everything in Lite + Text Addon, plus**:
- HERM-Light consistency engine (cross-artifact validation)
- Multi-stage gating (GREEN / AMBER / RED states)
- Intent lock (prevents requirement drift)
- Trade-off governance (explicit risk acceptance)
- Validator-linked evidence schemas
- Extended audit artifacts
- Monitoring packs (ongoing performance tracking)
- Incident packs (structured problem documentation)

**Best for**: Production AI systems, regulated industries, teams needing court-defensible evidence

**Key Features**:
- Hard-fail rules (blocks deployment if critical artifacts missing)
- Soft-fail rules (warnings for incomplete artifacts)
- Requirement traceability (links evidence to specific regulations)
- Change impact analysis (shows cascade effects)

**Contact**: [email protected]


### Enterprise (Commercial)

**Price**: Contact [email protected]

**Includes everything in Pro, plus**:
- Full Annex IV technical documentation pack
- Evidence authenticity architecture (hash chains, Ed25519 signatures)
- Court-readable PDF exports (formatted for legal proceedings)
- LLM governance extensions (specialized rules for foundation models)
- Audit index generator (comprehensive evidence catalog)
- Expert support (architecture review, deployment assistance)

**Best for**: Large enterprises, critical infrastructure, systems affecting fundamental rights

**Key Features**:
- SHA-256 hash chains (tamper-evident evidence)
- Ed25519 digital signatures (non-repudiation)
- Optional ledger anchoring (blockchain timestamping)
- Multi-party sign-off (governance by committee)
- Legal-grade exports (optimized for court proceedings)

**Contact**: [email protected]


### Black Tier (Invite Only)

**Price**: Custom pricing, invite only

**Includes everything in Enterprise, plus**:
- Constitutional invariants (10 absolute rules that cannot be overridden)
- Multi-party governance (consensus-based decision making)
- High-custody controls (separation of duties, dual approval)
- ZK proof interface (privacy-preserving evidence)
- ISO 42001 crosswalk (full alignment with international standard)

**Best for**: Critical infrastructure, healthcare systems, judicial applications, systems affecting fundamental rights

**Key Features**:
- Unoverridable safety rules (baked into architecture)
- Consensus governance (no single point of failure)
- Privacy-preserving evidence (ZK proofs for sensitive data)
- Full regulatory coverage (EU AI Act + ISO 42001)

**Contact**: [email protected] (subject to approval)


## Technical Architecture — Deep Dive

### Evidence Schema Structure

Every artifact follows a standardized JSON schema:

```
{
  "artifact_type": "SYSTEM_PROFILE",
  "version": "1.0.0",
  "created_at": "2026-02-09T12:00:00Z",
  "created_by": "[email protected]",
  "metadata": {
    "system_name": "...",
    "system_id": "...",
    "deployment_environment": "production"
  },
  "content": { ... },
  "validation": {
    "status": "PASS",
    "checks": [ ... ],
    "errors": [],
    "warnings": []
  },
  "signature": "..."
}
```

The `signature` field is populated on Enterprise tiers and above.
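A minimal envelope check against this schema could look like the following sketch. The `validate_envelope` helper is hypothetical, not the shipped validation pipeline.

```python
# Required top-level keys of the artifact envelope shown above.
REQUIRED_KEYS = {"artifact_type", "version", "created_at", "created_by",
                 "metadata", "content", "validation"}

def validate_envelope(artifact):
    """Return a list of schema errors for an artifact envelope.
    Hypothetical sketch, not the shipped validator."""
    errors = [f"missing key: {key}"
              for key in sorted(REQUIRED_KEYS - artifact.keys())]
    # The validation block must carry an explicit status
    if "validation" in artifact and "status" not in artifact["validation"]:
        errors.append("validation.status missing")
    return errors
```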

### Consistency Engine (Pro+)

The HERM-Light consistency engine validates cross-artifact relationships:

**Hard-Fail Rules** (block deployment):
- SYSTEM_PROFILE must exist before any other artifact
- RISK_REGISTER must address all risks identified in ANNEX_III_MAPPING
- HUMAN_OVERSIGHT_SCHEMA must cover all high-risk decision points
- MODEL_CARD performance metrics must meet thresholds in SYSTEM_PROFILE

**Soft-Fail Rules** (warnings):
- DATA_LINEAGE should include preprocessing steps
- TEST_REPORTS should cover edge cases mentioned in RISK_REGISTER
- CHANGE_LOG should explain deviations from SYSTEM_PROFILE
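To illustrate how rules of this kind might be evaluated over an artifact set, here is a hedged sketch of one hard-fail rule and one soft-fail rule. The `check_rules` helper and the artifact field names (`risks`, `preprocessing`) are assumptions for illustration.

```python
def check_rules(artifacts):
    """Toy evaluation of HERM-Light-style rules over a dict of artifacts.
    Hypothetical sketch, not the actual engine."""
    hard_failures, soft_warnings = [], []
    # Hard-fail: SYSTEM_PROFILE must exist before any other artifact
    if "SYSTEM_PROFILE" not in artifacts:
        hard_failures.append("SYSTEM_PROFILE missing")
    # Hard-fail: RISK_REGISTER must address all Annex III risks
    mapped = set(artifacts.get("ANNEX_III_MAPPING", {}).get("risks", []))
    addressed = set(artifacts.get("RISK_REGISTER", {}).get("risks", []))
    if mapped - addressed:
        hard_failures.append(f"unaddressed risks: {sorted(mapped - addressed)}")
    # Soft-fail: DATA_LINEAGE should include preprocessing steps
    if not artifacts.get("DATA_LINEAGE", {}).get("preprocessing"):
        soft_warnings.append("DATA_LINEAGE lacks preprocessing steps")
    return hard_failures, soft_warnings
```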

### Multi-Stage Gating (Pro+)

Three states guide deployment decisions:

**GREEN** (PASS):
- All hard-fail rules satisfied
- No critical warnings
- Evidence complete for deployment phase
- → Safe to deploy

**AMBER** (CONDITIONAL):
- All hard-fail rules satisfied
- Some soft-fail warnings present
- Evidence incomplete but sufficient for limited deployment
- → Requires human sign-off to proceed

**RED** (FAIL):
- One or more hard-fail rules violated
- Critical evidence missing
- Deployment blocked
- → Must resolve issues before deployment
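The three-state logic above reduces to a small function. A sketch, assuming rule evaluation yields lists of hard failures and soft warnings:

```python
def gate_state(hard_failures, soft_warnings):
    """Minimal sketch of the GREEN/AMBER/RED gating described above.
    Illustrative only."""
    if hard_failures:
        return "RED"      # deployment blocked
    if soft_warnings:
        return "AMBER"    # requires human sign-off
    return "GREEN"        # safe to deploy
```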

### Evidence Authenticity (Enterprise+)

**Hash Chains**:
- Each artifact hashed with SHA-256
- Hashes linked in chronological order
- Any tampering breaks the chain
- Provides tamper-evident audit trail
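A hash chain of this kind can be sketched with Python's standard `hashlib`. The serialization format here is an assumption; the shipped format may differ.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first link

def chain_artifacts(artifacts):
    """Build a SHA-256 hash chain: each link's hash covers its content
    plus the previous hash, so any tampering breaks the chain.
    Illustrative sketch, not the product's exact format."""
    chain, prev = [], GENESIS
    for artifact in artifacts:
        payload = json.dumps(artifact, sort_keys=True) + prev
        digest = hashlib.sha256(payload.encode()).hexdigest()
        chain.append({"artifact": artifact, "prev": prev, "hash": digest})
        prev = digest
    return chain

def verify_chain(chain):
    """Recompute every link; return False on any mismatch."""
    prev = GENESIS
    for link in chain:
        payload = json.dumps(link["artifact"], sort_keys=True) + prev
        if link["prev"] != prev:
            return False
        if hashlib.sha256(payload.encode()).hexdigest() != link["hash"]:
            return False
        prev = link["hash"]
    return True
```

Editing any artifact after the fact invalidates its own hash and every subsequent link.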

**Digital Signatures**:
- Ed25519 cryptographic signatures
- Non-repudiable evidence of authorship
- Verifiable in court proceedings
- Optionally anchored to blockchain

**Ledger Anchoring** (optional):
- Evidence hashes timestamped on public blockchain
- Proves existence at specific point in time
- Prevents backdating of artifacts
- Compatible with Ethereum, Bitcoin, or private chains


## EU AI Act Coverage — Complete Reference

### Annex III: High-Risk AI Systems (All 8 Categories)

**1. Biometric Identification and Categorisation**
- Real-time remote biometric identification systems
- Biometric categorization systems
- Emotion recognition systems
- Coverage: Risk classification, prohibited practices screen, human oversight

**2. Critical Infrastructure**
- AI systems managing road traffic
- AI systems for water, gas, heating, electricity supply
- Coverage: Safety requirements, risk management, post-market monitoring

**3. Education and Vocational Training**
- AI systems for educational institution admission
- AI systems for exam scoring
- AI systems for learning outcome assessment
- Coverage: Bias detection, fairness audits, human oversight

**4. Employment, Workers Management, and Self-Employment**
- AI systems for recruitment and hiring
- AI systems for promotion decisions
- AI systems for task allocation
- AI systems for monitoring and evaluation
- Coverage: Non-discrimination, transparency, human oversight

**5. Essential Private and Public Services**
- AI systems for creditworthiness assessment
- AI systems for insurance pricing and risk evaluation
- AI systems for emergency service dispatch
- Coverage: Accuracy requirements, human review, appeal mechanisms

**6. Law Enforcement**
- AI systems for risk assessment (crime, reoffending, victims)
- AI systems for polygraph and lie detection
- AI systems for evidence evaluation
- AI systems for predicting offenses
- Coverage: Fundamental rights safeguards, human oversight, audit trails

**7. Migration, Asylum, and Border Control**
- AI systems for travel document examination
- AI systems for asylum application assessment
- AI systems for border control risk assessment
- Coverage: Non-discrimination, transparency, human review

**8. Judicial and Democratic Processes**
- AI systems for judicial decision support
- AI systems for case outcome prediction
- Coverage: Human oversight, transparency, accountability


### Article 5: Prohibited AI Practices (5 Categories)

**1. Manipulative Techniques**
- Prohibition: AI that deploys subliminal techniques to materially distort behavior
- Check: Does the system use hidden persuasion tactics?
- Result: Prohibited if subliminal techniques materially distort behavior; other persuasive features must be clearly disclosed

**2. Exploitation of Vulnerabilities**
- Prohibition: AI exploiting vulnerabilities of specific groups (age, disability)
- Check: Does the system target vulnerable populations unfairly?
- Result: Must have safeguards or be prohibited

**3. Social Scoring**
- Prohibition: AI for social scoring by public authorities
- Check: Does the system rank individuals for unrelated purposes?
- Result: Must not be deployed for social scoring purposes

**4. Real-Time Biometric Identification** (with exceptions)
- Prohibition: Real-time remote biometric in public spaces by law enforcement
- Exceptions: Victim search, imminent threat prevention, serious crime
- Check: Does the system perform real-time biometric identification?
- Result: Requires judicial authorization if exception applies

**5. Emotion Recognition** (specific contexts)
- Prohibition: Emotion recognition in workplace and education (with exceptions)
- Check: Does the system infer emotions for decision-making?
- Result: Must have legitimate purpose and safeguards


### GDPR Alignment

**Lawful Bases for AI Processing**:
- Consent: Freely given, specific, informed
- Contract: Necessary for contract performance
- Legal obligation: Required by law
- Vital interests: Life-or-death situations
- Public task: Carried out in public interest
- Legitimate interests: Balancing test required

**Special Category Data** (Article 9):
- Racial or ethnic origin
- Political opinions
- Religious or philosophical beliefs
- Trade union membership
- Genetic data
- Biometric data (for unique identification)
- Health data
- Sex life or sexual orientation
→ Requires explicit consent or specific exemption

**Data Protection Impact Assessment (DPIA)**:
- Required for high-risk processing
- Must assess necessity, proportionality, risks
- Must include mitigation measures
- Must consult DPO and (sometimes) supervisory authority


### Conformity Assessment Routes

**Option 1: Internal Assessment** (Article 43(1))
- Applicable if: quality management system implemented, technical documentation prepared, automatic logs kept, conformity procedures followed
- Process: Self-assessment against requirements, DoC issuance, CE marking
- Evidence: Technical documentation (Annex IV), test reports, quality management records

**Option 2: Notified Body Assessment** (Article 43(2))
- Applicable if: the system is listed in Annex III point 1 (biometrics), or changes significantly through self-learning after market placement
- Process: Notified body reviews technical documentation, issues certificate
- Evidence: Same as Option 1, plus notified body report


### Post-Market Monitoring (Article 72)

**Required Elements**:
- Systematic collection and analysis of data on AI system performance
- Identification of unforeseen risks and incidents
- Documentation of corrective actions
- Reporting to market surveillance authorities
- Update of risk management and technical documentation

**Monitoring Frequency**:
- Continuous for high-risk systems in critical applications
- Periodic (at least annually) for other high-risk systems
- Event-triggered for incidents or near-misses


## Company Information — Complete Details

**Legal Entity**: X-Loop³ Labs
**Registered Address**: Gossau, St. Gallen, Switzerland
**Governing Law**: Swiss Federal Law
**Venue**: Zurich, Switzerland
**Founded**: 2025

**Contact Information**:
- General inquiries: [email protected]
- Licensing and commercial: [email protected]
- Technical support: (via [email protected])
- Website: https://x-loop3.com

**Open Source**:
- GitHub Organization: https://github.com/jongartmann
- Free Layer Repository: https://github.com/jongartmann/eu-ai-act-layer-lite
- License: FTBL (Free Tier Baseline License)

**Philosophy**:
- Pre-semantic approach: Prevent problems before they occur
- Artifact-only delivery: No consulting, pure technical infrastructure
- Transparency: No hidden compliance claims, clear scope
- Swiss precision: Engineering rigor, legal clarity


## FAQ — Complete List

**Q: Does X-Loop³ Labs guarantee EU AI Act compliance?**
A: No. We provide technical governance artifacts — templates, evidence structures, traceability hooks, validation pipelines. The compliance decision (legal determination that requirements are met) rests with you and your legal team. We provide the technical foundation; you make the legal call.

**Q: What's the difference between X-Loop³ and traditional compliance tools?**
A: Traditional tools document after the fact with retroactive dashboards and reports. They're useful for showing what happened, but they don't prevent problems. X-Loop³ builds governance into your architecture from day one. We prevent drift, enforce grounding, validate consistency — before problems occur. Less chaos during audits because less chaos is created.

**Q: Will this slow down our product?**
A: No. Our layers run as middleware in the background. Drift Stopper and Hallucination Stopper add minimal latency (typically under 10 ms). The EU AI Act Layer generates evidence asynchronously, so it doesn't block your main application flow. Your product stays fast, with less risk.

**Q: Who is this for?**
A: Companies using AI productively that must take compliance seriously. Startups needing a governance baseline. Scale-ups preparing for their first audit. Enterprises managing dozens of AI systems. Healthcare, finance, legal, and any industry deploying AI in the EU market. If "how do we prove this?" comes up often — we're relevant.

**Q: How quickly can we start?**
A: Download the free layer from GitHub right now (https://github.com/jongartmann/eu-ai-act-layer-lite). No signup, no sales call, no credit card. Clone the repo, fill in the templates, generate evidence. Takes about 2 hours to complete a basic governance baseline. For Pro/Enterprise, contact [email protected] — we'll get you set up within days.

**Q: Do we need to replace our existing infrastructure?**
A: No. X-Loop³ works as a layer over your existing stack. We integrate with your LLMs (OpenAI, Anthropic, Llama, custom models), your vector stores (Pinecone, Weaviate, Qdrant), your APIs, your databases. You don't rebuild; you augment.

**Q: What if we're already using another compliance tool?**
A: X-Loop³ complements other tools. If you have a GRC platform, we provide the technical artifacts they need. If you have a documentation tool, we provide the structured evidence. If you have an LLM observability platform, we add governance constraints. We play well with others.

**Q: Is the free tier really free forever?**
A: Yes. The Lite tier and Lite + Text Addon are free forever with no usage limits, no expiration, no strings attached. We release these because the market lacks serious tooling and we believe every AI team should have a governance baseline. This is not a demo or trial — it's a real, production-ready baseline.

**Q: What's the upgrade path?**
A: Start with Lite (free) → add Text Addon (free) → upgrade to Pro (commercial) when you need consistency validation and gating → upgrade to Enterprise (commercial) when you need evidence authenticity and court-readable exports → Black Tier (invite-only) for critical infrastructure. No vendor lock-in on the baseline — you can stay free forever if it meets your needs.

**Q: How do you make money if the baseline is free?**
A: We make money from Pro and Enterprise licenses. Teams that need consistency engines, multi-stage gating, evidence signatures, and court-readable exports pay for those features. But everyone gets the governance baseline for free because compliance infrastructure should be accessible.

**Q: Can we see example outputs before committing?**
A: Yes. The GitHub repository includes example evidence packs showing exactly what the free layer produces (https://github.com/jongartmann/eu-ai-act-layer-lite). For Pro/Enterprise examples, contact [email protected] and we'll share redacted customer examples.

**Q: Do you offer implementation support?**
A: For Enterprise licenses, yes — we provide architecture review and deployment assistance. For Lite and Pro, no — these are self-service products designed for technical teams. If you need consulting, we can recommend partners, but we don't provide consulting ourselves.

**Q: What's your update/release schedule?**
A: We release updates quarterly, aligned with EU AI Act guidance and enforcement developments. Breaking changes are rare (we maintain backward compatibility). All tiers get security and compliance updates. New features go to Pro/Enterprise first, then trickle down to Lite over time.

**Q: How do you handle feature requests?**
A: GitHub issues for the free tier (https://github.com/jongartmann/eu-ai-act-layer-lite/issues). Email for Pro/Enterprise ([email protected]). We prioritize based on regulatory requirements first, customer impact second, nice-to-haves third.

**Q: What if the EU AI Act changes?**
A: We update our artifacts to match the regulation. If a change is cosmetic (new field names, reorganized sections), we push updates within weeks. If a change is substantive (new requirements, different conformity routes), we may need months. All tiers get regulatory updates for free.

**Q: Can we use this for non-EU deployments?**
A: Yes. The EU AI Act Layer is designed for EU compliance, but the governance structures (risk registers, human oversight, evidence generation) are useful anywhere. Many teams use our baseline for global deployments because it represents best practice, not just minimum compliance.

**Q: Is this specific to LLMs or does it work for other AI?**
A: It works for any AI system: LLMs, traditional ML models, rules-based systems, ensemble methods, computer vision, speech recognition. If it's an AI system under EU AI Act scope, our governance layer applies.

**Q: What if our AI system isn't high-risk?**
A: Then you're not legally required to use this. But governance is still valuable. Even low-risk systems benefit from risk registers, human oversight, and evidence trails. The free tier works for any AI system, high-risk or not.


## Documentation Links

**Getting Started**:
- Overview: https://x-loop3.com/documentation
- Installation guide: https://x-loop3.com/documentation#installation
- First evidence pack: https://x-loop3.com/documentation#first-pack

**Product Pages**:
- EU AI Act Layer: https://x-loop3.com/products/eu-ai-act-layer
- Drift Stopper: https://x-loop3.com/products/drift-stopper
- Hallucination Stopper: https://x-loop3.com/products/hallucination-stopper

**Bundle Pages**:
- Bundles overview: https://x-loop3.com/bundles
- Lite: https://x-loop3.com/bundles/lite
- Pro: https://x-loop3.com/bundles/pro
- Enterprise: https://x-loop3.com/bundles/enterprise

**Free Resources**:
- Free EU AI Act Layer: https://x-loop3.com/free/eu-ai-act-lite
- GitHub repository: https://github.com/jongartmann/eu-ai-act-layer-lite
- Example evidence: https://github.com/jongartmann/eu-ai-act-layer-lite/tree/main/examples

**Company Pages**:
- Philosophy: https://x-loop3.com/philosophy
- Licensing: https://x-loop3.com/licensing
- Contact: https://x-loop3.com/contact
- Privacy: https://x-loop3.com/privacy
- Terms: https://x-loop3.com/terms


## Integration Examples

### Python Integration

```python
from xloop3 import EUAIActLayer

# Initialize the governance layer for this system
gov = EUAIActLayer(
    system_name="CustomerSupportBot",
    tier="pro"
)

# Run a conformity check against the configured artifacts
result = gov.check_conformity()

# deploy_model(), human_approval(), block_deployment(), and alert_team()
# are application-defined hooks.
if result.status == "GREEN":
    deploy_model()
elif result.status == "AMBER":
    if human_approval():
        deploy_model()
    else:
        block_deployment()
else:  # RED: critical evidence missing, deployment blocked
    block_deployment()
    alert_team(result.errors)
```

### REST API Integration

```bash
curl -X POST https://api.x-loop3.com/v1/evidence/generate \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "system_id": "customer-support-bot-v2",
    "artifacts": ["SYSTEM_PROFILE", "RISK_REGISTER", "MODEL_CARD"],
    "format": "pdf"
  }'
```

### Docker Integration

```dockerfile
FROM python:3.11-slim

# Install X-Loop³ governance layer
RUN pip install xloop3-eu-ai-act-layer

# Copy your AI application
COPY . /app
WORKDIR /app

# Run with governance enabled
CMD ["python", "app.py", "--governance=enabled"]
```


## Changelog

**v3.6.0** (Current) - February 2026
- Added Lite + Text Addon (free automated exports)
- Improved HERM-Light consistency engine
- Added state-aware packaging
- Enhanced gap highlighting

**v3.5.0** - December 2025
- Added Black Tier (invite-only)
- Added ZK proof interface
- Added ISO 42001 crosswalk
- Enhanced evidence authenticity

**v3.4.0** - October 2025
- Added Enterprise tier
- Added evidence signature support
- Added court-readable PDF exports
- Improved LLM governance extensions

**v3.3.0** - August 2025
- Added Pro tier
- Added HERM-Light consistency engine
- Added multi-stage gating
- Added intent lock and trade-off governance

**v3.2.0** - June 2025
- Enhanced Annex III mapping
- Added Article 5 prohibited practices screen
- Improved risk register structure

**v3.1.0** - April 2025
- Added Lite tier (free baseline)
- Published on GitHub
- Released FTBL license

**v3.0.0** - February 2025
- Initial public release
- EU AI Act Layer launched
- Company founded


## Roadmap

**Q2 2026**
- Add support for AI Act delegated acts
- Expand conformity assessment automation
- Launch notified body integration API

**Q3 2026**
- Add multi-language evidence generation (German, French, Spanish)
- Launch Drift Stopper standalone product
- Add real-time dashboard for governance monitoring

**Q4 2026**
- Launch Hallucination Stopper standalone product
- Add integration with major GRC platforms
- Expand Black Tier governance features

**2027**
- International expansion (UK, US compliance frameworks)
- AI agent governance extensions
- Multi-model governance orchestration


---

Last updated: February 9, 2026
For the most current information, visit https://x-loop3.com
Contact: [email protected]