AI Governance Policy
Version: 1.0 | Effective Date: November 5, 2025 | Last Updated: November 5, 2025
1. Purpose
This policy defines how Solidus AI Tech designs, builds, deploys, and communicates about artificial-intelligence (AI) systems in a lawful, transparent, and trustworthy way. It aligns with:
EU AI Act (Regulation (EU) 2024/1689) – legal requirements for transparency, quality management, risk control, and post-market monitoring; and
NIST AI Risk Management Framework (AI RMF 1.0) – best-practice lifecycle management through the functions GOVERN, MAP, MEASURE, MANAGE.
2. Scope
Applies to all AI systems, tools, or components created or integrated by Solidus AI Tech (e.g., AvaChat AI Agent, Agent Forge, AI Marketplace) and all AI-generated or AI-assisted marketing, public communications, and customer-facing content.
3. Core Principles of Trustworthy AI
Lawful & Ethical – Comply with the EU AI Act, the GDPR, the UK Data Protection Act 2018, and other applicable laws and regulations.
Transparent – Inform users when they interact with AI or view AI-generated content.
Fair & Inclusive – Minimize bias and discrimination.
Secure & Reliable – Protect against tampering and data breaches.
Accountable – Maintain traceability, human oversight, and auditability.
Privacy-Respecting – Use and store data lawfully and minimally.
4. Governance Framework (NIST + EU AI Act Integration)
Each NIST AI RMF function maps to Solidus AI Tech actions and the corresponding EU AI Act provisions:
GOVERN – AI Governance Committee, AI System Inventory, risk-tolerance statement, role assignments. EU AI Act: Arts. 16–22 (provider obligations, including the quality management system).
MAP – Define purpose, users, datasets, and risks; classify each system as Prohibited / High-Risk / Limited / Minimal. EU AI Act: Art. 6 and Annex III (system classification).
MEASURE – Perform TEVV (Testing, Evaluation, Verification & Validation) for accuracy, bias, robustness, and safety. EU AI Act: Annex IV (technical documentation) and Annex V (declaration of conformity); conformity-assessment procedures in Annexes VI–VII.
MANAGE – Post-market monitoring, incident response, continuous improvement, documentation updates. EU AI Act: Arts. 72–73 (post-market monitoring and serious-incident reporting).
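As an illustrative sketch only (the record fields, tier names, and example entry below are assumptions, not a prescribed schema), the MAP, MEASURE, and MANAGE steps above could be tracked with a simple inventory record:

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """Classification tiers per EU AI Act Art. 6 / Annex III."""
    PROHIBITED = "Prohibited"
    HIGH_RISK = "High-Risk"
    LIMITED = "Limited"
    MINIMAL = "Minimal"


@dataclass
class AISystemRecord:
    """Hypothetical AI System Inventory entry (field names are assumptions)."""
    name: str
    purpose: str                    # MAP: intended purpose and users
    risk_tier: RiskTier             # MAP: Art. 6 + Annex III classification
    tevv_complete: bool = False     # MEASURE: TEVV sign-off recorded
    incident_log: list[str] = field(default_factory=list)  # MANAGE: Arts. 72-73

    def record_incident(self, description: str) -> None:
        """Append a post-market monitoring entry (MANAGE)."""
        self.incident_log.append(description)


# Example entry for a limited-risk conversational system
avachat = AISystemRecord(
    name="AvaChat AI Agent",
    purpose="customer-facing conversational assistant",
    risk_tier=RiskTier.LIMITED,
)
avachat.record_incident("2025-11-05: user reported an inaccurate response")
```

A real inventory would add the documentation references required by Annex IV; this sketch only shows how classification and monitoring could be kept together per system.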
5. Marketing and Communication Standards
Truthful Representation – Claims about AI capabilities must be fact-based and verifiable.
Transparency Notice – Clearly indicate when content was AI-generated (e.g., “This content was generated using AI.”).
Fairness & Balance – Avoid inflated or misleading language.
Citation Integrity – Reference credible sources for data or benchmarks.
Compliance Approval – All AI-related materials undergo Compliance review before release.
Corrections & Feedback – Provide a clear public contact for reporting inaccuracies and rectify promptly.
6. AI Disclaimer Policy (EU AI Act + NIST AI RMF Transparency Requirement)
6.1 Purpose
Disclaimers help ensure that users, customers, and the public are aware when AI systems or AI-generated content are being used and understand their limitations. This supports Article 50 of the EU AI Act (transparency obligations) and the NIST AI RMF's "Accountable and Transparent" trustworthiness characteristic.
6.2 When Disclaimers Are Required
A disclaimer is required whenever any of the following apply:
Users interact directly with an AI system (e.g., chatbots, virtual assistants).
AI generates text, images, video, audio, or graphics for public distribution or marketing.
AI is used to simulate human communication or generate synthetic voices/faces.
Performance claims or analyses are derived from AI-driven outputs.
6.3 Standard Disclaimer Templates
For AI Interaction: “This conversation is powered by artificial intelligence. Responses are generated by AI and should not be taken as professional advice.”
For AI-Generated Marketing Content: “This content includes AI-generated text and/or visuals created for informational purposes only.”
For Analytical or Predictive Outputs: “Results are produced using AI models based on available data and may not reflect future performance or outcomes.”
6.4 Disclaimer Placement
Disclaimers must be visible and close to the AI output: in-line for web/app content, overlay or caption for visuals, or in captions for social media posts.
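A minimal sketch of how the Section 6.3 templates could be attached in-line per the placement rule above (the function name and the template keys are assumptions, not part of this policy):

```python
# Approved disclaimer templates from Section 6.3, keyed by use case
# (the keys "interaction", "marketing", "analytical" are illustrative).
DISCLAIMERS = {
    "interaction": (
        "This conversation is powered by artificial intelligence. Responses "
        "are generated by AI and should not be taken as professional advice."
    ),
    "marketing": (
        "This content includes AI-generated text and/or visuals created for "
        "informational purposes only."
    ),
    "analytical": (
        "Results are produced using AI models based on available data and may "
        "not reflect future performance or outcomes."
    ),
}


def with_disclaimer(ai_output: str, use_case: str) -> str:
    """Place the approved disclaimer in-line, directly after the AI output."""
    return f"{ai_output}\n\n{DISCLAIMERS[use_case]}"
```

For example, `with_disclaimer(reply_text, "interaction")` would return the chatbot reply followed immediately by the interaction notice, keeping the disclaimer visible and close to the output.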
6.5 Maintenance & Audit
The Compliance Team reviews disclaimer templates annually and whenever regulations change. All teams must record which disclaimer template was used and when it was approved.
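The record-keeping requirement above could be met with something as simple as the following sketch (the field names and example values are assumptions for illustration):

```python
from datetime import date

# Hypothetical audit trail for Section 6.5 record-keeping.
disclaimer_audit_log: list[dict] = []


def log_disclaimer_use(team: str, template_id: str, approved_on: date) -> None:
    """Record which disclaimer template a team used and when it was approved."""
    disclaimer_audit_log.append({
        "team": team,
        "template_id": template_id,
        "approved_on": approved_on.isoformat(),
        "logged_on": date.today().isoformat(),
    })


# Example: Marketing publishes content using the marketing template
log_disclaimer_use("Marketing", "marketing", date(2025, 11, 5))
```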