
AI Transparency Policy

How our AI works and how we keep humans in control

Last updated: December 12, 2025

Our Commitment to Explainable AI

At PartnerAlly, we believe that artificial intelligence used in compliance and risk management must be held to the highest ethical standards. Compliance decisions affect organizations, their employees, and the public. They deserve AI that is transparent, accountable, and human-centered.

This AI Transparency Policy outlines the principles that guide our AI development and deployment, our commitments to users, and the governance practices we follow to ensure our AI remains trustworthy and beneficial.

Core Principles

Transparency

Every AI decision comes with complete reasoning. We show exactly what data was analyzed, what patterns were identified, and why specific recommendations were made. No black boxes.

Human Oversight

AI augments human judgment, never replaces it. Users can set confidence thresholds, review recommendations before action, and override AI decisions at any time.

Accountability

Complete audit trails for every AI interaction. Timestamps, inputs, outputs, confidence scores, and reviewer notes are logged and retained for regulatory review.

Fairness

We actively test for and mitigate bias in our AI systems. Regular fairness assessments ensure our AI treats all users and data equitably.

Privacy by Design

AI processing respects data minimization principles. We only use data necessary for the specific task and implement strong access controls.

Risk Awareness

We classify AI use cases by risk level and apply appropriate safeguards. High-risk decisions always require human review and approval.

How Our AI Works

Document Analysis

When our AI analyzes compliance documents, it provides:

  • A detailed explanation of what was found in each document
  • Specific citations and references to relevant sections
  • Confidence scores (0-100%) for each finding
  • Clear identification of areas requiring human review

Gap Detection

When identifying compliance gaps, our AI:

  • Explains the specific control requirement that may not be met
  • Shows the evidence (or lack thereof) that led to the finding
  • Provides severity ratings with clear justification
  • Suggests remediation steps with reasoning
  • Flags low-confidence findings for mandatory human review

Workflow Generation

AI-generated remediation workflows include:

  • Explanation of why each step is recommended
  • Mapping to relevant compliance framework requirements
  • Estimated effort and complexity assessments
  • Alternative approaches where applicable
  • Required human approval before workflow execution

Confidence Scores and Thresholds

Every AI output includes a confidence score that reflects the AI's certainty in its analysis. We use the following thresholds:

  • 90-100%: High confidence. AI recommendations can proceed with standard review.
  • 70-89%: Medium confidence. Findings are flagged for careful human review.
  • Below 70%: Low confidence. Mandatory human review is required before any action.

Users can configure their own thresholds based on their risk tolerance and compliance requirements.
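The threshold routing described above can be sketched in a few lines. The function name, default values, and tier labels below are illustrative only, not our production implementation; the defaults mirror the tiers listed above, and the parameters stand in for user-configurable thresholds:

```python
def route_finding(confidence: float, high: float = 90.0, medium: float = 70.0) -> str:
    """Route an AI finding to a review tier based on its confidence score (0-100).

    `high` and `medium` are illustrative defaults; in practice users would
    configure their own thresholds to match their risk tolerance.
    """
    if confidence >= high:
        return "standard-review"    # high confidence: proceed with standard review
    if confidence >= medium:
        return "careful-review"     # medium confidence: flagged for careful human review
    return "mandatory-review"       # low confidence: human review required before action
```

For example, a finding scored at 72% would land in the careful-review tier under the defaults, but a user who sets `medium=80.0` would route that same finding to mandatory review.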

Audit Trail and Record Keeping

We maintain comprehensive audit trails for all AI operations, including:

  • Timestamp of each AI operation
  • Input data provided to the AI (with appropriate data protection)
  • AI model version and configuration used
  • Complete output including reasoning and confidence scores
  • User actions taken on AI recommendations
  • Any overrides or modifications made by users
  • Reviewer identity and approval timestamps

Audit records are retained for a minimum of 5 years and can be exported for regulatory review upon request.
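To make the field list above concrete, here is a minimal sketch of what a single audit record might look like. The function and field names are hypothetical, chosen to mirror the bullets above; an actual schema would differ:

```python
from datetime import datetime, timezone

def make_audit_record(model_version, input_ref, output, confidence,
                      user_action, override=None, reviewer=None):
    """Assemble an illustrative audit record for one AI operation.

    Field names are hypothetical and simply mirror the fields listed
    in this policy; they are not a real PartnerAlly schema.
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # AI model version and configuration used
        "input_ref": input_ref,          # reference to input data, with data protection applied
        "output": output,                # complete output, including reasoning
        "confidence": confidence,        # confidence score (0-100)
        "user_action": user_action,      # action taken on the AI recommendation
        "override": override,            # any override or modification made by the user
        "reviewer": reviewer,            # reviewer identity, when a review occurred
    }
```

A record built this way captures every field the policy requires for a single operation, and a collection of such records can be exported as-is for regulatory review.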

Our Commitments

We make the following commitments to our users and the broader community:

We will always explain how our AI reaches its conclusions
We will never use AI to make final compliance decisions without human review
We will maintain complete audit trails of all AI operations
We will regularly test our AI for bias and fairness
We will be transparent about AI limitations and confidence levels
We will allow users to opt out of AI-assisted features
We will continuously improve our AI governance practices
We will comply with applicable AI regulations including the EU AI Act

AI Governance Structure

Our AI governance includes:

  • AI Ethics Review Board: Cross-functional team that reviews AI capabilities before deployment
  • Regular Bias Audits: Quarterly assessments of AI outputs for potential bias or unfairness
  • Model Documentation: Comprehensive model cards for each AI capability describing purpose, limitations, and appropriate use
  • Incident Response: Clear procedures for addressing any AI-related issues or concerns
  • Continuous Monitoring: Ongoing monitoring of AI performance and accuracy

Regulatory Compliance

Our ethical AI practices are designed to comply with current and emerging AI regulations, including:

  • EU AI Act: Our AI is designed to meet transparency, human oversight, and documentation requirements
  • NIST AI Risk Management Framework: We follow NIST guidance for identifying and managing AI risks
  • ISO/IEC 42001: Our AI management practices align with this international standard for AI management systems

Your Rights Regarding AI

As a PartnerAlly user, you have the right to:

  • Request explanation of any AI decision affecting your data or compliance status
  • Override or reject AI recommendations
  • Request human review of AI outputs
  • Access audit logs of AI operations on your data
  • Opt out of specific AI-assisted features
  • Report concerns about AI behavior or outputs

Limitations and Disclaimers

While we strive for the highest accuracy and reliability, we are transparent about AI limitations:

  • AI recommendations are advisory and should not replace professional compliance or legal advice
  • AI performance depends on the quality and completeness of input data
  • Compliance requirements vary by jurisdiction and industry, and AI may not capture all local nuances
  • AI models are trained on historical data and may not reflect the latest regulatory changes immediately
  • Users remain responsible for final compliance decisions and should verify AI outputs

Questions and Concerns

If you have questions about our ethical AI practices or want to report a concern, please contact us:

Email: ethics@partnerally.com

We take all AI-related concerns seriously and will respond within 5 business days.

Policy Updates

We review and update this AI Transparency Policy at least annually and whenever significant changes are made to our AI capabilities. Material changes will be communicated to users via email and in-app notifications.