With a one-time payment, the SC is an easy-to-use AI governance tool designed around AI principles. It captures all key information on the AI systems you use and enables you to demonstrate that you have deployed AI responsibly, meeting regulatory standards and client expectations. It delivers results now, not in the future.
Features

Analyse: Holistic view, Evaluation, Attribute, Impact, Weight, Relevance
Discover: Ownership, Collaboration, Research, Third parties
Gap Analysis: Documentation, Issues Log, Action Plan
Allocate: Differentiate, Rating, Risk contribution
Assign: Summation, Comparison, Reporting, Issues resolution
Governance: Enterprise Risk Management, Change Management, SC Review Schedule
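The Allocate and Assign features describe a weighted scoring model: each attribute receives a rating and a weight, the risk contributions are calculated, and their summation yields an overall score that can be compared across systems. A minimal sketch of how such a summation might work (the attribute names, weights, ratings and band thresholds below are illustrative assumptions, not the SC's actual scheme):

```python
# Illustrative weighted-scorecard summation. All attribute names,
# weights, ratings and band thresholds are assumptions for
# demonstration -- not the actual AI-MS SC methodology.

def risk_contribution(rating: float, weight: float) -> float:
    """Contribution of one attribute: its rating scaled by its weight."""
    return rating * weight

def overall_score(attributes: dict[str, tuple[float, float]]) -> float:
    """Sum the weighted contributions of all attributes.

    attributes maps name -> (rating in 0..1, weight), weights summing to 1.
    """
    return sum(risk_contribution(r, w) for r, w in attributes.values())

def classify(score: float) -> str:
    """Map a score onto the four risk bands (assumed cut-offs)."""
    if score >= 0.8:
        return "Unacceptable"
    if score >= 0.5:
        return "High"
    if score >= 0.2:
        return "Limited"
    return "Low"

# Example: three illustrative attributes as (rating, weight) pairs.
system = {
    "Impact":     (0.9, 0.5),
    "Relevance":  (0.4, 0.3),
    "Evaluation": (0.2, 0.2),
}
score = overall_score(system)   # 0.45 + 0.12 + 0.04 = 0.61
print(classify(score))          # "High"
```

Keeping the contribution, summation and classification steps separate mirrors the Allocate/Assign split above, so each attribute's share of the overall risk remains auditable.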
Executive mindshift
Position the organisation to embrace a new AI assurance framework.
Exploration
Assess current AI frameworks and workflows to determine AI risks.
Regulatory scoring
Create governance that aligns with applicable regulations.
Model review
Analyse AI systems and align with AI principles.
Deploy AI-MS System Scorecard
Create an AI-MS SC for each AI system.
Workforce readiness
Deliver training on AI risk assessment techniques and use of AI-MS SC.
Executive Report
Present the AI Assurance Framework, risk ratings and recommendations for ongoing compliance.
AI principles are the foundation upon which AI regulations are formulated, good AI systems are developed, and effective AI operations are built.

AI principles enable humans and machines to co-exist, push the boundaries of scientific discovery and create new possibilities, while enhancing human potential.

AI principles provide clients, employees and stakeholders with criteria to assess AI and gain assurance that AI is used ethically and responsibly.

AI principles offer a structure on which organisations can build frameworks to meet client expectations and their fiduciary, regulatory and statutory obligations.
Accountability: Designated AI Sponsors appointed to ensure responsible and ethical AI implementation, including ongoing monitoring, timely audits, and risk assessments.
Understandability: Employees engaged in the acquisition, design, development and use of AI systems must be competent, qualified, and understand the impact of AI outcomes.
Interpretability and Auditability: Documents and reports for external distribution must explain clearly how AI is used in end-client decisions. AI systems must undergo comprehensive testing, including parallel runs against production systems before deployment, to confirm the AI is accurate, fit for purpose and delivers outcomes consistent with standard use cases. Any anomalies or errors must be corrected immediately.
Classification: Each AI system requires a model Scorecard with a risk rating (Unacceptable, High, Limited, Low), including third-party AI systems. Model Scorecards must comply with the EU AI Act’s conformity standards.
Transparency: AI inputs and outputs should be documented and traceable. Disclosures about AI use must be made to clients and stakeholders in accordance with applicable laws and regulations. Management information on AI usage must be timely and facilitate executive decision-making.
Risk and Controls Indicators: AI systems must be continuously monitored. If outcomes deviate from expectations, there must be processes to disable, deactivate or replace the AI system. Breaches of safeguards must be documented and escalated immediately to the team responsible for risk (usually the Enterprise Risk Committee).
Corporate Values: AI systems must not compromise the firm’s values.
Disclosure: Clients must receive a copy of the organisation’s AI Ethics Code.
Risk vs Reward: The benefits of AI must significantly outweigh the risks, considering long-term impacts of AI. AI must not jeopardise Environmental, Social, and Governance (ESG) goals, including carbon footprint targets.
Decisions on behalf of clients: AI must not undermine fiduciary or statutory obligations or treat customers unfairly. AI deployment and usage must comply with all applicable laws and regulations, such as GDPR and the EU AI Act.
Cybersecurity: AI must be assessed against the organisation's Information Security and Cybersecurity Protocol. Any breaches must be escalated, investigated, and signed off by the Enterprise Risk Committee.
Privacy: AI system access and use, and the handling and distribution of client and personal data, must comply with GDPR, the EU AI Act and other applicable laws.
Data Protection: There must be processes for timely escalation and notification of unauthorised data access. Sensitive information must be protected.
Trustworthiness: AI systems must be safe and resilient, maintaining ongoing integrity against systematic vulnerabilities and malicious or adversarial threats.
Third-Party Software: Vendors must attest to their AI’s ongoing legal and regulatory compliance.
Bias: AI must not discriminate or be used to discriminate or cause harm, nor infringe on the civil rights of any individual or group of individuals.
Roles and Responsibilities: All employees involved with the operations of AI systems must have clearly defined roles and responsibilities.
Training: Appropriate training must be provided for any employee involved in the acquisition, design, development, and use of AI systems.
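Taken together, the principles above imply a minimum set of fields a model Scorecard must capture: a designated AI Sponsor (Accountability), one of the four risk bands (Classification), and vendor attestation for third-party systems (Third-Party Software). A sketch of such a record, with field names and validation rules as illustrative assumptions rather than the SC's actual schema:

```python
# Sketch of a minimal model-Scorecard record implied by the principles
# above. Field names and the deployability rules are illustrative
# assumptions, not the actual AI-MS SC schema.

from dataclasses import dataclass, field
from enum import Enum

class RiskRating(Enum):
    """The four risk bands named in the Classification principle."""
    UNACCEPTABLE = "Unacceptable"
    HIGH = "High"
    LIMITED = "Limited"
    LOW = "Low"

@dataclass
class Scorecard:
    system_name: str
    ai_sponsor: str                    # Accountability: designated AI Sponsor
    risk_rating: RiskRating            # Classification: one of four bands
    third_party: bool = False          # Third-party systems need SCs too
    vendor_attestation: bool = False   # Required when third_party is True
    issues_log: list[str] = field(default_factory=list)

    def deployable(self) -> bool:
        """Unacceptable-risk systems must not be deployed; third-party
        systems need a vendor compliance attestation first."""
        if self.risk_rating is RiskRating.UNACCEPTABLE:
            return False
        if self.third_party and not self.vendor_attestation:
            return False
        return True

card = Scorecard("Credit scoring model", "J. Doe", RiskRating.HIGH,
                 third_party=True, vendor_attestation=True)
print(card.deployable())   # True
```

A structured record like this makes the Gap Analysis and Reporting features mechanical: missing sponsors, absent attestations or open issues-log entries surface as simple queries over the Scorecard population.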