Artificial Intelligence – Risks and Rewards
The most compelling challenge facing businesses of all sizes today is maximizing the value of AI investments while keeping up with evolving risks and a steady stream of regulatory obligations, all within the limits of available resources and funds.
Innovation is at the heart of everything we do and, in many instances, shapes the DNA of small and large companies alike. Today, as the era of disruption and turbulence continues, innovation plays a critical role in keeping companies competitive and resilient. It is becoming increasingly evident that AI and data analytics are key enablers in achieving growth, scale, and efficiency through innovation.
As humans, when we create “value”, it is our instinct to “protect and safeguard it”. Businesses treat the value they create in the same way.
Building a business is an exercise in both “value creation” and “value protection”. When an organization pursues value creation, the upside risks associated with that creation are carefully considered to maximize rewards and returns. In the field of AI, such upside risks, or opportunities, generally translate into competitive advantages in performance and accuracy and, to a lesser degree, in the reliability and consistency of operation – and, unfortunately, at scale, into long-term sustainability concerns and environmental damage.
The downside risks of neglecting these outcomes include not only AI that is misaligned with human values and human intent, but also failure to comply with ever-evolving laws and regulations.
Emerging Global AI Standards and Frameworks
To guide compliance with global AI laws and regulations, leading frameworks, standards, and policy guidance – such as the OECD AI Accountability Framework, the UK government’s pro-innovation approach to AI regulation, or the NIST AI Risk Management Framework 1.0 – offer a mix of the trust enablers shown below. The trust principles, or enablers, covered across these laws and standards are necessary for setting the right tone, providing context and risk coverage, and raising the bar on risk management and harms. However, they are not, by themselves, actionable or measurable.
Ethics – Bias (fairness and discrimination), ethical choices, human harm, societal implications
Privacy and Security – Transparency and accountability (explainable AI and ownership), security of human assets, resilience, and privacy rights and freedoms
Robustness – Validity (interpretability), accuracy and reliability
In reality, Trustworthy AI systems should be anchored in the following objectives, which methodically factor in the above trust principles –
- Effective AI Governance – robust policies in place, strong culture and leadership, and well-defined core objectives across scope, nature, context, and purpose
- Mature information governance
- Effective risk management practices to manage the cross-disciplinary risks – commonly referred to as trust enablers – that are unique to socio-technical systems, in addition to risks inherent to systems and processes.
- Processes to continuously monitor and measure for optimal risk-taking related to data, models, and the adaptive capabilities of AI learning environments.
Companies often struggle to understand what trust indicators are and which ones matter to them given their business use case and context. This, in turn, makes it difficult to align the requirements of trust enablers with the legal requirements that apply to them and with emerging AI regulations yet to be enacted. The measures and safeguards (often referred to as guardrails) should be considered in light of the scale of AI models and the complexity of data – customers are often left stranded trying to understand how it all fits their requirements, leading to frustration and chaos – and you are not the only one.
In addition, the recent heightened public outcry about AI risks (mostly concerning generative AI models), along with the resounding alarms raised by ethicists and academia, has exacerbated the practical management of risks. An effective approach to aligning with trust indicators while maintaining performance and value requires actionable and measurable risk assurance across the various trust dimensions. There is no doubt that the potential harms and erosion of trust that AI may cause are fully valid concerns that have helped raise risk awareness in households and organizations alike. However, there is no straightforward approach to closing risk gaps today. Period.
Three-Tier Model to Build and Sustain Trust
By leveraging the three-tier model outlined below, companies can navigate AI risk management and preserve AI’s benefits and utility in a coordinated and deliberate manner. Each tier outlines how trust gaps are to be closed and gives clear indicators of the current state and the target state.

Tier 1 – Pivot – Stakeholders understand the scope, data, and context of AI systems. In-scope AI systems are aligned with overall organizational goals. An organizational culture exists that prioritizes responding to risks and earning the trust of stakeholders and consumers. Foundational processes, policies, and documentation of AI assets are in place across infrastructure and data management, including an effective cybersecurity program. The laws and regulations applicable to the situation are well understood. Tier 1 offers a baseline assessment of the gaps in your in-scope trust enablers – Ethics, Privacy and Security, and Robustness – which need to be addressed to ensure that the benefits of your AI program are preserved. This can be handled as a self-assessment under a light-touch engagement.
Completion of each tier is accompanied by a risk scorecard that identifies your current-state risk levels across the trust dimensions of your AI model, your regulatory obligations, and your scale of operations. This will help you prioritize your risk mitigation strategies and ensure that you stay on track towards building and maintaining trust in your AI systems. See the example below for illustration.
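The original scorecard illustration is not reproduced here, so the following minimal sketch, in Python, shows one way such a scorecard might be structured. The trust dimensions come from the list above; the five-level risk scale, the numeric scores, and the gap-based prioritization are illustrative assumptions, not a prescribed standard.

```python
# A minimal, hypothetical sketch of a tier risk scorecard.
# The five-point scale and the example scores are assumptions
# for illustration only.
from dataclasses import dataclass

RISK_LEVELS = ["very low", "low", "moderate", "high", "very high"]

@dataclass
class DimensionScore:
    dimension: str   # e.g. "Ethics", "Privacy and Security", "Robustness"
    current: int     # index into RISK_LEVELS for current-state risk
    target: int      # index into RISK_LEVELS for target-state risk

    @property
    def gap(self) -> int:
        # A positive gap means current risk exceeds the target state.
        return self.current - self.target

scorecard = [
    DimensionScore("Ethics", current=3, target=1),
    DimensionScore("Privacy and Security", current=2, target=1),
    DimensionScore("Robustness", current=4, target=2),
]

# Prioritize mitigation where the gap between current and target is widest.
for score in sorted(scorecard, key=lambda s: s.gap, reverse=True):
    print(f"{score.dimension}: {RISK_LEVELS[score.current]} -> "
          f"{RISK_LEVELS[score.target]} (gap {score.gap})")
```

Sorting by the gap between current-state and target-state risk surfaces the dimensions where mitigation effort is most urgent.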

Tier 2 – Transform – Mechanisms and processes are in place to qualitatively measure risks and facilitate informed decision-making across data and models. Standards and guidelines provide a clear understanding of the business use case and of the controls and safeguards to implement – aligned with the legal and statutory framework and with ethical standards – to achieve explainable AI and alignment with the other applicable trust dimensions: privacy, security, and robustness. Tier 2 provides insight into the risks that need to be prioritized and the necessary safeguards, based on the controls and criteria implemented to close gaps and reduce risk exposure.
Depending on the scale, complexity, and regulatory requirements, achieving the target and desired trust levels in Tier 2 can be complex. One approach we highly recommend for rationalizing and realizing this journey is ForHumanity. ForHumanity (https://forhumanity.center/) is a non-profit organization established to provide a range of interoperable, independent audit criteria and frameworks that enable trust in autonomous systems. With this model and the relevant ForHumanity framework, closing trust gaps, securing compliance, and future-proofing your trust and performance is an achievable goal – use the ‘Let’s work together’ link below to get in touch with one of us or a certified independent auditor, if this is an option you want to consider.
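To make the Tier 2 gap-closure idea concrete, here is a hedged sketch of mapping implemented controls against required safeguards per trust dimension. The control and requirement names are invented for illustration and are not drawn from the ForHumanity criteria or any specific audit scheme.

```python
# A hypothetical sketch of control-gap analysis: compare the controls
# actually implemented against the safeguards a framework requires for
# each trust dimension. Names are illustrative assumptions.
required = {
    "Ethics": {"bias testing", "human oversight", "impact assessment"},
    "Privacy and Security": {"data minimization", "access control", "breach response"},
    "Robustness": {"validation testing", "accuracy benchmarks"},
}

implemented = {"bias testing", "access control",
               "validation testing", "accuracy benchmarks"}

for dimension, controls in required.items():
    gaps = controls - implemented
    coverage = 1 - len(gaps) / len(controls)
    print(f"{dimension}: {coverage:.0%} covered; "
          f"open gaps: {sorted(gaps) if gaps else 'none'}")
```

The output highlights, per dimension, how much of the required safeguard set is covered and which gaps remain open – exactly the prioritization input Tier 2 is meant to produce.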
Tier 3 – Adapt – Risks are managed on a continuous basis. Proactive risk monitoring and evaluation of AI scope, nature, and context contributes to an understanding of compliance, post-market model drift, bias, and the privacy and security of pipeline data, model infrastructure, and learning environments. Repeatable mechanisms exist to effectively mitigate and measure risks against qualitative and quantitative criteria. This tier provides a platform for agility, feedback, and innovation to optimize and maximize AI performance.
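As one illustration of the repeatable, quantitative measurement Tier 3 calls for, the sketch below monitors post-market data drift using the population stability index (PSI). The ten-bin setup and the 0.2 alert threshold are common rules of thumb assumed here for illustration, not mandated by any framework.

```python
# A minimal sketch of one Tier 3 monitoring signal: detecting
# post-market data drift on a single feature with the population
# stability index (PSI).
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    # Bin edges are fixed from the reference (deployment-time) sample.
    # Production values outside this range fall out of the bins; a
    # production version would add open-ended edge bins.
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero in sparsely populated bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 10_000)   # feature at deployment time
production = rng.normal(0.3, 1.2, 10_000)  # same feature in production

score = psi(reference, production)
print(f"PSI = {score:.3f}" + ("  -> drift alert" if score > 0.2 else ""))
```

In practice, a signal like this would run on a schedule against each monitored feature and feed the same scorecard used in the earlier tiers, closing the loop between continuous measurement and prioritized mitigation.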

Although AI poses complex and emerging risks, managing them need not be choppy and confusing. Using this strategy, companies can develop critical AI risk management capabilities, distinguish themselves from their competitors, and earn an enviable reputation among customers, investors, regulators, and partners in the ecosystem – further enabling Trustworthy AI as your competitive advantage.