
Have you solved the AI Black Box problem yet?

Accountability issues make it difficult to implement AI safely, especially in high-stakes scenarios.


Explainable AI (XAI) is certainly making progress in reducing the risks involved. But in such a new field (as explained in this 2020 primer), there are still problems to be resolved in the accuracy and quality of the explanations it produces.


GDPR Article 22 restricts solely automated decision-making that has legal or similarly significant effects on individuals, and it applies to any organisation offering goods or services to individuals in the EU, regardless of where that organisation is located. In Australia, the latest 2023 OAIC survey, informing an ongoing review of the Privacy Act 1988, found that 96% of people want conditions in place before AI is used in decisions that might affect them. In New Zealand, the Privacy Act 2020 gives individuals the right to understand how their personal information is being used and to challenge decisions made about their information.

And these don't all have to be explanations for data use with potentially catastrophic consequences. Breaches of GDPR Article 22 can attract fines of up to the higher of €20M or 4% of global annual turnover, and individuals have the right to challenge automated decisions and to have them reviewed by a human.


It is vital that safeguards are in place against the potential risks of unknown or unintended consequences of automated decision-making. Automated decision-making can be discriminatory and biased, and it can have a significant impact on individuals' lives.


Localised guardrails must be in place to mitigate ambiguities, phantom results, and hallucinations. AI-generated responses need to be understood as accurate, reliable, and truthful for each unique enterprise operation.
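To make the idea of a localised guardrail concrete, here is a minimal sketch of one common pattern: before an AI-generated answer is released, check that it is grounded in an approved enterprise knowledge source, and route anything unverifiable to human review rather than returning it blindly. The names (`KNOWLEDGE_BASE`, `guard_response`) and the simple word-overlap heuristic are illustrative assumptions, not part of any specific product; real deployments would use retrieval and semantic similarity instead.

```python
# Illustrative guardrail sketch: release an AI answer only if it overlaps
# sufficiently with approved enterprise content; otherwise flag it for
# human review as a possible hallucination.

# Hypothetical approved content an enterprise has signed off on.
KNOWLEDGE_BASE = {
    "refund policy": "Refunds are available within 30 days of purchase.",
    "support hours": "Support is available 09:00-17:00 UTC, Monday to Friday.",
}

def _content_words(text: str) -> set[str]:
    """Lowercased words longer than three characters, punctuation stripped."""
    return {w.strip(".,").lower() for w in text.split() if len(w) > 3}

def guard_response(answer: str) -> dict:
    """Return 'released' with its best source if the answer is grounded,
    otherwise 'needs_review' so a human checks it before it reaches users."""
    words = _content_words(answer)
    best_topic, best_overlap = None, 0.0
    for topic, text in KNOWLEDGE_BASE.items():
        overlap = len(words & _content_words(text)) / max(len(words), 1)
        if overlap > best_overlap:
            best_topic, best_overlap = topic, overlap
    if best_overlap >= 0.5:  # grounding threshold: an assumed tuning knob
        return {"status": "released", "source": best_topic}
    return {"status": "needs_review", "source": None}
```

A grounded answer such as "Refunds are available within 30 days." passes through with its supporting source, while an unsupported claim like "We offer lifetime warranties on all hardware." is held for review. The value of even a crude gate like this is that the enterprise, not the model, decides what counts as truthful for its own operation.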


For any enterprise navigating its digital transformation journey and wanting to benefit from AI, a private myGPT Starter Pack from smartR AI helps turn the AI black box into an AI glass box. This private myGPT engine becomes the enterprise's own PoC or MVP for independent evaluation of use cases and process flows, and for considering the longer-term implications of conversational AI. It can then support operations along the digital transformation journey and inform business planning, including any roadmap of future custom developments and the operationalisation of AI.


Achieving a consistent, defensible understanding of the truth, while safeguarding confidential information and protecting competitive intellectual property (IP), remains a significant challenge for conversational AI systems established without the safety net of a private myGPT engine. Maintaining control over core aspects of the AI is crucial if any responsible enterprise is to optimise, properly understand, and trust its adoption of myGPT.

Trialling conversational AI with a private myGPT Starter Pack from smartR AI will be an important project for any major enterprise. Avoiding the black-box risks while capitalising on the glass-box insights means project success is ever more dependent on the emotional maturity of the project team. All the classic issues often found across stakeholders and team members, sponsors and leadership further amplify the importance of high emotional intelligence for PMs and their teams. Just as the private myGPT technology needs to run as a transparent glass box to be reliable, explainable, and secure, successful project delivery needs high emotional maturity to eliminate bias, maintain accuracy, close gaps, and protect service quality.


If you’re interested in a private myGPT Starter Pack from smartR AI and want to improve the quality and success of your AI delivery projects, please contact Applied EQ Services. The Starter Pack helps kick-start the conversational AI journey safely, and our coaching and delivery services help you navigate that journey successfully. We offer smart solutions and optimised skill sets, giving project leaders the assurance to confidently conduct the "orchestra" of their own AI projects.

