Code of Practice for Artificial Intelligence (AI) Applications

The Kenya Bureau of Standards (KEBS) has issued a draft Information Technology – Artificial Intelligence – Code of Practice for AI Applications (hereinafter "the Code") for public comment. The Code sets out recommendations to help organizations develop, provide, or use AI systems responsibly while achieving their intended goals.

The Code outlines methods for ensuring the trustworthiness of AI systems. It details approaches for achieving transparency, explainability, and controllability, all of which are crucial for building trust in AI. It also identifies common engineering pitfalls and their associated threats and risks, along with techniques to mitigate them. Finally, it provides guidance on assessing and achieving key qualities of AI systems, including availability, resiliency, reliability, accuracy, safety, security, and privacy.
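To make the transparency and explainability point concrete, here is a minimal sketch of one widely used technique: recording each AI decision with enough context (model version, inputs, output, and confidence) to support later explanation and audit. This is illustrative only; it is not a mechanism defined by the Code, and all names in it are hypothetical.

```python
import json
import time

def log_prediction(model_version, features, prediction, confidence, sink=print):
    """Record one AI decision with enough context for a later audit.

    A generic transparency pattern, not a mechanism defined by the
    KEBS Code; all field names here are illustrative.
    """
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": features,
        "output": prediction,
        "confidence": confidence,
    }
    sink(json.dumps(record))  # sink could be stdout, a file, or an audit service
    return record

# Hypothetical usage: an audit-trail entry for a loan-scoring model.
log_prediction("credit-model-1.2",
               {"income": 52000, "tenure_months": 18},
               prediction="approve",
               confidence=0.87)
```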

The Code further outlines the key characteristics and processes involved in AI systems, along with the stakeholders who play various roles throughout the AI system's life cycle. An AI application is distinguished by carrying out at least one of the following processes: AI Model and Development; AI Application and Decision-Making; and Continuous Learning and Improvement. Each process has stakeholders with specific roles and responsibilities: for example, AI Developers are in charge during AI Model and Development, while AI Users/Individuals oversee the use stage of the life cycle, realizing value from the AI product or service.
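As a rough illustration of this stage-to-stakeholder mapping (the role names follow this summary rather than the Code's normative text, and processes the summary does not assign are marked as such), it could be captured as a simple lookup:

```python
# Stage-to-stakeholder pairs as described in this summary of the Code.
# Entries marked None are not assigned a role in the summary itself.
RESPONSIBILITIES = {
    "AI Model and Development": "AI Developers",
    "AI Application and Decision-Making": None,
    "Continuous Learning and Improvement": None,
    "Use of the AI product or service": "AI Users/Individuals",
}

def responsible_party(process):
    """Return the stakeholder named for a process, if the summary names one."""
    return RESPONSIBILITIES.get(process) or "not specified in this summary"

print(responsible_party("AI Model and Development"))  # -> AI Developers
```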

Beyond an AI application's core functionality, non-functional characteristics are crucial for its successful implementation. These encompass aspects such as trustworthiness, transparency, predictability, and reliability; addressing them during development helps minimize potential AI risks. The Code also proposes ethical considerations for developing and using AI systems, setting out principles to follow when designing, developing, and using AI, e.g., accountability, fairness, transparency, and human control. It further notes social concerns about AI, such as privacy and bias in decision-making, since AI is a prediction tool and lacks human-like reasoning.

The Code also sets out measures for organizations developing and managing powerful generative AI systems. These organizations are bound by the Code because it applies to AI systems with broad public access and potential for misuse, and it aims to address risks proactively before specific regulations are in place. While developers and managers have distinct roles, the Code encourages their collaboration to mitigate the potential risks of these systems. Moreover, since AI risks are interconnected and different participants in the AI life cycle have varying levels of knowledge and control, a comprehensive approach is needed: incorporating AI risk management into existing enterprise risk strategies and considering it alongside other critical risks for a more efficient and holistic approach.
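To picture what folding AI risks into an existing enterprise risk register might look like, consider the sketch below, a hypothetical structure not prescribed by the Code, in which AI risks are scored and prioritized with the same conventions as other enterprise risks.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One row in an enterprise risk register; all fields are illustrative."""
    risk_id: str
    description: str
    category: str            # e.g. "AI", "operational", "financial"
    likelihood: int          # 1 (rare) .. 5 (almost certain)
    impact: int              # 1 (negligible) .. 5 (severe)
    owner: str
    mitigations: list = field(default_factory=list)

    @property
    def score(self):
        # Common likelihood-times-impact scoring convention.
        return self.likelihood * self.impact

# AI risks sit in the same register as other enterprise risks, so they
# are prioritized with the same scheme rather than managed in isolation.
register = [
    RiskEntry("R-014", "Biased output from a scoring model", "AI", 3, 4,
              "Head of Data Science", ["bias audit", "human review of decisions"]),
    RiskEntry("R-002", "Data-centre outage", "operational", 2, 5,
              "IT Operations", ["failover site"]),
]
for entry in sorted(register, key=lambda r: r.score, reverse=True):
    print(entry.risk_id, entry.category, entry.score)
```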

In conclusion, KEBS' draft Code of Practice for AI Applications offers valuable guidance for organizations looking to develop, deploy, and use AI responsibly. By outlining key principles, processes, and stakeholder roles, the Code empowers organizations to build trustworthy AI systems that are transparent, explainable, and designed to minimize potential risks.