Integrating AI into diversity initiatives poses risks as well as opportunities
As of November 2018, an estimated five billion consumers interacted with data; by 2025 that number is predicted to rise to six billion, or 75 per cent of the world's population. The ability to understand and properly utilise this burgeoning wealth of information will be vital to remaining competitive. This data resource, however, needs context, ethics and human intelligence when applied to AI, as there are often variables at play that only humans can interpret and understand.
The core requirements when integrating an AI platform for D&I purposes are twofold: first, that the AI applications are used ethically; and secondly, that the AI programme itself has been designed to minimise inherent bias and to support ethical use. When it comes to AI, development priorities such as ethics, shareability, scalability and security are often treated as peripheral to the core goals of building a functional product and getting it to market quickly.