“We practice and consult on the basis of ethical AI. We recognize that humans are biased. But how can we build AI systems that are far less biased?”
Arsanjani’s Laws for Cognitive Systems:
1. A cognitive system should not be trained on data curated in the dark. The hidden layers of a neural network may be a black box, but the data itself should not be “dark”: its sourcing, inputs, annotations, training workflow, and outcomes should be transparent. Training, testing, and validation data sets should be traceable, white-boxed, and accessible for enterprise governance and compliance.
2. The data curation process must be transparent. The ends do not justify the means: transparency in the data curation process for machine learning is paramount. This means traceability and governance around where data is sourced, who curated and annotated it, and who verified the annotations.
3. Cognitive system recommendations must provide traceable justification. Outcomes must be coupled with references explaining why the system made a given decision or recommendation.
4. Where human health is at stake, cognitive systems with relevant but different training backgrounds must cross-check each other before making a recommendation to a human expert.
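The cross-check in law 4 can be sketched in a few lines. This is a minimal illustration, not a prescribed design: the model objects, case fields, and unanimous-agreement threshold below are all hypothetical.

```python
def cross_checked_recommendation(models, case, agreement_threshold=1.0):
    """Collect predictions from independently trained models and only
    forward a recommendation when they agree; otherwise escalate the
    case to a human expert along with all of the dissenting views."""
    predictions = [m(case) for m in models]
    top = max(set(predictions), key=predictions.count)
    agreement = predictions.count(top) / len(predictions)
    if agreement >= agreement_threshold:
        return {"recommendation": top, "status": "agreed", "votes": predictions}
    return {"recommendation": None, "status": "escalate_to_expert", "votes": predictions}

# Two hypothetical models trained on different cohorts:
model_a = lambda case: "treatment_x" if case["marker"] > 0.5 else "treatment_y"
model_b = lambda case: "treatment_x" if case["marker"] > 0.7 else "treatment_y"

print(cross_checked_recommendation([model_a, model_b], {"marker": 0.9}))
print(cross_checked_recommendation([model_a, model_b], {"marker": 0.6}))
```

The key design point is that disagreement never silently resolves to one model's answer; it surfaces the conflict to the human expert.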
These practices, or initial variations of them, should be considered part of an overall governance process for training and curating machine learning datasets for cognitive systems.
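The traceability that laws 1 and 2 call for amounts to carrying an audit trail alongside every data set. A minimal sketch of such a record; the class and field names are illustrative assumptions, not part of any particular governance standard:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class DatasetProvenance:
    """Audit trail for a training/testing/validation data set:
    who sourced, curated, annotated, and verified it."""
    dataset_id: str
    source: str                                     # where the raw data came from
    curators: list = field(default_factory=list)    # who assembled the data set
    annotators: list = field(default_factory=list)  # who labeled it
    verifiers: list = field(default_factory=list)   # who checked the annotations
    workflow_notes: str = ""

# A hypothetical record for a health-care data set:
record = DatasetProvenance(
    dataset_id="oncology-imaging-v3",
    source="consented hospital imaging archive",
    curators=["data-team"],
    annotators=["radiologist-pool"],
    verifiers=["qa-board"],
    workflow_notes="annotations double-read; disagreements adjudicated",
)

# The record is plain data, so it can be exported for governance review:
print(asdict(record))
```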
5. Engage in initiatives that minimize bias. Eliminating bias may not be feasible or practical. Seek to minimize bias in:
— Race, Gender, Age, Background, Political Affiliation, Weight, etc.
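Minimizing bias starts with measuring it. One common check is the demographic parity gap: the spread in favorable-outcome rates across protected groups. A minimal sketch, where the group names, decisions, and any pass/fail threshold are illustrative assumptions:

```python
def demographic_parity_gap(outcomes):
    """outcomes maps group -> list of binary decisions (1 = favorable).
    Returns the gap between the highest and lowest favorable-outcome
    rates, plus the per-group rates; a large gap flags possible bias."""
    rates = {g: sum(d) / len(d) for g, d in outcomes.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical decisions grouped by a protected attribute:
gap, rates = demographic_parity_gap({
    "group_a": [1, 1, 0, 1],  # 75% favorable
    "group_b": [1, 0, 0, 0],  # 25% favorable
})
print(gap, rates)
```

A gap this large (0.5) would fail most parity thresholds and would trigger a review of the training data and model under the governance process above.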