Data security and a commitment to harm mitigation in AI adoption will foster trust and enable collective learning at a time when our knowledge of capabilities and consequences is limited.
In this context, we understand harm mitigation to encompass both the prevention and the repair of harm stemming from uses of AI. Even with a “Do No Harm” approach, the potential for damage remains, so persistent, careful attention to transparency in early AI design and implementation is essential. We recognize that effective AI depends on high-quality, de-biased information, and that robust, active oversight and safeguarding of the inputs to AI tools are essential to producing the right outputs.