Happy new year!
One of the big trends in AI this year is that AI is maturing as a domain: we are using it to address complex, high-stakes problems. That means we will need to be more aware of the potential downsides of AI. I believe a new trend could manifest: responsible AI by design.
a) Responsible AI could be at the highest level of the stack
b) This is needed because AI will increasingly be used in areas where the risk is high (as opposed to low-risk areas such as advertising)
c) The analogy is ‘privacy by design’, i.e. you consider privacy from first principles (as opposed to as a retrospective add-on) https://lnkd.in/eSUVm2s3
I also see other papers and references pointing in this direction: https://lnkd.in/ei8QUqVF and https://lnkd.in/eTHcCRQ9
If we break this down further:
Responsible AI is an approach to developing, assessing, and deploying AI systems in a safe, trustworthy, and ethical way. AI systems are the product of many decisions made by those who develop and deploy them.
The principles of responsible AI are fairness and inclusiveness, reliability and safety, transparency, privacy and security, and accountability.
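To make "by design" concrete, one of these principles (fairness) can be checked as part of the development loop rather than after deployment. Below is a minimal sketch in Python; the metric (demographic parity difference) and the toy data are my own illustrative assumptions, not something from the post or the Microsoft documentation.

```python
# A minimal sketch of "responsible AI by design": evaluating a fairness
# metric during development instead of as a retrospective add-on.
# The metric and toy data here are illustrative assumptions.

def demographic_parity_difference(preds, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, gr in zip(preds, groups) if gr == g]
        rates[g] = sum(members) / len(members)
    a, b = rates.values()
    return abs(a - b)

# Toy example: binary predictions for two demographic groups.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(preds, groups)
print(f"demographic parity gap: {gap:.2f}")  # group A: 0.75, group B: 0.25
```

A check like this can be wired into a training pipeline as a gate, so that a model with too large a gap never ships, which is the "by design" idea in miniature.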
These are depicted in the diagram below.
Image source: https://learn.microsoft.com/en-us/azure/machine-learning/concept-responsible-ai
I will be exploring the idea of responsible AI by design in my teaching. If we can successfully consider responsible AI from the outset, it will lead to better AI and to wider deployment of AI.