Build 2020 : Responsible ML

This year I am attending Build remotely, as is the rest of the world. Instead of feeling left out, I am feeling more engaged than ever!

At the core of Microsoft’s AI are the principles of fairness, reliability & safety, privacy & security, inclusiveness, transparency, and accountability. As AI capabilities increase along with adoption, it is important that we also leverage tools that enable us to practice AI responsibly. I was delighted to hear that Responsible ML is at the forefront of Build announcements for AI.

Responsible ML provides us with tools to ensure that as practitioners we:

Understand machine learning models – Are we able to interpret and explain model behavior? Are we able to assess and mitigate model unfairness?

Protect people and their data – Are we actively working to prevent data exposure with differential privacy?

Control the end-to-end machine learning process – Are we documenting the machine learning life cycle?
Announced at Build this week were multiple Responsible ML open source packages. Because these tools are freely available, every machine learning developer can consider incorporating Responsible ML into the development cycle of their AI projects.

InterpretML – An open source package that enables developers to understand their models’ behavior and the reasons behind individual predictions.
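InterpretML itself exposes glassbox models such as the Explainable Boosting Machine; the snippet below is only a hand-rolled sketch of the underlying idea (not InterpretML’s API), reading a crude global explanation off a linear model’s coefficients. The synthetic data and the importance formula are my own illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Synthetic data: feature 0 drives the label; features 1-2 are pure noise.
X = rng.normal(size=(n, 3))
y = (X[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Crude global "explanation": coefficient magnitude scaled by feature spread.
importance = np.abs(model.coef_[0]) * X.std(axis=0)
ranking = np.argsort(importance)[::-1]
print(ranking)  # feature 0 should rank first
```

A glassbox library like InterpretML refines this idea into per-feature shape functions and local, per-prediction explanations rather than a single coefficient vector.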

Fairlearn – A Python package that enables developers to assess and mitigate observed unfairness in their models.

WhiteNoise – An open source library that enables developers to review and validate the differential privacy of their dataset and analysis. Also included are components for data access, allowing data consumers to dynamically inject ‘noise’ directly into their queries.
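Without reproducing WhiteNoise’s own API, the core mechanism such libraries build on — adding Laplace noise calibrated to a query’s sensitivity and a privacy budget ε — can be sketched in a few lines of NumPy. The values and function name here are illustrative assumptions:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release a query result with epsilon-differential privacy.

    Noise scale = sensitivity / epsilon: smaller epsilon (stronger
    privacy) means more noise is injected into the answer.
    """
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

rng = np.random.default_rng(42)
true_count = 120  # e.g., the exact number of rows matching a query
# A counting query changes by at most 1 per individual, so sensitivity = 1.
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5, rng=rng)
print(round(noisy_count, 2))
```

The data consumer only ever sees `noisy_count`, so no single individual’s presence or absence can be confidently inferred from the released answer.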

Datasheets for Models – A Python SDK that lets developers document assets within a model, making metadata about models easier to access.

Microsoft continues to lead the way in setting the standard for how we use AI. I continue to be impressed by their dual focus of enabling customers to not only build out their AI capabilities, but also address stakeholder concerns regarding corporate responsibility.

I am excited to say that my friends and I from the Global AI Community were given a preview of some of these features. We will be dropping in on @HBoelman’s Twitch session with guest @TessFerrandez to provide demos and feedback on our experiences.

Eve Pardi (@EvePardi) will talk about Interpret-Text. You can read her intro blog [here]

Willem Meints (@Willem_Meints) is going to discuss Fairlearn. You can find his intro blog [here]

Sammy Deprez (@SammyDeprez) will take you into the magical world of Confidential ML with Microsoft Seal and OpenEnclave [here].

And finally, I will be giving a demo on the open source WhiteNoise library that enables analysts to leverage Differential Privacy.

Want more AI throughout the year? Be engaged! Join us at one of the Global AI Community events!