Company Updates

Product Update - Bias Monitoring v2.1

Arthur is thrilled to announce the official release of Bias Monitoring v2.1, previously in beta: a new way to visualize sensitive attributes and programmatically combine them into subpopulations that can be compared to one another in real time.

In the new version of our Bias tab, you can view your model’s predictions (or outcomes) segmented by relevant subgroups in the population. An adjustable fairness threshold lets you quickly identify whether your model is causing disparate impact for protected classes.
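
To make the disparate impact check concrete, here is a minimal, self-contained sketch of the arithmetic behind such a threshold (an illustration, not Arthur’s implementation): each subgroup’s positive-outcome rate is divided by the reference group’s rate, and ratios below a chosen cutoff, such as 0.8 under the common four-fifths rule, are flagged.

```python
import pandas as pd

def disparate_impact_ratio(df, group_col, outcome_col, reference_group):
    # Positive-outcome rate per subgroup, divided by the reference group's rate.
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates[reference_group]

# Toy data: loan approvals (1) vs. denials (0) for two subgroups.
data = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

ratios = disparate_impact_ratio(data, "group", "approved", reference_group="A")
print(ratios)        # A: 1.00, B: 0.33
print(ratios < 0.8)  # group B falls below the four-fifths threshold
```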

During model setup, you can designate which attributes you want to monitor for bias. You can toggle which attributes to examine and identify a reference group, and the corresponding fairness metrics and visualizations are computed automatically for immediate insight.
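
As a rough sketch of what this setup can look like with the Arthur Python SDK (the credentials, model ID, and attribute names below are placeholders, and exact method signatures may differ across SDK versions):

```python
import pandas as pd
from arthurai import ArthurAI
from arthurai.common.constants import InputType, OutputType

# Connect to Arthur (URL and key are placeholders).
connection = ArthurAI(url="https://app.arthur.ai", access_key="YOUR_API_KEY")

# Register a tabular classifier.
arthur_model = connection.model(
    partner_model_id="loan_approval_v1",
    input_type=InputType.Tabular,
    output_type=OutputType.Multiclass,
)

# Infer the model's schema from a small reference dataset.
reference_df = pd.DataFrame({
    "income":   [40_000, 85_000, 62_000, 51_000],
    "gender":   ["F", "M", "F", "M"],
    "approved": [0, 1, 1, 0],
})
arthur_model.build(reference_df, ground_truth_column="approved")

# Flag the sensitive attribute for bias monitoring, then save.
arthur_model.get_attribute("gender").set(monitor_for_bias=True)
arthur_model.save()
```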

If you have already registered your model’s input attributes, you can set a subset of them to be monitored for bias.
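
Continuing the sketch above, enabling bias monitoring for a subset of already-registered attributes might look like this (get_attributes and the monitor_for_bias flag are assumptions to verify against your SDK version):

```python
# Assuming arthur_model from the previous sketch, with its input
# attributes already registered: enable bias monitoring for a subset.
SENSITIVE = {"gender", "age_bucket"}

for attribute in arthur_model.get_attributes():
    if attribute.name in SENSITIVE:
        attribute.set(monitor_for_bias=True)

arthur_model.save()
```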

You can monitor for bias even if the sensitive attributes are not direct inputs to your model.
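
A sensitive attribute the model never sees as a feature can be registered as non-input data and still be monitored for bias. Again a sketch: the Stage.NonInputData constant and the add_attribute signature shown here are assumptions to check against your SDK version.

```python
from arthurai.common.constants import Stage, ValueType

# Register a sensitive attribute that is not a model feature; it is sent
# alongside each inference as non-input data and monitored for bias.
arthur_model.add_attribute(
    name="ethnicity",
    stage=Stage.NonInputData,
    value_type=ValueType.String,
    categorical=True,
    monitor_for_bias=True,
)
arthur_model.save()
```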

It’s not enough to measure model accuracy alone. With Arthur, you can also measure your model’s impact through monitoring and act on it to improve outcomes for the people you serve.

“Most companies are not vetting their technologies in any way. There are land mines — AI land mines — in use cases that are currently available in the marketplace, inside companies, that are ticking time bombs waiting to go off.”