
PRESC: Performance Robustness Evaluation for Statistical Classifiers

PRESC is a tool to help data scientists, developers, academics and activists evaluate the performance of machine learning classification models, particularly in areas that tend to be underexplored, such as generalizability and bias. Its current focus on misclassifications, robustness and stability helps bring bias and fairness analyses into performance reports, so that these considerations can be taken into account when crafting or choosing between models. This is a project sprint from the "AI IRL Hackathon - Building Trustworthy AI". Registration and more information: http://mzl.la/taihackathon
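To make the kind of analysis described above concrete, here is a minimal, hedged sketch of a misclassification and stability check of the sort the project targets. It does not use PRESC's own API; it is an illustrative example built on scikit-learn and a standard benchmark dataset, showing how per-class error and its variation across random train/test splits can be measured.

```python
# Illustrative sketch only: NOT the PRESC API. It demonstrates the kind of
# misclassification and stability analysis PRESC aims to support, using
# scikit-learn (assumed to be installed).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Stability check: how much does per-class error vary across random splits?
per_class_error = []
for seed in range(10):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=seed, stratify=y
    )
    model = make_pipeline(
        StandardScaler(), LogisticRegression(max_iter=1000)
    ).fit(X_tr, y_tr)
    cm = confusion_matrix(y_te, model.predict(X_te))
    # Fraction of each true class that is misclassified.
    per_class_error.append(1 - np.diag(cm) / cm.sum(axis=1))

per_class_error = np.array(per_class_error)
print("mean per-class error:", per_class_error.mean(axis=0))
print("std across splits:   ", per_class_error.std(axis=0))
```

Looking at error rates per class, and at how they fluctuate across resamplings, surfaces disparities and instabilities that a single aggregate accuracy score would hide; this is the gap in standard performance reports that PRESC is meant to address.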

Additional information

Type: Contribute-a-thons and Hack-a-thons
Language: English