Modern data-analytic methods and tools, including Artificial Intelligence (AI) and Machine Learning (ML) models, depend on correlations. Such approaches, however, do not account for confounding in the data, which prevents accurate modeling of cause and effect and often leads to bias. Edge cases, data and concept drift, and emerging phenomena further undermine the correlations that AI relies on, so new methods for ongoing test and evaluation are needed. The Carnegie Mellon University Software Engineering Institute (CMU SEI) has developed an AI Robustness (AIR) tool that lets users gauge AI and ML classifier performance with data-based confidence.
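To illustrate the failure mode described above, the following is a minimal sketch (not the AIR tool itself) of how a correlation-based classifier can be fooled by a confounder: a synthetic confounder `z` drives both the feature and the label during training, and once the data drifts and that link breaks, accuracy collapses to chance. All variable names and distributions here are hypothetical choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical training data: a confounder z drives both the
# feature x and the label y, so x correlates with y only through z.
z = rng.integers(0, 2, n)             # confounder
x = z + rng.normal(0, 0.3, n)         # feature tracks the confounder
y = z                                 # label equals the confounder

# A purely correlation-based "classifier": threshold x at 0.5.
predict = lambda x: (x > 0.5).astype(int)
train_acc = (predict(x) == y).mean()

# After drift, the feature decouples from the confounder:
# x is now noise unrelated to the label, so the learned
# correlation no longer carries any signal.
x_drift = rng.normal(0.5, 0.3, n)
y_drift = rng.integers(0, 2, n)
drift_acc = (predict(x_drift) == y_drift).mean()

print(f"training accuracy:   {train_acc:.2f}")   # high (about 0.95)
print(f"post-drift accuracy: {drift_acc:.2f}")   # near chance (about 0.50)
```

The classifier never modeled the causal mechanism, only a correlation induced by `z`, which is why its apparent performance does not survive the shift.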
Learn more about AIR.