Veritas is a versatile verification tool for tree ensembles. You can use Veritas to generate adversarial examples, check robustness, find dominant attributes, or simply ask domain-specific questions about your model.
Evaluating machine learning models has traditionally relied on reporting performance metrics (e.g., accuracy, ROC AUC, squared error) on a so-called test set: a held-aside portion of the data that was not used for training the model. A good value for these metrics is usually enough to convince us that the model successfully learned what it needed to learn; after all, it can predict values for examples it has never seen before. However, deployed machine-learned models increasingly have to conform to requirements (e.g., legal ones) or exhibit specific properties (e.g., fairness). This has motivated the development of verification approaches that are applicable to learned models: given a specific property, these techniques verify, that is, prove, whether the property holds for the model. For applications where such requirements are crucial, good predictive performance on an unseen test set alone is no longer sufficient for selecting a model.
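To make such a verification question concrete, the sketch below probes the robustness of a tree ensemble empirically by sampling small perturbations around an input. Everything in it is an illustrative assumption (a scikit-learn gradient-boosted model, an arbitrary `epsilon` radius, a simple sampling loop) and it does not use the Veritas API. Sampling can, at best, stumble upon an adversarial example; when it finds none, nothing is proven. Closing that gap, by proving whether a counterexample exists at all, is exactly what a verification tool like Veritas is for.

```python
# Illustrative sketch only: an empirical (sampling-based) robustness probe for a
# tree ensemble. The model, dataset, and `epsilon` radius are made-up
# placeholders; Veritas itself is not used here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = GradientBoostingClassifier(n_estimators=50, max_depth=3).fit(X, y)

def empirical_robustness_probe(x, epsilon=0.1, n_trials=1000, seed=0):
    """Sample perturbations in an L-infinity ball of radius epsilon around x and
    check whether any of them flips the ensemble's prediction. A flip yields a
    concrete adversarial example; finding no flip proves nothing."""
    rng = np.random.default_rng(seed)
    base = model.predict(x.reshape(1, -1))[0]
    perturbed = x + rng.uniform(-epsilon, epsilon, size=(n_trials, x.shape[0]))
    flips = model.predict(perturbed) != base
    return flips.any(), perturbed[flips][:1]

flipped, counterexample = empirical_robustness_probe(X[0])
if flipped:
    print("found an adversarial example:", counterexample)
else:
    print("no prediction flip found within epsilon (which proves nothing)")
```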