Hydrosphere Interpretability is a service that explains the decisions behind machine learning models. It interprets your model's predictions so you can act on that knowledge, and it explains how exactly your data has changed over time, not just whether it has changed.
- Model Prediction Explanation. Explain predictions produced by your model. Use this knowledge to understand the dependency between your data and the target variable.
- Data Drift Explanation. Sometimes it's not enough to fire an alert about a change in your data distribution. Explaining the change helps you find its root cause and react in time.
- Black Box. Built-in interpretability methods do not require any knowledge of your model's inner structure. Specify the inputs, select the target to explain, and you are good to go!
- High-dimensional Visualization. Visualize high-dimensional data in 2D with built-in transformers. Explore the clusters and outliers in your production data and discover regions of novel data.
- GDPR Support. Interpret the decisions behind your AI model to stay compliant with GDPR regulations.
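The service does not document its exact explanation algorithm here, but the black-box idea above can be illustrated with a minimal, assumption-laden sketch: permutation-based attribution, which scores each input feature by how much shuffling it changes the predictions of an opaque `predict` function. The `permutation_attribution` name and the toy model are hypothetical, not part of Hydrosphere's API.

```python
import numpy as np

def permutation_attribution(predict, X, seed=0):
    """Score each feature of a black-box `predict` function by how much
    shuffling that feature's column changes the predictions.
    `predict` maps an (n, d) array to an (n,) array of outputs."""
    rng = np.random.default_rng(seed)
    base = predict(X)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])  # break the feature's link to the output
        scores.append(np.mean(np.abs(predict(Xp) - base)))
    return np.array(scores)

# Toy black-box model: only the first of three features matters.
model = lambda X: 3.0 * X[:, 0]
X = np.random.default_rng(1).normal(size=(500, 3))
scores = permutation_attribution(model, X)
```

Because the toy model ignores features 1 and 2, their scores are exactly zero, while feature 0 receives a large score; a real service would wrap the deployed model's prediction endpoint in place of `model`.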
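The built-in transformers for 2D visualization are likewise not specified here; as a rough sketch of the general idea, the following projects high-dimensional data onto its top two principal components with plain NumPy. The `project_2d` helper is hypothetical and stands in for whatever dimensionality-reduction transformer the service actually uses.

```python
import numpy as np

def project_2d(X):
    """Project (n, d) data to 2D via PCA: center the data, take the
    top-2 right singular vectors, and project onto them."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))   # 200 points in 10 dimensions
Y = project_2d(X)                # 200 points in 2 dimensions
```

The resulting 2D coordinates can then be scatter-plotted to inspect clusters and outliers in production traffic.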