Ksama Arora

CREATE AND EXPLORE RESPONSIBLE AI DASHBOARD FOR A MODEL IN AZURE ML

Microsoft's five Responsible AI principles:

- Fairness and inclusiveness
- Reliability and safety
- Privacy and security
- Transparency
- Accountability

To help implement these Responsible AI principles, you can create a Responsible AI dashboard.

Responsible AI (RAI) Dashboard

The Responsible AI (RAI) dashboard is a Microsoft tool that provides a single interface to help you implement RAI in practice effectively and efficiently.

Explore Responsible AI components

The available tool components and the insights they add to the dashboard are:

- Add Explanation to RAI Insights dashboard: model interpretability (feature importance)
- Add Causal to RAI Insights dashboard: causal analysis
- Add Counterfactuals to RAI Insights dashboard: counterfactual what-if analysis
- Add Error Analysis to RAI Insights dashboard: error analysis

Create a Responsible AI dashboard

To create a RAI dashboard, you need to build a pipeline by using the built-in components. The pipeline should:

- Start with the RAI Insights dashboard constructor component.
- Include at least one of the RAI tool components.
- End with the Gather RAI Insights dashboard component to collect all insights into one dashboard.
- Optionally also end with the Gather RAI Insights score card component to generate the scorecard.

Important note:

A Responsible AI scorecard automatically generates cohort analysis reports, including metrics such as MAE per cohort in the dataset. The scorecard is produced by an extra component at the end of the dashboard pipeline, as sketched below.
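A rough sketch of how that scorecard step might be wired in, assuming the component is published in the azureml registry as microsoft_azureml_rai_tabular_score_card and takes the gathered dashboard plus a JSON generation config; the input names dashboard and pdf_generation_config, the config file path, and the placement after the gather step are assumptions, not confirmed by this document:

# Retrieve the scorecard component from the azureml registry (assumed name)
rai_scorecard_component = ml_client_registry.components.get(
    name="microsoft_azureml_rai_tabular_score_card", label="latest"
)

# Inside the pipeline function shown in the next section, after the gather step:
# turn the combined dashboard into a PDF scorecard driven by a JSON config
# (thresholds, cohorts, and so on); the parameter names below are assumptions
rai_scorecard_job = rai_scorecard_component(
    dashboard=rai_gather_job.outputs.dashboard,
    pdf_generation_config=Input(type="uri_file", path="./rai_scorecard_config.json"),
)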

When you run the pipeline, a Responsible AI dashboard (and scorecard) is generated and associated with the registered model. After training and registering the model, you can create the RAI dashboard in three ways:

- Using the CLI v2 (YAML pipeline definition)
- Using the Python SDK v2
- Using Azure Machine Learning studio for a no-code experience

Using the Python SDK to build and run the pipeline
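The snippet that follows relies on a few objects it does not define: a client for the workspace, a client for the public azureml registry (which hosts the built-in RAI components), and the ID of a previously registered MLflow model. A minimal setup sketch, where the subscription, resource group, workspace, and model name are placeholder assumptions:

from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient

credential = DefaultAzureCredential()

# Client for your own workspace (replace the placeholders with your values)
ml_client = MLClient(
    credential=credential,
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Client for the built-in "azureml" registry that hosts the RAI components
ml_client_registry = MLClient(
    credential=credential,
    subscription_id=ml_client.subscription_id,
    resource_group_name=ml_client.resource_group_name,
    registry_name="azureml",
)

# Identifiers of the registered MLflow model the dashboard is built for
# (assumed model name and version - adjust to your registered model)
model_name = "diabetes-model"
expected_model_id = f"{model_name}:1"
azureml_model_id = f"azureml:{expected_model_id}"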

from azure.ai.ml import Input, dsl
from azure.ai.ml.constants import AssetTypes

# Retrieve the built-in RAI components from the azureml registry
# (ml_client_registry is the registry client created above)
rai_constructor_component = ml_client_registry.components.get(
    name="microsoft_azureml_rai_tabular_insight_constructor", label="latest"
)
rai_explanation_component = ml_client_registry.components.get(
    name="microsoft_azureml_rai_tabular_explanation", label="latest"
)
rai_gather_component = ml_client_registry.components.get(
    name="microsoft_azureml_rai_tabular_insight_gather", label="latest"
)

@dsl.pipeline(
    compute="aml-cluster",
    experiment_name="Create RAI Dashboard",
)
def rai_decision_pipeline(
    target_column_name, train_data, test_data
):
    # Initiate the RAIInsights
    create_rai_job = rai_constructor_component(
        title="RAI dashboard diabetes",
        task_type="classification",
        model_info=expected_model_id,
        model_input=Input(type=AssetTypes.MLFLOW_MODEL, path=azureml_model_id),
        train_dataset=train_data,
        test_dataset=test_data,
        target_column_name=target_column_name,
    )
    create_rai_job.set_limits(timeout=30)

    # Add explanations
    explanation_job = rai_explanation_component(
        rai_insights_dashboard=create_rai_job.outputs.rai_insights_dashboard,
        comment="add explanation", 
    )
    explanation_job.set_limits(timeout=10)

    # Combine everything
    rai_gather_job = rai_gather_component(
        constructor=create_rai_job.outputs.rai_insights_dashboard,
        # the gather component collects insights through its insight_1..insight_4 inputs
        insight_4=explanation_job.outputs.explanation,
    )
    rai_gather_job.set_limits(timeout=10)

    rai_gather_job.outputs.dashboard.mode = "upload"

    return {
        "dashboard": rai_gather_job.outputs.dashboard,
    }
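With the pipeline defined, you still have to call it with real data assets, give the dashboard output an upload location, and submit the job. A hedged sketch, assuming the workspace client ml_client from the setup above, a target column named "Diabetic", and registered MLTable data assets called diabetes-train and diabetes-test (all assumptions):

import uuid
from azure.ai.ml import Input, Output

# Build the pipeline job from registered training/test data assets (assumed names and versions)
insights_pipeline_job = rai_decision_pipeline(
    target_column_name="Diabetic",
    train_data=Input(type="mltable", path="azureml:diabetes-train:1", mode="download"),
    test_data=Input(type="mltable", path="azureml:diabetes-test:1", mode="download"),
)

# Upload the generated dashboard to a unique folder in the default datastore
rand_path = str(uuid.uuid4())
insights_pipeline_job.outputs.dashboard = Output(
    path=f"azureml://datastores/workspaceblobstore/paths/{rand_path}/dashboard/",
    mode="upload",
    type="uri_folder",
)

# Submit the pipeline; once it completes, the dashboard (and scorecard, if added)
# appears on the registered model's Responsible AI tab in Azure ML studio
ml_client.jobs.create_or_update(insights_pipeline_job)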

EVALUATE RESPONSIBLE AI DASHBOARD


EXPLORE ERROR ANALYSIS:

With the error analysis feature, you can review and understand how errors (false predictions) are distributed across the dataset. In error analysis, you can explore two types of visuals (a component sketch follows this list):

- Error tree map: shows which combinations of subgroups (cohorts) the model makes more false predictions for.
- Error heat map: gives a grid overview of the model's errors over the scale of one or two features.
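To get these visuals into the dashboard, the error analysis tool component has to be part of the pipeline from the previous section. A sketch, assuming the component is published in the azureml registry as microsoft_azureml_rai_tabular_erroranalysis and that its output is named error_analysis:

# Retrieve the error analysis component from the azureml registry (assumed name)
rai_erroranalysis_component = ml_client_registry.components.get(
    name="microsoft_azureml_rai_tabular_erroranalysis", label="latest"
)

# Inside the pipeline function, after create_rai_job:
error_job = rai_erroranalysis_component(
    rai_insights_dashboard=create_rai_job.outputs.rai_insights_dashboard,
)
error_job.set_limits(timeout=10)

# Pass the result to one of the gather component's insight inputs, for example:
#   insight_3=error_job.outputs.error_analysis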

EXPLORE EXPLANATIONS:

Explanations help you understand how each input feature influences the model's predictions (how the model reaches a certain prediction). Model explainers are run to calculate feature importance. You can explore two types of feature importance:

- Aggregate feature importance: how each feature influences the model's predictions on the test data overall.
- Individual feature importance: how each feature influences a specific, individual prediction.

EXPLORE COUNTERFACTUALS:

Counterfactuals are used to explore how the model's output would change in response to a change in input. You can explore counterfactual what-if examples by selecting a data point and the desired model prediction for that point. A sketch of the corresponding pipeline component follows.
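As with the other insights, counterfactuals are added through a tool component in the pipeline. A sketch, assuming the component is published in the azureml registry as microsoft_azureml_rai_tabular_counterfactual and exposes total_CFs and desired_class parameters (names taken to be the built-in component's, but treat them as assumptions):

# Retrieve the counterfactual component from the azureml registry (assumed name)
rai_counterfactual_component = ml_client_registry.components.get(
    name="microsoft_azureml_rai_tabular_counterfactual", label="latest"
)

# Inside the pipeline function: generate 10 what-if examples per data point,
# aiming for the opposite predicted class of this binary classifier
counterfactual_job = rai_counterfactual_component(
    rai_insights_dashboard=create_rai_job.outputs.rai_insights_dashboard,
    total_CFs=10,
    desired_class="opposite",
)
counterfactual_job.set_limits(timeout=60)

# Pass counterfactual_job.outputs.counterfactual to one of the gather component's insight inputs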

EXPLORE CAUSAL ANALYSIS:

Causal analysis uses statistical techniques to estimate the average effect of a feature on a desired prediction. It analyzes how certain interventions or treatments may lead to a better outcome, across a population or for a specific individual. Causal analysis offers three tabs (a component sketch follows this list):

- Aggregate causal effects: the average effect of treatment features on the outcome across the whole test set.
- Individual causal effects: the effect of treatment features on an individual data point.
- Treatment policy: which parts of the data benefit most from a given treatment.
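Causal analysis is likewise added as a tool component. A sketch, assuming the component is published in the azureml registry as microsoft_azureml_rai_tabular_causal and takes a JSON-encoded list of treatment features (the parameter name treatment_features and the chosen features are assumptions):

# Retrieve the causal analysis component from the azureml registry (assumed name)
rai_causal_component = ml_client_registry.components.get(
    name="microsoft_azureml_rai_tabular_causal", label="latest"
)

# Inside the pipeline function: estimate the causal effect of selected
# treatment features (JSON-encoded list of column names) on the target
causal_job = rai_causal_component(
    rai_insights_dashboard=create_rai_job.outputs.rai_insights_dashboard,
    treatment_features='["BMI", "Age"]',
)
causal_job.set_limits(timeout=120)

# Pass causal_job.outputs.causal to one of the gather component's insight inputs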

Responsible AI dashboard components by purpose:

Model debugging: Error analysis, Data explorer, Model overview, Fairness assessment, Model interpretability, Counterfactual what-if
Business decision making: Causal analysis, Counterfactual what-if