AI Trustworthy Toolkit

Our AI Trustworthy Toolkit enables businesses to assess the security, privacy, fairness, and explainability of their AI systems and build a robust, ethical, and trustworthy AI ecosystem.

Our AI Trustworthy Toolkit Features

Over the years, leveraging data to train accurate and robust AI models has been the primary concern of most enterprises. However, heedlessly sharing data and trained AI models threatens the privacy of those enterprises. Several inversion attacks have been cleverly designed to extract sensitive and private information from a trained AI model, and the inability of these models to keep training material private is magnifying concerns about using AI. Sharing datasets with third parties for AI-related tasks is similarly challenging. To address these concerns, the AiSafety toolkit is equipped with a Privacy component that provides versatile tools for data scientists and data governors. This component accepts a dataset and generates several metrics and multi-level reports that identify how safe the data is to share with third-party users. In addition, using data anonymization algorithms such as k-anonymity, l-diversity, and t-closeness, the component can generate an anonymized version of the data for sharing with third parties. Finally, this component accepts AI models as input and generates a detailed report demonstrating the possible vulnerabilities of a given AI model to model inversion and membership inference attacks.
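
To make the anonymization idea concrete, below is a minimal sketch of a k-anonymity check: a dataset is k-anonymous when every combination of quasi-identifiers (attributes that could be joined with outside data to re-identify someone) is shared by at least k records. The function and column names here are hypothetical illustrations, not the toolkit's actual API.

```python
import pandas as pd

def k_anonymity(df: pd.DataFrame, quasi_identifiers: list) -> int:
    """Return the dataset's k value: the size of the smallest group of
    records that share the same quasi-identifier values."""
    return int(df.groupby(quasi_identifiers).size().min())

# Hypothetical records where 'age' and 'zip_code' act as quasi-identifiers
# and 'salary' is the sensitive attribute.
records = pd.DataFrame({
    "age":      [34, 34, 47, 47, 47],
    "zip_code": ["90210", "90210", "10001", "10001", "10001"],
    "salary":   [72000, 65000, 51000, 58000, 60000],
})
print(k_anonymity(records, ["age", "zip_code"]))  # -> 2: smallest group has 2 records
```

The same grouping logic underlies l-diversity and t-closeness, which additionally constrain how the sensitive attribute is distributed within each group.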

As methods of collecting data multiply, the labels and judgments already embedded in that data raise questions of discrimination for the AI models trained on it. Unintentional unfairness occurs when a decision produces different outcomes for different groups. Machine learning algorithms are widely used to decide real-world outcomes from given data, such as mortgage loan approval or pay rate, so the AI community needs to minimize unintentional biases. Unfairness in an AI system can happen due to i) data adequacy: rare patterns in a dataset may be omitted by AI models in the name of generalization; ii) data bias: even when the data is sufficient to represent all groups, historical outcomes may affect future decisions; for example, the data may show that women are paid less, and this is hard to remove from future decisions, a phenomenon also known as negative legacy; or iii) model adequacy: some AI model architectures may represent some groups better than others. In the AI Trustworthy Toolkit we have developed an AI-Fairness component that tries to avoid any biases in data and models. The component takes training data and a trained model and identifies the degree of fairness in the inputted data or model.
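
To illustrate the kind of measurement such a component performs, the sketch below computes the demographic parity difference, one common fairness metric: the gap in positive-outcome rates between two groups. The predictions and group encoding are hypothetical, and this is only one of many fairness metrics, not the component's actual implementation.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical loan-approval predictions for two demographic groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.75 vs 0.25 -> 0.5
```

A value near zero suggests the two groups receive positive outcomes at similar rates; larger gaps flag potential bias worth investigating.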

Despite eye-catching advancements in artificial intelligence and machine learning algorithms, demand for more transparent and interpretable AI models has grown among the models’ stakeholders. Utilizing AI in sensitive contexts such as healthcare, fintech, and security entails a clear explanation of its generated outputs. Furthermore, different AI stakeholders require different information about the explainability of their AI systems. For instance, a data scientist and a medical practitioner look at the interpretability of an output through different lenses. To keep humans in the AI loop, the AiSafety toolkit includes an Explainer component that accepts an AI system’s training data and a trained model and identifies the explainability level of the inputted data or model. Under the umbrella of AI explainability, the toolkit applies various algorithms, such as global white-box explainers and local black-box explainers, as well as several metrics and reports that shed light on AI models as black-box decision-makers.
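
As one concrete example of the global black-box style of explanation mentioned above, the sketch below uses scikit-learn's permutation importance, which treats the model purely as a prediction function and measures how much shuffling each feature degrades held-out accuracy. This illustrates the technique in general, not the toolkit's own implementation.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque model, then explain it globally by measuring how much
# shuffling each feature hurts held-out accuracy.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```

A local explainer would instead answer why the model produced a particular prediction for a single input, for example by perturbing that one input's features.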

Machine learning has brought numerous benefits to our lives and has become one of the most significant emerging technologies in the market, making mobile apps and other services better and smarter than before. However, the engineering of ML systems is still at an early stage of development, and attacks on these components are not yet well understood. ML vulnerabilities have therefore become a critical concern in applied AI industries. With the AI Trustworthy Toolkit, we ensure our customers have an environment that keeps their AI engines safe against poisoning attacks and evasion attacks.
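
To give a concrete picture of an evasion attack, the sketch below implements the fast gradient sign method (FGSM) against a simple logistic-regression model: the attacker nudges each input feature in the direction that increases the model's loss, pushing a correctly classified sample toward the decision boundary. The weights and sample are hypothetical stand-ins for a deployed model.

```python
import numpy as np

def fgsm_evasion(x, y, w, b, eps=0.1):
    """Fast Gradient Sign Method against logistic regression: perturb the
    input in the direction that increases the loss, so the perturbed
    sample is more likely to be misclassified."""
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # model's probability for class 1
    grad_x = (p - y) * w                           # gradient of the log-loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Hypothetical model weights and a sample the model classifies correctly.
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 0.5]), 1
x_adv = fgsm_evasion(x, y, w, b, eps=0.5)
print(x, "->", x_adv)  # the adversarial copy drifts toward the decision boundary
```

Poisoning attacks work earlier in the pipeline: instead of perturbing inputs at prediction time, the attacker corrupts the training data so the model learns flawed behavior before it is ever deployed.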