= Welcome to TrustyAI 👋

image::../images/trustyai_icon.svg[Static,300]

https://trustyai-explainability.github.io/trustyai-site/main/main.html[TrustyAI] is an open source Responsible AI toolkit supported by Red Hat and IBM. TrustyAI provides tools for a variety of responsible AI workflows, such as:

* Local and global model explanations
* Fairness metrics
* Drift metrics
* Text detoxification
* Language model benchmarking
* Language model guardrails

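To make one of the workflows above concrete, here is a minimal plain-Python sketch of a common drift metric, the population stability index (PSI). This is illustrative only, not TrustyAI's API: the function name, binning scheme, and sample data are all hypothetical.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """Compare the binned distribution of a live sample against a reference one.
    A common rule of thumb reads PSI > 0.25 as significant drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a constant feature

    def fraction(data, b):
        in_bin = sum(
            1 for x in data
            if lo + b * width <= x < lo + (b + 1) * width
            or (b == bins - 1 and x == hi)  # fold the top edge into the last bin
        )
        return max(in_bin / len(data), 1e-6)  # floor avoids log(0) on empty bins

    return sum(
        (fraction(actual, b) - fraction(expected, b))
        * math.log(fraction(actual, b) / fraction(expected, b))
        for b in range(bins)
    )

reference = [0.1 * i for i in range(100)]   # training-time feature values
live = [0.1 * i + 3.0 for i in range(100)]  # production values, shifted upward

print(population_stability_index(reference, reference))  # 0.0: identical distributions
print(population_stability_index(reference, live))       # well above 0.25: drift
```

TrustyAI's own drift tooling computes metrics in this spirit as a monitored service over deployed models, rather than as a one-off function call.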
TrustyAI is a default component of https://opendatahub.io/[Open Data Hub] and https://www.redhat.com/en/technologies/cloud-computing/openshift/openshift-ai[Red Hat OpenShift AI], and has integrations with projects like https://github.com/kserve/kserve[KServe], https://github.com/caikit/caikit[Caikit], and https://github.com/vllm-project/vllm[vLLM].

== 🗂️ Our Projects 🗂️
* xref:trustyai-core.adoc[TrustyAI core], the core TrustyAI Java module, containing fairness metrics, AI explainers, and other XAI utilities.
* xref:trustyai-service.adoc[TrustyAI service], TrustyAI-as-a-service, a REST service for fairness metrics and explainability algorithms, including ModelMesh integration.
* xref:trustyai-operator.adoc[TrustyAI operator], a Kubernetes operator for the TrustyAI service.
* xref:python-trustyai.adoc[Python TrustyAI], a Python library allowing the usage of TrustyAI's toolkit from Jupyter notebooks.
* xref:component-kserve-explainer.adoc[KServe explainer], a TrustyAI sidecar that integrates with KServe's built-in explainability features.
* xref:component-lm-eval.adoc[LM-Eval], a generative text model benchmark and evaluation service, leveraging lm-evaluation-harness and Unitxt.

== 📖 Resources 📖
=== Documentation
The Components tab in the sidebar provides documentation for a number of TrustyAI components. Also check out:

- https://opendatahub.io/docs/monitoring-data-science-models/#configuring-trustyai_monitor[Open Data Hub Documentation]
- https://trustyai-explainability-python.readthedocs.io/en/latest/[TrustyAI Python Documentation]

=== Tutorials
- https://trustyai-explainability.github.io/trustyai-site/main/installing-opendatahub.html[The Tutorials sidebar tab] provides walkthroughs of a variety of different TrustyAI flows, like bias monitoring, drift monitoring, and language model evaluation.
- https://github.com/trustyai-explainability/trustyai-explainability-python-examples[trustyai-explainability-python-examples]: Examples of how to get started with the Python TrustyAI library.
- https://github.com/trustyai-explainability/odh-trustyai-demos[odh-trustyai-demos]: Demos of the TrustyAI service within Open Data Hub.

=== Demos
- Coming soon

=== Blog Posts
- https://www.redhat.com/en/blog/introduction-trustyai[An Introduction to TrustyAI]
- https://developers.redhat.com/articles/2024/08/01/trustyai-detoxify-guardrailing-llms-during-training[TrustyAI Detoxify: Guardrailing LLMs during training]

=== Papers
- https://arxiv.org/abs/2104.12717[TrustyAI Explainability Toolkit]

=== Development Notes
- https://github.com/trustyai-explainability/reference/tree/main[TrustyAI Reference] provides scratch notes on various common development and testing flows.

== 🤝 Join Us 🤝
Check out our https://github.com/trustyai-explainability/community[community repository] for https://github.com/orgs/trustyai-explainability/discussions[discussions] and our https://github.com/trustyai-explainability/community?tab=readme-ov-file#community-meetings[Community Meeting information].

The https://github.com/orgs/trustyai-explainability/projects/10[project roadmap] offers a view of the new tools and integrations the project developers are planning to add.

TrustyAI uses the https://github.com/opendatahub-io/opendatahub-community/blob/master/governance.md[ODH governance model] and https://github.com/opendatahub-io/opendatahub-community/blob/master/CODE_OF_CONDUCT.md[code of conduct].

=== Links
* https://github.com/trustyai-explainability/community?tab=readme-ov-file#community-meetings[Community Meeting Info]
* https://github.com/orgs/trustyai-explainability/discussions[Discussion Forum]
* https://github.com/trustyai-explainability/trustyai-explainability/blob/main/CONTRIBUTING.md[Contribution Guidelines]
* https://github.com/orgs/trustyai-explainability/projects/10[Roadmap]

== Glossary
[horizontal]
XAI::
XAI refers to artificial intelligence systems designed to provide clear, understandable explanations of their decisions and actions to human users.
Fairness::
AI fairness refers to the design, development, and deployment of AI systems in a way that ensures they operate equitably and do not include biases or discrimination against any individual or group.
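As a concrete illustration of the group-fairness metrics in this family, here is a minimal plain-Python sketch of statistical parity difference (SPD). This is not TrustyAI's API; the function name and data are hypothetical.

```python
def statistical_parity_difference(outcomes_privileged, outcomes_unprivileged):
    """SPD = P(favorable | unprivileged) - P(favorable | privileged).
    Values near 0 suggest parity; a common rule of thumb flags |SPD| > 0.1."""
    p_priv = sum(outcomes_privileged) / len(outcomes_privileged)
    p_unpriv = sum(outcomes_unprivileged) / len(outcomes_unprivileged)
    return p_unpriv - p_priv

# 1 = favorable outcome (e.g. loan approved), one entry per applicant
privileged = [1, 1, 1, 0, 1, 1, 0, 1]    # 6/8 approved
unprivileged = [1, 0, 0, 1, 0, 0, 1, 0]  # 3/8 approved

print(statistical_parity_difference(privileged, unprivileged))  # -0.375
```

A negative value means the unprivileged group receives the favorable outcome less often; the TrustyAI service monitors metrics like this continuously over a deployed model's inference traffic.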