Designing Trust in Human-AI Systems


2025-08-27



Building Trust in AI through Design

AI is gradually integrating into our lives - algorithms recommend movies, AI assistants draft emails, and banks use AI to assess loan risks. But have you ever stopped to think: why do we "trust" or "distrust" an AI?

It's like making friends. If someone is often inconsistent, evasive, or makes you feel like they're hiding something, you'd probably find them unreliable. The same goes for AI. Trust is the key to user adoption.

When designing AI products, we need to think beyond "does it work?" to "will users feel comfortable using it?" Trust isn't just a feeling; it's a product capability that can be designed, governed, and validated.


1. Trust = Understandable + Controllable + Predictable + Secure

We can break down AI trust into four core elements:

Understandable

Users don't trust what they can't understand. AI systems must not only make decisions but also demonstrate their process. Systems that only output results without explanation feel like black boxes, breeding suspicion rather than trust. The more users understand how an AI reached a conclusion, the more willing they are to rely on it.

Controllable

Users retain the power to review, correct, or even disable AI (Human-in-the-loop). It's like seeing a "Chef's Recommendation" on a menu - you can choose to order it, pick something else, or even leave the restaurant if you don't see anything you like.

Predictable

Users tend to trust systems that provide stable, repeatable results. When AI models produce unpredictable or contradictory outputs, confidence plummets. Predictability isn't just about technical accuracy; it also includes how AI handles errors, communicates uncertainty, and continuously improves.

Security and Privacy

Trust in AI isn't built on performance alone. Users need assurance that AI systems protect their data, operate responsibly, and maintain user control. Security and privacy are the foundation of user trust.

Applying These Elements Across Three Layers:

  • Interface Design → What users see
  • Model Explanation → How AI's internal workings are explained in understandable terms
  • Organizational Governance → Internal processes that ensure AI is used safely, reliably, and consistently

NIST's AI Risk Management Framework (AI RMF) implements this through GOVERN → MAP → MEASURE → MANAGE:

  • GOVERN: Establish organizational policies, roles, responsibilities, standards, and processes
  • MAP: Identify risks AI systems may bring and align them with business goals and regulations
  • MEASURE: Use quantitative metrics to assess AI system performance, reliability, bias, and risks
  • MANAGE: Based on measurement results, develop improvement actions, update processes, and refine models or strategies

[Figure: NIST AI RMF Framework]
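To make the cycle concrete, here is a hedged Python sketch of how one entry in an AI risk register might move through the four functions; the field names, metric, and threshold are illustrative assumptions, not part of the NIST framework itself.

# A hedged sketch of one risk-register entry moving through the AI RMF cycle.
# Field names, the metric, and the threshold are illustrative assumptions only.
risk_entry = {
    # GOVERN: who owns the risk and which internal policy applies.
    "owner": "Credit Risk Committee",
    "policy": "Internal Responsible AI Policy v2",
    # MAP: the identified risk, tied to a business goal and regulation.
    "risk": "Loan model under-approves thin-credit-file applicants",
    "regulation": "Fair lending requirements",
    # MEASURE: a quantitative metric with an acceptable threshold.
    "metric": "approval-rate gap between groups",
    "measured_value": 0.08,
    "threshold": 0.05,
}

# MANAGE: turn the measurement into an action and feed it back into the process.
if risk_entry["measured_value"] > risk_entry["threshold"]:
    risk_entry["action"] = "Retrain with rebalanced data and re-review next quarter"
else:
    risk_entry["action"] = "Monitor; no change required"

print(risk_entry["action"])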


2. XAI: Making AI's Thought Process Transparent

The goal of Explainable AI (XAI) is to make AI's reasoning process more transparent and understandable, so people can know "how it thinks."

Two Approaches to XAI

  • Ante-hoc: Interpretability is built in from the start, in the model's own design. The rules are explicit and the decision process is transparent - you can see directly how inputs affect outputs. Example: Decision Tree models. These are often called White Box or Glass Box models.
  • Post-hoc: The model is explained after it has been trained, typically by adding a "Surrogate Model" to interpret the black-box model - like a translator inferring and explaining how the AI might be thinking based on its inputs and outputs.

[Figure: XAI Approaches]
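As a concrete illustration of the two approaches, here is a minimal Python sketch, assuming scikit-learn is installed; the dataset, model choices, and hyperparameters are illustrative. It trains an ante-hoc glass-box tree directly on the data, then fits a post-hoc surrogate tree to a black-box model's predictions and reports how faithfully the surrogate mimics it.

# A minimal sketch contrasting ante-hoc and post-hoc explainability,
# assuming scikit-learn is available. Dataset and hyperparameters are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Ante-hoc (white/glass box): a shallow tree that is interpretable by design.
glass_box = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# A black-box model whose internals are hard to read directly.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Post-hoc: a surrogate "translator" fitted to the black box's outputs, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box it is meant to explain.
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"Surrogate fidelity to the black box: {fidelity:.2%}")

A low fidelity score is itself a trust signal: it warns that the explanation may not reflect what the black box is actually doing.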

Explanation Scope: Global vs Local

Refers to the coverage of explanations generated by XAI methods:

  • Global: Explains the model's overall behavior across all of its input data, giving a big-picture view of which factors drive its decisions. Decision Tree algorithms are inherently global: the whole tree is the explanation.
  • Local: Explains a specific case only, helping users understand "why the model made this particular decision here," thereby increasing trust in that result. In a decision tree, a local explanation corresponds to a single path through the tree.

[Figure: Global vs Local Explanation]
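For example, here is a minimal sketch (again assuming scikit-learn, with its built-in Iris dataset as a stand-in) of the same decision tree explained globally, via feature importances and the full rule set, and locally, via the single path taken by one sample.

# A minimal sketch of global vs local explanation on a decision tree,
# using scikit-learn's built-in Iris dataset (illustrative only).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

# Global: which features matter for the model as a whole, plus the full rule set.
for name, importance in zip(iris.feature_names, tree.feature_importances_):
    print(f"{name}: {importance:.2f}")
print(export_text(tree, feature_names=iris.feature_names))

# Local: the single decision path taken for one specific sample.
sample = iris.data[[0]]
path = tree.decision_path(sample)
print("Nodes visited for this sample:", path.indices.tolist())
print("Predicted class:", iris.target_names[tree.predict(sample)[0]])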

Making Explanations "Useful" to People

  • Audience-specific: When designing, consider different users' needs and provide explanations useful to them. Executives need strategic logic; auditors/regulators need traceability; frontline staff need operational guidance; developers need debugging clues.
  • Format variety: Text summaries, visualizations, interactive dashboards - avoid overwhelming users with too many technical details at once. (Nielsen Norman Group research also emphasizes clear language and layered information.)

3. Turning "Trust" into UI: 8 Applicable Design Patterns

  1. AI Labeling: Clearly mark content as "AI-generated/suggested," keeping users aware of AI's presence (see IBM Carbon for AI guidelines).
  2. Confidence/Uncertainty Indicators: Don't just provide answers; show the AI's confidence intervals, data coverage, or suggested alternatives when confidence is low. Avoid misleading absolute statements.
  3. The "Why" (Reasoning): Explain the top 1-3 reasons in plain language, with an option to expand details; in professional scenarios, also provide advanced data such as SHAP/LIME values. (A minimal sketch of patterns 1-3 follows this list.)
  4. Reversibility & Human Review: Provide review, undo, and error-reporting controls; feed user corrections back to continuously improve the system.
  5. Data Provenance & Scope: State data sources, time ranges, and usage limitations; label AI-generated content with its origin.
  6. Known Limitations Notice: Alert users to the AI's risks and limitations, e.g., a medical application noting that diagnoses require confirmation by a professional.
  7. Privacy & Fairness Cues: Clearly explain how data is used, whether it is used for training, options to disable personalization, monitoring mechanisms, and the date of the most recent review.
  8. Transparency Documentation Links: Transparency documents are like the AI's manual - make them available within the product so both users and auditors can understand what the AI is doing.
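The sketch below, with hypothetical field names and illustrative values, shows how patterns 1-3 might travel together in a single suggestion payload and be rendered as user-facing copy that hedges when confidence is low.

# A hedged sketch (hypothetical field names, illustrative values) of a suggestion
# payload carrying an AI label, a confidence score, and plain-language reasons.
suggestion = {
    "source": "AI-generated",                      # Pattern 1: AI labeling
    "text": "Restock item #4821 next week",
    "confidence": 0.62,                            # Pattern 2: confidence indicator
    "reasons": [                                   # Pattern 3: the "why," in plain language
        "Sales of this item rose 18% over the last 3 weeks",
        "Current stock covers only 9 days of demand",
    ],
}

def render_suggestion(s: dict) -> str:
    """Turn the payload into user-facing copy, hedging when confidence is low."""
    confidence_note = (
        "low confidence - consider reviewing alternatives"
        if s["confidence"] < 0.7
        else f"confidence {s['confidence']:.0%}"
    )
    reasons = "\n".join(f"  - {r}" for r in s["reasons"][:3])
    return f"[{s['source']}] {s['text']} ({confidence_note})\nWhy:\n{reasons}"

print(render_suggestion(suggestion))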

4. Institutionalizing Trust: Governance & Compliance

  • NIST AI RMF (US): Uses the GOVERN→MAP→MEASURE→MANAGE cycle, requiring risk identification, measurement, and management throughout the product design lifecycle. Currently one of the most widely adopted practical guides across industries.
  • ISO/IEC 42001 (International Standard): The world's first AI management system standard, providing a framework for establishing policies, risk management, supply chain control, and continuous improvement, helping align with various regulations.
  • EU AI Act (EU): The world's first comprehensive AI regulation, imposing transparency and risk management obligations based on risk classification for high-risk systems (like recruitment, education, healthcare); phased implementation from 2025, requiring transparency and compliance for high-risk use cases and general-purpose models.
  • Major Company Internal Policies: Companies like Microsoft and IBM require transparency notes, reliability standards, and review checkpoints, making explainability, transparency, privacy, fairness, and robustness core pillars of their responsible AI programs.

Key point: Build requirements into the process, don't just document them right before launch.

[Figure: Governance Framework]


5. Documenting Transparency: Making Audits and Communication Easier

Model Cards

Document the AI model itself: its purpose, performance across different groups, limitations, evaluation methods.

[Figure: Model Card Example]
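A model card can be as simple as a structured record that ships with the model. The sketch below is hypothetical; the field names and values are illustrative, loosely following the spirit of the original Model Cards proposal rather than any required schema.

# A minimal, hypothetical model card sketch. Every value below is illustrative,
# not real; field names loosely follow the Model Cards idea, not a fixed schema.
model_card = {
    "model_name": "loan-risk-classifier-v3",
    "intended_use": "Pre-screening of consumer loan applications; not for final decisions",
    "training_data": "Internal applications, 2019-2023 (see the accompanying datasheet)",
    "performance": {
        "overall_auc": 0.87,
        "auc_by_group": {"age_under_30": 0.84, "age_30_and_over": 0.88},
    },
    "limitations": [
        "Not validated for business loans",
        "Performance degrades for applicants with thin credit files",
    ],
    "evaluation": "Held-out quarterly data, reviewed by the model risk team",
}

# The same fields can feed UI cues directly, e.g. a "known limitations" notice.
print("Known limitations:", "; ".join(model_card["limitations"]))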

Datasheets for Datasets

Document the training data for AI: sources, sampling methods, cleaning approaches, limitations, copyright, etc.

[Figure: Datasheets for Datasets]

Transparency Notes

Explain AI functionality at the product level to users: what the AI can and cannot do in this product, its usage limitations, and other considerations.

[Figure: Transparency Note Example]

UX Applications

These documents aren't just for audits; they can be translated into transparent UI cues, for example:

  • A small label next to an AI-generated suggestion: "This suggestion is based on sales data from the past 3 years"
  • Displaying the AI's confidence score or applicability range
  • A clickable explanation of "why the AI made this decision"

In other words: Document Transparency → Interface Transparency - users can understand AI more intuitively, increasing trust.

[Figure: Confidence in Models]


6. Quantifying "Trust": KPI and Dashboard Recommendations

  • Behavioral: Adoption rate, review rate, override rate, completion rate, decision time
  • Quality: Accuracy, recall rate, calibration error, drift detection rate, number of security incidents
  • Fairness: Error disparities across different groups, rejection rate disparities
  • Governance: Document update frequency, incident reporting and resolution time, audit pass rate

Establish a Trust Dashboard that is reviewed regularly in product reviews and by risk committees, in line with the MEASURE→MANAGE spirit of NIST/ISO.
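As a starting point for such a dashboard, here is a minimal Python sketch that computes two of the KPIs above, the behavioral override rate and a simple expected calibration error, from a hypothetical decision log; the log format is an assumption for illustration.

# A minimal sketch of two trust KPIs from a (hypothetical) decision log:
# the behavioral override rate and a simple expected calibration error (ECE).
import numpy as np

# Each entry: (model confidence, whether the model was correct, whether the user overrode it).
log = [
    (0.92, True, False), (0.85, True, False), (0.60, False, True),
    (0.75, True, False), (0.55, False, True), (0.88, True, False),
]
confidence = np.array([c for c, _, _ in log])
correct = np.array([ok for _, ok, _ in log], dtype=float)
overrode = np.array([o for _, _, o in log])

# Behavioral KPI: how often users overrode the AI's suggestion.
print(f"Override rate: {overrode.mean():.0%}")

# Quality KPI: expected calibration error over equal-width confidence bins.
bins = np.linspace(0.0, 1.0, 6)
ece = 0.0
for lo, hi in zip(bins[:-1], bins[1:]):
    in_bin = (confidence > lo) & (confidence <= hi)
    if in_bin.any():
        ece += in_bin.mean() * abs(confidence[in_bin].mean() - correct[in_bin].mean())
print(f"Expected calibration error: {ece:.3f}")

Tracked over time, a rising override rate or calibration error is an early signal that user trust, or model quality, is drifting.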


Conclusion

Trust is a deliverable product capability: human-centered explanations, honest uncertainty, verifiable governance.

When you connect UI Design (the visible) × XAI (the understandable) × Governance (the actionable), AI can truly amplify value - provided it remains understandable, controllable, and predictable.

References

  • NIST Technical Series Publications
  • NIST Digital Strategy
  • EU Artificial Intelligence Act
  • AP News
  • ISO
  • KPMG
  • Microsoft
  • AAAI Open Access
  • Wiley Online Library
  • shap.readthedocs.io
  • DataCamp
  • oecd.ai
  • ansi.org
  • carbondesignsystem.com


Jodie Wu

UIUX Designer

Has a weird obsession with fluffy things. During meetings, there are always two little minions by her side cheering her on with gusto. If it comes to a fight, they can do nothing but meow.
