Domain 1: Housing Price

Domain 1 Experiments

Experiment 1

Condition 1 (AI recommendation)

Introduction - Condition 1

Training - Condition 1

Main Task - Condition 1

Condition 2 (AI-explanation-only)

Introduction - Condition 2

Training - Condition 2

Main Task - Condition 2

Condition 3 (Hypothesis-driven)

Introduction - Condition 3

Training - Condition 3

Main Task - Condition 3

Condition 3 has three variants:

Low

Medium

High

Subjective Questions: The twelve subjective questions are listed below. Q1-Q4 are evaluated separately, each measuring a distinct construct (in control, preference, mental demand, and system complexity). Q5-Q12 are aggregated into a single trust measure. The trust questions are based on Hoffman et al. (2018), Metrics for Explainable AI: Challenges and Prospects.

  1. In control: I feel in control of the decision-making process when using this decision aid. (0 = Disagree strongly; 10 = Agree strongly)
  2. Preference: I would like to use this decision aid frequently. (0 = Disagree strongly; 10 = Agree strongly)
  3. Mental demand: I found this task difficult. (0 = Disagree strongly; 10 = Agree strongly)
  4. System complexity: The decision aid was complex. (0 = Disagree strongly; 10 = Agree strongly)
  5. Trust: I am confident in the decision aid. I feel that it works well. (0 = Disagree strongly; 10 = Agree strongly)
  6. Trust: The decision aid is very predictable. (0 = Disagree strongly; 10 = Agree strongly)
  7. Trust: The decision aid is very reliable. I can count on it to be correct all the time. (0 = Disagree strongly; 10 = Agree strongly)
  8. Trust: I feel safe that when I rely on the decision aid I will get the right answers. (0 = Disagree strongly; 10 = Agree strongly)
  9. Trust: The decision aid is efficient in that it works very quickly. (0 = Disagree strongly; 10 = Agree strongly)
  10. Trust: I am wary of the decision aid. (0 = Disagree strongly; 10 = Agree strongly)
  11. Trust: The decision aid can perform the task better than a novice human user. (0 = Disagree strongly; 10 = Agree strongly)
  12. Trust: I like using the decision aid for decision-making. (0 = Disagree strongly; 10 = Agree strongly)
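The scoring scheme above (Q1-Q4 kept separate, Q5-Q12 averaged into a trust score) can be sketched as follows. This is a minimal illustration, not the study's analysis code; the response keys (`Q1`..`Q12`) and the reverse-coding of the negatively worded Q10 ("I am wary of the decision aid") are assumptions not specified above.

```python
def score_responses(responses, reverse_q10=True):
    """Score one participant's 0-10 ratings, keyed "Q1".."Q12" (assumed keys).

    Q1-Q4 map to separate measures; Q5-Q12 are averaged into a trust score.
    """
    scores = {
        "in_control": responses["Q1"],
        "preference": responses["Q2"],
        "mental_demand": responses["Q3"],
        "system_complexity": responses["Q4"],
    }
    trust_items = []
    for i in range(5, 13):
        value = responses[f"Q{i}"]
        if i == 10 and reverse_q10:
            # Q10 is negatively worded; reverse-coding is an assumption here.
            value = 10 - value
        trust_items.append(value)
    scores["trust"] = sum(trust_items) / len(trust_items)
    return scores
```

For example, a participant who answers 5 on every item would receive a trust score of 5.0 (the reverse-coded Q10 also contributes 10 - 5 = 5).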

Experiment 2

The design and tasks in Experiment 2 mirror those in Experiment 1, with two main differences:
  1. Participants were asked to explain why they selected an option.
  2. Subjective questions were not administered.
