AuditBot

Chatbot to help everyday Instagram users audit AI algorithmic biases
Project Overview
My team set out to research how best to harness the power of everyday users in auditing algorithmic bias in artificial intelligence. In an effort to streamline and improve AI algorithmic auditing for everyday Instagram users, we propose AuditBot.
Project Details
Duration: 3 months
Team: 5 students

Skills: Think-aloud usability testing, semi-structured interviews, contextual inquiry, affinity diagramming, stakeholder and empathy mapping, speed dating, and experience prototyping.
Background Research
Data Analysis

To begin exploring the problem space, my team and I analyzed 3 popular datasets, including the Twitter Image Cropping dataset, to understand how users express their thoughts about instances of algorithmic bias. By generating word clouds and performing sentiment analysis in R, we found that users largely focused on negative actions rather than positive solutions, and often used inciting speech.



R-generated word cloud and sentiment distribution graph
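The sentiment tally and word-frequency counts behind a chart like the one above can be sketched with a simple lexicon-based approach. Our actual analysis was done in R on the full datasets; the sketch below is a Python approximation, and the `NEGATIVE`/`POSITIVE` word lists, stopword set, and sample posts are illustrative placeholders, not the real lexicons or data:

```python
from collections import Counter
import re

# Tiny illustrative lexicons -- real analyses use lexicons with
# thousands of entries (e.g. the Bing or NRC lexicons in R).
NEGATIVE = {"biased", "unfair", "racist", "broken", "wrong", "harmful"}
POSITIVE = {"fix", "improve", "fair", "transparent", "good", "helpful"}


def tokenize(text):
    """Lowercase a post and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())


def sentiment_counts(posts):
    """Tally positive vs. negative lexicon hits across a list of posts."""
    tally = Counter()
    for post in posts:
        for word in tokenize(post):
            if word in NEGATIVE:
                tally["negative"] += 1
            elif word in POSITIVE:
                tally["positive"] += 1
    return tally


def word_frequencies(posts, stopwords=frozenset({"the", "is", "a", "it", "and"})):
    """Word counts of the kind that feed a word cloud (size ~ frequency)."""
    counts = Counter()
    for post in posts:
        counts.update(w for w in tokenize(post) if w not in stopwords)
    return counts


# Hypothetical sample posts standing in for the dataset text.
posts = [
    "This cropping algorithm is biased and harmful",
    "The results are just wrong, totally unfair",
    "Hope they fix it and make it fair",
]
print(sentiment_counts(posts))  # negative mentions outnumber positive ones
```

The same two passes (lexicon matching for sentiment, stopword-filtered counts for the word cloud) mirror what packages like `tidytext` do in R at much larger scale.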


Literature Review

I reviewed 10-15 current research papers on the topic to ground myself in existing work. Previously, I had also written an in-depth literature review on biases in NLP and face recognition algorithms.
Example of algorithmic bias in a Google image search for "CEO": only 11% of the top results depict women
Problem Definition
How might we create a bi-directional feedback loop between developers and end-users to mitigate the harms of algorithmic biases in AI?
To understand the problem in more depth and empathize with all those impacted, I created an empathy map and stakeholder map.
Generative User Research
Think-Aloud Study

Study Goal:
My team and I chose this method to understand how users perceive their own actions when reporting instances of algorithmic bias. Five participants were given the task of reporting a biased YouTube ad so we could analyze their behaviors and probe their attitudes and perceptions.

Findings:
1. Users have difficulty recognizing bias

2. Users feel a common sentiment of invalidation from the company/platform; they feel that their report won't be heard.

3. Users prefer simpler processes and choose immediate, easy-to-access options.

Consent form and portion of discussion guide outlining tasks
Since we found that users lacked the motivation to go through the reporting process, in part due to their lack of confidence in the company's follow-up actions, we decided to investigate this further in our next phase of research.


Contextual Inquiry

Study Goal:
We wanted to understand how, and to what extent, social proof impacts a user's motivation to copy others' actions, especially on social media. Contextual inquiry let us observe users in action: we had 5 diverse participants browse social media for 10 minutes while we observed their behavior and probed them with questions about their online interactions.
Consent form and portion of discussion guide outlining tasks
Synthesis from Interpretation Sessions
After completing 5 in-depth interpretation sessions to synthesize the learnings (as seen above), I landed on the following findings:

1. Anonymity reduces the pressure people feel when interacting with content

2. People leverage the recommendation algorithms of multiple platforms to tailor their experience

3. Maintaining relationships drives people's reactions to content
Evaluative Research
Speed Dating

At this point we were zeroing in on solutions that increase social proof in the AI algorithm auditing and reporting process. In total, we created 15 storyboards with possible solutions to the problem. During our focus group, participants discussed and commented on each storyboard.

Findings:
1. Users prefer anonymity when reporting biased or harmful content

2. Social proof is a motivator, but anonymity should be preserved

3. Users prefer intrinsic rewards over extrinsic rewards when reporting bias, and appreciate transparency

Storyboard Example
Solution Principles
Anonymity
Product Feature:
Low-pressure social proofing maximizes the incentive to report bias: users can see the number of friends who reported an ad, but not their names.

Evidence:
"There's a guise of anonymity that comes with being on the Internet - I might talk about something I normally wouldn't talk about because there is less social stigma online."
Simplicity
Product Feature:
An easily discoverable chatbot icon addresses all possible auditing and reporting concerns

Evidence:
"I don't know if there's an option to report this ad...I would just thumbs it down because I don't know how else to go about reporting it"
Transparency
Product Feature:
Users can access updates on any report they make to confirm that developers are making progress. This opens a channel for conversation.

Evidence:
"Sometimes, it's just like 'ok so the report has been made, and someone has looked at it'...I want to know that work is being done on that front and that we're moving along the process"
Solution Overview
Based on our insights, we created a two-pronged solution. First, at the bottom of recommended content, users can view the number of friends who reported that content; this information is a strong motivator for potential reporters. Users can then open the second feature, AuditBot, in their direct messages to do a variety of things: submit a report, seek resources or social support, receive updates on the progress of their report, and engage in guided conversation. AuditBot lets users participate in the reporting/auditing process in the ways they desire.
Our solution motivates users to contribute to everyday auditing by showing a numerical indication of previous reports, improves their confidence by providing resources/educational material, and promotes transparency by giving updates on the progress of their report.