August 2018 - October 2020
Twitter is a global social media platform that serves the public conversation around breaking news, entertainment, sports, politics, and more. During my time at Twitter, I had the pleasure of working on many different projects across a few teams, including the Creation / Conversations and Identity / Health teams.
In early 2020 a team known as Incentives formed with the goal of incentivizing users to behave "better" on the platform. Between February and October 2020, the team ran 3 moderated usability tests, 2 unmoderated usability tests, 2 in-app qualitative surveys, 1 lightning decision jam workshop, and 5 live experiments. All of this was conducted in the hope of creating a systematic suite of nudges to reduce toxicity and change behavioral norms on the platform.
When things get heated, people may say things they don't mean. To give them a chance to rethink a reply, we ran an experiment on iOS and Android with a prompt that, when a reply uses language that could be harmful or offensive, gives the author the option to revise it before it's published.
There are too many toxic replies on the platform, causing participants to leave the conversation or regret what they said.
Customers will be less likely to contribute toxic Tweets to the conversation if they are presented with an explicit consequence and an opportunity to pause and consider their contribution.
Customers will encounter fewer toxic Tweets and gain more value from their experience if we promote and incentivize healthier contributions.
Early in the project we explored nudging the user in real time as they composed. We considered offering suggestions during composition, as well as showing a Tweet score that decreased as toxicity increased. Another concept was to append the nudge to the bottom of the "Tweet sent" toast when we detected toxicity, while always giving users the ability to undo the Tweet from the toast.
Knowing we couldn't nudge users in the composer for technical reasons, and that the post-Tweet toast wasn't a surface area our team owned, we decided to use the halfsheet and test a few different tactics with users to understand which resonated with them and would be most likely to change negative contributions to the platform.
Copy: We ran multivariate tests to learn what language resonated with users.
CTAs: We chose a primary blue button style for the "Revise" action and a neutral button style for the "Send" action. We wanted to see if this would encourage more revisions.
Qual Survey: After the user saw the nudge we sent them a survey to help gather more insights.
ML Model: The model that detected toxicity in a Tweet needed more work to account for nuances in a conversation, so we shut the experiment down to make improvements to that model.
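To illustrate the tradeoff the team was navigating, here is a minimal sketch (purely hypothetical, not Twitter's actual system) of how a nudge is typically gated on a model's toxicity score. The function name, score, and threshold value are all assumptions for illustration: raising the threshold reduces false-positive nudges on nuanced replies at the cost of missing some genuinely toxic ones.

```python
def should_nudge(toxicity_score: float, threshold: float = 0.9) -> bool:
    """Only prompt the author when the model is highly confident.

    toxicity_score: a model output in [0, 1], where higher means
    more likely toxic (hypothetical scale for this sketch).
    """
    return toxicity_score >= threshold


# A confidently toxic reply triggers the nudge...
print(should_nudge(0.95))  # True
# ...while a borderline reply (e.g. sarcasm or banter the model can't
# distinguish) is left alone, reducing false-positive prompts.
print(should_nudge(0.6))   # False
```

In practice the hard part is not the gate itself but the model behind the score, which is why conversational nuance (quoting, in-group banter, reclaimed language) pushed the team to pause and improve the model rather than tune the threshold.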
Based on what we learned through quant and qual insights, we made the following improvements...
Education: Users were generally confused and frustrated about why they received the nudge, so we included a "Get more info" link that provided them more context.
Copy: We again ran multiple copy options, but moved away from mentioning that others could "report" your Tweet on the first screen. We chose a more positive tone, suggesting that revision could help keep Twitter safe and open.
CTA: We equalized all the button styles so as not to push our agenda too strongly. We also added a delete button, since deleting the Tweet could at times be more valuable than a small revision.
In our second experiment, 71% of users didn't think their Tweet was offensive, and many of the previous survey complaints persisted (double standards, political bias, censorship, etc.). This could be because the model was returning false positives and/or because users didn't want such an intrusive nudge. We decided to return to one of our initial concepts: appending the prompt to the toast, which wouldn't block the user's progress and could easily be swiped away to dismiss it. Studies on this lightweight nudge wrapped up in early October 2020.
With the Revise nudge, the team was testing how prompting the user after they created toxic content could change their behavior. With the Preemptive nudge, the team was testing how to stop toxicity before it's composed. If you know a user is replying to toxic content, can you intervene in that moment and nudge the author to hide the toxic reply instead of commenting? This was the question we hoped to answer with this experiment.
Toxic replies are more likely to get toxic replies in return.
The author would prefer to perform alternative actions (e.g. hide replies, mute, block, etc.) if they are presented with the option, instead of replying toxically.
We explored nudging users before they entered the composer, but there were issues getting a server callback fast enough. Knowing we couldn't intervene before the composer, we explored a small banner inside the composer, but it felt detached from the reply the user would be hiding, so we opted for an inline nudge below the Tweet the user is replying to.
Placement: If the Tweet an author was replying to was considered "toxic", the author would see a nudge slide into view on the composer asking if they want to hide the reply instead of replying to it.
Education: The user is able to tap the "Get more info" link and read more about hiding replies without exiting the composer.
Actions: The user is able to hide and unhide the reply in the composer.