User:Jonathan at CornellNLP
I’m a PhD student at Cornell University studying ways to encourage healthier online interactions. My collaborators and I have worked successfully with Wikipedia in the past (see our most recent example here).
I’m currently part of a team working on a prototype browser extension called "ConvoWizard," which uses AI technology to give Wikipedia editors real-time warnings of rising tension within conversations. Specifically, whenever an editor who has ConvoWizard installed replies to a discussion on a talk page or noticeboard, the tool provides an estimate of whether the discussion looks to be getting tense (i.e., likely to deteriorate into violations of the Wikipedia:No personal attacks policy), as well as feedback on how the editor’s own draft reply might affect that estimate. We envision ConvoWizard as a prototype for a future user-facing tool that Wikipedia editors who frequently engage in discussions could include in their everyday workflow, alongside the many other tools they already use.
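For concreteness, here is a minimal sketch of the kind of interaction described above: estimate the tension of the discussion so far, re-estimate it with the draft reply appended, and warn the editor if the estimate rises. The function names, the threshold, and the keyword-based placeholder scorer are illustrative assumptions only, not ConvoWizard’s actual model.

```python
# Hypothetical sketch of the ConvoWizard interaction; every name, the
# threshold, and the keyword "scorer" below are illustrative assumptions.
from typing import List

def estimate_tension(comments: List[str]) -> float:
    """Stand-in for a learned model that maps the comments so far to a
    probability that the conversation will derail into personal attacks."""
    # The real tool runs a trained forecasting model; this stub just counts
    # obviously heated words so the example stays runnable.
    heated = ("ridiculous", "vandal", "liar", "stupid")
    hits = sum(any(word in comment.lower() for word in heated) for comment in comments)
    return min(1.0, 0.2 + 0.2 * hits)

def preview_reply(context: List[str], draft: str, threshold: float = 0.5) -> None:
    """Show the estimated tension before and after adding the draft reply."""
    before = estimate_tension(context)
    after = estimate_tension(context + [draft])
    print(f"Estimated tension now: {before:.0%}; with your draft: {after:.0%}")
    if after > max(before, threshold):
        print("Warning: this reply may raise the tension in the discussion.")

preview_reply(
    ["This source fails the reliability guideline.", "Removing it again is ridiculous."],
    "Please stop reverting and discuss the source here first.",
)
```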
We are actively recruiting Wikipedians for a research study to test ConvoWizard; those interested in participating should check out our meta-wiki project page for more information on the study and links to sign up.
ConvoWizard is based on a tool that we previously piloted on Reddit; those interested in finding out more can check out NPR's coverage of that study. For those with a technical background in machine learning and/or natural language processing who want more detail, the underlying technology is introduced in this paper; the model is open-source, and its training data is publicly accessible and documented.
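For readers who want to explore the released data themselves, the sketch below shows how one might load and summarize a public conversation corpus with the ConvoKit toolkit. It is not our study code, and the corpus name and metadata field are assumptions based on the toolkit’s public documentation.

```python
# A minimal sketch of inspecting publicly released conversation data with
# ConvoKit; the corpus name and the metadata field are assumptions based on
# public documentation, not the exact training setup described above.
from convokit import Corpus, download

# Download (if needed) and load the Wikipedia talk-page corpus of
# conversations annotated for derailment into personal attacks.
corpus = Corpus(filename=download("conversations-gone-awry-corpus"))
corpus.print_summary_stats()

# Count the conversations labeled as having derailed (field name assumed).
derailed = sum(
    1
    for convo in corpus.iter_conversations()
    if convo.meta.get("conversation_has_personal_attack")
)
print(f"{derailed} conversations are labeled as ending in a personal attack")
```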
We share Wikipedia’s commitment to transparency and accountability. If you came across this page through an announcement or discussion thread we posted about the study and you have thoughts, questions, or concerns, we invite you to voice your feedback on our user talk page or email us directly.