User:Yiming11
This user is a student editor in University_of_Washington/Online_Communities_(Fall_2024).
Wikipedia Advising Report
Wikipedia is a large website where anyone can find the information they want, and people from all over the world continually add many kinds of educational content. Members of the Wikipedia community and the Wikimedia Foundation wish to produce more high-quality educational content and to encourage more content creation, and they want to use generative AI and large language models to create Wikipedia content. In fact, using generative AI can benefit the Wikipedia community.

First, generative AI can speed up content updates and production. Nowadays there is an enormous amount of information in our daily lives. Sometimes when we search Wikipedia for the latest incident or the latest information, we cannot find it because the articles have not been updated yet, or people are still editing them. With generative AI, Wikipedia can make sure people see the latest information, because AI does not need to spend much time thinking and typing; this benefits Wikipedia's content updates and production.

Second, AI can attract new editors to join Wikipedia's community. For instance, AI can help new editors edit their content better by guiding them on what to do, especially editors who do not yet know Wikipedia's rules. AI can also offer advice, such as what kinds of content they could contribute to the Wikipedia community.
Here I will provide some advice for members of the Wikipedia community and the Wikimedia Foundation on how to use AI properly.
My first piece of advice is that Wikipedia should use AI as a tool to assist editors in editing their content, rather than letting AI generate content by itself, because letting generative AI create content would undermine contributors' extrinsic and intrinsic motivation. Extrinsically, contributors create articles because they gain reputation or attention from others; intrinsically, contributors create content because they want to make the Wikipedia community better. If the WMF let generative AI create content on its own, it would destroy both of these motivations to edit articles. Therefore, AI should serve only as an assistant that motivates people to edit, for example by suggesting ideas, structuring articles, or pointing out errors in content. With this approach, editors can follow AI's suggestions to create diverse content they could not have thought of by themselves, and the content stays more accurate, because human contributors will check whether what they wrote satisfies the Wikipedia community's regulations before they publish it. As a result, Wikipedia can become more trustworthy to the public.[1][2]
My second piece of advice is that members of the Wikipedia community and the Wikimedia Foundation (WMF) should increase monitoring of contributors' AI usage and create clear norms in the Wikipedia community. According to lecture, a lack of monitoring and clear norms harms an online community; it can "make it difficult for new members to join, or cause communities to tear apart and destroy themselves."[3] Although AI can give contributors a lot of practical advice on creating content, AI also makes mistakes. For instance, AI uses algorithms to provide guidance or information, but those algorithms can make the guidance biased, leading contributors to create biased content on Wikipedia. Therefore, it is important for the Wikimedia Foundation to create community norms about AI usage and to strengthen monitoring of it. If they find that a contributor has accepted AI's advice and created inappropriate content, they should delete the content and let the contributor know that it violates Wikipedia's regulations.
Third, the Wikimedia Foundation (WMF) can start by trying AI in low-stakes places, such as talk pages. On a talk page, Wikipedia users can discuss edits and ask questions of other users. Sometimes people post offensive words or inappropriate discussion on talk pages, so AI could remind users what kinds of content they should post there, or guide people on how to ask and answer questions. The benefit of trying AI on talk pages first is that it limits the damage if the Wikimedia Foundation finds that AI does not perform well in the Wikipedia community, because most of the time people use Wikipedia to search for information rather than to visit talk pages.
However, there are also some bad consequences of using generative AI on Wikipedia.
First, inaccurate information and plagiarism. As we know, AI sometimes creates incorrect information, especially when it encounters questions for which it lacks accurate data. ChatGPT is an example: when we ask about the latest information, which the AI cannot find on the internet, it may answer with wrong information. Even when it does find the information, it might simply copy and paste it from the internet. As a result, Wikipedia could violate copyright regulations, and incorrect information and regulation violations would destroy public trust. To avoid this situation, the Wikimedia Foundation (WMF) can set up a two-tier review process: before a contributor publishes, the WMF can let AI check whether the content contains plagiarism; if it does not, members of the Wikipedia community and the Wikimedia Foundation can then check whether the content is accurate.
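The two-tier review process described above can be sketched in code. This is only an illustrative sketch: the function names (`plagiarism_check`, `review_draft`) and the simple verbatim-overlap heuristic are hypothetical stand-ins I invented for illustration, not real Wikipedia or WMF tooling.

```python
# Minimal sketch of a two-tier review process (hypothetical names, not real Wikipedia tooling).

def plagiarism_check(draft: str, known_sources: list[str]) -> bool:
    """Tier 1 (automated): flag a draft that copies a long run of text
    verbatim from a known source. Real systems use far more robust
    similarity detection; this heuristic is only a stand-in."""
    WINDOW = 40  # characters that must match verbatim to count as copying
    for source in known_sources:
        for start in range(0, max(1, len(draft) - WINDOW)):
            chunk = draft[start:start + WINDOW]
            if chunk and chunk in source:
                return True
    return False

def review_draft(draft, known_sources, human_accuracy_check):
    """Run Tier 1 (AI plagiarism screen), then Tier 2 (human accuracy review)."""
    if plagiarism_check(draft, known_sources):
        return "rejected: possible plagiarism"
    if not human_accuracy_check(draft):
        return "rejected: accuracy concerns"
    return "approved for publication"
```

The point of the two tiers is ordering: the cheap automated screen runs first, so human reviewers only spend time on drafts that have already passed the plagiarism check.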
Second, content created by AI might be biased. When AI creates content, it uses algorithms that rely on existing data and historical content. Bias can cause serious consequences for Wikipedia: for example, biased content could misrepresent topics or offend some groups of people, which conflicts with Wikipedia's commitment to equality and inclusivity. To reduce bias, Wikipedia could work with AI developers, for example by training AI with different kinds of models; when AI is trained on diverse situations, the bias in its content can decrease. Moreover, Wikipedia could also create a feedback system: if Wikipedia users find biased content, they can report it to the system, so that members of the Wikipedia community and the Wikimedia Foundation (WMF) can delete the biased information and feed it back into AI training to make sure similar biased information does not appear in the future.
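The report-review-retrain loop of the feedback system above can be sketched as follows. Again, this is purely illustrative: `BiasFeedbackSystem` and its methods are hypothetical names I made up for this sketch, not real Wikimedia infrastructure.

```python
# Minimal sketch of a bias feedback loop (hypothetical class, not real Wikimedia tooling).

class BiasFeedbackSystem:
    def __init__(self):
        self.reports = []            # user reports awaiting review
        self.removed = []            # content confirmed biased and deleted
        self.training_examples = []  # negative examples fed back into AI training

    def report(self, content: str, reason: str) -> None:
        """A reader flags content they believe is biased."""
        self.reports.append((content, reason))

    def review(self, confirm_biased) -> int:
        """Community/WMF reviewers confirm or dismiss each pending report.
        Confirmed content is deleted and queued as a negative training
        example; dismissed reports are simply dropped. Returns the number
        of items removed."""
        removed_now = 0
        for content, reason in self.reports:
            if confirm_biased(content):
                self.removed.append(content)
                self.training_examples.append(
                    {"text": content, "label": "biased", "reason": reason}
                )
                removed_now += 1
        self.reports = []
        return removed_now
```

The key design point is that user reports are not trusted blindly: a human review step sits between a report and deletion, and only confirmed cases are fed back into training.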