Link -> https://diff.wikimedia.org/2021/09/15/disinformation-and-ai-the-differences-between-wikipedia-and-social-media/
Let’s start by pointing out some key differences. While most dynamics on social media are about sharing opinions and gaining popularity, Wikipedia is about sharing the sum of all knowledge. Wikipedia’s unique goal and process present both challenges to applying the machine learning techniques used by large social media platforms to identify disinformation, and opportunities for new, more human-centered approaches. Let’s compare the two paradigms. First, your social network activity reflects your own thoughts and interests; a Wikipedia article, on the other hand, is collectively created – it is a commons, without a single owner. Second, the lifecycle of social network content is very short, while Wikipedia is about perennial knowledge. And last, but not least, on social networks you can do almost whatever you want as long as you respect a very general set of “terms and conditions” designed by the platform owners, while on Wikipedia there are procedures and policies for writing articles, created by the community, including key points such as keeping a neutral point of view and using reliable, verifiable sources.
[Crucial differences that separate UGC from Social Media Content]
The problems for the most popular social networks, however, are tied to their business model: they need to allow people to say whatever they want and – probably the most important part – to show people content that increases their engagement, usually by reinforcing their beliefs. Content trustworthiness is not the aim of those companies; they just need to control extreme cases. This filtering process is known as content moderation. Given the huge amount of content they need to moderate, big tech companies are putting a lot of effort and hope into developing tools based on Machine Learning (a.k.a. Artificial Intelligence) to help – and take the lead – in detecting and removing that extreme content.
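As a rough illustration of that paradigm, here is a minimal sketch in Python (not any platform’s real pipeline; the scoring function, keyword list and threshold are invented for the example): a model scores each post, and only content judged “extreme” enough is removed automatically, while everything else stays up regardless of whether it is trustworthy.

```python
from dataclasses import dataclass


@dataclass
class Post:
    post_id: int
    text: str


def extreme_content_score(post: Post) -> float:
    """Hypothetical stand-in for an ML classifier (e.g. a toxicity or
    policy-violation model). A keyword heuristic is used here only so the
    example runs on its own."""
    flagged_terms = {"threat", "slur"}
    hits = sum(term in post.text.lower() for term in flagged_terms)
    return min(1.0, hits / 2)


def moderate(posts: list[Post], removal_threshold: float = 0.9) -> list[Post]:
    """Keep everything that is not scored as extreme; the accuracy or
    trustworthiness of the remaining content is never checked."""
    return [p for p in posts if extreme_content_score(p) < removal_threshold]
```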
Misleading information can appear on Wikipedia when edits do not comply with content policies. This can happen on purpose, with the intention to deceive (disinformation), or it can be accidental (misinformation). Whatever the motivation, the community regulates itself.
[A social process inherent to crowdsourcing; it does not aim to give individuals popularity]
Wikipedia can’t have a unique ground truth, because its aim is to be the sum of all human knowledge: there is no single point of reference, and all significant points of view – supported by reliable sources – need to be represented. Wikipedia “moderation” is not about truth; it is about the verifiability of content through reliable sources. And the rules don’t come from a single central authority: they are designed, reviewed and applied by a community of editors through a well-established deliberation process.
[Relative truth and different perspectives need to be accounted for in a neutral point of view]
In summary, the challenges of fighting disinformation on Wikipedia require a dedicated effort that goes beyond the traditional “fact-checking” problem. Unlike social networks, where algorithms are expected to do the work that no one else is doing, on Wikipedia we need algorithms that can support existing editor workflows, which means our baseline is much more challenging.
[Fact-checking is not the goal]
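To make the contrast concrete, here is a minimal sketch (the reliable-source list and the crude citation parser are invented for illustration, not Wikipedia’s actual tooling) of an algorithm that supports editor workflows instead of replacing them: newly added sentences with no citation to a reliable source are not removed, they are only surfaced to human editors for review.

```python
import re
from dataclasses import dataclass

# Hypothetical allow-list standing in for a community-maintained reliable-source list.
RELIABLE_SOURCES = {"nature.com", "bbc.co.uk", "who.int"}


@dataclass
class Edit:
    article: str
    added_sentences: list[str]


def cited_domains(sentence: str) -> set[str]:
    """Very rough stand-in for real citation parsing: pull domains out of URLs."""
    return set(re.findall(r"https?://(?:www\.)?([^/\s]+)", sentence))


def needs_review(edit: Edit) -> list[str]:
    """Return the sentences a human editor should look at: those with no
    citation, or with citations only to domains outside the allow-list."""
    flagged = []
    for sentence in edit.added_sentences:
        domains = cited_domains(sentence)
        if not domains or not (domains & RELIABLE_SOURCES):
            flagged.append(sentence)
    return flagged


if __name__ == "__main__":
    edit = Edit(
        article="Example article",
        added_sentences=[
            "The vaccine was approved in 2021. https://www.who.int/news/item/x",
            "Some people say it changes your DNA.",
        ],
    )
    print(needs_review(edit))  # only the unsourced claim is flagged
```

The design choice is the point: the output is a review queue for humans rather than a removal decision, matching a process built around verifiability through reliable sources rather than a single notion of truth.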