How can we be sure that the online discussion will not be biased or manipulated by certain interest groups?
Online discourses mirror the world as it is and should not be mistaken for an ideal or protected space where only the Habermasian “forceless force of the better argument” counts. Participants are part of this world and thus often argue from a specific perspective, defending their own particular interests. Consequently, the only way to prevent a biased discussion is to ensure that all the different interests are represented. A heterogeneously structured community is crucial.
The only thing the majority of the participants need to have in common is the belief that discussing the respective issue online is worth the effort. If this is the case, it becomes quite hard to manipulate the discussion, because a kind of immune system emerges, one that tolerates neither crude propaganda nor obvious manipulation.
What may sound like wishful thinking or esoteric belief rests on an easily explained mechanism. Just imagine that you have already invested several hours or even days laying out your arguments and convincing other participants. What would you think of others rushing into the discussion just to say something like, “this discussion is useless, stop arguing”, or posting something that appears to have been copied and pasted from a press release and does not refer to anything that has already been discussed? You would probably feel annoyed and disrespected. And that is exactly what we have observed in so many of our discussions: other participants criticised this behaviour even before the moderators could intervene.
If participants want to make their point in a lively discussion with many committed participants, their only resource is their arguments. Arguing takes time and energy, and once participants make that commitment, even a manipulative intention tends to turn into more or less constructive contributions to the discussion.
Sometimes we also use quantitative measures such as online surveys as part of the online discourse. The results of quantitative surveys can be more easily biased by mobilising supporters of a certain position who do nothing else but vote for specific options. However, mobilisation is not as easy as it seems and it is very rare for organisations to be able to motivate a considerable proportion of their members to influence the results. The higher the number of participants, the harder it becomes to sway the outcome.
This can be illustrated by a concrete example, the participatory budget we conducted in the city of Freiburg, Germany. Part of the online platform was the budget calculator, which allowed all participants to distribute the city’s budget virtually among the different budget headings. At the beginning, the budget of one of the local theatres was reduced on average to 80% of the actual figure. Then the head of the theatre wrote an email to all employees and friends, asking everybody to set the budget to the maximum (200%) in order to prevent any future budget cuts. Over the following days the average rose rapidly to 120%, but as the number of participants grew, it fell back below 90%. When journalists spotted the campaign and covered the story in some of the most widely read newspapers, public opinion turned against its initiator. This experience demonstrates that influencing an online discussion is not only hard; it is a quite risky endeavour, too.
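The dilution effect behind these figures can be sketched with simple arithmetic. The numbers below are illustrative assumptions, not the actual Freiburg data: a hypothetical bloc of 100 mobilised voters who all choose the 200% maximum, while everyone else averages the 80% baseline.

```python
def average_budget(mobilised, total, bloc_vote=200.0, baseline=80.0):
    """Mean budget setting (in %) when a mobilised bloc votes the maximum
    while all other participants average the baseline.

    Illustrative model only -- the parameter values are assumptions,
    not the real figures from the Freiburg participatory budget."""
    others = total - mobilised
    return (mobilised * bloc_vote + others * baseline) / total

# A bloc of 100 mobilised voters dominates a small crowd of 500...
print(average_budget(100, 500))   # 104.0
# ...but is diluted once participation grows to 5000.
print(average_budget(100, 5000))  # 82.4
```

The same fixed bloc that lifts the average well above the baseline in a small crowd barely moves it in a large one, which is why rising participation pushed the theatre budget back down.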
related posts: eParticipation: 5 Questions, 5 Answers (#1)