Trusted resources

How is it, or why is it, that we trust information that we obtain from the Web? There is no formal peer review process, but even without a formal process, one is normally able to obtain some feeling for the trustworthiness of a site if it is providing information in a domain with which one is familiar. The problem comes in evaluating a site when one is searching for information on a new topic; amongst the myriad sources of information on breastfeeding, for example, how can I know where to focus my attention?

One approach could be simply to focus on those sources that rank highly in search results. Google’s PageRank algorithm, for example, could be seen as measuring a form of community endorsement of specific sites. But this is not its primary purpose, and popularity should not be confused with trustworthiness (are the popular tabloids trusted sources of news information?). So the question remains: how can we design methods for a community to build trustworthiness ratings for information sources, or more importantly, for the resources (services and information sources) that are available?

One way to handle this could be to break down the concept of “trust” into a number of component, or contributing, factors. Accountability, for example (in the sense of being obliged “to bear the consequences for failure to perform as expected”, Webster 1913), is seen by the World Bank as an important precursor to the establishment of trust. Security is another factor that is critical to establish for e-commerce resources. This helps in understanding how the degree of “trust” in a resource published on the Web may be conditional on context. For example, I may trust the product reviews on Amazon (because there is some element of accountability built into the process of uploading reviews) to help in a purchasing decision, but not trust Amazon with the purchase itself (because I am paranoid about the levels of security provided for credit card transactions over the internet).
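To make the idea concrete, here is a minimal sketch of such a decomposition: trust modelled as a weighted combination of component factors, with the weights depending on the context of use. The contexts, factor names, and weights are all hypothetical, chosen only to mirror the Amazon example above; they are not taken from any real trust model.

```python
# Hypothetical context-specific weights: reading reviews stresses
# accountability, while making a purchase stresses security.
CONTEXTS = {
    "read_reviews": {"accountability": 0.7, "security": 0.3},
    "make_purchase": {"accountability": 0.2, "security": 0.8},
}

def trust_score(factor_ratings, context):
    """Combine per-factor ratings (each 0.0-1.0) using the weights
    associated with the given context of use."""
    weights = CONTEXTS[context]
    return sum(weights[f] * factor_ratings[f] for f in weights)

# A resource with strong accountability but weaker security may be
# trusted for reviews yet distrusted for the purchase itself.
amazon = {"accountability": 0.9, "security": 0.4}
print(trust_score(amazon, "read_reviews"))
print(trust_score(amazon, "make_purchase"))
```

The same resource thus receives a higher score in the review-reading context than in the purchasing context, which is exactly the conditionality on context described above.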

In building a trust model, the next question to ask is: “Is there a role for some form of independent mediator in operationalising the trust model?” These could be agents that maintain and evolve some community-driven measure(s) of trust. If so, how does one guarantee that such agents are free of bias? How does one ensure that a malicious “trojan horse” cannot somehow manipulate an agent to build up a level of trust within a community, only to exploit that to its advantage in one final act, or series of acts?
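One hypothetical mitigation for the trojan-horse problem, sketched here purely for illustration, is for the mediator agent to weight recent community ratings more heavily than older ones, so that a resource which builds a long good record and then defects sees its score fall quickly. The function and decay parameter below are assumptions, not part of any design proposed in the text.

```python
def community_trust(ratings, decay=0.8):
    """Recency-weighted mean of community ratings (oldest first,
    each 0.0-1.0). Newer ratings get exponentially larger weights,
    so the score reacts quickly to a sudden change in behaviour."""
    score, weight_sum, w = 0.0, 0.0, 1.0
    for r in reversed(ratings):  # walk newest-to-oldest, decaying w
        score += w * r
        weight_sum += w
        w *= decay
    return score / weight_sum

# Long good record followed by two betrayals: the recency-weighted
# score drops well below the plain average of the history.
history = [1.0] * 10 + [0.0, 0.0]
print(round(community_trust(history), 2))
```

This does not remove the incentive to defect, but it limits how long an accumulated reputation can be exploited before the community measure catches up.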

This panel will be joined with the Individual Identity in Network Society panel.
