Informal Media

Wading in the Info Sea

An Interview with Richard Rogers about Web Epistemology and Information Politics

Willem van Weelden

December 31, 2007

How can the web be understood as both a symptom and an expression of a public practice? According to what logic do search engines work and how do they influence the way we deal with knowledge, news and information? Web epistemology is a new research practice that regards the web as a separate knowledge culture and advocates giving an ear to what lies beyond all the din. An interview with Richard Rogers, web epistemologist at the University of Amsterdam, author of Information Politics on the Web, founder of the Govcom.org Foundation and developer of the Issue Crawler, an ‘info-political tool’.1

The very beginning of the information revolution was described by the philosopher Jean-François Lyotard as something that instils an inherent anxiety: the fear that scientific knowledge would become a commodity like all information, which would thus drastically alter the status of knowledge.2 He proposed that knowledge would no longer be disseminated for its ‘formative’ value, but in the framework of daily maintenance. Knowledge ceases to be an aim in itself; it loses its ‘use-value’ and becomes a commercial commodity circulated along the same channels and networks as money. The distinction would no longer be between knowledge and ignorance, but between payment knowledge and investment knowledge. (According to the dominant liberal ideology, some flows of money are used in decision making, while others are only good for payments.)

This immediately raises the issue of ‘access’: who will have access to knowledge and under what conditions, and who will decide which channels are forbidden? In this social conflict Lyotard saw no decisive role either for the state or for knowledge. In the postmodern analysis, after all, the state is no longer the governing factor of social and political life. Power is no longer exercised on the basis of ideological contrasts or grand narratives, but is dictated by economic movements. What’s more, the same analysis shows that science is caught up in an internal crisis: any formulated knowledge ultimately has to acquire its legitimacy from another knowledge. The economy, and hence social life, is henceforth dependent for its dynamism and ‘development’ on social agencies that not only control access to the information society, but also provide the networks that shape this society.

At the beginning of the 1980s Lyotard outlined a technocratic spectre, suggesting that the crisis of knowledge lies in its historical origins. At the same time, he distilled from the diagnosis of this crisis a programme of what was at stake in thinking, philosophy, science and the arts: the restoration of the honour of thinking and knowing by critically investigating the new technocratic conditions under which it exists. The ‘conditional’ approach he chose for this was based in part on systems theory. Society is only really a system when the relations that constitute it are optimized as regards performativity and efficiency.

This means that the critical tradition, including philosophy, art and science, is in danger of being systematically co-opted in order to strengthen the technocratic whole, even though it has a different agenda. The only way to escape from this ‘paranoia of Reason’ is through a deeply rooted distrust as regards all forms of appropriation. The crucial question continues to be how critique can be practised when the critical agency itself is also an instrument that is part of the whole it is attempting to describe.

Seek and Ye Shall Find!

During the last decade, search engines have drastically changed the way we regard knowledge. Their clever query algorithms accommodate the vast amount of information offered by the internet and meet the wishes of the millions of surfers who consult the web for their daily information needs and production. Search engines are more than advisory systems that indicate in a quasi-neutral manner what information is available on the internet; they are also suppliers of semi-finished knowledge that is supplemented and changed into new information, which in many cases is then published on the internet again. Search engines have not only intervened deeply in how we interact with the internet; the way we deal with and produce knowledge, and how access to it is gained, has also radically changed. For the internet is not organized like a library; search engines clearly utilize a different logic than library systems based on thesauri and lexical indexing. The modernist endeavour to preclude interpretation has mutated, in postmodern reality, into an elegant, critical surfing of interpretations, where improbabilities are welcome. Search engines are now looking for users – not the other way around.

Since the enthusiastic beginning of the web, the ‘web spirit’ has been dominated by the expectation that this new public domain would be egalitarian and democratic. The chaos, anarchy or lack of organization that this entailed was seen as a positive quality. The web was regarded as a corrective to the offline world. The web site of a private individual was just as visible as that of a big company. Domain names often did not correspond with their offline variants. McDonalds.com, for example, belonged to a private individual who had nothing to do with the hamburger concern. These were the times before search engines, portals, web browsers and selective hyperlinking would start to determine the face of the web.

The advent of search engines in the second half of the 1990s (WebCrawler, AltaVista, Yahoo) revealed the changed status of information and knowledge in an insistent way. The ‘preferred placement’ case (1998) serves as a good illustration. AltaVista, then the most respected search engine, decided to sell the first two links (known as ‘pole positions’) returned for a search. This created a difference between purchased results and organic results (the ‘neutral’ results generated by the engine’s algorithms), without that difference being visible to users. This ‘preferred listing’ led to vehement criticism from ‘freedom fighters’ who called for an end to the ‘advertorial’ practice: the neutrality of the algorithms with which search engines worked was not to be besmirched by commercial interference. After a few months the practice was abandoned, but the commotion had damaged AltaVista’s reputation and it lost its position of power.

Web Epistemology

The controversy created by the preferred placement case was not only relevant for studying the effects of preconfigured networks and media technology; it also raised the issue of the aim of the web itself. It led the Amsterdam-based American researcher Richard Rogers to concentrate on what he calls web epistemology: an empirical research practice, concentrated in the research group he founded under the name Govcom.org, which investigates the web precisely at the intersection of medium and user. Web epistemology is concerned with what the web knows, how it knows it and why certain sources are chosen above others. At the forefront are issues concerning the authenticity of sources, the algorithms with which search engines work and the functioning of the internet as the whole of its users and technology. In short, research focussing on ‘Knowledge Politics on the Web’, the subtitle of the 2000 book that Rogers devoted to the Preferred Placement project.3

Willem van Weelden: What insight led you to web epistemology?

Richard Rogers: What we are looking at in the contemporary period, whether it’s through the rise of the amateur or through the rise of search engines, tools and algorithms that take the amateur more seriously, is the redistribution of attention. It’s very difficult for a lot of people to think about the consequences of new media, because there are a number of things people tend to fall back on, like ‘the good journalist’, or the assumption that the web is a rumour mill or the blogosphere an ‘echo chamber’. If you’re working with these types of assumptions you are already thinking epistemologically. The natural impulse of the traditional journalist, or even the digital journalist, would be to trace a story back to its source. But in the new media way of thinking, the way it is built into Google News for example, the scoop or the original source is not rewarded. The original source is buried; what is shown is what circulates and what is freshest. From a journalistic standpoint it is too fresh to be true! From a web-epistemological point of view the question is why the most recent source should be rewarded. It is first of all about identifying the differences between what is considered relevant, important or significant in the old approach versus this new way of thinking.

This insight is the start of what you could call a web epistemology. What we’ve been doing in a number of our projects is to study how this redistribution of attention is captured. It is no surprise that a development like the rise of the amateur is connected to the web.

The web has already disrupted how we decide what matters. The next step is to ask yourself: ‘How do you study how this manifests itself?’ First you look at what sort of data streams are available to the makers of the search engines. For Google it was a major breakthrough, in a certain sense already an original Web 2.0 thought, when they formulated algorithms on the basis of ‘we are not going to rely on what individuals say about themselves, we are going to rely on what others say’. They argued: ‘We are going to count links, and if a site has a lot of links it must be very relevant; and if a link’s pointer text contains the word that matches the query, then the site with the most links carrying the correct pointer text is the one that ends up at the top.4 No experts, no authorities determine the ranking!’ Their way of thinking is very much concentrated on: ‘What are the data streams or data sources that we have, how can we organize them and, finally, how can we recommend that information?’ They just use what’s available to them. How many links? They use date stamps: how fresh is it? Once one identifies all of these things that can be counted and put into algorithms, one can ultimately recommend, putting one source on top of another. So we no longer rely on what individuals say about their own importance (self-appointment), nor on what independent experts say is important; it’s mainly a question of what sites point to with their most recent links. And if you let that thought sink in, you begin to realize the massive reverberations this has.
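
[To make the logic paraphrased above concrete, the following minimal Python sketch ranks invented sites by the number of inbound links whose pointer text contains the query. It illustrates the principle Rogers describes, not Google’s actual algorithm, ed.]

from collections import defaultdict

# Invented inbound links: (target site, pointer text of the link).
links = [
    ("example-ngo.org", "climate change report"),
    ("example-ngo.org", "read about climate change"),
    ("news-site.example", "climate change coverage"),
    ("news-site.example", "latest headlines"),
    ("blog.example", "my holiday photos"),
]

def rank(query, links):
    """Score each site by the number of inbound links whose pointer text contains the query."""
    scores = defaultdict(int)
    for target, pointer_text in links:
        if query.lower() in pointer_text.lower():
            scores[target] += 1
    # The site with the most matching links ends up at the top.
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print(rank("climate change", links))
# [('example-ngo.org', 2), ('news-site.example', 1)]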

What was the ‘drama’ you found in the Preferred Placement project and why was that so important for your research?

It is very much a matter of de-equalization. In the Jan van Eyck period5 we also talked about the web in terms completely opposite to those used at the time. We were against this ‘public sphere’ or that idea of ‘equality’, as if such notions were incorporated into the infrastructure of the web.6 We were looking for public debate and we found something different. We found issue networks, through empirical research. We were looking for some sort of evidence of this neo-pluralistic space, where there was some sort of flat ontology, where sources were next to each other, the side-by-sideness principle. The Whole Earth Catalog already showed in 1994 that the eminent expert and the crackpot sit side by side. That’s a very interesting thing, and a very important feature of the web.7 Side-by-sideness, however, is gradually disappearing. By ranking sites, search engines create hierarchies of credibility and these can differ from traditional, pre-web methods for determining credibility or reliable sources. This is exactly what the study of web epistemology is about.

The ‘Preferred Placement’ study was very much about the drama of search engines. As you know, the term ‘PP’ was coined by AltaVista as an advertising service: you could buy preferred placement so that your site would be at the top of the list for certain queries. You can think of this rather mundanely as yet another advertising service – ‘we’ve found new ad space’ – but to us it was more about the perceived importance of being at the top of an authoritative space, whose authority supposedly derived from a ‘neutral’ algorithm, for in the search engine industry results that are not paid for are called ‘organic’. On the one hand we tried to critique this ‘neutrality’ of search engine results, and on the other we wanted to deal with the ‘drama’ in that space: the idea that as a company or organization you need to be at the top, and then the drama of being driven out of the first ranks. The daily quest to find out where you are in the list today: ‘Oops, I’ve sunk four places’, or the drama of being dropped from the top ten!

Most recently, and that was a sort of dream of mine, we created a tool called the ‘Issue Dramaturg’ (http://issuedramaturg.issuecrawler.net/), which shows a site’s ranking for a particular query over time. If you put the query ‘climate change’ or ‘RFID’ into a search engine, the results somehow influence your view of the world. You don’t often ask yourself: is this particular organization researching RFID? I don’t see them here, so where are they, and how are they doing? Where is spychips.com when I type in the query ‘RFID’? How are they doing? With the Issue Dramaturg we make this drama visible. The project started with Preferred Placement, purely to investigate ranking. Just type in ‘http’ or ‘www’ and what you get is basically the top of the net. We then spent a while looking at what was at the top and saw that the New York Times, for example, climbed from 76th to 12th place over a period of three months. Later, with Dragana Antic, a student at the Piet Zwart Academy, we showed how this ‘Hyperlink Economy’ works.8
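
[The Issue Dramaturg itself is a Govcom.org tool; the schematic Python sketch below only illustrates the kind of bookkeeping it implies: query an engine periodically (fetch_results is a placeholder returning invented results), record where a given site appears, and keep the history so that rising and sinking can be plotted, ed.]

import datetime

def fetch_results(query):
    """Placeholder for a real search API or scraper; returns invented results here."""
    return ["http://agency.example/rfid", "http://www.spychips.com/", "http://news.example/rfid"]

def rank_of(site, results):
    """1-based position of the first result containing the site, or None if it is absent."""
    for position, url in enumerate(results, start=1):
        if site in url:
            return position
    return None

def record_rank(history, query, site):
    """Append today's rank of the site for the query to the running history."""
    results = fetch_results(query)
    history.setdefault((query, site), []).append(
        (datetime.date.today().isoformat(), rank_of(site, results))
    )
    return history

history = {}
record_rank(history, "RFID", "spychips.com")
# Run once a day, history[("RFID", "spychips.com")] becomes a time series of
# positions that can be plotted to show a site climbing or sinking in the results.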

The problem with the sort of research you are doing is that you are bound up with what you are investigating. You’re using search engines to examine how they work. How can you escape from this ‘paranoia of Reason’?

With the notion of info politics. Epistemologies have consequences. First we have to recognize that there are several epistemologies. Directories are made in a different way than search engines, and they have different assumptions about which sources should be counted. In the late 1990s the question was always what the value of information was. Our question has always been not so much what counts as who decides what counts. Once you have thought that through a little, you test the outcomes infopolitically. Information Politics on the Web starts from the important consideration that information has long been regarded as something apolitical.9 What the web has helped us to see again is that sources are in constant competition to be the source. Sources are dying to inform you! You have to think of algorithms politically, by testing the consequences of a particular algorithm.

But to come back to the idea of side-by-sideness as something to strive for, you have to imagine what I discovered in 2004 when I typed in ‘terrorism’. I was interested in whether the algorithm would produce familiar hierarchies of credibility, familiar in the sense of what the TV news would bring, or whether it would show something else. I typed it in and the results were: CIA.gov, FBI.gov, Whitehouse.gov, the Heritage Foundation, and somewhere further down the list CNN and Al Jazeera. You have to understand that the algorithm gives these sources the privilege of informing us about terrorism. Where is the ‘side-by-sideness’ in that list? Then you ask yourself: ‘How do you solve this?’ Well, by looking at the infopolitical consequences of your own practice.

Can you be more precise about that? What is such an infopolitical consequence?

The web makes us face the fact that there is a multiplicity of sources. The question we asked was: ‘Is an issue hot because it is in the news?’ We thought in terms of how the web brings us beyond the notion of news. So we did the project infoid.org, where we took advantage of the web as a multiple-source space.10 We also looked at another common idea people have about the web, namely that it speeds things up and leads to journalistic sloppiness, because there seems to be no time anymore. But by checking the web empirically and comparing how the news covers certain issues with how issue professionals cover them, we discovered that issue professionals pay attention to particular issues far longer than the news does. It shows that with the web things aren’t sped up; people have longer attention spans! The heat of an issue is no longer determined by the news. Generally speaking, what we do is undertake research that would be impossible without the web.
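
[As a schematic illustration of the comparison described here, the Python sketch below measures a source’s ‘attention span’ as the number of days between its first and last mention of an issue; the sources and dates are invented, ed.]

from datetime import date

# Invented mention dates of one issue, per type of source.
mentions = {
    "newspaper.example": [date(2003, 5, 1), date(2003, 5, 9)],
    "issue-professional.example": [date(2003, 4, 12), date(2003, 11, 30), date(2004, 2, 3)],
}

def attention_span_days(dates):
    """Days between a source's first and last mention of the issue."""
    return (max(dates) - min(dates)).days

for source, dates in mentions.items():
    print(source, attention_span_days(dates), "days")
# In this invented example the issue professional stays with the issue far longer than the newspaper.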

Does your research show that users have become more accustomed to this principle in the way they relate to the news, and that they use the internet more critically?

We do not study users! A very important thing to know is that we study what is published, not what is read! We have described this in terms of the difference between the hit economy and the link economy. It was once assumed that you could determine how much interest a site garnered by counting the number of hits, but nowadays it’s a question of a link economy, which is about pointers. We have tried to develop new ways of describing web dynamics that are not necessarily familiar. What we are trying to do is, in that respect, uncomfortable.

Can you say something about how your research looks at specific terminology in order to arrive at an issue?

We rely on specific issue terminology and use it as a research technique. We used these techniques in the Election Issue Tracker, for example, by pulling out the specific issue language of, say, the Lijst Pim Fortuyn (LPF) and comparing it to the language of other parties in the same general issue area. We ran nightly batch queries of all the newspapers and watched how specific issue language resonated in the press. What we found was that, generally speaking, the press used the language of the populist parties more frequently than the language of non-populist parties. So we were able to raise the question of the extent to which the press was participating in the rise of populism. It showed that the newspapers’ information is in some sense political.
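
[A schematic Python sketch of the kind of ‘resonance’ measurement described here, counting how often each party’s distinctive issue phrases occur in one night’s newspaper copy; the phrases and texts are invented placeholders, not the actual Election Issue Tracker, ed.]

# Placeholder issue phrases per party and one night's newspaper copy.
party_phrases = {
    "populist party": ["safer neighbourhoods now", "waiting lists in care"],
    "non-populist party": ["strengthening the knowledge economy"],
}

night_of_newspapers = [
    "Editorial: safer neighbourhoods now, says the challenger, pointing to waiting lists in care.",
    "Business section: a plan for strengthening the knowledge economy was announced.",
]

def resonance(text, phrases):
    """Count occurrences of any of the phrases in one newspaper text."""
    lowered = text.lower()
    return sum(lowered.count(phrase.lower()) for phrase in phrases)

nightly = {
    party: sum(resonance(text, phrases) for text in night_of_newspapers)
    for party, phrases in party_phrases.items()
}
print(nightly)
# Comparing these counts night after night shows whose issue language the press echoes most.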

In your research you make a methodological distinction between issue networks, social networks and stranger networks. Can you say something about this in relation to more common forms of social research into the internet?

The distinction between different types of networks is one way to differentiate our work from that of social network analysts. Social network analysts generally use surveys and questionnaires to determine ties between individuals, whereas we study links in order to demonstrate what are essentially very normal strategies for establishing connections between organizations, and we do this on the basis of issues. These organizations do not necessarily have to work together or even be on good terms with each other; they might oppose each other or be enemies. What we strive to locate is a different set of actors who are implicated in a certain issue area. I’m using these words to try to differentiate this from what a social network analyst would do. When you study the networks well, they reveal not only who is involved but also who the addressees of the issue are: the parties that are expected to contribute to settling the issue. That’s the difference. The notion of stranger networks comes from thinking about social movements. What is the difference between a social movement and a network? A social movement often has an ideal demographic that is largely derived from the Paris ’68 uprising, a classic constellation of students and workers. Another example is the peace movement of the 1980s around issues such as nuclear energy and nuclear arms, where a religious element (Pax Christi) is added to that demographic. In a certain sense these ideal groups are stranger networks, but they are not strange, because the demographic is an ideal one. In a network the question is whether there is unfamiliarity: how unfamiliar is the demographic? When the level of unfamiliarity is high you can speak of a stranger network.
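
[As a rough illustration of reading links rather than surveying people, the Python sketch below derives a candidate issue network from invented hyperlink data, keeping the sites that at least two starting organizations point to; it illustrates the general approach, not the Issue Crawler’s actual method, ed.]

# Invented outgoing hyperlinks of three seed organizations; a site pointed to by
# at least two seeds is taken up into the candidate issue network.
outlinks = {
    "ngo-a.example": ["agency.example", "thinktank.example", "ngo-b.example"],
    "ngo-b.example": ["agency.example", "company.example"],
    "ngo-c.example": ["thinktank.example", "agency.example"],
}

def co_linked(outlinks, minimum=2):
    """Sites receiving links from at least `minimum` different seed organizations."""
    counts = {}
    for seed, targets in outlinks.items():
        for target in set(targets):
            counts[target] = counts.get(target, 0) + 1
    return {site for site, n in counts.items() if n >= minimum}

print(co_linked(outlinks))
# {'agency.example', 'thinktank.example'} -- candidates for the issue network,
# whether these organizations cooperate, oppose each other or are enemies.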

Is the creation of a stranger network an indication of the urgency of an issue?

The process by which some form of collectivity produces some kind of urgency, or what you could call issuefication, the issuefying of an issue, involves more than just refreshing pages. Traditionally, one could measure the level of urgency by the growth of the network and the frequency of issue statements, and by some sort of refreshing behaviour, the adding of content, and levels of info sharing. That’s ideal-typical. A high degree of strangeness, a high degree of network growth and an intensity of issue statements: that, then, is urgency, or heavy issuefication. You could have all those factors present and yet it still doesn’t become ‘urgent’ or ‘hot’, that is, in the news.
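
[The ideal-typical indicators named here can be read as a crude composite score; the Python sketch below combines them with invented weights purely to illustrate the reasoning, ed.]

def urgency_score(strangeness, network_growth, statement_frequency, refresh_rate,
                  weights=(0.3, 0.3, 0.2, 0.2)):
    """Combine the ideal-typical indicators (each normalized to 0..1) into one 0..1 score."""
    indicators = (strangeness, network_growth, statement_frequency, refresh_rate)
    return sum(weight * value for weight, value in zip(weights, indicators))

# A strange, fast-growing, talkative and frequently refreshed network scores high,
# and yet a high score does not guarantee that the issue becomes 'hot', i.e. news.
print(urgency_score(strangeness=0.9, network_growth=0.8, statement_frequency=0.7, refresh_rate=0.4))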

Govcom.org, it seems, supplies its research with visual, cartographic evidence. Or is it the other way round, the maps providing the insight?

The practice strives to build upon the notion of a social map. In some sense the visualization practice is based on this notion, but it strives to show a different kind of reality than the ones constructed when, traditionally, one initiates a broader social discussion. In identifying who the stakeholders in a certain issue are, traditionally speaking you would have implicit assumptions about who is important, whereas we ask the web to tell us who is important. So this is the new social map. In thinking about our cartographical work, then, you have to understand it as a ‘notional’ practice.

What is the spatial notion behind your cartographical work? What is actually depicted?

The language used on the web is a language of space, and over the past eight years web notions of space have changed. In the early days you had notions of hyperspace or outer space, which later gave way, largely because of public sphere theory, to notions of ‘sphere’ or ‘spheres’: the ‘blogosphere’, the ‘logosphere’ and the ‘websphere’. More recently there is what I call the revenge of geography: when you type ‘www.google.com’ into your browser, you are redirected to google.nl; you’re taken back home! We can dismiss the idea of the web as a placeless space. You’re taken back home by default. We make visual contributions to these types of notions of space, most recently with the Issue Geographer. With your Issue Crawler results you can create an Issue Crawler network and plot it onto a geographical map. Why would you want to do that? Well, we are developing a critique of issue mobility or issue drift [when organizations or networks of organizations drift away from issues, ed.]: of organizations (governmental or non-governmental) that move from summit to summit, and from one large dam project to another, and of the extent to which these organizations remember what’s actually happening on the ground. We look at the extent of issue abandonment caused by the mobility of organizations. So we wanted to look at the distance between where an issue comes from and where an issue is based, the base being the network and the former being the ground. And also to look at the distributed geography of an issue. In each of these visualization projects we not only pursue research questions, we also contribute to them critically.
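
[A schematic Python sketch of the kind of calculation behind such a map: geocode the organizations in a network (coordinates invented here), take the network’s geographic centre and compute its great-circle distance to where the issue is ‘on the ground’. The Issue Geographer itself is a Govcom.org tool; this only illustrates the idea, ed.]

import math

# Invented coordinates of the organizations in an issue network and of the issue 'on the ground'.
network_organizations = {
    "donor.example": (52.37, 4.90),      # Amsterdam
    "agency.example": (46.20, 6.15),     # Geneva
    "ngo.example": (38.90, -77.04),      # Washington, DC
}
issue_on_the_ground = (23.70, 90.40)     # e.g. the site of a large dam project

def centroid(points):
    """Naive geographic centre of a set of (latitude, longitude) points."""
    latitudes, longitudes = zip(*points)
    return sum(latitudes) / len(latitudes), sum(longitudes) / len(longitudes)

def great_circle_km(a, b):
    """Haversine distance in kilometres between two (latitude, longitude) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

print(great_circle_km(centroid(network_organizations.values()), issue_on_the_ground))
# The larger the distance, the further the network is based from the ground of the issue.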

How is your work used in the end? What is your reservoir? Is it ‘the honour of thinking’, as Lyotard suggested in The Postmodern Condition?

What we are dipping into is more like wading into the info sea. It is the insight into the degree to which the web can still be a kind of collision space for alternative forms of reality. In some sense our visualization work makes this collision space a reality. Our reservoir is that insight: from what was previously termed source competition to what is now termed collision space.

Should the work of Govcom.org be understood as an indication or an expression of the public domain that you are studying?

We use advanced web metrics to derive indicators of the state of the web. Ultimately the infographics we produce must also be understood as issue narratives, stories about the state of an issue, and as expressions of those states. So, unfortunately, they are both indicative and expressive.

A good part of our work is to prevent ourselves from being pushed into a corner. Never be just scientists, just visualizers, just designers or just software developers. We talk about science in artistic circles and about art in scientific circles, because we have the web insight that the action is always going on elsewhere.

1. For information about projects like the Issue Crawler and about publications by Richard Rogers, see www.govcom.org.

2. Jean-François Lyotard, La condition postmoderne: rapport sur le savoir (Paris: Minuit, 1979).

3. Richard Rogers (ed.), Preferred Placement: Knowledge Politics on the Web (Maastricht/Amsterdam: Jan van Eyck Akademie Editions/de Balie, 2000).

4. The pointer text is the text that can be clicked on [editor’s note].

5. The book Preferred Placement: Knowledge Politics on the Web emerged from research at the Jan van Eyck Academy in Maastricht, 1999-2000 [editor’s note].

6. See also Noortje Marres, No Issue, No Public: Democratic Deficits after the Displacement of Politics (PhD dissertation, Universiteit van Amsterdam, 2005).

7. Howard Rheingold, The Millennium Whole Earth Catalog (Harper, 1994), 263: ‘The least discussed, but most important aspect of what’s ahead is quality assurance. The democratic nature of the Net, where eminent scientists and isolated crackpots can publish side by side, leads to wide variations in the self-policing ... Authenticating that a resource is the definitive, unedited version is next to impossible.’

8. See www.govcom.org.

9. Richard Rogers, Information Politics on the Web (Cambridge, MA: MIT Press, 2004).

10. www.infoid.org.

Willem van Weelden is an Amsterdam-based teacher, lecturer and independent writer on new media culture, media theory and interaction design.