ANT: Actor Network Theory

Actor network theory describes the relationship between technology and the social: the two are cyclical, each informing the other. The theory gives as much credit to the technology as to the people who create and use it, holding that the relationships are both material and semiotic, the concept and the thing.

“In what they have called a “network theory” [Latour and Callon] have developed a vocabulary that does take the distinction between subjects and objects, the subjective and the objective, into consideration. What they call an “actant”, for example, is more than a human actor. Both humans and nonhumans may be actants. An actant may be “enrolled” as “allied” to give strength to a position. When a biologist argues for the existence of a molecule, the data that prove this existence are enrolled actants. An actant may be an automatic door opener (Latour 1988), or it may be scallops in the sea (Callon 1986). In networks of humans, machines, animals, and matter in general, humans are not the only beings with agency, not the only ones to act; matter matters.” 

I found this the easiest to understand from the readings. 

What we publish can be informed by the technology, and that technology can be informed by the people, who may in turn be influenced by their place in time.

Data Permanence: 2-minute talk

The shift from print to digital publishing has meant that the longevity of a text is now seemingly infinite. Data permanence is something we have to consider these days in relation to what we publish. In the past, if all 100 copies of a book were burnt and destroyed, that book and its contents would cease to exist. Nowadays you could write an article which is referenced and cited by other articles, hosted on other blogs, and captured in a digital snapshot of the page. Suddenly it is difficult to destroy all copies of the text.
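As a concrete illustration of how snapshots make a text hard to destroy, the Internet Archive's Wayback Machine exposes a public availability endpoint (https://archive.org/wayback/available) that reports the archived snapshot closest to a given date. A minimal sketch; the sample JSON below follows the shape the API documents, but is invented for illustration rather than a real capture:

```python
import json
from urllib.parse import urlencode

WAYBACK_API = "https://archive.org/wayback/available"

def availability_query(url, timestamp=None):
    """Build a query URL for the Wayback Machine availability API."""
    params = {"url": url}
    if timestamp:
        params["timestamp"] = timestamp  # closest-match hint, e.g. "20140810"
    return WAYBACK_API + "?" + urlencode(params)

def closest_snapshot(response_text):
    """Return the closest archived snapshot URL, or None if there is none."""
    data = json.loads(response_text)
    snap = data.get("archived_snapshots", {}).get("closest")
    if snap and snap.get("available"):
        return snap["url"]
    return None

# Illustrative response in the documented shape (not a real capture):
sample = json.dumps({
    "archived_snapshots": {
        "closest": {
            "available": True,
            "url": "http://web.archive.org/web/20140810000000/http://example.com/",
            "timestamp": "20140810000000",
            "status": "200",
        }
    }
})
print(availability_query("example.com", "20140810"))
print(closest_snapshot(sample))
```

Fetching the query URL over the network would return live data; the parsing step works the same either way.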

At the moment, organisations like the Internet Archive are collecting texts such as music, videos, academic articles and journalistic articles, and creating an online library for "contemporary academics and future historians". Their goal is to create a free online library which archives important information for the future. This task has its difficulties; when collating data from the internet there are several factors to consider:

“Data format

Data must be stored in a format which can be meaningfully accessed now and in the future.

Technology reliance

If data requires a special program to view it, say, as an image, then software must also be available to both interpret the basic data file and also render it appropriately. In some cases, this might also require special hardware.

Archival strategy

Data must remain available in the long term.

At present a growing problem is the time taken to reproduce an archive, for instance following a hardware or system upgrade. Since the sheer volume of archive data continues to grow, new hardware is always required to maintain the archive.

Digital rights management

Maintaining digital information in an accurate and accessible format over an extended retention period also must address the requirements of the authors’ digital rights.


Digital information must be able to be reproduced as originally intended or available.

This is significant especially where the original data was produced on technology at a lower level than currently possible. For example, archivists try to maintain the distinction between listening to a gramophone record played on a gramophone as opposed to a digitally cleaned version of the same recording through a modern hi-fi system.”

The ever-expanding wealth of information to collect is matched by the constant need to update technology. The pace of technological change could mean that some texts become unreadable because newer software no longer supports their dated formats. This would mean that every text, once archived, would constantly need to be migrated to a format compatible with new technology. The alternative is that technology should adapt to read both old and new forms of text.
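The migration problem above can be sketched as a simple inventory check: given an archive's stored formats and the set of formats current software can still read, flag what needs converting. Both lists here are invented for illustration:

```python
# Hypothetical archive inventory: filename -> stored format.
archive = {
    "thesis_1998.wpd": "wordperfect",
    "fieldnotes.txt": "plain-text",
    "interview.ra": "realaudio",
    "article.pdf": "pdf",
}

# Formats our current software stack can still render (illustrative).
readable_today = {"plain-text", "pdf", "mp3", "png"}

# Anything not readable must either be migrated to a supported format
# or kept alongside emulated software that can still open it.
needs_migration = sorted(
    name for name, fmt in archive.items() if fmt not in readable_today
)
print(needs_migration)  # ['interview.ra', 'thesis_1998.wpd']
```

In practice an archivist would also record *which* version of a format each file uses, since readers often drop support for old versions before dropping the format entirely.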

The implications of having data permanence ever ready at our fingertips could be both positive and negative for the media industry.

Suddenly we can search for sources of information in one place, making it easier to reference and back up your own knowledge. This is especially important for students like myself, who can find academic essays online that previously may only have been held in university libraries in other states or countries.

Negatively, data permanence means that once something is published, it is extremely difficult to destroy.

An update of copyright law could be needed as well. An author may want their work to exist through only one medium, but with the constant updating of technology and the greater ease of referencing other works, they may find their work being taken and used in ways that are still legal under current copyright law, or, as we are currently experiencing, extremely hard to govern (re: internet piracy and distribution).

Another aspect to consider is that what becomes permanent may not be data we want to share. Information collected through social media and search engines such as Google may raise privacy issues, as their collection of metadata about our search histories and other information could be used against us. This is speculation, but it could be a real issue in the future.



Digital permanence, Wikipedia, accessed 10/8/14

Internet permanence, The Economist, G.F., accessed 10/8/14

Internet Archive, accessed 10/8/14


Week 2: Publishing vs Printing and the people

To print is a form of publishing, but publishing is not limited to printing. Publishing is the act of sharing and communicating thoughts and ideas over varied mediums to a wider public. Whether that public consists of one other person or a billion, it is still publishing. Wikipedia describes publishing as "the activity of making information available to the general public".

This week’s readings have explored the many ways in which publishing has changed across print media, journalism, printing, and the web. Each of these ways to publish has undergone significant changes due to the way technology has evolved and how we use it.

The following are some of my observations and quotes I have pulled from a selection of the readings for this week:

– “Microformats” read data such as geolocations, contact info and calendar events, and are already being used in iPhone text-messaging software.
– “I propose “multigraph” to describe a monograph reconceived as a gathering of many content components, structures, and pathways for creation and use.”
– “We might view the “monograph” as simply one state or frame for a large network of activities:

  1. assemblage and composition of elements by one or more writer/editor/compilers, which might be a mixture of people, processes, machines
  2. distribution and testing of elements while and after writing
  3. sharing and reuse of “monograph” components, from quotes to sections, in later stages”
– Publishers are using data to analyse the needs and wants of their consumers, so as to publish what is most likely to be profitable, successful and useful.
– There are new ways of analysing what consumers want to read. The “attention web” tracks not how many hits a site gets but how much time is spent on it, on what, why, and by whom. Readership data is collected to understand the market value of what is being published. See Facebook and Tumblr: sites that don’t make money directly, but whose worth is enormous due to the value of the user and their attention.


What I gather from all this is that the way we publish is significantly tied to how we understand the consumer public: how the public uses technology to fill its desires, and how publishers then use that technology to meet the public’s needs and profit.