Seeing Software: Appraisal in Web Archives / Ed Summers (University of Maryland, College Park)

Over the past twenty years, organizations such as the Internet Archive and members of the International Internet Preservation Consortium have worked to develop practices and infrastructures for preserving the web. These cultural heritage projects are essential tools for contemporary historical research (Milligan, 2019). However, web archives take many shapes, and the diversity of actors involved in archiving the web remains underexplored. As documents are fetched via web protocols and stored as records in databases and storage systems, web archives are increasingly read by algorithms as well as by people, providing new ways of seeing, reasoning, and controlling. Attending to these algorithmic ways of reading archival records helps to discern the ontological politics (Mol, 1999) of web archives, which exceed their use merely as sites for viewing what a particular web resource looked like at a given time. To explore these new ways of reading in web archives, and the types of legibility that web archives afford, I conducted a year-long field study in a government agency that has spent the last 20 years building the largest known archive of software in the world. While initially assembled from physical media, this collection is now built using bespoke software that interfaces with web-based distribution platforms such as Steam and Google Play. The means of access that these platforms provide, as well as their algorithmically defined conceptions of popularity and authenticity, critically shape what this archive contains and what it does not. Using Critical Algorithm Studies (Seaver, 2017) as a theoretical framework, my ethnographic study focused on how appraisal in this specific, historically contingent web archive operates as a sociotechnical practice. My findings highlight how fixity algorithms are deployed to provide users with what I call a negative archive for viewing criminality.
The forensic imaginary (Duranti, 2002; Kirschenbaum, 2008) that shapes the form and function of this specific web archive has general implications for how web archives can be recognized not simply as cultural heritage recovery sites, but as instruments of governmentality (Introna, 2016).

References:

Duranti, L. (2002). Authenticity and appraisal: Appraisal theory confronted with electronic records. In Proceedings of the 3rd International Colloquium on Library and Information Science: The refined art of the destruction: Records’ appraisal and disposal. Retrieved from http://www.interpares.org/display_file.cfm?doc=ip1_dissemination_cpr_duranti_clis_2002.pdf

Introna, L. D. (2016). Algorithms, governance, and governmentality: On governing academic writing. Science, Technology, & Human Values, 41(1), 17–49.

Digital Recordkeeping Practices at the University of Michigan Bentley Historical Library / Max Eckard (University of Michigan) and Dallas Pillen (Wayne State University)

A guiding principle of the University of Michigan (U-M) 2016 Information Technology Strategic Plan states that “technology choices will favor solutions offered as external cloud services.” While such choices have their benefits, potentially reducing the total cost of ownership and, aspirationally, “accelerat[ing] the pace of research and innovation, cultivat[ing] interdisciplinary and inter-university collaboration, and driv[ing] economic development,” [1] they also have their costs. As a review of the relevant literature demonstrates, issues surrounding cloud computing are not limited to the technology–although the implementation and technological issues are numerous–but also include organization management, human behavior, regulation, and records management, making the process of archiving digital information in this day and age all the more difficult. In the cloud, digital information is no longer captured in ways that are familiar to archivists. It is typically not moved into a centralized recordkeeping system, for example, and, due to the use of collaboration tools, it is often difficult to attribute to one author and is not clearly linked to one business process or function. Policies aiming to control such behavior have not been successful, and in some cases employers lack internal regulations on cloud computing use. In addition, users have questioned how well the move to the cloud computing environment aligns with institutional goals of transparency–whether it comes to communicating behind “walled gardens” or the lack of transparency of a provider’s service–and they lack trust in its sustainability and continued economic viability.
Finally, as Luciana Duranti points out, the “by-products of such interaction are no longer finite entities, but processes that are always changing,” [2] calling the evidential value of records into question, even for those institutions with well-established born-digital recordkeeping and digital preservation policies and procedures. This paper will explore some of the consequences of this shift and its effect on digital recordkeeping at the Bentley Historical Library, whose mission is to “collect the materials for…the University of Michigan.” After providing context for this problem by discussing relevant literature, two practicing archivists will explore the impact of the move toward cloud computing as well as various productivity software and collaboration tools in use at U-M–especially “Cloud @ U-M” services like Amazon Web Services, Microsoft Azure, and the Google Cloud Platform–throughout the various stages of a standard lifecycle model for managing records: creation; active status; semi-active status; and inactive records with long-term, indefinite, archival value. Drawing on their experience archiving other “emerging technologies,” including web archiving and building and sustaining social media archives, the authors will address causes of technical debt introduced by this shift and explore the intersection of digital archiving and maintenance work, drawing on one of the core values of the Bentley’s Curation Team: “to recognize the value of repairing, maintaining, and improving those technologies and collections that already exist, including their basic technical and human infrastructure.”

[1] University of Michigan Office of the Vice President for Information Technology and Chief Information Officer. “University of Michigan Information Technology Strategic Plan.” 2016. https://it.umich.edu/it-strategy/strategic-plan.

[2] Duranti, Luciana. “Digital Records and Archives in the Commercial Cloud,” in Regulating the Cloud: Policy for Computing Infrastructure, edited by Christopher S. Yoo and Jean-François Blanchette, 197-214. Cambridge, Massachusetts: The Massachusetts Institute of Technology Press, 2015.

Making Networks for the Library of the Future: Mechanization, Transcription and Standardization of Information in the ‘Biblioteca De Catalunya’. 1976-1986 / Luz María Narbona (Archiveras sin Fronteras Chile)

This work deals with the projects that sought to mechanize the data generated inside the “Biblioteca de Catalunya” between 1976 and 1987. The phenomenon is situated in a context in which the first computational systems began to penetrate both public and private services, modifying practices around the use and handling of information. The research reflects on the impact that data mechanization had on institutions that, in the words of Markus Krajewski, had become large-scale producers of information. It reconstructs the network of actors who aimed to mechanize the Library, identifying specific practices, proposals, and problems arising from this phenomenon, and it also investigates some of the transformations and frictions that accompanied the incorporation of new technology into the library space. These projects aimed to transcribe information from an analog to a mechanical format in order to make it connectable. The differences and frictions that arose in these conversions, however, reveal the depth of the impact that automation brought. The study thus asserts that automation involved a transcription and standardization of data that would enable international connectivity. Globality, nevertheless, was not easy to achieve: it demanded a deployment of transformations at the local level, owing to the complexities of the connection process at different scales. Relatedly, the study also asserts that the phenomena associated with data collection and handling are subject to immense complexities, which are accentuated when their local applications are observed. I therefore inquire into the tensions generated by the proposals for data mechanization in the library, which have their counterpart in the expectations and projects of connectivity on a global scale.
For this purpose I draw on postulates from the history of science, particularly those related to information management, classification and cataloging criteria, Big Data, and computing. These tools help endow the phenomenon under study with historical density. The research is also based on the documentation stored in the Administrative Archive of the “Biblioteca de Catalunya”, which provides us with facts about this problem. The search for the mechanization of the library gives rise to several questions that allow us to deepen the analysis and clarify the problem: Who were the actors involved in the automation of the library’s data? What expectations did the development of this process raise? What were the problems of its implementation? Which material tools were needed? How does this technology differ from what already existed? How were professionals linked to these new transformations? How does this affect the data? What makes information connectivity possible? Finally, this research has allowed us to grasp the enormous complexity of a mechanization process that ultimately failed, revealing an interesting panorama just before the arrival of the Internet.