Happy February, Roundtablers! Here’s your Web Archiving Roundup for February 5, 2018:
- Archiving the alternative presses threatened by wealthy buyers: in partnership with Archive-It, Freedom of the Press Foundation is launching an online archives collection, focused ‘on news outlets we deem to be especially vulnerable to the billionaire problem,’ and ‘aims to preserve sites in their entirety before their archives can be taken down or manipulated.’ (Archived link.)
- A New Playback Tool for the UK Web Archive: the UK Web Archive will be working with Rhizome to build a version of pywb (Python Wayback) that they ‘hope will greatly improve the quality of playback for access to our archived content.’ (Archived link.)
- And, speaking of: Webrecorder has released an updated version of pywb, ‘a major refactoring and improvement’ of the ‘core engine’ that powers Webrecorder. (Archived link.)
- The International Internet Preservation Consortium Content Development Group would like your help to archive websites from around the world related to the 2018 Winter Olympic and Paralympic Games! Submit seeds via this Google Form.
Here’s your Web Archiving Roundup for January 22, 2018:
- A Case for Digital Squirrels: in First Monday, authors Lindsay Kistler Mattock, Colleen Theisen, and Jennifer Burek Pierce look at ‘the myth of YouTube as an archive’ and discuss their ‘recommendations for developing new practices for archiving YouTube content to support scholarly research.’ (Archived link.)
- An update from Cobweb: from the University of California Los Angeles, Harvard University, and California Digital Library — and, with a production launch in 2018 — Cobweb seeks to empower ‘specialists, digital curators, and researchers’ by allowing them to ‘establish thematic web archiving collecting projects; nominate web resources for capture; claim nominated web resources with an intention to capture them; and contribute descriptions of those web resources that have been captured.’ (Archived link.)
- Rhizome receives $1 million from the Andrew W. Mellon Foundation: the money ‘will support Webrecorder’s implementation in institutional contexts, while upgrading capture and usability for all users.’ (Archived link.)
- We’re all Bona Fide: on On Archivy, Bergis Jules argues that preserving cultural heritage on the web should be an inclusive and community-centered effort. ‘Archiving social media content,’ he writes, ‘should be a shared professional and community responsibility because it not only stretches our resources further, but it can also help to ensure that the records we end up creating are more representative of marginalized people.’ (Archived link.)
- You still have time to let the International Internet Preservation Consortium know what you need when it comes to web archiving training: fill out this survey, and help the Consortium in its quest to develop materials for all types of training, be it technical, curatorial, or training for practitioners and researchers.
Happy New Year, Roundtablers! Here’s your Web Archiving Roundup for January 8, 2018:
- Meet the Librarians Saving the Internet: at Science Friday, Lauren J. Young profiles a few of the digital librarians who ‘continue to preserve our history’ by navigating ‘through a labyrinth of dispersed personal accounts on the web that have come and gone through time.’ (Archived link.)
- Read .supDigital’s interview with Dragan Espenschied, Preservation Director for Rhizome (and Webrecorder.io). Then, read more from Jasmine Mulliken and .supDigital on web archiving. (Archived link.)
- On their blog, Old Dominion University urges you to Link to Web Archives, not Search Engine Caches. Why? Because ‘Search Engine caches are useful for covering transient errors in the live web, but they are not archives and thus not suitable for long-term access.’ (Archived link.)
- The Library of Congress will no longer archive every public tweet. Read the Library’s ‘Update on the Twitter Archive at the Library of Congress’ here. (Archived link.)
- The International Internet Preservation Consortium wants to know what you need when it comes to web archiving training. Fill out this survey and help the Consortium in its quest to develop materials for all types of training: be it technical, curatorial, or training for practitioners and researchers.
- Dust off your résumé and sharpen your Python skills: Rhizome seeks a Senior Backend Developer to work on Webrecorder’s backend infrastructure. Applications are due by January 16, 2018, and can be sent to email@example.com.
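Old Dominion’s advice above, to link to web archives rather than search engine caches, can also be put into practice programmatically. Here is a minimal sketch using the Internet Archive’s public Wayback availability API; the endpoint is publicly documented, but treat the exact response fields as assumptions to verify against current documentation:

```python
import json
import urllib.parse
import urllib.request

AVAILABILITY_ENDPOINT = "https://archive.org/wayback/available"

def build_availability_query(url, timestamp=None):
    """Build a query URL for the Wayback availability API."""
    params = {"url": url}
    if timestamp:  # YYYYMMDDhhmmss: ask for the snapshot closest to this time
        params["timestamp"] = timestamp
    return f"{AVAILABILITY_ENDPOINT}?{urllib.parse.urlencode(params)}"

def closest_snapshot(url, timestamp=None):
    """Return the URL of the closest archived snapshot, or None if none exists."""
    with urllib.request.urlopen(build_availability_query(url, timestamp)) as resp:
        data = json.load(resp)
    snapshot = data.get("archived_snapshots", {}).get("closest")
    return snapshot["url"] if snapshot else None
```

Linking to the snapshot URL this returns gives readers a stable archived copy rather than a transient cache that may disappear.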
Happy December, Roundtablers! Here’s your Web Archiving Roundup for December 11, 2017:
- Sustaining the Software that Preserves Access to Web Archives: on Digital Preservation Day, Andrew Jackson took a look at open source tools that enable access to web archives, and asked us to think about what comes next.
- Speaking of moving forward — how do you move a web archive? On their blog, the National Archives details what went into moving 120 terabytes of data, on seventy drives, from Internet Memory Research’s data centre in Paris, to the Archives site in Kew, and, finally, to the Cloud. (Archived link.)
- For the Digital Preservation Coalition, David S. H. Rosenthal writes about how we might be Losing the Battle to Archive the Web.
- And, at the Atlantic, Alexis C. Madrigal writes that Future Historians Probably Won’t Understand Our Internet, and That’s Okay. Today, he notes: ‘there is more data about more people than ever before, however, the cultural institutions dedicated to preserving the memory of what it was to be alive in our time, including our hours on the internet, may actually be capturing less usable information than in previous eras.’ Still, as Nick Seaver says, ‘Is it terrible that not everything that happens right now will be remembered forever? Yeah, that’s crappy, but it’s historically quite the norm.’ (Archived link.)
- Web Archiving Histories and Futures: the International Internet Preservation Consortium has announced its Call for Papers for its annual conference, to be held at the National Library of New Zealand in Wellington from November 13-15, 2018. Abstracts should be 300 to 500 words in length, and may touch upon topics related to: building web archives, maintaining web archive content and operations, using and researching web archives, web archive histories and futures, and more. Proposals are due February 28, 2018.
Here’s your post-Thanksgiving Web Archiving Roundup for November 26, 2017:
On Thursday, November 2, it was announced that the online-only, city-centric news outlets Gothamist and DNAinfo had been abruptly shuttered, archives and all, by owner Joe Ricketts in response to the staff’s vote to unionize. Both Gothamist (along with its sister sites LAist, DCist, Chicagoist, and SFist) and DNAinfo were updated numerous times each day, with a focus on local news, events, food, and culture.
This special edition of the Web Archiving Roundup takes a look at what others are saying about Gothamist and DNAinfo — and online news — in the wake of their sudden shutdown.
- Archive, archive, archive: NiemanLab links to several external efforts to archive both Gothamist and DNAinfo, and reminds us of the risks of ‘billionaire-funded media.’ (Archived link.)
- What We Lose in the Disappearing Digital Archive: on Splinter, David Uberti writes: ‘It’s likely that additional existing [online] publications will close in the face of economic upheaval, leaving their sites vulnerable to technical failure without consistent upkeep.’ Uberti also speaks with Abbie Grotke, web archiving team lead at the Library of Congress, who discusses the difficulties of capturing online news. (Archived link.)
- When your server crashes, you could lose decades of digital news content — forever: in 2014, the Columbia Missourian suffered a server crash and ‘in less than a second, the newspaper’s digital archive of fifteen years of stories and seven years of photojournalism were gone forever.’ What’s worse, as Edward McCain writes, is that ‘very little is known about the policies and practices of news organizations when it comes to born-digital content.’ (Archived link.)
- If a Pulitzer-finalist 34-part series of investigative journalism can vanish from the web, anything can: written in 2015, ‘Raiders of the Lost Web’ argues that ‘the web, as it appears at any one moment, is a phantasmagoria. It’s not a place in any reliable sense of the word. It is not a repository. It is not a library. It is a constantly changing patchwork of perpetual nowness. You can’t count on the web, okay? It’s unstable. You have to know this.’ (Archived link.)
Tools and additional links:
Conference alert: on November 15 and 16, follow along with Dodging the Memory Hole, a conference dedicated to the issue of preserving born-digital news content.
Here are a few quick links on recent web archiving topics:
- Remembering October 1. Multiple Las Vegas institutions are joining forces to document last month’s horrific mass shooting, its aftermath, and the community’s response using a multi-tech approach to web archiving. The project is actively accepting contributions from the general public. Live link
- History of Syria’s war at risk as YouTube reins in content. Excerpt: “Syrian activists fear all that history could be erased as YouTube moves to rein in violent content. In the past few months, the online video giant has implemented new policies to remove material considered graphic or supporting terrorism, and hundreds of thousands of videos from the conflict suddenly disappeared without notice. Activists say crucial evidence of human rights violations risks being lost — as well as an outlet to the world that is crucial for them.” Live link
- Archiving the Belgian web. The Royal Library of Belgium launched Preserving Online Multiple Information: towards a Belgian strategy (PROMISE) on 1 June 2017, and aims to develop a federal strategy for the preservation of the Belgian web. Live link
- Visualizing the changing web. With support from the National Endowment for the Humanities and the Institute of Museum and Library Services, the Web Science and Digital Libraries Research Group at Old Dominion University aims to visualize webpage changes over time. Live link
- Web archiving labor. Jessica Ogden explores digital labor in relation to web archiving in “Web Archiving as Maintenance and Repair.” Live link
- Evaluating a web archiving program. The Dutch National Library asks, “How can we improve our web collecting?” Live link
- Open call. Rhizome announces its open call for participation in its National Forum on Ethics and Archiving the Web. Proposals are due November 14, 2017: Live link
After a brief hiatus, the Web Archiving Roundup is back this month. Here are a few quick links on recent web archiving topics:
- How the Victoria and Albert Museum collected WeChat: “How do you collect an app? What is the thing you’re actually collecting? And what for?”
- Ashley Blewer asks, “How do web archiving frameworks work?” “If you wish to explain how web archiving works from a technical standpoint, you must first understand the ecosystem.”
- Collecting social media a bite at a time at the National Library of New Zealand: It “worries us that some of our documentary heritage may be lost if we don’t start collecting content” from social media.
- Can machine-learning models successfully identify content-rich PDF and Word documents from web archives? With support from the Institute of Museum and Library Services, the University of North Texas aims to find out.
- Rhizome to Host National Forum on Ethics and Archiving the Web: March 22-24, 2018, in conjunction with Documenting the Now and the New Museum in New York City.
- Is your organization involved in web archiving, or in the process of planning a web archive? If yes, and your organization is based in the United States, you have until November 17 to take this year’s NDSA Web Archiving Survey!
A few quick links on web archiving topics:
- SAA joins 60 other organizations and signs letter to the Office of Management and Budget about removing online government information.
- Kalev Leetaru, from the GDELT Project, writes in Forbes about “Why aren’t we doing more with our web archives?”
- The recent presidential transition has led to multiple news stories about web pages and data being removed from government websites.
- Web archiving activity around government websites
Jefferson Bailey, Director of Web Archiving Programs at the Internet Archive, will be presenting the first webinar of 2017 for the Web Archiving Section of the Society of American Archivists.
Description: This webinar will provide a basic introduction to the many existing, and emergent, APIs specific to web archives and web archiving. Topics covered will include an overview of the role of APIs in the web archiving lifecycle, examples of APIs that exist for querying public web archives, and examples of collection and content specific APIs for use by curators and researchers. The webinar will demonstrate some basic examples for querying APIs and associated tools. Lastly, the webinar will present the work of the IMLS-funded WASAPI project (Web Archiving Systems APIs) which is developing APIs for the exchange of preservation web data and exploring models for API-based systems interoperability in web archiving.
Day: March 8, 2017
Time: 1 pm Eastern / 12 pm Central / 10 am Pacific
Where: Online via WebEx
If you are interested in attending the webinar, we ask that you RSVP via this online form so that we can plan accordingly. We will send registered attendees a link to access the webinar in advance of March 8, 2017.
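As a small taste of the public web-archive APIs the webinar will cover, here is a minimal sketch that queries the Internet Archive’s CDX server for a URL’s capture history. The CDX endpoint is public, but the exact field names in its JSON output are assumptions to check against current documentation:

```python
import json
import urllib.parse
import urllib.request

CDX_ENDPOINT = "https://web.archive.org/cdx/search/cdx"

def build_cdx_query(url, limit=5):
    """Build a CDX API query URL asking for JSON output."""
    params = urllib.parse.urlencode({
        "url": url,
        "output": "json",
        "limit": limit,
    })
    return f"{CDX_ENDPOINT}?{params}"

def list_captures(url, limit=5):
    """Fetch capture records; the first row is the field header, the rest are captures."""
    with urllib.request.urlopen(build_cdx_query(url, limit)) as resp:
        rows = json.load(resp)
    header, records = rows[0], rows[1:]
    return [dict(zip(header, rec)) for rec in records]

if __name__ == "__main__":
    for capture in list_captures("example.com"):
        print(capture["timestamp"], capture["original"])
```

Queries like this are the building blocks for the curator- and researcher-facing tools the webinar description mentions, and the WASAPI work aims to standardize similar interfaces for exchanging preservation web data.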