Web Archiving Roundup: February 5, 2018

Happy February, Roundtablers! Here’s your Web Archiving Roundup for February 5, 2018:

  • Archiving the alternative presses threatened by wealthy buyers: in partnership with Archive-It, Freedom of the Press Foundation is launching an online archives collection, focused ‘on news outlets we deem to be especially vulnerable to the billionaire problem,’ and ‘aims to preserve sites in their entirety before their archives can be taken down or manipulated.’ (Archived link.)
  • A New Playback Tool for the UK Web Archive: the UK Web Archive will be working with Rhizome to build a version of pywb (Python Wayback) that they ‘hope will greatly improve the quality of playback for access to our archived content.’ (Archived link.)
  • And, speaking of: Webrecorder has released an updated version of pywb, ‘a major refactoring and improvement’ of the ‘core engine’ that powers Webrecorder. (Archived link.)
  • The International Internet Preservation Consortium Content Development Group would like your help to archive websites from around the world related to the 2018 Winter Olympic and Paralympic Games! Submit seeds via this Google Form.

Web Archiving Roundup: January 22, 2018

Here’s your Web Archiving Roundup for January 22, 2018:

  • A Case for Digital Squirrels: in First Monday, authors Lindsay Kistler Mattock, Colleen Theisen, and Jennifer Burek Pierce look at ‘the myth of YouTube as an archive’ and discuss their ‘recommendations for developing new practices for archiving YouTube content to support scholarly research.’ (Archived link.)
  • An update from Cobweb: a joint effort of the University of California, Los Angeles, Harvard University, and the California Digital Library, with a production launch planned for 2018, Cobweb seeks to empower ‘specialists, digital curators, and researchers’ by allowing them to ‘establish thematic web archiving collecting projects; nominate web resources for capture; claim nominated web resources with an intention to capture them; and contribute descriptions of those web resources that have been captured.’ (Archived link.)
  • Rhizome receives $1 million from the Andrew W. Mellon Foundation: the money ‘will support Webrecorder’s implementation in institutional contexts, while upgrading capture and usability for all users.’ (Archived link.)
  • We’re all Bona Fide: writing on On Archivy, Bergis Jules argues that preserving cultural heritage on the web should be an inclusive and community-centered effort. ‘Archiving social media content,’ he writes, ‘should be a shared professional and community responsibility because it not only stretches our resources further, but it can also help to ensure that the records we end up creating are more representative of marginalized people.’ (Archived link.)
  • You still have time to let the International Internet Preservation Consortium know what you need when it comes to web archiving training: fill out this survey, and help the Consortium in its quest to develop materials for all types of training, be it technical, curatorial, or training for practitioners and researchers.

Web Archiving Roundup: January 8, 2018

Happy New Year, Roundtablers! Here’s your Web Archiving Roundup for January 8, 2018:

  • Meet the Librarians Saving the Internet: at Science Friday, Lauren J. Young profiles a few of the digital librarians who ‘continue to preserve our history’ by navigating ‘through a labyrinth of dispersed personal accounts on the web that have come and gone through time.’ (Archived link.)
  • Read .supDigital’s interview with Dragan Espenschied, Preservation Director for Rhizome (and Webrecorder.io). Then, read more from Jasmine Mulliken and .supDigital on web archiving. (Archived link.)
  • On their blog, Old Dominion University urges you to Link to Web Archives, not Search Engine Caches. Why? Because ‘Search Engine caches are useful for covering transient errors in the live web, but they are not archives and thus not suitable for long-term access.’ (Archived link.)
  • The Library of Congress will no longer archive every public tweet. Read the Library’s ‘Update on the Twitter Archive at the Library of Congress’ here. (Archived link.)
  • The International Internet Preservation Consortium wants to know what you need when it comes to web archiving training. Fill out this survey and help the Consortium in its quest to develop materials for all types of training, be it technical, curatorial, or training for practitioners and researchers.
  • Dust off your résumé and sharpen your Python skills: Rhizome seeks a Senior Backend Developer to work on Webrecorder’s backend infrastructure. Applications are due by January 16, 2018, and can be sent to webrecorderjobs@rhizome.org.
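Old Dominion’s advice is easy to put into practice: web archives such as the Wayback Machine expose stable, timestamped snapshot URLs that can be linked to directly, whereas a search engine cache link can vanish at any time. A minimal sketch (the web.archive.org URL pattern is the Wayback Machine’s public addressing convention; the helper function name is ours):

```python
from datetime import datetime, timezone

def wayback_url(target_url: str, when: datetime) -> str:
    """Build a stable Wayback Machine snapshot URL for a page.

    The Wayback Machine addresses captures as
    https://web.archive.org/web/<YYYYMMDDhhmmss>/<original-url>;
    linking to such a URL is durable, unlike a search engine cache link.
    """
    timestamp = when.strftime("%Y%m%d%H%M%S")
    return f"https://web.archive.org/web/{timestamp}/{target_url}"

# Example: link to a capture of example.com from January 8, 2018
link = wayback_url("http://example.com/", datetime(2018, 1, 8, tzinfo=timezone.utc))
```

If no capture exists at that exact second, the Wayback Machine redirects to the nearest one, so the link stays usable.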

Web Archiving Roundup: December 11, 2017

Happy December, Roundtablers! Here’s your Web Archiving Roundup for December 11, 2017:

  • Sustaining the Software that Preserves Access to Web Archives: on Digital Preservation Day, Andrew Jackson took a look at open source tools that enable access to web archives, and asked us to think about what comes next.
  • Speaking of moving forward — how do you move a web archive? On their blog, the National Archives details what went into moving 120 terabytes of data, on seventy drives, from Internet Memory Research’s data centre in Paris, to the Archives site in Kew, and, finally, to the Cloud. (Archived link.)
  • For the Digital Preservation Coalition, David S. H. Rosenthal writes about how we might be Losing the Battle to Archive the Web.
  • And, at the Atlantic, Alexis C. Madrigal writes that Future Historians Probably Won’t Understand Our Internet, and That’s Okay. Today, he notes: ‘there is more data about more people than ever before, however, the cultural institutions dedicated to preserving the memory of what it was to be alive in our time, including our hours on the internet, may actually be capturing less usable information than in previous eras.’ Still, as Nick Seaver says, ‘Is it terrible that not everything that happens right now will be remembered forever? Yeah, that’s crappy, but it’s historically quite the norm.’ (Archived link.)
  • Web Archiving Histories and Futures: the International Internet Preservation Consortium has announced its Call for Papers for its annual conference, to be held at the National Library of New Zealand in Wellington from November 13-15, 2018. Abstracts should be 300 to 500 words in length, and may touch upon topics related to: building web archives, maintaining web archive content and operations, using and researching web archives, web archive histories and futures, and more. Proposals are due February 28, 2018. 

Web Archiving Roundup: November 27, 2017

Here’s your post-Thanksgiving Web Archiving Roundup for November 27, 2017:

Web Archiving Roundup: November 6, 2017

Here are a few quick links on recent web archiving topics:

  • Remembering October 1. Multiple Las Vegas institutions are joining forces to document last month’s horrific mass shooting, its aftermath, and the community’s response using a multi-tech approach to web archiving. The project is actively accepting contributions from the general public. Live link
  • History of Syria’s war at risk as YouTube reins in content. Excerpt: “Syrian activists fear all that history could be erased as YouTube moves to rein in violent content. In the past few months, the online video giant has implemented new policies to remove material considered graphic or supporting terrorism, and hundreds of thousands of videos from the conflict suddenly disappeared without notice. Activists say crucial evidence of human rights violations risks being lost — as well as an outlet to the world that is crucial for them.” Live link
  • Archiving the Belgian web. The Royal Library of Belgium launched Preserving Online Multiple Information: towards a Belgian strategy (PROMISE) on 1 June 2017, and aims to develop a federal strategy for the preservation of the Belgian web. Live link
  • Visualizing the changing web. With support from the National Endowment for the Humanities and the Institute of Museum and Library Services, the Web Science and Digital Libraries Research Group at Old Dominion University aims to visualize webpage changes over time. Live link
  • Web archiving labor. Jessica Ogden explores digital labor in relation to web archiving in “Web Archiving as Maintenance and Repair.” Live link
  • Evaluating a web archiving program. The Dutch National Library asks, “How can we improve our web collecting?” Live link
  • Open call. Rhizome announces its open call for participation in its National Forum on Ethics and Archiving the Web. Proposals are due November 14, 2017: Live link


2016 Web Archiving RT Meeting Agenda!

Web Archiving Roundtable Meeting
Wednesday, August 3
4-5:30 PM, Salon D

Agenda:
4-4:15  Welcome and General Business Meeting (Kate Stratton and John Bence)
4:20-4:35 NDSA Survey update (Nicholas Taylor)
4:40-4:55 Internet Archive WASAPI project update (Jefferson Bailey)
5-5:30 OCLC Research Web Archiving and Metadata Working group update and discussion (Jackie Dooley)

We’re looking forward to seeing you there!

Guest Post on Web Archiving: Andrea Goethals

Variations of this post have also been published on the Harvard Library website, the Library of Congress’ Signal blog, and the IIPC’s blog.

Over the last couple of years, managing born-digital material, including content that originated on the Web, has been one of Harvard Library’s strategic priorities. Although the Library had been engaged in the collection, preservation and delivery of web content for several years, a strategy was needed to make this activity more scalable and sustainable at the university. The Library formed a Web Archive Working Group to gather information and make recommendations for a web archiving strategy for Harvard Library. One of the information-gathering activities the Working Group engaged in over the last year was an environmental scan of the current practices, issues and trends in web archiving nationally and internationally. Two members of the Working Group, Andrea Goethals and Abigail Bordeaux, worked closely with a consultant, Gail Truman of Truman Technologies, to conduct the five-month study and write the report. The study began in August 2015 and was made possible by the generous support of the Arcadia Fund. The final report is now available from Harvard’s open access repository, DASH.

The study included a series of interviews with web archiving practitioners from archives, museums and libraries worldwide; web archiving service providers; and researchers who use web archives. The interviewees were selected from the membership of several organizations, including the International Internet Preservation Consortium (IIPC), the Web Archiving Roundtable at the Society of American Archivists (SAA), the Internet Archive’s Archive-It Partner Community, the Ivy Plus institutions, Working with Internet archives for REsearch (Rutgers/WIRE Group), and the Research infrastructure for the Study of Archived Web materials (RESAW).

The interviews of web archiving practitioners covered a wide range of areas, everything from how institutions are maintaining their web archiving infrastructure (e.g. outsourcing, staffing, location in the organization), to how they are (or aren’t) integrating their web archives with their other collections. From this data, profiles were created for 23 institutions, and the data was aggregated and analyzed to look for common themes, challenges and opportunities.

In the end, the environmental scan revealed 22 opportunities for future research and development. These opportunities are listed in Table 1 and described in more detail in the report. At a high level, these opportunities fall under four themes: (1) increase communication and collaboration, (2) focus on “smart” technical development, (3) focus on training and skills development, and (4) build local capacity.


22 Opportunities to Address Common Challenges

(the order has no significance)

1. Dedicate full-time staff to work in web archiving so that institutions can stay abreast of latest developments, best practices and fully engage in the web archiving community.
2. Conduct outreach, training and professional development for existing staff, particularly those working with more traditional collections, such as print, who are being asked to collect web archives.
3. Increase communication and collaboration across types of collectors since they might collect in different areas or for different reasons.
4. Fund a collaboration program (a bursary award, for example) to support researcher use of web archives by gathering feedback on requirements and impediments to their use.
5. Leverage the membership overlap between RESAW and European IIPC membership to facilitate formal researcher/librarian/archivist collaboration projects.
6. Make institutional web archiving programs transparent about holdings, indicating what material each has, terms of use, preservation commitments, and the curatorial decisions made for each capture.
7. Develop a collection development tool (e.g. registry or directory) to expose holdings information to researchers and other collecting institutions even if the content is viewable only in on-site reading rooms.
8. Conduct outreach and education to website developers to provide guidance on creating sites that can be more easily archived and described by web archiving practitioners.
9. Have the IIPC, or a similar large international organization, educate and influence tech companies that host content (e.g. Google/YouTube) on the importance of supporting libraries and archives in their efforts to archive that content (even if the content cannot be made immediately available to researchers).
10. Investigate Memento further, for example conduct user studies, to see if more web archiving institutions should adopt it as part of their discovery infrastructure.
11. Fund a collection development and nomination tool that can enable rapid collection development decisions, possibly building on one or more of the current tools that are targeted for open source deployment.
12. Gather requirements across institutions and among web researchers for next generation of tools that need to be developed.
13. Develop specifications for a web archiving API that would allow web archiving tools and services to be used interchangeably.
14. Train researchers in the skills they need to analyze big data found in web archives.
15. Provide tools to make researcher analysis of big data found in web archives easier, leveraging existing tools where possible.
16. Establish a standard for describing the curatorial decisions behind collecting web archives so that there is consistent (and machine-actionable) information for researchers.
17. Establish a feedback loop between researchers and the librarians/archivists.
18. Explore how institutions can augment the Archive-It service and provide local support to researchers, possibly using a collaborative model.
19. Increase interaction with users, and develop deep collaborations with computer scientists.
20. Explore what, and how, a service might support running computing and software tools and infrastructure for institutions that lack their own onsite infrastructure to do so.
21. Encourage service providers to develop more offerings around the available tools, lowering the barrier to entry and making them accessible to those lacking programming skills and/or IT support.
22. Work with service providers to help reduce any risks of reliance on them (e.g. support for APIs so that service providers could more easily be changed and content exported if needed).

Table 1: The 22 opportunities for further research and development that emerged from the environmental scan
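Opportunity 10 above concerns Memento, the protocol standardized as RFC 7089 that lets a client ask a "TimeGate" for the archived capture closest to a desired datetime, sent as an HTTP-date in an Accept-Datetime header. A minimal sketch of building such a request’s headers (the helper function is ours; the header name and date format come from the RFC):

```python
from datetime import datetime, timezone
from email.utils import format_datetime

def memento_headers(when: datetime) -> dict:
    """Headers for a Memento TimeGate request (RFC 7089).

    A TimeGate redirects to the capture closest in time to the
    Accept-Datetime value, which must be a valid HTTP-date.
    """
    return {"Accept-Datetime": format_datetime(when, usegmt=True)}

# Ask for the capture nearest noon (UTC) on February 14, 2016
headers = memento_headers(datetime(2016, 2, 14, 12, 0, tzinfo=timezone.utc))
```

These headers can be sent with any HTTP client to a TimeGate endpoint, such as the Memento aggregator at timetravel.mementoweb.org, which searches across many public web archives at once.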

One of the biggest takeaways is that the first theme, the need to radically increase communication and collaboration among everyone involved in web archiving, was the most prevalent theme found by the scan: thirteen of the 22 opportunities fell under it. Much more communication and collaboration is needed not only among those collecting web content, but also between collectors and the researchers who would like to use what they collect.

This environmental scan has given us a great deal of insight into how other institutions are approaching web archiving, which will inform our own web archiving strategy at Harvard Library in the coming years. We hope that it has also highlighted key areas for research and development that need to be addressed if we are to build efficient and sustainable web archiving programs that result in complementary and rich collections that are truly useful to researchers.


Web archiving roundup: February 14, 2016

Happy Valentine’s Day! Here’s your web archiving roundup for February 14, 2016:

  • GDELT + Internet Archive’s Collaboration To Archive The World’s Online Journalism: GDELT, global news coverage, and the Internet Archive’s “No More 404” program.
  • A new, free tool that’s like x-ray glasses for political ads: The Internet Archive’s Political TV Ad Archive will house all the presidential ads expected to air in eight battleground states during the primaries. Plus, fact-checking!
  • Announcing Archive-It 5.0! What’s new in Archive-It’s version 5.0.
  • State of the WARC–Our Digital Preservation Survey Results: The takeaways from Archive-It’s June 2015 survey of local digital preservation activities involving WARC files.
  • Emulating Digital Art Works: A critique of Oya Rieger and Tim Murray’s recent white paper, Preserving and Emulating Digital Art Objects.
  • Compute Canada Support: “Web Archives for Longitudinal Knowledge”: breaking down the silos in Canadian web archiving.
  • On the Road: Some Upcoming Lectures and Talks: Ian Milligan’s upcoming slate of lectures on digital humanities/digital history/web archiving.
  • To ZIP or not to ZIP, that is the (web archiving) question: What trade-offs are made when we compress (or don’t compress) web archive files?
  • January 2016 Federal Cloud Computing Summit: An overview.
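The ZIP question above comes down to how compression interacts with random access. WARC files are conventionally compressed with one gzip member per record, so a reader can seek straight to a record’s byte offset and decompress it alone, rather than decompressing the whole file. A small sketch of that convention (plain gzip on toy records, not a real WARC writer):

```python
import gzip

def compress_records(records: list) -> bytes:
    """Compress each record as its own gzip member, WARC-style.

    Concatenated gzip members still form one valid gzip stream, but
    because each record is a separate member, a reader holding a byte
    offset can decompress a single record without touching the rest --
    the trade-off versus compressing the whole file as one stream.
    """
    return b"".join(gzip.compress(record) for record in records)

data = compress_records([b"record one\n", b"record two\n"])

# Decompressing the whole stream yields all records back-to-back
restored = gzip.decompress(data)
```

Compressing the file as a single stream would yield a better ratio but would force sequential decompression from the start for every access, which is why the per-record convention won out for web archives.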

Web archiving roundup: January 22, 2016

Here’s your web archiving roundup for January 22, 2016!

  • Guest post–Ilya Kreymer on oldweb.today: Ilya Kreymer explains how oldweb.today works.
  • The Internet is for Cats: if the most important content genre on the Internet is cat videos, how did the Internet work back when there was no video?
  • Political TV ad archive preserves lies for the ages: the Internet Archive will help you call out politicians who stretch the truth.
  • BowieNet: How David Bowie’s ISP foresaw the future of the internet.
  • The Top 10 Blog Posts of 2015 on The Signal: in case you missed them, here are the most popular posts from the Library of Congress’s digital preservation blog.
  • Rhizome Awarded $600,000 by The Andrew W. Mellon Foundation to build Webrecorder, a tool to archive the dynamic web.
  • Web Archives, Performance & Capture: Christie Peterson shares her talk from Web Archives 2015.
  • ‘From Clay to the Cloud’ examines human record: Museum exhibit urges us to consider the cultural record we create through the Internet and how that record is preserved.
  • Survey: How Do You Approach Web Archiving? Do you have fifteen minutes to tell the National Digital Stewardship Alliance about your organization’s web archiving activities?