Processing and Workflow Assessment Toolkit

Basic Data Collection

Aim to gather and track processing metrics, which include basic information about processing projects and the overall time spent on them.

Basic data to collect:

  • Title
  • Identifier
  • Date range
  • Collection extent (both “start” and “finish” extents, for both physical and digital volume)
  • Start date and end date of processing project

Additional Data Collection: Complexity

Gathering additional information about the complexity of processing a collection helps produce more accurate estimates of the time and resources needed, since several variables affect processing effort.

The complexity level of a collection is the perceived amount of effort and time required to arrange and describe it. Factors that influence complexity include:

  • The level of processing to be performed.
  • Staffing assigned to the project (full time/half time, use of interns, etc.)
  • Presence and volume of special formats such as audio-visual and physical storage media

Additional Data Collection: Time Spent Processing

Record time spent (to the quarter hour) on discrete processing tasks.

  • Surveying and planning
  • Rehousing (refoldering, reboxing, etc.)
  • Inventorying
  • Physical arrangement
  • Preservation activities
  • Digital processing
  • Description: finding aid authoring
  • Description: MARC record creation
  • Box labeling and barcoding
  • Data entry/ingest into ArchivesSpace or other content management system
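One way to capture time to the quarter hour is to log each work session against a task from the list above and round the elapsed time at entry. The sketch below is a minimal, hypothetical example (the collection identifier, field names, and log structure are illustrative, not a prescribed schema):

```python
from datetime import datetime

def quarter_hours(start, end):
    """Elapsed time between two timestamps, rounded to the nearest quarter hour."""
    elapsed_hours = (end - start).total_seconds() / 3600
    return round(elapsed_hours * 4) / 4

# One entry per work session, tagged with a discrete processing task.
log = []
log.append({
    "collection": "MSS-001",          # hypothetical identifier
    "task": "Rehousing",
    "hours": quarter_hours(datetime(2024, 3, 1, 9, 0),
                           datetime(2024, 3, 1, 11, 40)),
})
print(log[0]["hours"])  # 2.75 (2 hours 40 minutes rounds up to 2.75)
```

Summing entries by task or by collection then gives the per-task totals used in the scenarios below.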

Scenarios

What collections should be our next processing priorities?

Data needed for all collections:

  • List of collections (titles and/or identifiers)
  • Processing status
  • Description status
  • Extent
  • Research value
  • Subject matter
  • Physical condition
  • Number of use requests (for unprocessed but discoverable collections)

How long will it take to process this new collection/our backlog/this set of collections for this grant?

Data needed for selected collections:

  • Processing status
  • Description status
  • Extent
  • Anticipated processing level
  • Collection complexity
  • Average processing rate (time spent and extent for past processing projects, ideally with complexity and processing levels similar to those of the selected collections)
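With past projects recorded as extent plus total hours, the estimate is a simple rate calculation. This is a hedged sketch with made-up numbers; the field names and linear-foot units are assumptions, and real estimates should draw on past projects of comparable complexity:

```python
def avg_rate(projects):
    """Average processing rate in hours per linear foot, from past projects."""
    total_hours = sum(p["hours"] for p in projects)
    total_extent = sum(p["extent_lf"] for p in projects)
    return total_hours / total_extent

# Hypothetical completed projects of similar complexity and processing level.
past = [
    {"extent_lf": 30, "hours": 120},
    {"extent_lf": 10, "hours": 50},
]

rate = avg_rate(past)        # 170 hours / 40 linear feet = 4.25 hrs/lf
estimate = rate * 25         # projected hours for a 25-linear-foot collection
print(f"{rate} hrs/lf; estimated {estimate} hours")
```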

What percentage of our new collections (yearly accessions) are made available for research immediately upon accession?

Data needed:

  • Number of collections accessioned each year
  • Processing status
  • Access status
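Given accession records that note whether each collection was opened for research at the point of accession, the percentage is straightforward to compute. The records below are hypothetical, and the `open_at_accession` field name is an assumption:

```python
# Hypothetical accession records for one year; "open_at_accession" marks
# collections made available for research immediately upon accession.
accessions = [
    {"id": "2024-001", "open_at_accession": True},
    {"id": "2024-002", "open_at_accession": False},
    {"id": "2024-003", "open_at_accession": True},
    {"id": "2024-004", "open_at_accession": True},
]

open_count = sum(a["open_at_accession"] for a in accessions)
pct_open = 100 * open_count / len(accessions)
print(f"{pct_open:.0f}% of this year's accessions opened immediately")  # 75%
```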

Are we acquiring collections at a greater rate than we are processing them? (The answer may inform local processing practices as well as collection development plans and appraisal practices).

Data needed:

  • List of acquisitions (collection title, accession number, etc.)
  • Date of acquisition
  • Extent
  • List of processed collections (titles and/or identifiers)
  • Date processing completed
  • Extent

We’re experimenting with opening collections with minimal (collection-level only) description created at the point of accession. How does this approach impact discoverability and use? 

Data needed for each collection:

  • Level of processing
  • Date processing completed
  • Research value
  • Number of requests/uses

Is team processing more efficient than processing performed by a single archivist?

Data needed for completed processing projects:

  • Collection extent
  • Collection complexity level
  • Processing level
  • Staffing configuration
  • Total hours spent processing
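With those data points collected per project, efficiency can be compared as hours per linear foot grouped by staffing configuration. The projects and staffing labels below are hypothetical; a fair comparison should restrict to projects of similar complexity and processing level:

```python
# Hypothetical completed projects, all at the same complexity and processing level.
projects = [
    {"staffing": "team", "extent_lf": 20, "hours": 70},
    {"staffing": "solo", "extent_lf": 15, "hours": 75},
]

def rate_by_staffing(projects):
    """Hours per linear foot, aggregated by staffing configuration."""
    groups = {}
    for p in projects:
        hours, extent = groups.setdefault(p["staffing"], [0.0, 0.0])
        groups[p["staffing"]] = [hours + p["hours"], extent + p["extent_lf"]]
    return {k: h / lf for k, (h, lf) in groups.items()}

rates = rate_by_staffing(projects)
print(rates)  # {'team': 3.5, 'solo': 5.0} — lower is more efficient
```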