Disaster Planning for Digital Repositories

Digital preservation involves the maintenance and management of digital objects so that they retain their data integrity and remain accessible to future users. These digital objects may take the form of a single file, a group of records, an audio file, a visual file, or an audiovisual file. Consequently, it is necessary to consider the digital preservation techniques that will be applied to a file at the time it is created. Although traditional print materials can survive for long periods without harm or interference, this is not the case with digital objects. Digital objects tend to have shorter lifespans; therefore, preservation methods must be considered early enough to minimize the difficulty of trying to retrieve compromised data (Deshpande, Selvaraja, & Sarasvathy, 2011).

In the article “DSpace: An Open Source Dynamic Digital Repository,” Smith et al. (2003) focus on procedures for disaster recovery, particularly for audiovisual collections. If audiovisual content needs to be preserved and kept accessible for future use, digitization is the best available option. However, digital storage has a shorter lifespan than the traditional analog methods used for storing audiovisual content (Smith et al., 2003). Therefore, financial decisions about media storage should also incorporate risk management to ensure that this content is not compromised.

Even a small collection of digital audiovisual content contains a massive quantity of data whose loss could have dire consequences for a project; a single file might contain up to 10 terabytes of data. This trend has created a new set of problems that requires professionals to develop new and innovative skills for managing collections. According to the article, the best approach to disaster management is proactive planning that limits the impact of a risk if it materializes in the collection (Smith et al., 2003). The first step of the process is to analyze the possible risks, the probability of their occurrence, and their likely consequences should they occur. The majority of current large audiovisual collections are created by digitizing analog material into conventional formats that are easy to access. The storage format also determines the degree of risk for digital objects, and many of the formats in which analog material is stored after digitization increase the risk of data compromise. The digitization of both audio and visual data should therefore be scrutinized and the result compared against the original to ensure that no corruption has occurred.
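
As a rough illustration of this first step, the risk analysis can be reduced to a simple probability-times-impact ranking. The sketch below is a hypothetical Python example; the specific risks, scales, and scores are assumptions for illustration, not figures from Smith et al. (2003).

```python
# Hypothetical risk register: each risk gets a probability and an impact
# score on a 1-5 scale, and risks are ranked by probability x impact.
risks = {
    "water ingress":       {"probability": 4, "impact": 2},
    "fire":                {"probability": 1, "impact": 5},
    "security intrusion":  {"probability": 3, "impact": 4},
    "format obsolescence": {"probability": 3, "impact": 3},
}

# Print the register from the highest-scoring risk down.
for name, r in sorted(risks.items(),
                      key=lambda kv: kv[1]["probability"] * kv[1]["impact"],
                      reverse=True):
    print(f"{name:20} score = {r['probability'] * r['impact']}")
```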

It is also important to identify faults in the original collection, as they might be transferred to the digitized content. In addition, some common issues occur as a result of compression, including degrading noise and transcoding errors. A recovery plan should therefore avoid attempts to repair an issue that already existed at the time of digitization. Technology changes rapidly, and one should watch the emerging trends adopted for storage purposes and ensure that there is the capacity to support them. Failure to store a collection in formats supported by current technology can itself become a disaster, and a small system malfunction or a minor intrusion could have dire consequences for data (Smith et al., 2003). The power supply to the storage location should also be stable, because interruptions can cause data corruption.
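
One way to audit a collection for format obsolescence is to identify each file by its leading “magic” bytes and flag anything unrecognized for migration review. The sketch below is an illustrative assumption, not a procedure from the article; the signatures shown are the standard ones for the named containers.

```python
# Identify a file by its magic bytes; unknown formats are flagged
# so they can be reviewed for migration to a supported format.
SIGNATURES = {
    b"RIFF":             "WAV/AVI (RIFF container)",
    b"FORM":             "AIFF container",
    b"\x1a\x45\xdf\xa3": "Matroska/WebM container",
}

def identify_format(path: str) -> str:
    with open(path, "rb") as f:
        header = f.read(4)
    for magic, label in SIGNATURES.items():
        if header.startswith(magic):
            return label
    return "unknown format - flag for migration review"
```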

The second process involves the mitigation of risks, and it must be addressed at the organizational level, because disaster planning applies to almost all institutions regardless of their line of business. Risks such as fires, security intrusions, and floods should be handled at the organizational level. The risks that could affect a collection should not only be identified but also ranked in a matrix according to how probable and how damaging they are. For example, the probability of water entering the storage location might be high while the damage it causes is low; a fire might have a low probability of occurring but cause massive damage if it does. The preservation strategy should also provide for additional mitigation measures against other digital risks. The mitigation process should start after a quality check is done on the digitized content by adding a checksum to the file. A checksum is a numerical value assigned to a file based on the bits in the archive. When the accuracy of stored data needs to be checked, or the data is moved from one server to another, the checksum is recalculated and compared against the original (Smith et al., 2003). If the figures match, there is a high probability that the data is still intact and at no major risk of disaster. The number is unique to every file, and it changes if the file is edited or its format is changed.
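
In practice, this kind of fixity check is often implemented with a cryptographic hash. Below is a minimal sketch in Python, assuming SHA-256 as the checksum algorithm and an invented file name; the article does not prescribe either.

```python
import hashlib

def file_checksum(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so that even very large audiovisual
    files can be checksummed without loading them into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record the checksum at ingest time...
recorded = file_checksum("master_001.wav")

# ...and recalculate it after any transfer or on a schedule. A mismatch
# means the bits changed: the file was edited, transcoded, or corrupted.
if file_checksum("master_001.wav") != recorded:
    raise RuntimeError("Fixity check failed - investigate this file")
```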

Further, Craiger, Swauger, and Marberry (2005), in their work “Digital Evidence Obfuscation: Recovery Techniques,” state that a good preservation strategy should spread the risk by creating multiple copies of the same file; traditionally, copies of a disk or tape would be stored in different locations. In the digital domain, however, the copies are checked for errors before they are committed to a media asset management system (Craiger, Swauger, & Marberry, 2005). There is no limit or clear rule on how many copies should be made, but most organizations prefer to make three of them, at high quality or better resolution in the case of an audiovisual collection. The three copies should be stored in discrete locations, because it makes no sense to make copies and then save them together on the same server. To spread the risk, the copies should be kept in separate physical locations; the farther they are from each other, the lower the risk to the data (Craiger, Swauger, & Marberry, 2005). Consequently, the IT department of any organization should ensure that the three locations are not exposed to the same physical or environmental risk.
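
A minimal sketch of this replicate-and-verify step follows, assuming three hypothetical mount points standing in for the separate locations; the paths and the three-copy policy shown are illustrative, not prescribed by the source.

```python
import hashlib
import shutil
from pathlib import Path

def file_checksum(path: Path) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def replicate(source: Path, destinations: list[Path]) -> None:
    """Copy a file to each destination and verify every copy against
    the source checksum before trusting it."""
    expected = file_checksum(source)
    for dest_dir in destinations:
        copy = dest_dir / source.name
        shutil.copy2(source, copy)
        if file_checksum(copy) != expected:
            raise IOError(f"Copy in {dest_dir} failed verification")

# Three copies in three separate locations (hypothetical paths):
replicate(Path("/archive/master_001.mov"),
          [Path("/mnt/site_a"), Path("/mnt/site_b"), Path("/mnt/site_c")])
```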

The file format chosen for data storage should also be considered in light of the available support options, particularly whether it is proprietary or open source. Proprietary formats may offer excellent features, but support for the collection is provided solely at the discretion of the owner. Open source formats are supported through a specification developed by a group of experts in a given sector. The specification describes how a collection is to be opened, and since it is open source, several options can be used for recovery if a disaster occurs. The development and support of open source formats are overseen by a community of experts: any proposed change is considered from different perspectives and has to be voted in by the members of a defined community (Craiger, Swauger, & Marberry, 2005). This collective decision-making ensures that any changes are comprehensive and properly scrutinized.

Archives are valuable to an organization not only because of the data stored in them but also because of the nature of the records. According to Deshpande, Selvaraja, and Sarasvathy (2011), in their work “Digital Preservation: An Overview,” the stored technical details of a record are also important. These characteristics are known as metadata, commonly described as information about information. In digital storage, metadata is essential to a collection. Traditionally, technical information was stored in a catalog or a database. This practice also applies to data stored on a digital platform; however, it would be almost impossible to manage a collection without a direct link between a file and its metadata (Deshpande, Selvaraja, & Sarasvathy, 2011). Metadata is so vital to digital storage that, if it were lost, it would be impossible to manage the collection further. Consequently, enough metadata should be embedded in the file’s structure so that, if the external metadata were corrupted, it would still be possible to identify and retrieve the file.
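
One common way to maintain that direct link is a sidecar record stored next to the digital object itself, so the object remains identifiable even if the central catalog is corrupted. Below is a minimal sketch; the field names and file names are invented for illustration.

```python
import hashlib
import json
from pathlib import Path

def write_sidecar(path: Path, descriptive: dict) -> Path:
    """Write a metadata record next to the file it describes, so the
    object can still be identified if the external catalog is lost.
    (For very large files, the checksum should be streamed in chunks.)"""
    record = {
        "filename": path.name,
        "size_bytes": path.stat().st_size,
        "sha256": hashlib.sha256(path.read_bytes()).hexdigest(),
        **descriptive,
    }
    sidecar = path.with_suffix(path.suffix + ".meta.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar

write_sidecar(Path("interview_1987.wav"),
              {"title": "Oral history interview", "digitized": "2011-04-02"})
```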

Thus, any digital preservation method requires an active maintenance program. Some organizations automate their maintenance through a server system that verifies checksums to detect intrusions or other emerging risks. Major maintenance tasks include migrating stored data to newer hardware and converting files to current, more accessible formats; the aim of the maintenance process is to keep pace with changes in technology. During maintenance, power supplies should also be checked. Any system, including storage, can lose data if there is no controlled shutdown procedure for a loss of power. If the power goes off before a file has been fully written to the new storage hardware or in the new format, there is a risk of losing both the new copy and the original. Consequently, an uninterruptible power supply (UPS) is necessary to control the shutdown and thereby mitigate the risk of data loss. Furthermore, storage systems should always produce the new copy first and delete the original only after verification (Deshpande, Selvaraja, & Sarasvathy, 2011). This practice ensures that data is never completely lost and that recovery remains possible in case of a disaster.
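
A minimal sketch of that copy-first, delete-after-verification rule for a hardware migration follows; the paths and helper names are illustrative assumptions.

```python
import hashlib
import shutil
from pathlib import Path

def file_checksum(path: Path) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def migrate(old: Path, new_home: Path) -> Path:
    """Move a file to new hardware safely: write the copy first, verify
    it, and delete the original only afterwards. If power fails mid-copy,
    the original is still intact."""
    new = new_home / old.name
    shutil.copy2(old, new)
    if file_checksum(new) != file_checksum(old):
        new.unlink()          # discard the bad copy, keep the original
        raise IOError("Migration copy failed verification")
    old.unlink()              # safe to remove only after verification
    return new
```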

The last procedure concerns the actual recovery process in the event of a disaster. The nature of the attack or disaster that befalls a file determines the recovery method to be used. If the disaster results from a hardware malfunction, recovery could be as basic as correcting an error in a RAID system; it might also involve the complexities of treating tapes through computer forensics. The options available depend on how well the preservation strategy was developed. No recovery plan works in all situations, as disasters vary in their scale and effect. However, several stages apply to any digital preservation disaster to ensure that data integrity is upheld: the process starts with an evaluation of the extent of the damage, continues with salvage and stabilization, and ends with a particular recovery action (Deshpande, Selvaraja, & Sarasvathy, 2011). If recovery must be performed on a severely damaged disk, it may require the additional skills of forensics professionals. The recovery process can take a long time, especially if an organization has not invested in a good digital preservation strategy. If a workable plan was implemented from the start, recovery might be as simple as retrieving data from the surviving copies stored in separate locations.
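
In that simple case, recovery reduces to scanning the surviving replicas for a copy whose checksum still matches the value recorded at ingest. A minimal sketch, with the replica locations assumed:

```python
import hashlib
from pathlib import Path

def file_checksum(path: Path) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def recover(filename: str, recorded_sha256: str,
            locations: list[Path]) -> Path:
    """Return the first surviving copy whose checksum still matches
    the value recorded when the file was ingested."""
    for location in locations:
        candidate = location / filename
        if candidate.exists() and file_checksum(candidate) == recorded_sha256:
            return candidate
    raise FileNotFoundError(
        "No intact copy survives - escalate to forensics recovery")
```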
