



Handbook of Local Area Networks, 1998 Edition: LAN Management

DATA COMPRESSION

Another technique for making better use of disk space is to compress the data stored on it. Compression can be performed automatically by the system software or in hardware on the disk controller. The penalty paid is in performance, but this can be minimized. One software technique is to compress only dormant files and to perform the compression automatically during off-peak hours; the data is then expanded automatically when it is referenced. Because dormant data is rarely referenced, there is little performance overhead. This technique is very similar to data migration, except that the data is not moved offline; it is simply compressed. This type of data compression may be built into future operating systems.

There are many different compression algorithms, and each is usually suited to compressing a particular type of data. An algorithm designed for image data may achieve ratios as high as 50 to 1; a good algorithm designed for English text may achieve ratios of 8 to 1. General-purpose algorithms applied to data without regard to type usually average only about 2 to 1 (extravagant claims by vendors aside). How much space this technique can reclaim depends on the type of data and the compression algorithm; ordinarily it is less than half the disk space.
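The dependence of compression ratio on data type is easy to demonstrate. The following minimal sketch (not from the original text) uses Python's standard zlib module, a general-purpose compressor of the kind the chapter describes, to compare repetitive English-like text against incompressible random bytes:

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Return original_size / compressed_size using zlib at its default level."""
    compressed = zlib.compress(data)
    return len(data) / len(compressed)

# Highly repetitive text compresses far better than the ~2:1 general average...
text = b"The quick brown fox jumps over the lazy dog. " * 200
# ...while already-random data barely compresses at all (ratio near 1:1).
noise = os.urandom(len(text))

print(f"text ratio:  {compression_ratio(text):.1f} to 1")
print(f"noise ratio: {compression_ratio(noise):.2f} to 1")
```

The random-bytes case also illustrates why compressing already-compressed files (images, archives) reclaims essentially no space.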
DISASTER PROTECTION

In the broadest sense, the topic of disaster recovery is well beyond the scope of this chapter. This chapter focuses on reconstructing a file system, assuming it has been partially or completely destroyed. A general disaster recovery principle states that the more accurately an environment can be reproduced after a disaster, the faster the system will be back up and running; the fewer changes that need to be made in bringing a system back up, the easier the recovery will be. Disaster recovery companies dealing with mainframes specialize in keeping duplicate environments (called hot sites) running at a separate secure location so that recovery time is absolutely minimal in the event of a disaster. A more modest position is to make sure that the environment can be reconstructed quickly. Preparing a disaster plan is an important part of deciding what the correct level of protection is for any particular company.

The first step is to have documentation that accurately catalogs the hardware environment. This means documenting the type of computer, storage media, add-on hardware, devices, network connections, and so on, as well as model numbers, time of purchase, and level of upgrade; in short, a complete journal of all the hardware for each server that has to be restored. The same applies to software. Maintaining a database (which must itself be backed up regularly) is a good way of keeping this information, but hard-copy reports should be kept off-site in the event of the database's destruction.

The second step is to have a carefully thought-out backup plan that includes off-site tape rotation. This implies installing a good set of software utilities and making disciplined use of those tools. It also includes the manual task of transporting tapes off-site. When global networks become faster and less costly this too will be automated, but at the moment a manual system is the most cost-effective.
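The hardware journal described above can be as simple as one structured record per server. The sketch below is purely illustrative; the field names and sample values are assumptions, not taken from the original text:

```python
from dataclasses import dataclass, field

@dataclass
class ServerRecord:
    """One entry in the hardware journal: everything needed to reproduce
    this server's environment. Field names are illustrative only."""
    hostname: str
    model_number: str
    purchased: str                 # e.g. "1997-03"
    upgrade_level: str
    storage_media: list = field(default_factory=list)
    network_connections: list = field(default_factory=list)

# A hypothetical entry for one file server.
record = ServerRecord(
    hostname="fs1",
    model_number="XYZ-4000",
    purchased="1997-03",
    upgrade_level="rev B",
    storage_media=["4GB SCSI x4"],
    network_connections=["10Base-T, segment 2"],
)
print(record)
```

Whatever form the journal takes, the chapter's point stands: the database itself must be in the backup rotation, with hard copy kept off-site.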
The first step in rebuilding the system after a disaster is to reconstruct the hardware as precisely as possible. This means replacing whole machines or faulty components and performing hardware system tests to ensure proper operation. After the hardware environment is restored, the network operating system has to be installed: at least the minimum system sufficient to support the backup software and hardware and the file system, including any volume partitions and logical drives. Workstation-based backups require a properly configured workstation and server support for both communications and the file system.

The last step is to restore the file system to the most recent state backed up. This means restoring from the most recent full backup and then from subsequent incrementals or differentials, according to the backup scheme. Particularly important are the security attributes of the users, groups, work groups, and file servers. If the security and file attribute information cannot be restored, then all users, groups, and other objects specific to that server must be recreated from scratch.

After the hardware, the system and data files, and their attributes are restored, the server should be tested and reviewed before putting it online. If it is put online with incorrect data or system files and users begin working against it in this configuration, there will be trouble reconciling the changes made against this restored system with the correct version when it is finally restored (and it will almost certainly be necessary to go back and do the correct restore).

As a part of the normal backup scheme, weekend backups at least should be rotated off-site. For more security, all backups should go off-site nightly. Fires, water damage, earthquakes, hurricanes, and other disasters happen more frequently than might be expected. Disasters can also happen to user workstations on the network, and in most environments there is at least some local storage.
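The restore ordering described above (most recent full backup, then every subsequent incremental, in chronological order) can be sketched as a small selection function. This is an illustration of the scheme, not any vendor's tool:

```python
def restore_order(backups):
    """Given (timestamp, kind) pairs, kind in {"full", "incremental"},
    return the sets to restore: the most recent full backup followed by
    every incremental taken after it, oldest first."""
    backups = sorted(backups)  # chronological order
    last_full = max(i for i, (_, kind) in enumerate(backups) if kind == "full")
    return [backups[last_full]] + [
        b for b in backups[last_full + 1:] if b[1] == "incremental"
    ]

# A hypothetical week of backups: fulls on days 1 and 4, incrementals between.
history = [
    (1, "full"), (2, "incremental"), (3, "incremental"),
    (4, "full"), (5, "incremental"), (6, "incremental"),
]
# Only the day-4 full and the day-5 and day-6 incrementals are needed.
print(restore_order(history))
```

With differentials the rule is simpler still: restore the most recent full, then only the single most recent differential, since each differential contains all changes since the last full backup.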
This storage is subject to the same risks as centralized storage. Fortunately, it can also take advantage of many of the same protective strategies. Most of today's network data management tools provide some method of backing up and restoring local workstations, and critical local data should be treated just as server data is treated.

SUMMARY

This chapter has focused on some important techniques for preserving the integrity of data and managing system storage. There are many variations on the themes presented here, and the terminology used by different tool and system vendors may vary slightly. This chapter should make it possible to sort through the different options available and to choose and operate the ones that are most appropriate for a given network.


