I'll describe the issue we're facing and the action we plan to take, and I'm asking for input and ideas on the best course of action to reach the desired end.
Our company has users who create, edit, and delete data on a networked drive on a regular basis. An individual was released from service and decided to delete ~200 GB of accumulated business data saved over the last ten years or so. He essentially deleted the root directory everyone works in.
We paid a recovery service to recover what they could, and the result is a complete mess: no file structure was retained, and roughly half the data is corrupt. A huge pain, but not the focus of this question/discussion. It would be nice if we could figure out a way to restore access to the data the EaseUS software found, because the file structure is there but the files are 100% corrupt or inaccessible, who knows. Anyone who could make that happen would be my savior.
The question/discussion/goal is to prevent a second occurrence. RAID is not the goal and simply will not achieve what is needed: hardware failure is a much, much lower probability than user malice or error. Third-party software is undesired, and mirroring the drive is not going to help.
We want the data periodically copied, either in whole or in part as individual files are updated. The backup drive will not be shared and will be a separate physical drive in the server. Since it will not be accessible by users, it will be protected from them. The wrinkle is that small changes to the data are normal, while large overnight changes are a red flag. Hopefully that gives you gurus the basic idea.
The plan is to write a Robocopy batch script that Task Scheduler will run once a week to copy new and changed data to the backup drive without deleting anything. Seems simple and effective.
What kind of switches are recommended?
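For concreteness, here is a minimal sketch of the weekly script I have in mind. The paths and log location are placeholders, and the switch choices are my first guess, not a settled recommendation:

```batch
@echo off
rem Weekly one-way backup: copy new and changed files, never delete.
set "SRC=D:\Shared"
set "DST=E:\Backup\Shared"
set "LOG=E:\Backup\logs\weekly.log"

rem /E         copy subfolders, including empty ones
rem /ZB        restartable mode; fall back to backup mode on access-denied
rem /COPY:DAT  copy file data, attributes, and timestamps
rem /R:2 /W:5  limited retries so one locked file can't stall the job
rem /NP        no per-file progress percentages (keeps the log readable)
rem Note: no /PURGE or /MIR, so nothing is ever deleted from %DST%.
robocopy "%SRC%" "%DST%" /E /ZB /COPY:DAT /R:2 /W:5 /NP /LOG+:"%LOG%"

rem Robocopy exit codes 0-7 indicate success; 8 and above mean failures.
if %ERRORLEVEL% GEQ 8 echo Backup completed with errors - check %LOG%
```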
We would like the script to exit without copying if, say, more than 10 GB have changed since the last run. Is that possible?
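One approach that might work for the threshold: do a list-only dry run first (/L copies nothing), parse the byte total from the log, and only run the real copy if it is under the limit. A sketch with placeholder paths; note that cmd's set /a arithmetic is 32-bit (caps out around 2 GB), so I shell out to PowerShell for the comparison:

```batch
@echo off
set "SRC=D:\Shared"
set "DST=E:\Backup\Shared"
set "DRYLOG=E:\Backup\logs\dryrun.log"

rem Dry run: /L lists what WOULD be copied; /BYTES prints exact byte
rem counts; /NFL /NDL suppress per-file noise so the summary stands out.
robocopy "%SRC%" "%DST%" /E /L /BYTES /NFL /NDL /NP /NJH /LOG:"%DRYLOG%"

rem Summary line looks like:  Bytes :  <total> <copied> <skipped> ...
rem Token 4 is the byte count that would actually be copied.
set "CHANGED=0"
for /f "tokens=4" %%B in ('findstr /c:"Bytes :" "%DRYLOG%"') do set "CHANGED=%%B"

rem Compare in PowerShell (64-bit math); exit code 1 = over threshold.
powershell -NoProfile -Command "exit [int]([int64]%CHANGED% -gt 10GB)"
if errorlevel 1 (
    echo More than 10 GB changed since last run - aborting, check manually.
    exit /b 1
)

rem Under the threshold: do the real copy (same switches, minus /L).
robocopy "%SRC%" "%DST%" /E /ZB /COPY:DAT /R:2 /W:5 /NP /LOG+:"E:\Backup\logs\weekly.log"
```

The nice side effect is that the dry-run log doubles as a record of exactly which files would have been touched, which helps the post-incident review if the script ever does abort.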
Since in general only new and changed data will be copied and no old files deleted, the backup may eventually exceed the space available on the drive. We plan to address this by copying the same script to a new batch file with the /PURGE switch. That script would run, say, every six months, and an admin would review the network drive to make sure everything was intact before it runs. At those six-month intervals we'll likely run the same commands against an external drive that will be kept off-site. These are 1 TB drives, and the time required for the backup is irrelevant: the server is on 24/7, so no one would notice or care if it took all day to copy the shared drive to the non-shared drive.
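The six-month reconciliation could be the same command with /PURGE added (or /MIR, which is shorthand for /E plus /PURGE). Since this is the only script that deletes anything, my inclination is not to schedule it at all but to run it by hand after the admin review. Sketch, again with placeholder paths:

```batch
@echo off
rem Twice-yearly reconciliation: ONLY run manually, after an admin has
rem verified the source share is intact. /MIR = /E + /PURGE, so files
rem deleted from the source are also deleted from the backup.
set "SRC=D:\Shared"
set "DST=E:\Backup\Shared"
robocopy "%SRC%" "%DST%" /MIR /ZB /COPY:DAT /R:2 /W:5 /NP /LOG:"E:\Backup\logs\purge.log"

rem Same command again against the rotating off-site drive.
robocopy "%SRC%" "F:\Offsite\Shared" /MIR /ZB /COPY:DAT /R:2 /W:5 /NP /LOG:"F:\Offsite\purge.log"
```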
So that's the idea. And I, being responsible for implementing all this, am seeking advice, sample scripts, answers to these few questions, and general input on how best to accomplish these goals.
Thanks in advance for your help