A Quick and Dirty Backup Server

At my new internship, there are a lot of computers, all of varying age. Some of them have graphics cards and some of them have massive amounts of storage. What doesn’t vary between them is that at some point they will all fail spectacularly.

Working in a hospital, spontaneous failure of machines is not acceptable. When shit hits the fan, you don’t want lives to be at the mercy of a catastrophic hardware failure. Preemptive action is one of the few real solutions, but what it means in practice is a constant rotation of machines.

Replacing the PCs before they reach the point of failure prevents the sort of data loss that comes with such an event and removes the possibility of future hardware calls (the type that can’t be handled remotely).

With each PC being backed up, formatted, and having that backup put onto its replacement in less than a few hours, I was looking for a way to streamline the process.

Before

The current process looks a little something like this: remove the old PC and install the new one. Bring the old PC up to the office and plug in a USB stick with the backup software on it, along with an external hard drive.

Once the backup software is booted, create the folder for that machine’s backup on the external hard drive and image the hard drive using Macrium Reflect (it’s one of the few free solutions that does “intelligent” imaging, meaning the image is only as large as the used space on the partition being imaged).

After the backup is taken, remove the external hard drive from the backup machine and plug it into a networked PC. Mount the backup image and copy its contents into an OldData folder on the new PC’s C drive, permissions still intact.


The problem with this solution is that there’s a lot of walking around, and there’s no universal availability of the backup after it’s taken. There are a lot of moving parts and a real potential for things to be lost in transit, not to mention someone borrowing the hard drive for something other than backups and leaving us unable to take a backup when a PC is available.

After

The proposed solution involves a few parts: DHCP, PXE, and SMB shares. Here’s what the new process looks like:

When a PC is removed, it’s brought up to our backup station. The station consists of a single networked PC (“the server”) with two NICs, plus a switch. One NIC is attached to the hospital’s enterprise network; the other is connected to the switch.

On the server there’s a DHCP server with PXE boot enabled. This DHCP daemon listens only on the second NIC, the one attached to the switch. When the old PC is plugged into the switch and network booted, it loads a copy of the Macrium Reflect rescue environment.
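For the sake of illustration, here’s roughly what that looks like if the server happens to be running the standard Windows Server DHCP role and is managed with PowerShell. The addresses, interface names, and boot file below are placeholders rather than what we actually use, and the same idea carries over to dnsmasq or any other DHCP/TFTP package:

    # Scope handed out on the isolated backup switch (addresses are examples)
    Add-DhcpServerv4Scope -Name "Backup PXE" -StartRange 192.168.50.100 `
        -EndRange 192.168.50.200 -SubnetMask 255.255.255.0

    # Options 66/67 tell PXE clients where the boot server and boot file live
    Set-DhcpServerv4OptionValue -ScopeId 192.168.50.0 -OptionId 66 -Value "192.168.50.1"
    Set-DhcpServerv4OptionValue -ScopeId 192.168.50.0 -OptionId 67 -Value "pxelinux.0"

    # Answer DHCP only on the NIC facing the switch, never the hospital network
    Set-DhcpServerv4Binding -InterfaceAlias "Backup Switch" -BindingState $true
    Set-DhcpServerv4Binding -InterfaceAlias "Hospital LAN" -BindingState $false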

On the server there are several shared drives, one per tech in our department. From within Macrium Reflect, the tech reaches their drive using the server’s hostname, the shared drive letter, and the folder name (e.g. \\hostname\e$\backup1) as the backup destination.
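As a rough example, mapping that share from the rescue environment’s command prompt might look like the following; the hostname, drive letters, and account are made up for illustration:

    REM Map the tech's shared drive from the Macrium rescue environment
    REM (hostname, drive letters, and credentials are examples)
    net use Z: \\hostname\e$ /user:HOSPITAL\tech1
    REM Then point the image destination at Z:\backup1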

Once the backup is complete, the tech can access the drive from their own PC, again using the hostname, shared drive letter, and folder name, and transfer the files to the new machine over the network.
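One way to do that transfer while keeping NTFS permissions intact is robocopy. A minimal sketch, assuming the backup image has been mounted as M: on the tech’s PC and the new machine’s admin share is reachable (both assumptions for illustration, not part of the actual setup):

    REM Copy the mounted backup's contents into OldData on the new PC,
    REM preserving NTFS permissions (drive letter and hostname are examples)
    robocopy M:\ \\newpc\c$\OldData /E /COPYALL /R:1 /W:1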



This removes the need for an external hard drive and USB stick. The caveat is that network file transfers are only so fast, so the restore might take a little longer than before. Considering the transfer is the least painful part of the process, the slower speed is an acceptable tradeoff.

Closing statements: I realize there’s no backup replication here and that the drives inside this server could still fail at some point. That’s okay, as we only keep these backups as a just-in-case, and we only keep the images for a month.

There’s certainly potential to make this even more streamlined, and with the right hardware and software that would be possible. The reasons I haven’t gone for something more elaborate are:

  1. I’m an intern
  2. I might not be there anymore in less than three months
  3. I’m not allocated any budget
  4. We don’t have permission to modify the network itself

Lastly, I’ll be writing up a full guide on how to implement such a backup solution. Keep in mind, however, that this is indeed a quick and dirty backup solution. Prestige and valour are not to be expected.


