Home Built Server

November 7, 2006 12:42:29 PM

Howdy!

I was looking at building a server for my house. It will be used a couple of different ways.

First and foremost it will be used as a dedicated server for hosting LANs... thinking about 70ish people per LAN (just ballparking to give you an idea).

I also want it to be used as storage; I live with a few roommates who have lots of songs, videos, stuff they want to save, basically. Redundancy is a must (whether it be onboard RAID or a controller any of you know of, it really doesn't matter).

I'm mostly a gamer and can build a computer no problem, but when I start building a computer for a specific purpose other than general use or gaming, I start getting lost. Also, because it is a server, I'm not clear on the hardware specs I will need.

So, what I'm asking from the general public here is a basic list of the components you would put in a server if you were using it for the same purposes as I am.

Let's say a ballpark figure is $2,000 US that I have to spend.


November 7, 2006 1:06:19 PM

Well, consider this. A server is nothing more than a computer serving out requests to clients.

Now, for hosting games, you'll want to maximize your bandwidth to the clients, so GigE is a must. If needed, you can always add another GigE card and bond them together for more bandwidth.
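To put numbers on that, here's a back-of-the-envelope sketch (not a benchmark; real throughput will be lower due to protocol and switching overhead):

```python
def per_client_mbps(link_gbps: float, nics: int, clients: int) -> float:
    """Ceiling on per-client bandwidth if every client pulls at once."""
    return link_gbps * 1000 * nics / clients

# One GigE link shared by 70 clients vs. two bonded links:
print(round(per_client_mbps(1.0, 1, 70), 1))  # 14.3
print(round(per_client_mbps(1.0, 2, 70), 1))  # 28.6
```

Even the single-link case is plenty for game traffic, but bonding buys headroom for file transfers during the LAN.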

Now, for storage of videos, songs, etc. with redundancy, I'd go with RAID 5. No need for overkill here with really fast drives or anything. Simple, but large, 7200rpm drives will suffice.
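For sizing the array, the usable-capacity arithmetic is simple. A quick sketch (RAID 1 here is assumed to be a two-way mirror):

```python
def usable_gb(level: int, drives: int, drive_gb: int) -> int:
    """Usable capacity for a few common RAID levels."""
    if level == 0:                      # striping: no redundancy
        return drives * drive_gb
    if level == 1 and drives == 2:      # two-way mirror
        return drive_gb
    if level == 5 and drives >= 3:      # one drive's worth of parity
        return (drives - 1) * drive_gb
    raise ValueError("unsupported configuration")

# Four 500GB drives in RAID 5 -> 1500GB usable, any one drive can fail:
print(usable_gb(5, 4, 500))  # 1500
```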

Additionally, I don't think you'll need a lot of disk I/O for hosting the games either. However, in order to increase disk capacity whenever needed, I'd recommend getting a good 8-port SATA RAID controller that allows you to expand the array when you add drives. I believe Promise manufactures some cards that will do this.

Also, CPU is something you can scale back on, although I wouldn't go back too much. I wouldn't go with anything 64-bit, as it's not required, and offers little benefit here. A P4 2.8 or so would be sufficient.

I'd go with 2GB of RAM in 2 x 1GB sticks, if you have a board with 4 slots. This will let you upgrade easily in the future, should the need arise.

Also, for your LAN party, make sure you have a decent switch to connect all the clients to. An under-powered switch will render your LAN useless.
November 7, 2006 1:10:47 PM

Ahh, see, there it is: as a server I just assumed CPU power is a requirement and therefore needs a lot of cash pumped into it.

I work with Promise cards on a daily basis at work so I may look into buying one.

For the CPU, I was thinking about purchasing a low-end Opteron?

The switches we will be using at the LAN parties are Cisco Series 2400(?), I can't remember. Dual GigE was definitely something I was thinking about for scaling to larger LAN parties and such.

I still have a lot of interaction with my old high school, and as such I like to help them with fundraisers; they tend to like LAN parties with the usual CS, Halo, CS: Source, UT2K4, etc.
November 7, 2006 1:16:15 PM

Quote:
Ahh, see, there it is: as a server I just assumed CPU power is a requirement and therefore needs a lot of cash pumped into it.


Well, disk I/O is the server's main bottleneck. Think about it: what exactly will the CPU be processing? Yes, it may go up to 20%-50%, but who cares if it is 50% or 10%? It still has room to grow. Think disk, as that's what you are serving here.

Quote:
I work with Promise cards on a daily basis at work so I may look into buying one.

CPU I was thinking about purchasing a low-end Opteron?


I'd go with whatever costs the least. I don't see any games that require 64-bit, and file serving could be handled by a P4 2GHz. Of course, if you want bragging rights, then Opteron is the way to go.


Quote:
The switches we will be using at the LAN parties are Cisco Series 2400(?), I can't remember. Dual GigE was definitely something I was thinking about for scaling to larger LAN parties and such.

I still have a lot of interaction with my old high school, and as such I like to help them with fundraisers; they tend to like LAN parties with the usual CS, Halo, CS: Source, UT2K4, etc.


Dual GigE is even better. It's been a while since I have looked at any Cisco specs, but just make sure your system can access the network at GigE (or 2Gb) speeds.
November 7, 2006 1:22:54 PM

All a server is is a really good PC. That being said, I mean it has to be able to stay turned on for a long time; remember, it doesn't have to have any output except for the LAN. You have to get a mobo with a gigabyte LAN controller, as 100 probably won't be good enough, though this depends on the hub. And hosting 70-odd people would require a pretty good switch/hub rather than a server.
You would need something like this:
http://www.ciao.co.uk/HP_ProCurve_Switch_4108GL__544382...
Of course, you would have to look around for a better/cheaper one, since this would use up your entire budget, but it gives you an idea of what you need to get.

I wouldn't bother with built-in RAID controllers, as they aren't as good as a separate solution, so an add-on RAID card is important. Since you want redundancy, I'd say you would have to get a RAID 5 controller; that way, if one of the drives fails, you can hot swap another into it.
I think a better idea for you is to make a cheap NAS with a 72 switch.
The NAS can be made with any old PC. What I'd do is run your OS on a HDD separate from the storage, buy a PCI RAID 5 controller, and get 4 500GB HDDs, which will give you about 1.5 terabytes of storage with full redundancy. Or, failing that, if you want to use it for hosting LANs, then get a bitching PC with RAID 5, etc., etc.
But anyway, I really don't think you need to get actual server parts, since I doubt it's going to be accessed by many people. An example of a server? Check out http://www.lisa2.com/
November 7, 2006 1:32:23 PM

Quote:
All a server is is a really good PC. That being said, I mean it has to be able to stay turned on for a long time; remember, it doesn't have to have any output except for the LAN. You have to get a mobo with a gigabyte LAN controller, as 100 probably won't be good enough, though this depends on the hub. And hosting 70-odd people would require a pretty good switch/hub rather than a server.
You would need something like this:
http://www.ciao.co.uk/HP_ProCurve_Switch_4108GL__544382...
Of course, you would have to look around for a better/cheaper one, since this would use up your entire budget, but it gives you an idea of what you need to get.


Remember that it is gigabit, not gigabyte. Big difference, but just a note about semantics.

Quote:
I wouldn't bother with built-in RAID controllers, as they aren't as good as a separate solution, so an add-on RAID card is important. Since you want redundancy, I'd say you would have to get a RAID 5 controller; that way, if one of the drives fails, you can hot swap another into it.


Hardware RAID does not mean hot swap. Hot swap requires a backplane that supports it or a hardware controller that allows it. Either way, I wouldn't go that route, since it adds cost. If a drive fails, power it down and swap it out.

Quote:
I think a better idea for you is to make a cheap NAS with a 72 switch.


What is a 72 switch? Are you talking 72 ports? I'd just stack several 24-port or two 48-port switches together via a GigE link.

Quote:
The NAS can be made with any old PC. What I'd do is run your OS on a HDD separate from the storage, buy a PCI RAID 5 controller, and get 4 500GB HDDs, which will give you about 1.5 terabytes of storage with full redundancy. Or, failing that, if you want to use it for hosting LANs, then get a bitching PC with RAID 5, etc., etc.
But anyway, I really don't think you need to get actual server parts, since I doubt it's going to be accessed by many people. An example of a server? Check out http://www.lisa2.com/


Well, he knows 70 people are accessing it and I agree that specific server components are not required for this build. I'd skimp on video (onboard is sufficient) and CPU and build up RAM, disk, and LAN.

Also, I'd put your OS on the RAID array. Otherwise, if the OS drive fails, you're stuck rebuilding. I see no benefit in moving the OS from the array.
November 7, 2006 1:32:51 PM

What I've used in the past for a server at these fundraisers (when I was in school) was a dual P3 1.4GHz, 2GB memory, dual SCSI 36GB drives with dual Gigabit Ethernet + onboard 100Mb. It worked awesome for CS 1.6 and about 16-20 people.

I could easily build another bitchin' system for a server. I just don't want to run into a LAN, host it, and then have everyone be disappointed because the server can't handle it.

You guys have given me some awesome feedback and I appreciate it greatly; it definitely gives me somewhere to go with this.

So now I will start shopping around for prices on:
Gigabit NICs
RAID controllers that support 0, 1, 5, 10 (pretty much standard, as I don't want to be limited)
2GB of memory (any difference between high-end memory [i.e. Corsair XMS] and low-end [i.e. Corsair Value RAM]? Also, will ECC vs. non-ECC be something I should take into consideration?)
Hard drive + processor prices (I have a rough idea on these, so I won't need to shop too much)
November 7, 2006 1:34:41 PM

A server can go with a cheap low-end video card as well, since you won't be running it for high-end graphics.

I suggest if you are looking for really massive storage, you check out THG's reviews on RAID controllers. There are a couple of products there that look really good.
November 7, 2006 1:36:59 PM

Quote:
What is a 72 switch? Are you talking 72 ports? I'd just stack several 24-port or two 48-port switches together via a GigE link.

What we traditionally do is link 24 port Cisco switches around an island setup of tables. Link the switches using a Gigabit link.
November 7, 2006 1:39:53 PM

Quote:
A server can go with a cheap low-end video card as well, since you won't be running it for high-end graphics.

I suggest if you are looking for really massive storage, you check out THG's reviews on RAID controllers. There are a couple of products there that look really good.


Yeah, we definitely are going with low-end video to cut costs; no reason to have anything beyond integrated.

I was just wondering on a side note, do any of you know of a decent program that can be used to monitor network throughput/available bandwidth? I just think that would be something nice to have on a server so I can monitor how much is being utilized, know what I mean?
November 7, 2006 1:54:42 PM

Check over at www.majorgeeks.com. They have any number of free and shareware utilities for all your needs.
November 7, 2006 1:57:31 PM

Just for the record, I have to suggest that you go with an external RAID enclosure if you want hot-swap availability... I'm using a ReadyNAS NV unit with 2TB of disks (4x500); 1.5TB is usable.
It's hot-swap and quiet, and draws only about 50 watts.

You may want to look at some other external solutions too; with a lot of users, you may want dual GigE adapters in the NAS.
November 7, 2006 2:11:28 PM

Thanks for the majorgeeks link.

I think I found the monitor that will work for me, it's free!

Also, I think I will skip the external RAID enclosure. I don't really need the hot-swap ability. Plus, I would imagine that an external enclosure = added power consumption, which is something else I don't need.
November 7, 2006 2:29:13 PM

Quote:
I was just wondering on a side note, do any of you know of a decent program that can be used to monitor network throughput/available bandwidth? I just think that would be something nice to have on a server so I can monitor how much is being utilized, know what I mean?


If you have SNMP enabled on your switches, I use MRTG. However, that requires a web server, Perl, etc. to run. Good product, though.
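For a quick look at throughput without setting up MRTG: on Linux, the kernel exposes per-interface byte counters in /proc/net/dev, so you can sample them twice and divide the delta by the interval. A minimal sketch of the parsing (the interface name and numbers in SAMPLE below are made-up example data):

```python
def parse_net_dev(text: str) -> dict:
    """Parse /proc/net/dev-style output into {iface: (rx_bytes, tx_bytes)}."""
    stats = {}
    for line in text.splitlines():
        if ":" not in line:          # skip the two header lines
            continue
        iface, rest = line.split(":", 1)
        fields = rest.split()
        # field 0 is received bytes, field 8 is transmitted bytes
        stats[iface.strip()] = (int(fields[0]), int(fields[8]))
    return stats

SAMPLE = """Inter-|   Receive                |  Transmit
 face |bytes packets errs drop fifo frame compressed multicast|bytes packets
  eth0: 1200 10 0 0 0 0 0 0 3400 12 0 0 0 0 0 0
"""
print(parse_net_dev(SAMPLE))  # {'eth0': (1200, 3400)}
```

In practice you'd read open("/proc/net/dev").read() instead of SAMPLE, sleep a second, read again, and subtract.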
November 7, 2006 2:36:03 PM

What I mean by hot swap is that you can change a drive while it is switched on, which is what you can do with RAID 5; hot swap doesn't mean you have to have a cradle for the HDDs.
And btw, I never said that hardware RAID gives you that ability; RAID 5, however, does.
And yes, I did mean a 72-port switch/hub (I can never remember which is which, but they are more or less the same from what I remember).
And the reason I'd put the OS on a separate HDD, or even a flash drive (it is possible to boot from a compact flash drive), is that you can back it up more easily without messing with the files, and the whole array can be moved to another computer if things go tits up.
November 7, 2006 2:40:33 PM

Quote:

2GB of memory (any difference between high-end memory [i.e. Corsair XMS] and low-end [i.e. Corsair Value RAM]? Also, will ECC vs. non-ECC be something I should take into consideration?)
Hard drive + processor prices (I have a rough idea on these, so I won't need to shop too much)

Sorry to create a new post, but anyway, I'd go for more RAM,
and I doubt it will make much difference between value and high-end. Also, ECC is for the high-end server stuff, and if memory serves, it won't work on normal mobos.
November 7, 2006 2:45:53 PM

Quote:
What I mean by hot swap is that you can change a drive while it is switched on, which is what you can do with RAID 5; hot swap doesn't mean you have to have a cradle for the HDDs.


I know what you meant by hot swap, and you're right, some drives do not have a cradle for them. However, you're not right about always being able to hot swap a drive in a RAID 5 array. Read on...

Quote:
And btw, I never said that hardware RAID gives you that ability; RAID 5, however, does.


Sorry, I misinterpreted your statement of "getting a RAID 5 controller" as implying hardware RAID gives you hot swap capability. On the other hand, RAID 5 does not give you the ability to hot swap. Consider a RAID 5 with 3 single-ended SCSI drives or even 3 ATA/100 drives. You cannot simply hot swap a drive in this configuration as the SCSI/IDE bus does not support that. You risk blowing the termination, at least in the SCSI config, on the card and/or drive by doing so. Hot swap capability is added by underlying hardware, as is the case with differential SCSI and USB and FireWire drives for that matter.

If you have the correct hardware, you could hot swap a drive regardless of whether an array exists (even a standalone hot swappable drive). If you have a fault tolerant array configured (i.e. anything but RAID 0), you can hot swap a drive and let the array rebuild. You can even hot swap a drive in a RAID 0, although, you'll need to rebuild the data manually. It's all about the hardware and has nothing to do with RAID levels.

Quote:
And yes, I did mean a 72-port switch/hub (I can never remember which is which, but they are more or less the same from what I remember).


Hubs do not support full duplex operation; in other words, a node either transmits or receives. It cannot do both at the same time. Additionally, if 4 nodes are connected to a hub and node 1 transmits a packet to node 2, all nodes see the packet.

A switch allows full duplex operation, so each device can transmit and receive at the same time. Additionally, after the switch learns which MAC address is connected at each port, it directs a packet sent from one node directly to the port of the other node, and all other nodes do not see the packet.
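That learning behavior is easy to sketch in a few lines (a toy model to illustrate the idea, not a real switch):

```python
class ToySwitch:
    """Learns source MACs; floods unknown destinations like a hub would."""
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table = {}                      # MAC address -> port

    def forward(self, src: str, dst: str, in_port: int) -> list:
        self.mac_table[src] = in_port            # learn where src lives
        if dst in self.mac_table:
            return [self.mac_table[dst]]         # known: send to one port
        return [p for p in range(self.num_ports) if p != in_port]  # flood

sw = ToySwitch(4)
print(sw.forward("aa:aa", "bb:bb", 0))  # [1, 2, 3]  (bb:bb unknown: flood)
print(sw.forward("bb:bb", "aa:aa", 1))  # [0]        (aa:aa was learned)
```

After the first exchange, traffic between those two nodes stops being visible to everyone else, which is exactly what a hub can never do.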

Quote:
And the reason I'd put the OS on a separate HDD, or even a flash drive (it is possible to boot from a compact flash drive), is that you can back it up more easily without messing with the files, and the whole array can be moved to another computer if things go tits up.


Well, the array can be moved regardless of whether an OS sits on it. I usually create one array, then two separate volumes on that array. Thus, you can wipe the one volume and leave the data intact.
November 7, 2006 3:23:38 PM

Quote:

Also, CPU is something you can scale back on, although I wouldn't go back too much. I wouldn't go with anything 64-bit, as it's not required, and offers little benefit here. A P4 2.8 or so would be sufficient.


I wouldn't go with a P4. The P4 is dead money. If he's going to invest a lot of money, saving about $50 on the processor isn't a wise decision. Get a small Xeon, like the 3040 or 3050.
November 7, 2006 3:41:30 PM

Quote:

Also, CPU is something you can scale back on, although I wouldn't go back too much. I wouldn't go with anything 64-bit, as it's not required, and offers little benefit here. A P4 2.8 or so would be sufficient.


I wouldn't go with a P4. The P4 is dead money. If he's going to invest a lot of money, saving about $50 on the processor isn't a wise decision. Get a small Xeon, like the 3040 or 3050.

I thought about going with a small Opteron... don't think it would cost all that much.
November 7, 2006 3:57:37 PM

That is a reasonable idea too. Just make sure you have an upgrade path. You don't want to buy a new server just because one of the game server software packages is a memory hog or wastes CPU cycles like crazy.
November 7, 2006 3:59:10 PM

You could set up a very robust dual-core Opteron system on the 939 platform for just a few hundred dollars. I'm at work right now, so I can't give specific recommendations as to hardware or prices, but I will when I get home from work.

Please give me some idea as to the amount of usable drive space you're looking for, and I'll post my recommendations and prices and stay inside your $2000 budget.

-J
November 7, 2006 4:07:34 PM

My server has five drives - four in a Matrix RAID array. This array has a RAID 5 partition and a RAID 0 partition. The RAID 5 partition is the system volume. The RAID 0 partition is used by other computers on the network to write backups to. The fifth drive is used to back up the system partition.

It has a Pentium D 830 processor. No keyboard or mouse - we log into it through the network.

It has gigabit Ethernet as its main network connection and a fast Ethernet port I use as a service port.

November 7, 2006 4:50:50 PM

Quote:
My server has five drives - four in a Matrix RAID array. This array has a RAID 5 partition and a RAID 0 partition. The RAID 5 partition is the system volume. The RAID 0 partition is used by other computers on the network to write backups to. The fifth drive is used to back up the system partition.


This makes no sense to me. Arrays are created by the RAID controller. Then volumes are created on top of an array. Partitioning is done at the OS level.

Now, with four drives, you can't have a RAID 5 and a RAID 0 array. RAID 5 requires at least 3 drives and RAID 0 requires 2 drives (5 drives total). If the 5th drive is not in an array, how do you have this configured?

Also, if you are using a non fault tolerant array for backing up data, that just makes me cringe.
November 7, 2006 4:59:48 PM

Quote:

Also, CPU is something you can scale back on, although I wouldn't go back too much. I wouldn't go with anything 64-bit, as it's not required, and offers little benefit here. A P4 2.8 or so would be sufficient.


I wouldn't go with a P4. The P4 is dead money. If he's going to invest a lot of money, saving about $50 on the processor isn't a wise decision. Get a small Xeon, like the 3040 or 3050.

I meant that as an example, not a literal recommendation. I was giving a bare minimum I would go with.
November 7, 2006 5:00:13 PM

Quote:
My server has five drives - four in a Matrix RAID array. This array has a RAID 5 partition and a RAID 0 partition. The RAID 5 partition is the system volume. The RAID 0 partition is used by other computers on the network to write backups to. The fifth drive is used to back up the system partition.


This makes no sense to me. Arrays are created by the RAID controller. Then volumes are created on top of an array. Partitioning is done at the OS level.

Now, with four drives, you can't have a RAID 5 and a RAID 0 array. RAID 5 requires at least 3 drives and RAID 0 requires 2 drives (5 drives total). If the 5th drive is not in an array, how do you have this configured?

Also, if you are using a non fault tolerant array for backing up data, that just makes me cringe.

Ever heard of Intel's Matrix RAID? I have a RAID 5 array and a RAID 0 array on two different partitions on the same set of four drives. The fifth drive is not connected to the array controller (ICH7R).

Here's some information on Matrix Raid:
http://www.intel.com/design/chipsets/matrixstorage_sb.h...

And here's my configuration:


Volume0 is the RAID 5 volume and Volume1 is the RAID 0 volume.
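The capacity math for that layout is easy to sketch (assuming each drive contributes a fixed slice to each volume; the 300GB split below is an arbitrary example, not the poster's actual numbers):

```python
def matrix_volumes(drives: int, drive_gb: int, raid5_slice_gb: int) -> dict:
    """Usable space when each drive is split between a RAID 5 slice
    and a RAID 0 slice (Matrix RAID-style layout)."""
    raid0_slice_gb = drive_gb - raid5_slice_gb
    return {
        "raid5_gb": (drives - 1) * raid5_slice_gb,  # one slice lost to parity
        "raid0_gb": drives * raid0_slice_gb,        # pure striping
    }

# Four 500GB drives, 300GB of each given to the RAID 5 volume:
print(matrix_volumes(4, 500, 300))  # {'raid5_gb': 900, 'raid0_gb': 800}
```

Note that only the RAID 5 volume survives a drive failure; losing any drive takes the whole RAID 0 volume with it.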
November 7, 2006 5:06:35 PM

Quote:
Ever heard of Intel's Matrix RAID? I have a RAID 5 array and a RAID 0 array on two different partitions on the same set of four drives. The fifth drive is not connected to the array controller (ICH7R).

Here's some information on Matrix Raid:
http://www.intel.com/design/chipsets/matrixstorage_sb.h...

And here's my configuration:

Volume0 is the RAID 5 volume and Volume1 is the RAID 0 volume.


Interesting setup, although I'm not sure what the performance impact is of using the same spindles for different array types. I can't say that I would ever want to do that in a setup where performance is paramount, but interesting nonetheless.

Thanks for the link.
November 7, 2006 5:13:34 PM

Quote:
Ever heard of Intel's Matrix RAID? I have a RAID 5 array and a RAID 0 array on two different partitions on the same set of four drives. The fifth drive is not connected to the array controller (ICH7R).

Here's some information on Matrix Raid:
http://www.intel.com/design/chipsets/matrixstorage_sb.h...

And here's my configuration:

Volume0 is the RAID 5 volume and Volume1 is the RAID 0 volume.


Interesting setup, although I'm not sure what the performance impact is of using the same spindles for different array types. I can't say that I would ever want to do that in a setup where performance is paramount, but interesting nonetheless.

Thanks for the link.

I'm not going for performance, but stability and efficient disk use. The RAID 5 partition gives me the fault tolerance I need, but at a cost of disk overhead. I don't need fault tolerance in the backup partition because the backups get regenerated automatically every night, but I do need to maximize storage capacity. If I lose a drive, I'll lose all the backups (no big deal), but I won't lose the stuff I'm trying to protect. Furthermore, the RAID 0 partition is only written in the middle of the night. The users logged in during the day are using the RAID 5 partition, so there's no contention for the four spindles.
November 7, 2006 5:17:04 PM

Quote:
I'm not going for performance, but stability and efficient disk use. The RAID 5 partition gives me the fault tolerance I need, but at a cost of disk overhead. I don't need fault tolerance in the backup partition because the backups get regenerated automatically every night, but I do need to maximize storage capacity. If I lose a drive, I'll lose all the backups (no big deal), but I won't lose the stuff I'm trying to protect. Furthermore, the RAID 0 partition is only written in the middle of the night. The users logged in during the day are using the RAID 5 partition, so there's no contention for the four spindles.


You overwrite your existing backups each night?

EDIT: I wasn't implying you were aiming for performance; it just came to mind. Although, I'm not sure I'd ever think it was okay to lose backups.
November 7, 2006 5:25:52 PM

Quote:
I'm not going for performance, but stability and efficient disk use. The RAID 5 partition gives me the fault tolerance I need, but at a cost of disk overhead. I don't need fault tolerance in the backup partition because the backups get regenerated automatically every night, but I do need to maximize storage capacity. If I lose a drive, I'll lose all the backups (no big deal), but I won't lose the stuff I'm trying to protect. Furthermore, the RAID 0 partition is only written in the middle of the night. The users logged in during the day are using the RAID 5 partition, so there's no contention for the four spindles.


You overwrite your existing backups each night?

EDIT: I wasn't implying you were aiming for performance; it just came to mind. Although, I'm not sure I'd ever think it was okay to lose backups.

I take a full backup at 0200 Sundays, and a differential every night. The differentials overwrite each other. You probably already know that a differential captures anything that changed since your last full backup, so there's no need to save any but your most recent one - it just keeps getting bigger until you run another full backup. I manually delete the full backups when they get to be two weeks old.

Anyway - if any computer on the network loses its system drive, I can do a bare-metal recovery and put it back to the way it was at 0200 that morning. Saving more backups than necessary to accomplish that is extraneous. So, yes, it's OK to lose backups as long as I can regenerate them.
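The restore logic behind that scheme can be sketched in a few lines (timestamps stand in for backup files here; a simplification of what backup software actually tracks):

```python
def restore_plan(full_times: list, diff_times: list) -> list:
    """A bare-metal restore needs the newest full backup plus the newest
    differential taken after it, if one exists."""
    latest_full = max(full_times)
    later_diffs = [t for t in diff_times if t > latest_full]
    plan = [latest_full]
    if later_diffs:
        plan.append(max(later_diffs))
    return plan

# Fulls at t=10 and t=100; differentials at t=50, 120, 150:
print(restore_plan([10, 100], [50, 120, 150]))  # [100, 150]
```

This is also why only the most recent differential matters: each one already contains everything changed since the last full.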
November 7, 2006 5:28:18 PM

Quote:
You could set up a very robust dual-core Opteron system on the 939 platform for just a few hundred dollars. I'm at work right now, so I can't give specific recommendations as to hardware or prices, but I will when I get home from work.

Please give me some idea as to the amount of usable drive space you're looking for, and I'll post my recommendations and prices and stay inside your $2000 budget.

-J


I plan on 1.5TB for storage. I also would like another location specifically for hosting our games from.
We were thinking 3 500GB drives in a RAID 5 array; beyond that, we're still thinking about what else we would like...
November 7, 2006 5:31:08 PM

Quote:
I take a full backup at 0200 Sundays, and a differential every night. The differentials overwrite each other. You probably already know that a differential captures anything that changed since your last full backup, so there's no need to save any but your most recent one - it just keeps getting bigger until you run another full backup. I manually delete the full backups when they get to be two weeks old.

Anyway - if any computer on the network loses its system drive, I can do a bare-metal recovery and put it back to the way it was at 0200 that morning. Saving more backups than necessary to accomplish that is extraneous. So, yes, it's OK to lose backups as long as I can regenerate them.


Unless the client goes down during the backup - then you can only recover to the previous night at 2am. Also, we save more than one backup, including incrementals and differentials, just in case there is a corruption problem in the file. I've seen it and don't take chances any longer.

At a system at our church, the guy running the system had a full backup each night, which overwrote the previous night's backup. The server took a dive during the backup, and left the backup file corrupted. Oh, what joy that was when I found out.

So what happens if your array loses a drive, and all backup files are gone, and a client needs a restore? By the way, please don't take this as picking on you or anything like that. Just friendly discussion... It's how we all learn, or at least how I learn.
November 7, 2006 5:53:08 PM

Mobo: this
Proc: Entry Level or Best Bang for the Buck
Mem: GSkill
HDs: 4 of these and 2 of these
RAID card: This promise card

Leaves you needing a case, optical, floppy, and OS.
Total: $1,713.42 with the Opty 165, or $1,789.42 with the Opty 175 (recommended). Should leave you with enough leftover cash to pick up those other parts. Also, you could skip the 320GB x 2 mirror that I had set up for game sharing, if you need to save some pennies.

Good luck on your upcoming build.

-J
November 7, 2006 5:57:31 PM

Quote:
I take a full backup at 0200 Sundays, and a differential every night. The differentials overwrite each other. You probably already know that a differential captures anything that changed since your last full backup, so there's no need to save any but your most recent one - it just keeps getting bigger until you run another full backup. I manually delete the full backups when they get to be two weeks old.

Anyway - if any computer on the network loses its system drive, I can do a bare-metal recovery and put it back to the way it was at 0200 that morning. Saving more backups than necessary to accomplish that is extraneous. So, yes, it's OK to lose backups as long as I can regenerate them.


Unless the client goes down during the backup - then you can only recover to the previous night at 2am. Also, we save more than one backup, including incrementals and differentials, just in case there is a corruption problem in the file. I've seen it and don't take chances any longer.

At a system at our church, the guy running the system had a full backup each night, which overwrote the previous night's backup. The server took a dive during the backup, and left the backup file corrupted. Oh, what joy that was when I found out.

So what happens if your array loses a drive, and all backup files are gone, and a client needs a restore? By the way, please don't take this as picking on you or anything like that. Just friendly discussion... It's how we all learn, or at least how I learn.

Still no big deal. The clients keep the stuff they want to keep on the Raid 5 array. I can wipe and load anyone's C: drive anytime - all the backup will do is restore their customizations. Besides, the likelihood of losing a drive in the array contemporaneously with someone else's C: drive is remote enough that I'm willing to risk it.

And, if a client tanks during the backup, I just rerun the backup. No big deal.