Yesterday morning, a component failure in our main database disk made the entire server unresponsive. This was unexpected, as the drives were relatively new. All data on the drives was presumably lost, though we are exploring options to recover it. Attempts to switch over to our backup server failed because of a strict security setting that prevented most browsers from loading the alternate site. We are back up and running on the original server, though with some data lost and reduced performance. We've restored as much data as we had backed up, and more may be restored down the road.
Going forward, we've ordered two new SSDs and will be installing them in a RAID configuration for additional redundancy. We've also made our backup scripts far more comprehensive and added offsite locations for the backups. Full performance should be restored within a week or two, barring any more unforeseen issues.
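For readers curious what this kind of setup looks like, a two-disk mirror plus an offsite copy can be sketched roughly as below. This is a sketch only: the device names, mount point, and backup host are assumptions for illustration, not our actual production configuration.

```shell
#!/bin/sh
# Sketch only -- device names, paths, and hosts are assumptions.

# Mirror the two new SSDs (assumed /dev/sdb and /dev/sdc) as RAID 1,
# so a single-drive failure no longer takes the database down.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0
mount /dev/md0 /var/lib/mysql                     # assumed database data directory
mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # persist the array across reboots

# Nightly offsite copy of the latest database dumps (hypothetical host/paths),
# run from cron, e.g.: 0 3 * * * root /usr/local/bin/offsite-backup.sh
rsync -az --delete /var/backups/db/ backup@offsite.example.com:/srv/backups/db/
```

A RAID 1 mirror protects against one drive dying, but not against deletion or corruption being mirrored to both disks, which is why the offsite backups matter independently of the RAID.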