Archive for Drive Backup

A Good Backup Strategy: Your Best Defense Against Ransomware

By Tom Fedro
As seen in Security Magazine, 2.2.17

Last year, cybercriminals attacked the California-based Hollywood Presbyterian Medical Center, encrypting files crucial to running the hospital’s systems and demanding a ransom to restore them to working order. The scam worked – after 10 days of futility, the hospital surrendered and paid $17,000 to regain system control.
Other hospitals, government agencies and businesses in the U.S. and abroad were targeted similarly last year, leading CNET to dub such ransomware scenarios “the hot hacking trend of 2016.” And the numbers are truly staggering. Osterman Research estimates that nearly half of surveyed organizations were hit with ransomware within the last year, and concludes that ransomware amounted to a $1 billion source of income for cybercriminals in 2016. In a recent report, Kaspersky Security states that in Q3 2016, a business was attacked by ransomware every 40 seconds, and that even after paying the ransom, one in five victims never got their data back.

Apple Users Now a Target

While many ransomware instances go unreported due to embarrassment or the desire not to be targeted again, attacks were long thought to be focused largely on the Microsoft Windows realm, leaving Apple users relatively unscathed. That changed in 2016, when the first public ransomware targeting Apple systems was discovered by Palo Alto Networks, which found a popular BitTorrent client for Apple’s OS X software for Macs infected with ransomware. Known as “KeRanger,” the ransomware is delivered with a ransom note demanding 1 Bitcoin, which has a current market value of over $700. Fixing the problem can also be complicated and time consuming.
Antivirus software isn’t having much impact either; by the time a computer is infected with ransomware, the damage has typically been done before the antivirus software detects anything. The encryption used by modern ransomware is often too strong to crack, leading most security experts to conclude that the best approach to fighting ransomware is to avoid it in the first place.

Different Backup Approaches

It seems the most effective way for Apple users to safeguard their computer files from these nefarious attacks is through regular backups. And, in the event you are hit with ransomware, the solution would lie in simply restoring your system to the state it was before the malware hit your computer. There are several backup and restore approaches to consider for the Apple environment:
Time Machine is the backup software application distributed with the Apple operating system, introduced in Mac OS X Leopard. It was designed to work with various storage drives such as Time Capsule. But for Time Machine to be effective, files must be unlocked or closed, which may not be practical for those currently in use. In addition, there is the possibility of a two-step process within OS X that requires users to reinstall the operating system before retrieving the application and files from the backup image.
File System Snapshots simplify backup and recovery by taking a point-in-time virtual photo of the file system. While this backup method can be employed to protect active operating systems, depending on file sizes it can take significantly more time.
Disk Management Solutions can create image-based copies of a disk or partition (or multiple disks and partitions) whether active or inactive, at a specific point in time far more quickly. Such robust offerings have the advantage of being able to make consistent sector-level backups (also often referred to as Snapshots) even if data is being currently modified.

Thus, while there are different backup approaches to consider, the bottom line is that a regular, proactive backup strategy – potentially even a multi-pronged approach – is your best defense against crippling ransomware attacks. And while Apple users were once immune from such attacks, they too now need to join the rest of the computer world in being vigilant in protecting themselves. After all, like many things in life, when it comes to avoiding being held hostage by cybercriminals, an ounce of prevention is worth a pound of cure.

Paragon Software Survey Results Show That Performance Is the Most Important Consideration in Backup and Recovery Software

Additionally, Over 70 Percent of Respondents Are Still Utilizing Windows 2003 and Nearly 80 Percent Have Windows XP in Their Operating Environments

By Tom Fedro

Paragon Software conducted a survey last quarter that consisted of both Paragon and non-Paragon customers. Respondents revealed a couple of interesting results: 1) a reluctance to upgrade older operating systems (OSs) with newer software platforms and 2) when considering a backup and recovery software solution, performance is more important than price or support.

Out of 580 respondents,

  • 70.8 percent of respondents are running Windows 2003 in their environment, and 79.0 percent are running Windows XP
  • 77 percent of respondents rated performance as their highest priority when selecting a backup and recovery solution (over price and support)

At first, the two do not seem related, but in fact they are. No matter the reason for keeping an older OS in operation (i.e., cost or functionality), system performance may become an issue due to the transition from the 512-byte sector to the 4K-byte sector storage standard. As explained in Partition Alignment: Problems, Causes and Solutions, written by storage guru Thomas Coughlin, “…older operating systems and utilities can misalign the logical sectors in the host device and the physical data on the HDD sectors resulting in a significant performance degradation…if there is misalignment of the 512 byte logical sectors to the 4K byte physical sectors, it forces the hard disk drive to perform an additional read operation…” Hence, if you use an older OS with a newer 4K disk drive, you are likely to run into performance issues.
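The alignment check itself is simple arithmetic. The short Python sketch below illustrates the concept (it is not how PAT works internally): a partition is aligned when its starting logical block address, in 512-byte units, lands on a 4K physical-sector boundary.

```python
SECTOR_LOGICAL = 512      # bytes per logical sector the OS sees
SECTOR_PHYSICAL = 4096    # bytes per physical sector on Advanced Format drives

def is_aligned(start_lba: int) -> bool:
    """True if a partition starting at this 512-byte LBA sits on a
    4K physical-sector boundary (no extra read-modify-write needed)."""
    return (start_lba * SECTOR_LOGICAL) % SECTOR_PHYSICAL == 0

# Windows XP-era tools typically start the first partition at LBA 63,
# while modern tools start at LBA 2048 (a 1 MiB offset).
print(is_aligned(63))    # legacy offset: misaligned
print(is_aligned(2048))  # modern offset: aligned
```

A partition starting at LBA 63 straddles every 4K boundary, which is exactly the extra-read penalty Coughlin describes; starting at LBA 2048 avoids it entirely.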

Luckily, Paragon Software has an easy solution to correctly align your partitions and eliminate redundant read/writes: the Paragon Alignment Tool (PAT). PAT is a powerful utility that automatically detects if the drive is misaligned and then properly realigns all existing partitions, including boot partitions (and any data they contain) to the 4K-sector boundaries. Additionally, if you are using virtual server technology or have recently upgraded to solid state drives, your partitions may be misaligned.

Without realignment, performance loss can range between 20 and 50 percent, which can cause catastrophic issues during peak times.


State Governor’s Office Ensures Reliable Backup in Case of Disaster

By Tom Fedro

After Hurricane Katrina hit the Gulf State region of the U.S. in 2005, IT departments in states based in the southern part of the U.S. became particularly sensitive to the potential loss of their critical data.

When the director of technical services started his new position in the governor’s office, one of his first tasks was to replace the old tape backup system with a more reliable and cost-effective image-based backup solution. After a lengthy and comprehensive evaluation process, Paragon Software’s Hard Disk Manager (HDM) Server was selected to ensure that the office’s files would be safe should disaster strike.

Not long after the selection was made, the office faced its first test of the new backup system. The RAID controller and the backplane on one of its servers failed; thankfully, Paragon’s HDM solution rose to the challenge, ensuring not only that no data was lost but also that the office was back up and running in record time. To read the case study in its entirety along with others, search by product or by market.

To view a video demonstration of our Drive Backup Server software (bundled with HDM for Servers), check us out on YouTube:



Multi-Cast Image Deployment Management Software in an Education Environment

By Tom Fedro

Over the last few weeks, we have seen an uptick in inquiries from the educational sector for our deployment management software. To help IT professionals evaluate the best software for their needs, we published a situational case study about a typical secondary educational setting and how our software, Deployment Manager, speeds up deployments of new machines and simplifies desktop refreshes so that classroom staff can perform a refresh without the help of the IT department.

The school’s small IT staff of 15 spent a significant amount of time refreshing desktops in its computer training classrooms, as well as preparing hundreds of machines simultaneously for staff use. One requirement was ensuring that deployments could be conducted to bare-metal, dissimilar hardware. The second was an efficient method of processing new deployments quickly and easily. The school needed to automate the installation/refresh process and wanted expert tools and advice to conduct the deployments quickly and cost-effectively.

Paragon Software’s Deployment Manager re-imaging software was selected in part because of the list of features, including

  • Automated or manual deployment of individual or hundreds of systems
  • Multicast and unicast support
  • Customizable Linux and WinPE-based boot media
  • ConstantCast – cyclic multicast deployment sessions; add systems to a deployment session while it’s running
  • Initiate deployment directly from the shop floor
  • Adaptive Imaging Tools – deploy one image to several dissimilar systems
  • Pre/post deploy configuration options
  • Scripting
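To see why multicast support matters at this scale, consider a back-of-envelope Python model (the function and numbers are my own hypothetical illustration, not part of Deployment Manager): unicast pushes a separate copy of the image to every machine over the shared uplink, while multicast sends the stream once and every listener receives it.

```python
def deployment_time_hours(image_gb, machines, link_gbps=1.0, multicast=True):
    """Rough wall-clock estimate for pushing one image to `machines` targets.

    Unicast sends one copy per machine over the shared uplink;
    multicast sends the stream once for all listeners.
    Protocol overhead is ignored; this is a back-of-envelope model.
    """
    gbits = image_gb * 8
    streams = 1 if multicast else machines
    return (gbits * streams) / (link_gbps * 3600)
```

For a 25 GB image pushed to 100 classroom machines over a 1 Gbps link, the model gives roughly five and a half hours for unicast versus a few minutes for multicast, which is the difference between an overnight job and a between-classes refresh.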
To read the case study, visit our Case Study page at

Disaster Recovery Software Decision-Making Criteria

By Tom Fedro

Sales of disaster recovery software have shown dramatic growth over the last several years as just about every company has come to rely on systems and data management for continued operation. Although the first mention of this kind of disaster recovery occurred in the 1970s, it wasn’t until decades later that its importance was fully realized. Back then, technology really wasn’t intertwined with a company’s operations the way it is now. Today, most companies would find it strange to think of technology and business as independent, the way we find it strange when we watch a TV show from the 1980s and don’t see cell phones.

Although technology is still advancing at breakneck speed, the data protection industry is essentially mature, and a number of companies vie for market share. When a company decides to determine which solution is correct, there are some important and critical considerations that need to go into the decision-making process. First and foremost, what is the need?

Too often, this step is skipped. Companies tend to examine what’s available and make choices based on the four or five alternatives they come across. That’s the wrong way to do business. The smartest people in the world make mistakes like this one, but they shouldn’t! There are a couple of cardinal rules about shopping at the grocery store that come to mind. First, never shop hungry. You end up overbuying and typically unhealthily. In the same manner, don’t wait for a crisis to buy your software. You’ll end up buying more than you need in most cases and the pain of the urgency will get the better of you.

The second rule? Shop with a list. Without it, you end up buying food you don’t need and you forget food you do need. In the world of technology, your list is called a needs assessment. Sit down with your tech department and your operations and figure out what you need. Here are some conversation starters.

    1. How much data can we afford to lose in a given period of time? One week? One day? One hour? This answer will tell you how regular your backups will need to be, and thus how important the ease of backup and the interruption the procedures cause will become to your decision making.
    2. How reliant on the systems is each department? It’s possible your inside sales department could handle a few hours of downtime. On the other hand, it might cripple your accounting department. When you’ve got all the information, you not only have criteria to determine purchasing based on restore times but also a blueprint for which departments should receive first attention from your IT department in the event of catastrophic failure.
    3. Which particular elements of the system or the data are most critical? If your employees have a dramatic need for email but not other documents, you’ll want software that can provide tools for partial and immediate restoration of that critical information (commonly called granular restore) while the rest of the system comes on line.
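The first two questions map directly onto what the industry calls the recovery point objective (RPO, how much work you can afford to lose) and the recovery time objective (RTO, how long each department can be down). As a hypothetical sketch of turning those answers into a plan (the function names are my own, for illustration):

```python
import math

def backups_per_day(rpo_hours):
    """Minimum daily backups so at most `rpo_hours` of work is ever at risk
    (the recovery point objective, or RPO)."""
    return max(1, math.ceil(24 / rpo_hours))

def restore_order(downtime_tolerance):
    """Order departments for restoration after a failure, least tolerant first.
    `downtime_tolerance` maps department name to hours it can absorb offline."""
    return sorted(downtime_tolerance, key=downtime_tolerance.get)
```

An RPO of one hour implies 24 backups a day, while a full day’s tolerance needs only one; and a department that can absorb half an hour of downtime gets restored before one that can ride out an afternoon.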

Don’t fall into the “Ready, Fire, Aim!” trap. Make your technology decisions like you make other business decisions. Identify the correct solution first. Then go out and get it.


Optimizing the Recovery Point

By Tom Fedro

What’s the optimal recovery point in data backup?  Most tech professionals immediately jump to shout out as loudly as possible “continuous!” or “on the fly!”  Believe it or not, that’s just not correct.

Okay.  Take a deep breath.  I know it sounds like I’ve just committed techno-heresy, but I’m speaking from an operational standpoint here.  The reality is this—on the fly continuous backup is disruptive to most businesses and—brace yourself—unnecessary to most businesses.  The disruptive nature is fairly easy to understand.  Constant image-based backup uses resources and stops users from making changes.  System resource use alone would create a dramatic slowdown.

Does that make sense for a business that doesn’t deal with dynamically changing data?  What about businesses that regularly use but don’t regularly alter data?  Excessive data backup will sometimes cause more of a slowdown than minor data loss.

While nearly every business relies on data nowadays, not every business changes the data with enough frequency to justify the expense or the irritation of attempting to reach near-continual backup.  Companies ought to search for the optimal backup solution.  This solution will be based on the amount of data that needs protection and the frequency of modification in that data.  In some cases, a single backup procedure a few times per week is all that’s needed.  Some businesses will need daily backup, and some businesses will need consistent image-based backup with file-based backup at regular intervals.

There’s an optimal choice, and it’s different for different organizations.  Don’t make the mistake of buying and implementing a solution that makes a whole lot of sense—for someone else’s company.  Determine your real risks and real needs.  Then, consider the impacts of the following:

  • The cost of the backup solution.
  • The costs of implementing the backup solution. (I’m talking about tech department payroll, here.)
  • The costs to the business operations of the implementation.

You may come to the conclusion that backup approaching on-the-fly consistency is indeed what you need.  Don’t reach that conclusion because it’s the best available, though.  Reach it because it delivers the greatest operational benefit to the business.  Somewhere between regular backup and constant backup is the right interval for most businesses.  Find out where on that timeline yours belongs and act accordingly.  Don’t fall into the trap of acting first.

Don’t Forget IT Resources Are Resources Indeed

By Tom Fedro

There’s a great line in the first Jurassic Park movie.  Jeff Goldblum’s character says (and I’m paraphrasing), “You were so busy trying to figure out if you COULD do it that you never stopped to think about if you SHOULD do it.”  I have always liked that line.  Sometimes, technology is all about whether or not we can accomplish something, and the result is a complete dismissal of the purposes and the impacts of technology.  Anyone who’s spent time in the world of technology has seen a brilliant programmer or engineer literally shaking with excitement over something that—well, sure, it’s neat, but let’s face reality, here—has absolutely no value over and above the accomplishment of making it happen.

For business, an IT department ought to be all about the SHOULD and not the COULD.  We tend to forget that IT resources are just that, resources.  A good company will use its IT resources to ensure business continuity, to make sure that the company’s operations are served efficiently and effectively with minimal (the goal is none at all) interruption.  Sometimes, we let that goal cloud the fact that the resources spent to accomplish that are also coming out of the company coffers.

Case in point—the Salvation Army.  Jim Vizzacaro, who runs technology for the Eastern Michigan Division, worked over and above what anyone has the right to expect of a technology officer to ensure the organization could keep up with its goals and demands.  He and his team spent hours manually installing, re-installing, backing up, and deploying images to servers with the right goal: to keep the money flowing to the people the Salvation Army helps.  When they acquired Paragon Software’s Drive Backup 10 Server Edition, the department’s workload was dramatically reduced.  The money the Salvation Army had been spending just to keep systems running could instead go to improvements, training, and other priorities.

The situation with the Salvation Army and drive backup isn’t unique.  The IT Administrator at Purdue saw a savings of three hours every single day.  That is three hours of the IT manager’s time.  Three hours.  I could repeat “three hours” eleven or twelve more times, and it wouldn’t lose the remarkable power of that statement.  What would the average IT professional give to get three hours back?  Dare I say…shudder…that three hours of an IT professional’s day is worth more than three hours spent elsewhere?  When you consider that those three hours can go to improvement rather than maintenance, I think you can see a good case for it.

Remember, management of data is all about minimizing loss.  That loss could come from downtime, data loss, system failure, data breach, or a number of other issues relating to data protection.  Let’s not forget the hidden losses, the ones that come from inefficient handling of the processes that protect the data and the business continuity in the first place.