By Tom Fedro
Companies today are more dependent than ever on the efficiency and security of their data. It’s amazing how often, though, I come across people with absolutely no data protection strategy in place. Any data loss hurts business. A large data loss interrupts the flow of work and causes a loss of both profit and productivity. I tell people all the time that protecting data should be a top priority.
Of course, they counter by pointing out that I’m a data storage expert and focus my work on data. Maybe that does make me a more adamant advocate, but look at ways data loss can hurt you:
- Productivity. You use your data in the course of your day. How much work would you or your employees get done if you didn’t have access to it?
- Customer Loyalty. How many times can you lose your customers’ information before they stop being your customers? Have you ever had that sinking feeling when a company you’ve used for a long time can’t locate your information?
- Intellectual Property. What if your data is your product? Do you want to lose code, ad copy, images, or publications? How much is lost when an intellectual asset isn’t protected intelligently?
These are just three quick examples of data loss costs. There are a great many more that are possible and even probable. Your data is a tool and an asset, and you’re not serving your organization well if you’re failing to plan correctly for protection.
Of course, in the end, it might be impossible to calculate the full financial loss possible when data protection fails. Consequences can range from mere annoyance to devastation, and there is no way to cope with the risk without a contingency plan. What has your company done in this area? How safe is your data? How are you managing your data storage? Acting on these questions now can save a lot more than a few dollars in the future.
By Tom Fedro
The prevailing standard in hard drive formats is Windows NTFS. This has presented challenges for technology built on other systems (like Linux), which cannot view or use data and media stored in the NTFS format. Paragon’s Universal File System Driver (UFSD) technology bridges the gap.
Paragon Software’s unique UFSD technology provides full access, including read, write, format, delete, and copy operations. It works within nearly all major kernels, including Linux, VxWorks, eCos, Mac, Windows, and DOS, and it offers access to NTFS, HFS+, ext2/ext3, FAT16, and FAT32 where those systems would not otherwise be supported. Its small size and fast delivery make it easy to manage and operate. There’s no need for other libraries, and it supports all NTFS features.
Acer recently chose UFSD for its Iconia Tab A500 and Iconia Tab A501 lines, allowing these Android products, which are based on a Linux kernel, to access data and media stored in the NTFS format. Ultimately, end users will have full access to file systems that would otherwise be unsupported. Paragon’s UFSD-based NTFS for Android driver enhances the capability of the tablets, since most hard drives are formatted for NTFS. Without the driver, a user would have to reformat to the older FAT file system to gain access.
With the driver, a user can attach a hard drive formatted for Windows via the tablet’s USB port, making it a value-added feature for Acer. Users can download and stream multimedia to their hard drive and transfer only the files they immediately need via USB storage. UFSD technology has also been designed for the highest possible read/write performance, which means transferring data is fast and effortless. Importantly, that performance also ensures top-notch media playback, which is critical for quality output of high-definition and Blu-ray media.
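UFSD itself is proprietary driver code, but the first step any software takes before mounting a volume, recognizing what file system it holds, is easy to illustrate. The sketch below checks a disk image (or, with root privileges on Linux, a block device such as /dev/sdb1) for the OEM ID that an NTFS boot sector stores at byte offset 3. The function name is my own invention, not part of any Paragon API:

```python
def looks_like_ntfs(path):
    """Heuristic check: an NTFS boot sector carries the OEM ID
    b"NTFS    " (8 bytes, space padded) at byte offset 3."""
    with open(path, "rb") as f:
        boot = f.read(512)  # the boot sector is the first 512 bytes
    return len(boot) >= 11 and boot[3:11] == b"NTFS    "
```

A real driver does far more than this, of course; the check only answers “is this volume NTFS at all,” which is exactly the question a FAT-only device can’t act on.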
We’re excited Acer chose this product to push their tablets a step above the competition, and we’re happy to see our efforts working in the marketplace.
By Tom Fedro
Sales of disaster recovery software have shown dramatic growth over the last several years as just about every company has come to rely on systems and data management for continued operation. Although the first mention of this kind of disaster recovery occurred in the 1970s, it was decades before its importance was fully realized. Back then, technology really wasn’t intertwined with a company’s operations the way it is now. Now, most companies would find it strange to think of technology and business as independent, the way we find it strange when we watch a TV show from the 1980s and don’t see cell phones.
Although technology is still advancing at breakneck speed, the data protection industry is essentially mature, and a number of companies vie for market share. When a company decides to determine which solution is correct, there are some important and critical considerations that need to go into the decision-making process. First and foremost, what is the need?
Too often, this step is skipped. Companies tend to examine what’s available and make choices based on the four or five alternatives they come across. That’s the wrong way to do business. The smartest people in the world make mistakes like this one, but they shouldn’t! There are a couple of cardinal rules about shopping at the grocery store that come to mind. First, never shop hungry. You end up overbuying, and typically buying unhealthy food. In the same manner, don’t wait for a crisis to buy your software. In most cases you’ll end up buying more than you need, and the pain of the urgency will get the better of you.
The second rule? Shop with a list. Without it, you end up buying food you don’t need and you forget food you do need. In the world of technology, your list is called a needs assessment. Sit down with your tech department and your operations and figure out what you need. Here are some conversation starters.
- How much data can we afford to lose in a given period of time? One week? One day? One hour? The answer will tell you how frequent your backups need to be, and thus how much the ease of backup, and the interruption the procedures cause, should weigh in your decision making.
- How reliant on the systems is each department? It’s possible your inside sales department could handle a few hours of downtime. On the other hand, it might cripple your accounting department. When you’ve got all the information, you not only have criteria to determine purchasing based on restore times but also a blueprint for which departments should receive first attention from your IT department in the event of catastrophic failure.
- Which particular elements of the system or the data are most critical? If your employees have a dramatic need for email but not other documents, you’ll want software that can provide tools for partial and immediate restoration of that critical information (commonly called granular restore) while the rest of the system comes back online.
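The answers to these conversation starters can be captured in a few lines of code. Here is a minimal needs-assessment sketch; the department names, tolerance figures, and function names are hypothetical placeholders, not recommendations:

```python
import math

# Hypothetical assessment: the longest outage (in hours) each
# department says it can tolerate. Your numbers will differ.
tolerances_hours = {"accounting": 1, "inside_sales": 4, "marketing": 8}

def backups_per_day(acceptable_loss_hours):
    # If you can afford to lose at most N hours of work, you need
    # at least one backup every N hours, rounded up per day.
    return math.ceil(24 / acceptable_loss_hours)

def restore_order(tolerances):
    # Departments that tolerate the least downtime get restored first.
    return sorted(tolerances, key=tolerances.get)

print(backups_per_day(1))            # accounting's answer implies 24 backups/day
print(restore_order(tolerances_hours))
```

Even a toy model like this forces the two conversations that matter: how often to back up, and who gets attention first after a catastrophic failure.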
Don’t fall into the “Ready, Fire, Aim!” trap. Make your technology decisions like you make other business decisions. Identify the correct solution first. Then go out and get it.
By Tom Fedro
Some time ago, I read Out of the Crisis, by Dr. W. Edwards Deming, the man generally credited as the Father of the Total Quality Management movement. This book (which came out in 1984) is just chock full of brilliant business insight, but I’ve carried one phrase with me throughout my career. He wrote that the effects of some decisions were “unknown and unknowable.”
So much of his book was an exhortation to run companies using statistical process control that the line seemed out of place. It wasn’t though. He was illustrating that effects that may not have a visible dollar sign attached to them are still worthy of management consideration. We tend to push aside concerns that we can’t label easily, and this is especially true in data protection and restoration.
Anyone in the tech sector hears about the nightmare situations where companies lose or compromise data that leads to gigantic expenses by way of lost contracts, fines, or settlements. For most companies, though, data loss doesn’t represent something that can be negotiated and pinned down. The truth is, several consequences of data loss have Dr. Deming’s unknown impacts. Today, I want to focus on two. There are a whole lot more, but those will come in later posts.
- Rework. If you bought motherboards from an overseas provider and they were delivered with flaws, you’d send them back. The manufacturer would take the defects and put them into “rework.” This is pretty straightforward. “Re” is the Latin root that means “again.” The work is done again. Everybody gets that this is a bad thing in manufacturing. Somehow, when it’s done with soft work like typing, accounting, and marketing collateral, management tends to forget that it’s still doing work again. Let me rephrase that: it’s doing work twice and getting the benefit once. The problem is that a manufacturer will show the rework in its accounting reports. There’s simply no way to do that effectively in other industries; the cost stays unknown.
- Inefficiency. This is one of my pet peeves. I like to compare it to writing with a pencil and a piece of paper. Imagine you had to produce a five-page handwritten document and you could get about two pages done per hour. No problem, two and a half hours, right? Okay, let’s add a curveball. The pencil lead is going to break. You’re going to need to wait a few minutes because only one or two people know how to operate the sharpener, and they’re sharpening other pencils right now. Let that happen a time or two and your two-and-a-half-hour job has taken an extra hour or two. Essentially, it’s a hidden and unknown expense. You can measure how many times the pencil breaks. You can even measure how long it takes to be sharpened, but you can’t really measure how it impacts an employee’s ability to write. It would be naïve to believe that the stopping and restarting doesn’t cause delay above the actual downtime.
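The arithmetic behind the pencil example can be sketched in a few lines. The 10-minute restart penalty below is an invented number, and that is precisely the post’s point: the wait is measurable, but the cost of getting back up to speed is an assumption you have to estimate.

```python
def job_duration_hours(base_hours, n_breaks, wait_minutes, restart_minutes):
    """Elapsed time for a job: productive hours plus, per interruption,
    a measurable wait and an assumed get-back-up-to-speed penalty."""
    overhead_hours = n_breaks * (wait_minutes + restart_minutes) / 60
    return base_hours + overhead_hours

# Five pages at two pages per hour is 2.5 hours of actual writing.
# Two broken leads, each with a 20-minute wait for the sharpener and
# an assumed 10-minute restart penalty, push the job to 3.5 hours.
print(job_duration_hours(2.5, 2, 20, 10))  # → 3.5
```

Only the wait term shows up in any report; the restart term is Dr. Deming’s “unknown and unknowable” part, yet it grows just as fast with every interruption.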